Udzungwa red colobus
The Uzungwa red colobus (Piliocolobus gordonorum), also known as the Udzungwa red colobus or Iringa red colobus, is a species of primate in the family Cercopithecidae. It is endemic to riverine and montane forest in the Udzungwa Mountains in Tanzania. It is threatened by habitat loss.
San Francisco is a city of rich history and culture, and as anyone planning a visit to the City by the Bay realizes, it can be difficult to narrow down all the places to visit and things to do while there. Aside from the usual tourist spots like the Golden Gate Bridge, Alcatraz, and Fisherman's Wharf, San Francisco also offers historic architecture on nearly every corner, the serene Japanese Tea Garden, and the glorious Golden Gate Park, alongside countless cultural and artistic institutions. Need help fitting it all into one vacation? You might need an app for your smartphone (or tablet) to serve as your guide.
Getting Around & Accommodations
Left: MobileMuni - Right: TripAdvisor
MobileMuni
What's a trip to San Francisco that doesn't include a ride on the famous cable car going down Powell Street? MobileMuni is a complete guide to getting around on San Francisco's transit service that lets you know when buses or streetcars will be arriving, as well as assisting you in getting around the city. Free
Available for iOS ($2) / Android / Windows Phone
TripAdvisor
When researching your vacation, TripAdvisor has probably appeared multiple times. This popular service is an all-in-one guide for flights, restaurants, points of interest and, most helpfully, hotel reviews. TripAdvisor delivers intuitive options to take the stress out of finding a hotel room in any given city with reviews, prices, detail breakdowns and photos. Free
Available for iOS / Android / Windows Phone
City Maps & Shopping Guide
Left: Tourist Eye - Right: ShopNear.me
Tourist Eye
Those who plan out their trips beforehand will more than likely get the most out of their time; for them, Tourist Eye is an absolute must-download. Featuring offline map downloads, it also lets you pinpoint places you want to visit beforehand, journal every part of your day and flawlessly execute a preplanned itinerary. For those occasions when the unforeseeable gets in the way, Tourist Eye will help you find restaurants, tourist sites and more on the fly. Free
Also available for Android
ShopNear.me
This app was designed for trendy shoppers and at the moment features the best places to get your shop on in the City by the Bay, with the promise of more cities in the future. You can search either by item (shoes, blouses, dresses and more) or by shop, and use the sale tab to find savings nearby. Fashionistas who live in or are visiting San Francisco will find ShopNear.me an essential part of their app library. Free
Tourist Attractions
Left: San Francisco Guide - Right: San Francisco Travel Guide
San Francisco Guide
San Francisco Guide from mTrips gives us an app with an incredible UI. Although the priciest on this list, it manages to fit in a trip planner, offline map and nightlife guide as well, but most incredible is the offline augmented reality function. You can see shops, restaurants, hotels and more around you by holding up your smartphone like a camera and viewing the points of interest closest to you. $6
Also available for iOS
San Francisco Travel Guide
An app that neatly wraps up the must-see locations of San Francisco is Triposo's own travel guide. Although it has an offline map, weather info, and nightlife and restaurant locales like some of the others on this list, it's the rich background information on sites, museums and San Francisco itself that makes it fantastic for sightseeing. A unique ability SFTG possesses is a mini guide for day trips, to realize the full potential of a 24-hour vacation. Free
Also available for iOS
Wine and Dine
Left: MenuPages - Right: Top 100 Bay Area Restaurants
MenuPages
Are you in the mood for Japanese? Or maybe you'd like to try something vegan? Are you looking for a kid-friendly place to eat? MenuPages will help you find the perfect eatery for any meal of the day, whatever the situation. Armed with menus for 30,000 restaurants in 8 major cities, it's easy to pick and choose your next meal with broad search criteria, user reviews, prices, hours of operation and your current location at your fingertips. Free
Also available for Android
Top 100 Bay Area Restaurants
When travelling, it's understandable to want to try the best of what a new location has to offer. The San Francisco Chronicle has compiled 100 of the best eateries in San Francisco. Each restaurant entry includes a brief synopsis, prices, specialities, parking and noise level to tantalize your taste buds. Free
Also available for iOS
City Guides, Nightlife, and Traveling Necessities
The Bold Italic: the most difficult task in visiting a city as rich in culture as San Francisco is cutting through the layers of touristy spots and experiencing the SF that locals enjoy on a day-to-day basis. The Bold Italic is all about local discovery...for locals, written by locals (Bold Locals), spotlighting the best and most relevant people and spots in San Francisco on a daily basis. This app is like having a good friend who lives in SF to guide you to all the best spots the others might overlook or miss. iOS (Free)
Left: San Francisco Hot List - Right: WeatherBug
San Francisco Hot List
Whether you're looking for a night of dancing or to soak in the evening's atmosphere over drinks, San Francisco Hot List has your number. Boasting a robust list of 150 of the best bars, nightclubs, restaurants and hangouts, you're guaranteed nothing but the crème de la crème of San Franciscan nightlife. Additional features include search criteria based on tidbits such as the best Bloody Mary in town or the best rooftop bar, and up-to-date guides on weekly special events. $3
Also available for iOS
WeatherBug
What's worse than getting caught in the rain? Getting caught in the rain while on vacation. While we can't control the weather, we can at least anticipate its ups and downs before leaving the sanctuary of our hotel rooms. WeatherBug is a fantastic lightweight weather app that neatly wraps up daily weather forecasts so you'll know whether or not to dress warm or to take that umbrella along just in case. Free
Available for iOS / Android / Windows Phone
---
abstract: 'We consider the initial value problem $u_t = \Delta \log u$, $u(x,0) = u_0(x)\ge 0$ in ${\mathbb R}^2$, corresponding to the Ricci flow, namely conformal evolution of the metric $u \, (dx_1^2 + dx_2^2)$ by Ricci curvature. It is well known that the maximal (complete) solution $u$ vanishes identically after time $T= \frac 1{4\pi} \int_{{\mathbb R}^2} u_0 $. Assuming that $u_0$ is compactly supported we describe precisely the Type II vanishing behavior of $u$ at time $T$: we show the existence of an inner region with exponentially fast vanishing profile, which is, up to proper scaling, a [*soliton cigar solution*]{}, and the existence of an outer region of persistence of a logarithmic cusp. This is the only Type II singularity which has been shown to exist, so far, in the Ricci Flow in any dimension. It recovers rigorously formal asymptotics derived by J.R. King [@K].'
address:
- 'Department of Mathematics, Columbia University, New York, USA'
- 'Department of Mathematics, Columbia University, New York, USA'
author:
- 'Panagiota Daskalopoulos$^*$'
- Natasa Sesum
title: 'Type II extinction profile of maximal Solutions to the Ricci flow in ${\mathbb R}^2$'
---
Introduction
============
We consider the Cauchy problem $$\begin{cases} \label{eqn-u}
u_t = \Delta \log u & \mbox{in} \,\, {\mathbb R}^2
\times (0,T)\\
u(x,0) = u_0(x) & x \in {\mathbb R}^2 ,
\end{cases}$$ for the [*logarithmic fast diffusion*]{} equation in ${\mathbb R}^2$, with $T >0$ and initial data $u_0 $ non-negative, bounded and compactly supported.
It has been observed by S. Angenent and L. Wu [@W1; @W2] that equation represents the evolution of the conformally equivalent metric $g_{ij} = u\, dx_i\, dx_j$ under the [*Ricci Flow*]{} $$\label{eqn-ricci}
\frac{\partial g_{ij}}{\partial t} = -2 \, R_{ij}$$ which evolves $g_{ij}$ by its Ricci curvature. The equivalence easily follows from the observation that the conformal metric $g_{ij} = u\, I_{ij}$ has scalar curvature $R = -
(\Delta \log u) /u$ and in two dimensions $R_{ij} = \frac 12 \,
R\, g_{ij}$.
Equation arises also in physical applications, as a model for long-range Van der Waals interactions in thin films of a fluid spreading on a solid surface, if certain nonlinear fourth order effects are neglected, see [@dG; @B; @BP].
It is shown in [@DD1] that given an initial data $u_0 \geq 0$ with $\int_{{\mathbb R}^2} u_0 \, dx < \infty$ and a constant $\gamma \geq 2 $, there exists a solution $u_\gamma$ of with $$\label{eqn-intc}
\int_{{\mathbb R}^2} u_\gamma (x,t) \, dx = \int_{{\mathbb R}^2} u_0 \, dx -
2\pi \gamma \, t.$$ The solution $u_\gamma$ exists up to the exact time $T=T_\gamma$, which is determined in terms of the initial area and $\gamma$ by $T_\gamma= \frac{1}{2\pi\, \gamma} \, \int_{{\mathbb R}^2} u_0 \, dx.$
We restrict our attention to [*maximal solutions*]{} $u$ of , corresponding to the value $\gamma =2$ in , which vanish at time $$\label{eqn-mvt}
T= \frac{1}{4 \pi} \, \int_{{\mathbb R}^2} u_0 (x) \, dx.$$ It is shown in [@DD1] and [@RVE] that if $u_0$ is compactly supported, then the maximal solution $u$ which vanishes at time $T$ satisfies the asymptotic behavior $$\label{eqn-agc}
u(x,t) = \frac {2 t}{|x|^2\log^2 |x|} \left ( 1 + o(1)
\right ), \qquad \mbox{as \,\, $|x| \to \infty$}, \quad 0 \leq t <
T.$$ This bound, of course, deteriorates as $t \to T$. Geometrically corresponds to the condition that the conformal metric is complete. The manifold can be visualized as a surface of finite area with an unbounded cusp.
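The logarithmic cusp profile above is in fact an exact solution of $u_t = \Delta \log u$ away from $r=0$ and $r=1$, which can be checked symbolically. The following sketch (our addition, not part of the paper, assuming `sympy` is available) verifies it with the radial form of the two-dimensional Laplacian:

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)

# Logarithmic cusp solution u = 2t / (r^2 * log^2 r), with r = |x|.
u = 2*t / (r**2 * sp.log(r)**2)

# Two-dimensional radial Laplacian: Delta f = (1/r) * (r * f')'.
lap_log_u = sp.diff(r * sp.diff(sp.log(u), r), r) / r

# The residual of u_t = Delta log u vanishes identically (valid for r > 1).
residual = sp.simplify(sp.diff(u, t) - lap_log_u)
print(residual)  # 0
```

Since $\log u = \log 2t - 2\log r - 2\log\log r$ and $\log r$ is harmonic in the plane, only the $\log\log r$ term contributes to $\Delta \log u$, producing exactly $u_t$.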
J.R. King [@K] has formally analyzed the extinction behavior of maximal solutions $u$ of , as $t \to T^-$. His analysis, for radially symmetric and compactly supported initial data, suggests the existence of two regions of different behavior: in the [*outer region*]{} $(T-t)\, \log r > T$ the “logarithmic cusp” exact solution $2 t\,
/|x|^2 \, \log^2 |x|$ of equation $u_t = \Delta \log u$ persists. However, in the [*inner region*]{} $(T-t)\, \log r \leq T$ the solution vanishes exponentially fast and approaches, after an appropriate change of variables, one of the soliton solutions $U$ of equation $U_\tau = \Delta \log U$ on $-\infty < \tau <
\infty$ given by $U(x,\tau) = 1/( \lambda |x|^2 + e^{4\lambda \tau})$, with $\tau=1/(T-t)$ and $\lambda=T/2$.
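King's cigar profile can likewise be confirmed, symbolically, to solve the logarithmic diffusion equation for every $\lambda > 0$; a minimal sympy sketch (an illustration of ours, not from the paper):

```python
import sympy as sp

r, tau = sp.symbols('r tau', positive=True)
lam = sp.symbols('lambda', positive=True)

# Soliton U(x, tau) = 1 / (lambda*|x|^2 + e^{4*lambda*tau}), radial in r = |x|.
U = 1 / (lam*r**2 + sp.exp(4*lam*tau))

# Radial Laplacian of log U in two dimensions.
lap_log_U = sp.diff(r * sp.diff(sp.log(U), r), r) / r

# U solves U_tau = Delta log U.
residual = sp.simplify(sp.diff(U, tau) - lap_log_U)
print(residual)  # 0
```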
This behavior was established rigorously in the radially symmetric case by the first author and M. del Pino in [@DD2]. The precise asymptotics of the Ricci flow neckpinch in the compact case, on $S^n$, has been established by Knopf and Angenent in [@AngKn].
Our goal in this paper is to remove the assumption of radial symmetry and establish the vanishing behavior of maximal solutions of for any non-negative compactly supported initial data.
To state the inner region behavior in a precise manner, we perform the change of variables $$\label{eqn-rbu}
\bar u(x,\tau) = \tau^2 \, u(x, t), \qquad \tau = \frac 1{T-t}$$ and $$\label{eqn-rtu}
\qquad \tilde u(y,\tau) = \alpha(\tau) \, \bar
u(\alpha(\tau)^{1/2} y,\tau),$$ with $$\label{eqn-atau}
\alpha(\tau) = [\bar u(0,\tau)]^{-1} = [(T-t)^{-2} u(0,t)]^{-1}$$ so that $\tilde u(0,\tau)=1$.
A direct computation shows that the rescaled solution $\tilde u$ satisfies the equation
$$\label{eqn-tu}
\tilde u_\tau = \Delta \log \tilde u + \frac {\alpha
'(\tau)}{2\alpha (\tau)} \, \nabla ( y \cdot \tilde u) +\frac{2
\tilde u}{\tau}.$$
Then, the following result holds:
\[Mth1\] (Inner behavior) Assume that the initial data $u_0$ is non-negative, bounded and compactly supported. Then, $$\label{eqn-lim}
\lim_{\tau \to \infty} \frac{ \alpha ' (\tau)}{2\, \alpha(\tau)} = T$$ and the rescaled solution $\tilde u$ defined by - converges, uniformly on compact subsets of ${\mathbb R}^2$, to the solution $$U(y) = \frac 1{ \frac T2 \, |y|^2 +
1}$$ of the steady state equation $$\Delta \log U + T\cdot \nabla ( y \cdot U)=0.$$
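Reading $\nabla ( y \cdot U)$ as the divergence $\operatorname{div}(y\,U)$, the limit profile of Theorem \[Mth1\] can be checked directly against the steady state equation; a short symbolic verification (our addition, assuming sympy):

```python
import sympy as sp

y1, y2, T = sp.symbols('y1 y2 T', positive=True)

# Limit profile U(y) = 1 / (T/2 * |y|^2 + 1).
U = 1 / (T/2*(y1**2 + y2**2) + 1)

lap_log_U = sp.diff(sp.log(U), y1, 2) + sp.diff(sp.log(U), y2, 2)
div_yU = sp.diff(y1*U, y1) + sp.diff(y2*U, y2)   # div(y U) in R^2

# Steady state equation: Delta log U + T * div(y U) = 0.
residual = sp.simplify(lap_log_U + T*div_yU)
print(residual)  # 0
```

Both terms reduce to $\mp 2T/(\frac T2 |y|^2+1)^2$, so the residual cancels identically.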
Since for any maximal solution $T= (1/{4\pi}) \int_{{\mathbb R}^2} u_0(x)\, dx$, this theorem shows, in particular, that the limit of the rescaled solution is uniquely determined by the area of the initial data. The uniqueness of the limit had not been previously shown in [@DD2], even under the assumption of radial symmetry.
To describe the vanishing behavior of $u$ in the outer region we first perform the cylindrical change of variables $$\label{eqn-rv}
v(\zeta,\theta,t) = r^2\, u(r,\theta,t), \qquad \zeta =\log r$$ with $(r,\theta)$ denoting the polar coordinates. Equation $u_t=\Delta \log u$ in cylindrical coordinates takes the form $$\label{eqn-v}
v_t = \Delta_c \log v$$ with $\Delta_c$ denoting the Laplacian in cylindrical coordinates defined as $$\Delta_c \log v=(\log v)_{\zeta\zeta} + (\log v)_{\theta\theta}.$$
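The cylindrical form can be verified for an arbitrary smooth positive $u$, using $\partial_\zeta = r\,\partial_r$; the following sympy sketch (ours, not the authors') checks the identity $r^2\,\Delta \log u = (\log v)_{\zeta\zeta} + (\log v)_{\theta\theta}$ for $v = r^2 u$:

```python
import sympy as sp

r, theta, t = sp.symbols('r theta t', positive=True)
u = sp.Function('u')(r, theta, t)   # arbitrary smooth positive solution

# Laplacian of log u in polar coordinates.
lap_log_u = sp.diff(r*sp.diff(sp.log(u), r), r)/r \
    + sp.diff(sp.log(u), theta, 2)/r**2

# v = r^2 u with zeta = log r, so d/dzeta = r * d/dr.
log_v = sp.log(r**2 * u)
log_v_zz = r*sp.diff(r*sp.diff(log_v, r), r)
log_v_thth = sp.diff(log_v, theta, 2)

# v_t = r^2 u_t = r^2 * Delta log u matches (log v)_zz + (log v)_thth.
residual = sp.simplify(r**2*lap_log_u - (log_v_zz + log_v_thth))
print(residual)  # 0
```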
We then perform a further scaling setting $$\label{eqn-rtv}
\tilde v(\xi ,\theta, \tau) = \tau^2\, v (\tau \xi,\theta,t), \qquad \tau=
\frac 1{T-t}.$$ A direct computation shows that $\tilde v$ satisfies the equation $$\label{eqn-tv}
\tau \, \tilde v_{\tau} = \frac 1{\tau} (\log \tilde v)_{\xi\xi} + \tau \, (\log \tilde v)_{\theta\theta}
+ \xi \, \tilde v_{\xi} + 2\tilde v.$$ The extinction behavior of $u$ (or equivalently of $v$) in the outer region $\xi \geq T$, is described in the following result.
\[Mth2\] (Outer behavior). Assume that the initial data $u_0$ is non-negative, bounded and compactly supported. Then, the rescaled solution $\tilde v$ defined by converges, as $\tau \to \infty$, to the $\theta-$independent steady state solution $V(\xi)$ of equation given by $$\label{eqn-dV}
V(\xi) =
\begin{cases}
\frac{ 2 T}{\xi^2}, \qquad &\xi > T \\
0, \qquad &\xi < T.
\end{cases}$$ Moreover, the convergence is uniform on the set $(-\infty, \xi^-]\times [0,2\pi]$, for any $-\infty < \xi^- < T$, and on compact subsets of $(T, +\infty) \times [0,2\pi]$.
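For $\theta$-independent steady states, the $\frac 1\tau (\log \tilde v)_{\xi\xi}$ and $(\log \tilde v)_{\theta\theta}$ terms drop out of the rescaled equation as $\tau \to \infty$, leaving the first-order equation $\xi V_\xi + 2V = 0$, which $V(\xi)=2T/\xi^2$ satisfies on $\xi > T$. A one-line symbolic check (our addition, with sympy):

```python
import sympy as sp

xi, T = sp.symbols('xi T', positive=True)

# Outer profile V(xi) = 2T / xi^2 on xi > T.
V = 2*T / xi**2

# Limiting first-order equation: xi * V' + 2V = 0.
residual = sp.simplify(xi*sp.diff(V, xi) + 2*V)
print(residual)  # 0
```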
Under the assumption of radial symmetry this result follows from the work of the first author and del Pino [@DD2].
The proofs of Theorems \[Mth1\] and \[Mth2\] rely on sharp estimates on the geometric [*width*]{} $W$ and on the [*maximum curvature*]{} $R_{\max}$ of maximal solutions near their extinction time $T$ derived in [@DH] by the first author and R. Hamilton. In particular, it is found in [@DH] that the maximum curvature is proportional to $1/{(T-t)^2}$, which does not go along with the natural scaling of the problem, which would entail blow-up of order $1/{(T-t)}$. One says that the vanishing behavior is [*of type II*]{}. The proof also makes an extensive use of the Harnack estimate on the curvature $R = -\Delta \log u /u$ shown by Hamilton and Yau in [@HY]. Although the result in [@HY] is shown only for a compact surface evolving by the Ricci flow, we shall observe in Section \[sec-prelim\] that the result remains valid in our case as well. Finally, let us remark that the proof of the inner-region behavior is based on the classification of eternal complete solutions of the 2-dimensional Ricci flow, recently shown by the authors in [@DS].
Preliminaries {#sec-prelim}
=============
In this section we collect some preliminary results which will be used throughout the rest of the paper. For the convenience of the reader, we start with a brief description of the geometric estimates in [@DH] on which the proofs of Theorems \[Mth1\] and \[Mth2\] rely.
Geometric Estimates. {#sec-ge}
--------------------
In [@DH] the first author and R. Hamilton established upper and lower bounds on the geometric width $W(t) $ of the maximal solution $u$ of and on the maximum curvature $R_{\max}(t)=
\max_{x \in {\mathbb R}^2} R(x,t)$, with $R= - (\Delta \log u) /u $.
Let $F:{\mathbb R}^2\to[0,\infty)$ denote a proper function $F$, such that $F^{-1}(a)$ is compact for every $a\in [0,\infty)$. The width of $F$ is defined to be the supremum of the lengths of the level curves of $F$, namely $w(F) = \sup_c L\{F=c\}.$ The width $w$ of the metric $g$, as introduced in [@DH], is defined to be the infimum $$w(g) = \inf_F w(F).$$
The estimates in [@DH] depend on the time to collapse $T-t$. However, they do not scale in the usual way. More precisely:
\[thm-DH1\] There exist positive constants $c$ and $C$ for which $$\label{eqn-w}
c \, (T-t) \leq W(t) \leq C\, (T-t)$$ and $$\label{eqn-c}
\frac{c}{(T-t)^2} \leq R_{\max}(t) \leq \frac{C}{(T-t)^2}$$ for all $0< t < T$.
The Hamilton-Yau Harnack estimate.
----------------------------------
In [@HY] Hamilton and Yau established a Harnack estimate on the curvature $R$ of a compact surface evolving by the Ricci flow, in the case where the curvature $R$ changes sign. Since the proof in [@HY] uses only local quantities, the result and its proof can be carried over to the complete, non-compact case.
\[thm-harnack\] For any constants $E$ and $L$ we can find positive constants $A, B, C, D$ such that for any complete solution to the Ricci flow on ${\mathbb R}^2$ which at the initial time $t=0$ satisfies $$\label{equation-lower-R}
R \ge 1-E$$ and $$\frac{1}{R+E}\frac{\partial R}{\partial t} - \frac{|\nabla R|^2}{(R+E)^2} \ge -L$$ then, for all $t\ge 0$ we have $$\label{equation-harnack0}
\frac{1}{R+E}\frac{\partial R}{\partial t} - \frac{|\nabla R|^2}{(R+E)^2}
+ F(\frac{|\nabla R|^2}{(R+E)^2}, R+E) \ge 0$$ where $$F(X,Y) = A + \sqrt{2B(X+Y) + C} + D\log Y.$$
Integrating the above estimate along paths we obtain:
Under the assumptions of Theorem \[Mth1\], there exist uniform constants $E >0$ and $C_1, C_2 > 0$ so that for every $x_1, x_2 \in {\mathbb R}^2$ and $T/2 < t_1 < t_2$ we have $$\label{equation-harnack}
\frac{1}{\sqrt{R(x_1,t_1)+E}} \ge \frac{1}{\sqrt{R(x_2,t_2)+E}}- C_1(t_2-t_1) - C_2\frac{{\mathrm{dist}}_{t_1}^2(x_1,x_2)}{t_2-t_1}.$$
By the Aronson-Benilán inequality $R \ge -1/t \ge -2/T = 1-E$ for all $t\in [T/2,T)$. Hence the estimate and the lower curvature bound on $R$ give $$\begin{aligned}
\label{equation-partial}
\frac{\partial R}{\partial t} &\ge& \frac{|\nabla R|^2}{R+E} - (2A+\sqrt{C})(R+E) - \sqrt{2B}|\nabla R|
\nonumber \\&-&\sqrt{2B}(R+E)\sqrt{R+E} - D\log(R+E)(R+E)\nonumber \\
&\ge& \frac{|\nabla R|^2}{R+E} - A_1(R+E)\sqrt{R+E} - \frac{1}{4}\frac{|\nabla R|^2}{R+E} \\
&=& \frac{3}{4}\frac{|\nabla R|^2}{R+E} - A_1(R+E)\sqrt{R+E}\nonumber.\end{aligned}$$ Take any two points $x_1, x_2 \in {\mathbb R}^2$ and $T/2 \leq t_1 \le t_2 <T$ and let $\gamma$ be a curve connecting $x_1$ and $x_2$, such that $\gamma(t_1) = x_1$ and $\gamma(t_2) = x_2$. Since $$\frac{d}{dt}R(\gamma(t),t) = \frac{\partial R}{\partial t} + \langle \nabla R,\dot{\gamma}\rangle$$ using also (\[equation-partial\]) we find $$\begin{aligned}
\frac{d}{dt}R(\gamma(t),t) &\ge& \frac{3}{4}\frac{|\nabla R|^2}{R+E} - A_1(R+E)\sqrt{R+E} - C|\dot{\gamma}|^2(R+E)
- \frac{1}{4}\frac{|\nabla R|^2}{R+E} \\
&\ge& -A_3(R+E)^{3/2}(1 + |\dot{\gamma}|^2).\end{aligned}$$ Integrating the previous inequality along the path $\gamma$ gives $$\frac{1}{\sqrt{R(x_1,t_1)+E}} \ge \frac{1}{\sqrt{R(x_2,t_2)+E}} - C(t_2-t_1) - \int_{t_1}^{t_2}|\dot{\gamma}|_{g(t)}^2dt.$$ Due to the bound $R \ge 1-E$ we have for $t\ge s$ $$|\dot{\gamma}|^2_{g(t)} \le (1-E) \, e^{s t}|\dot{\gamma}|^2_{g(s)}$$ and if we choose the curve $\gamma$ to be the minimal geodesic with respect to the metric $g(t_1)$ connecting $x_1$ and $x_2$ we obtain $$\frac{1}{\sqrt{R(x_1,t_1)+E}} \ge \frac{1}{\sqrt{R(x_2,t_2)+E}} - C_1(t_2-t_1) - C_2\frac{{\mathrm{dist}}_{t_1}^2(x_1,x_2)}{t_2-t_1}$$ as desired.
Monotonicity of Solutions.
--------------------------
Our solution $u(x,t)$ to (\[eqn-u\]) has compactly supported initial data. The classical argument based on reflection, due to Alexandrov and Serrin, proves that such solutions enjoy the following monotonicity in the radial direction:
\[lemma-monotonicity\] Under the assumptions of Theorem \[Mth1\], if ${\mathrm{supp}}\, u_0(\cdot) \subset B_\rho(0)$, then $$\label{equation-monotonicity}
u(x,t) \ge u(y,t),$$ for all $t\in (0,T)$ and every pair of points $x,y\in{\mathbb R}^2$ such that $|y| \ge |x|+\rho$.
The proof of Lemma \[lemma-monotonicity\] is the same as the proof of Proposition $2.1$ in [@AC]. For the reader’s convenience we will briefly sketch it.
Assume that $\rho=1$. By the comparison principle for maximal solutions it easily follows that if $K = {\mathrm{supp}}u(\cdot,0)$ and $K \subset \{x\in {\mathbb R}^2:\,\,x_2 > 0\}$, then $u(x_1,x_2,t) \ge u(x_1,-x_2,t)$ for $x_1\in {\mathbb R}^+$ and $t\in
[0,T)$. Fix $x^0\in B_1$ and $x^1\in \partial B_{1+\delta}$ for $\delta > 0$. Let $\Pi$ be a hyperplane of points in ${\mathbb R}^2$ which are equidistant from $x^0$ and $x^1$. Then, it easily follows $${\mathrm{dist}}(\Pi,\{0\}) \ge 1$$ which implies $x^0$ and ${\mathrm{supp}}u(\cdot,0)$ are in the same half-space with respect to $\Pi$. Since $x^1$ is the reflection of $x^0$ in $\Pi$, it follows $u(x^0,t) \ge u(x^1,t)$. We can now let $\delta\to 0$ to get the claim.
Notice that due to (\[eqn-agc\]), for every $t\in (0,T)$ we can define $x_t$ to be such that $u(x_t,t) = \max_{{\mathbb R}^2}u(\cdot,t)$. An easy consequence of Lemma \[lemma-monotonicity\] is the following result about $\{x_t\}_{t\in(0,T)}$.
\[cor-maximums\] For every $t\in (0,T)$, $x_t\in B_{2\rho}(0)$.
Inner Region Convergence {#sec-irc}
========================
This section is devoted to the proof of the inner region convergence, Theorem \[Mth1\] stated in the Introduction. We assume, throughout this section, that $u$ is a smooth, maximal solution of with non-negative, compactly supported initial data $u_0$, which vanishes at time $$T= \frac 1{4\pi} \int_{{\mathbb R}^2} u_0 \, dx.$$
Scaling and convergence {#sec-sc}
-----------------------
We introduce a new scaling on the solution $u$ namely $$\label{eqn-rbu2}
\bar u(x,\tau) = \tau^2 \, u(x,t), \qquad \tau = \frac 1{T-t},
\quad \tau \in (1/T, \infty).$$ Then $\bar u$ satisfies the equation $$\label{eqn-bu}
\bar u_\tau = \Delta \log \bar u + \frac{2\bar u}{\tau}, \qquad
\mbox{on \,\, $1/T \leq \tau < \infty.$}$$ Notice that under this transformation, $ \bar R := - \Delta \log
\bar u/ \bar u$ satisfies the estimate $$\label{eqn-c1}
\bar R_{\max}(\tau) \leq C$$ for some constant $C < \infty$. This is a direct consequence of Theorem \[thm-DH1\], since $\bar R_{\max} (\tau)
= (T-t)^2\, R_{\max}(t)$.
For an increasing sequence $\tau_k \to \infty$ we set $$\label{eqn-ruk}
\bar u_k(y,\tau) = \alpha_k \, \bar u(\alpha_k^{1/2}\, y, \tau
+\tau_k), \qquad (y,\tau) \in {\mathbb R}^2 \times (- \tau_k + 1/T, \infty)$$ where $$\alpha_k = [\bar u(0,\tau_k)]^{-1}$$ so that $\bar u_k(0,0)=1$, for all $k$. Then, $\bar u_k$ satisfies the equation $$\label{eqn-uk}
\bar u_\tau = \Delta \log \bar u + \frac{2\bar u}{\tau+\tau_k}.$$ Let $$\bar R_k := - \frac{\Delta \log \bar u_k}{\bar u_k}.$$ Then, by , we have $$\label{eqn-Rkk}
\max_{y \in {\mathbb R}^2} \bar R_k(y,\tau) \leq C, \qquad
-\tau_k + 1/T < \tau < +\infty.$$ We will also derive a global bound from below on $\bar R_k$. The Aronson-Benilán inequality $u_t \leq u/t$, on $ 0 \leq t <
T$, gives the bound $ R(x,t) \geq - 1/t$ on $ 0 \leq t < T$. In particular, $ R(x,t)
\geq - C$ on $ T/2 \leq t < T$, which in the new time variable $\tau=1/(T-t)$ implies the bound $$\bar R(x,\tau) \geq - \frac{C}{ \tau^2}, \qquad 2/T < \tau <
\infty.$$ Hence $$\bar R_k(y,\tau) \geq - \frac C{(\tau+\tau_k)^2}, \qquad -\tau_k +
2/T < \tau < +\infty.$$ Combining the above inequalities we get $$\label{eqn-Rk}
- \frac{C}{(\tau+\tau_k)^2} \leq \bar R_k(y,\tau) \leq C, \qquad
\forall (y,\tau) \in {\mathbb R}^2 \times (-\tau_k + 2/T, +\infty).$$
Based on the above estimates we will now show the following convergence result.
\[lem-ick\] For each sequence $\tau_k \to \infty$, there exists a subsequence $\tau_{k_l}$ of $\tau_k$, for which the rescaled solution $\bar
u_{\tau_{k_l}}$ defined by converges, uniformly on compact subsets of ${\mathbb R}^2 \times {\mathbb R}$, to an eternal solution $U$ of equation $U_\tau = \Delta \log U$ on ${\mathbb R}^2 \times {\mathbb R}$ with uniformly bounded curvature and uniformly bounded width. Moreover, the convergence is in $C^\infty(K)$, for any $K \subset {\mathbb R}^2 \times {\mathbb R}$ compact.
Denote by $x_k = x_{t_k}$ the maximum point of $u(\cdot,t_k)$. First, instead of rescaling our solution by $\alpha_k$ we can rescale it by $\beta_k =
[\bar{u}(x_k,\tau_k)]^{-1}$, that is, consider $$\tilde{u}_k(y,\tau) =
\beta_k\bar{u}(\beta_k^{1/2}\, y,\tau+\tau_k), \qquad
\tau \in (-\tau_k+1/T,\infty).$$ For $y_k = \beta_k^{-1/2}\, x_k$ we have $\tilde{u}_k(y_k,0) = 1$ and $\tilde{u}_k(\cdot,0) \le 1$ since $x_k$ is the maximum point of $u(\cdot,t_k)$. Notice that $|y_k| \leq 2\rho \beta_k^{-1/2}$, because $x_k \in B_{2\rho}$, by Corollary \[cor-maximums\]. Since $\tilde u_k$ satisfies , standard arguments imply that $\tilde{u}_k$ is uniformly bounded from above and below away from zero on any compact subset of ${\mathbb R}^2\times {\mathbb R}$. In particular, there are uniform constants $C_1 >0$ and $C_2 < \infty$ so that $$\label{equation-quotient}
C_1 \le \frac{\alpha_k}{\beta_k} \le C_2.$$ Let $K\subset {\mathbb R}^2$ be a compact set. By (\[equation-quotient\]), for every compact set $K$ there is a compact set $K'$ so that for all $y \in K$ we have $y \, (\frac{\alpha_k}{\beta_k})^{1/2} \in K'$, for all $k$. Also, by the previous estimates we have $$C_1(K') \le \frac{\bar{u}(\beta_k^{1/2}\,z,\tau_k+\tau)}{\bar{u}(x_k,\tau_k)} = \tilde u_k(z,\tau) \le C_2(K')$$ for all $z\in K'$ and $\tau$ belonging to a compact subset of $
(-\infty ,\infty)$. Therefore, using and remembering that $\alpha_k = [\bar u(0,\tau_k)]^{-1}$ we find $$\bar u_k(y,\tau)=\frac{\bar{u}(\alpha_k^{1/2}y,\tau+ \tau_k)}{\bar{u}(0,\tau_k)} \le
\frac{1}{C_1}\frac{\bar{u}(\beta_k^{1/2}[(\frac{\alpha_k}{\beta_k})^{1/2}y],\tau_k+\tau)}{\bar{u}(x_k,\tau_k)}
\le \frac{C_2(K')}{C_1} = C_2(K).$$ Similarly, $$C_1(K) = \frac{C_1(K')}{C_2} \le \frac{\bar{u}(\alpha_k^{1/2} \, y,\tau_k+\tau)}{\bar{u}(0,\tau_k)}= \bar u_k (y,\tau_k)$$ for $y \in K$ and $\tau$ belonging to a compact set. Hence, by the classical regularity theory the sequence $\{ \bar u_k \}$ is equicontinuous on compact subsets of ${\mathbb R}^2 \times {\mathbb R}$. It follows that there exists a subsequence $\tau_{k_l}$ of $\tau_k$ such that $\bar
u_{k_l} \to U$ on compact subsets of $ {\mathbb R}^2 \times {\mathbb R}$, where $U$ is an eternal solution of equation $$\label{eqn-U}
U_\tau = \Delta \log U, \qquad \mbox{on}\,\, {\mathbb R}^2 \times {\mathbb R}$$ with infinite area $\int_{{\mathbb R}^2} U(y,\tau)\, dy = \infty$ (since $\int_{{\mathbb R}^2} \bar u_k(y,\tau) = 2(\tau + \tau_k)$). In addition the classical regularity theory of quasilinear parabolic equations implies that $\{u_{k_l} \}$ can be chosen so that $u_{k_l} \to U$ in $C^\infty(K)$, for any compact set $K \subset
{\mathbb R}^2 \times {\mathbb R}$, with $U(0,0) = 1$.
It then follows that $\bar R_{k_l} \to \bar R:= -( \Delta \log
U)/U$. Taking the limit $k_l \to \infty$ on both sides of we obtain the bounds $$\label{eqn-cU}
0 \leq \bar R \leq C, \qquad \mbox{on \,\, ${\mathbb R}^2 \times
{\mathbb R}$.}$$ Finally, to show that $U$ has uniformly bounded width, we take the limit $k_l \to \infty$ in .
As a direct consequence of Lemma \[lem-ick\] and the classification result of eternal solutions to the complete Ricci flow on ${\mathbb R}^2$, recently shown in [@DS], we obtain the following convergence result.
\[thm-ick\] For each sequence $\tau_k \to \infty$, there exists a subsequence $\tau_{k_l}$ of $\tau_k$ and numbers $\lambda, \bar{\lambda} >0$ for which the rescaled solution $\bar u_{\tau_{k_l}}$ defined by converges, uniformly on compact subsets of ${\mathbb R}^2 \times {\mathbb R}$, to the soliton solution $U$ of the Ricci Flow given by $$\label{eqn-soliton}
U(y,\tau) = \frac 1{\lambda \, |y|^2 + e^{4 \bar{\lambda} \tau}}.$$ Moreover, the convergence is in $C^\infty(K)$, for any $K \subset {\mathbb R}^2 \times {\mathbb R}$, compact.
From Lemma \[lem-ick\], $\bar u_{\tau_{k_l}} \to U$, where $U$ is an eternal solution of $U_t= \Delta\log U$, on ${\mathbb R}^2 \times {\mathbb R}$, with uniformly bounded width, such that $\sup_{{\mathbb R}^2}R(\cdot,t) \le C(t) <
\infty$ for every $t\in (-\infty,\infty)$. The main result in [@DS] shows that the limiting solution $U$ is a soliton of the form $U(x,\tau) = \frac 2{\beta \, (|x-x_0|^2 + \delta \,
e^{2\beta t})}$, with $\beta >0$, $\delta >0$, which under the condition $U(0,0) =1$ takes the form $U(x,\tau) = \frac{1}{\lambda|x-x_0|^2 + e^{4\bar{\lambda}\tau}}$, with $\lambda, \bar{\lambda} >0$.
It remains to show that the limit $U(\cdot,\tau)$ is rotationally symmetric around the origin, that is, $x_0 = 0$. This will follow from Lemma \[lemma-monotonicity\] and Lemma \[lem-ick\]. Notice that $\lim_{k\to\infty}\alpha_k = \infty$. Since $\bar{u}_k(\cdot,\tau_k)$ converges uniformly on compact subsets of ${\mathbb R}^2\times {\mathbb R}$ to a cigar soliton $U(y,0)$, we have that $$\begin{aligned}
\bar{u}(0,\tau_k) &=& \tau_k^2u(0,t_k)\approx \frac{1}{\lambda|x_0|^2 + e^{4\bar{\lambda}\tau_k}} \\
&\le& e^{-4\bar{\lambda}\tau_k} \to 0,\end{aligned}$$ as $k\to\infty$ and therefore $\lim_{k\to\infty}\alpha_k = \infty$. Let us express $\bar u=\bar u (r,\theta,\tau)$ in polar coordinates. For every $r >0 $ there is $k_0$ so that $\alpha_k^{1/2}\, r > 1$ for $k\ge k_0$. By Lemma \[lemma-monotonicity\] $$\begin{aligned}
\min_{\theta}\bar{u}(\alpha_k^{1/2} \, r,\theta,\tau_k) &\ge& \max_{\theta}
\bar{u}(\alpha_k^{1/2}\, r+1,\theta,\tau_k) \\
&=& \max_{\theta}\bar{u}(\alpha_k^{1/2}(r+\alpha_k^{-1/2}),\theta,\tau_k)\end{aligned}$$ which implies $$\min_{\theta}\bar{u}_k(r,\theta,0) \ge \max_{\theta}\bar{u}_k(r+\alpha_k^{-1/2},\theta,0).$$ Let $k\to\infty$ to obtain $$\min_{\theta}U(r,\theta,0) \ge \max_{\theta} U(r,\theta,0)$$ which yields the limit $U(r,\theta,0)$ is radially symmetric with respect to the origin and therefore $x_0=0$, implying that $U$ is of the form (\[eqn-soliton\]).
Further behavior {#sec-fb}
----------------
We will now use the geometric properties of the rescaled solutions and their limit, to further analyze their vanishing behavior. Our analysis will be similar to that in [@DD2], applicable to the nonradial case as well. However, the uniqueness of the limit along sequences $\tau_k \to \infty$ which will be shown in Theorem \[thm-curvature-limit\], is an improvement of the results in [@DD2], even in the radial case.
We begin by observing that rescaling back in the original $(x,t)$ variables, Theorem \[thm-ick\] gives the following asymptotic behavior of the maximal solution $u$ of .
\[cor-ick1\] Assuming that along a sequence $t_k \to T$, the sequence $\bar
u_k$ defined by with $\tau_k = (T-t_k)^{-1}$ converges to the soliton solution $U_\lambda$, on compact subsets of ${\mathbb R}^2 \times {\mathbb R}$, then along the sequence $t_k$ the solution $u(x,t)$ of satisfies the asymptotics $$\label{eqn-asu1}
u(x,t_k) \approx \frac{(T-t_k)^2} {\lambda \, |x|^2 + \alpha_k},
\qquad \mbox{on} \quad |x| \leq \alpha_k^{1/2} \, M$$ for all $M >0$. In addition, the curvature $R(0, t_k) = - \Delta
\log u(0,t_k)/u(0,t_k)$ satisfies $$\label{eqn-limc}
\lim_{t_k \to T} (T-t_k)^2 \, R(0,t_k) = 4\, \lambda.$$
The proof of the Lemma above is the same as that of Lemma $3.3$ in [@DD2]. The following Lemma provides a sharp bound from below on the maximum curvature $4\, \lambda$ of the limiting solitons.
\[lem-bl\] Under the assumptions of Theorem \[Mth1\] the constant $\lambda$ in each limiting solution satisfies $$\lambda \geq \frac{T}{2}.$$
We are going to use the estimate proven in Section 2 of [@DH]. It is shown there that if at time $t$ the solution $u$ of satisfies the scalar curvature bound $R(t) \geq -
2\, k(t)$, then the width $W(t)$ of the metric $u(t)\,
(dx_1^2+dx_2^2)$ (cf. Section \[sec-ge\] for the definition) satisfies the bound $$W(t) \leq \sqrt{k(t)} \, A(t) = 4 \pi \, \sqrt{k(t)} \, (T-t).$$ Here $A(t) = 4 \pi (T-t)$ denotes the area of the plane with respect to the conformal metric $u(t)\, (dx_1^2+dx_2^2)$. Introducing polar coordinates $(r,\theta)$, let $$\bar{U}(r,t)
= \max_{\theta}u(r,\theta,t) \quad \mbox{and} \quad \underbar{U}(r,t) =\min_{\theta}u(r,\theta,t).$$ Then $$\underbar{U}(r,t) \le u(r,\theta,t) \le \bar{U}(r,t)$$ implying the bound $$\label{eqn-111}
W(\underbar{U}(t)) \le W(t) \le 4 \pi \, \sqrt{k(t)} \, (T-t).$$ Observe next that the Aronson-Bénilan inequality on $u$ implies the bound $R(x,t) \ge -{1}/{t}.$ Hence we can take $k(t) = \frac{1}{2t}$ in (\[eqn-111\]). Observing that for the radially symmetric solution $\underbar{U}$ the width $W(\underbar{U}) = \max_{r\ge 0}2\pi r\sqrt{\underbar{U}}(r,t)$, we conclude the pointwise estimate $$\label{equation-222}
2 \pi r \sqrt{\underbar{U}}(r,t) \le \frac{4\pi(T-t)}{\sqrt{2t}}, \qquad r\ge 0, \,\, 0<t<T.$$ By Lemma \[lemma-monotonicity\], $$\label{equation-333}
r\sqrt{u}(r+\rho,\theta,t) \le r\sqrt{\underbar{U}}(r,t), \qquad \mbox{for} \,\, r > 0.$$ For a sequence $t_k\to T$, let $\alpha_k = [\bar u(0,\tau_k)]^{-1}$, $\tau_k = 1/(T-t_k)$, as before. Using (\[eqn-asu1\]), (\[equation-222\]) and (\[equation-333\]) we find $$\frac{r(T-t_k)}{\sqrt{\lambda (r+\rho)^2+\alpha_k}} \le \frac{2(T-t_k)}{\sqrt{2t_k}}, \qquad
r\le M\alpha_k^{1/2},$$ for any positive number $M$. Hence, when $r = M\,\alpha_k^{1/2}$ we obtain the estimate $$\frac{M\, \alpha_k^{1/2} }{\sqrt{\lambda (M+\rho \, \alpha_k^{-1/2})^2 \, \alpha_k +
\alpha_k}} \leq \frac{2\, }{\sqrt{2\,t_k}}$$ or $$\frac{M}{\sqrt{\lambda \, (M+\rho \, \alpha_k^{-1/2})^2 + 1}} \leq
\frac{2 }{\sqrt{2t_k}}.$$ Letting $t_k \to T$ and taking squares on both sides, we obtain $$\frac{1}{\lambda + 1/M^2} \leq
\frac{2}{T}.$$ Since $M >0$ is an arbitrary number, we finally conclude $\lambda \geq T/2$, as desired.
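The width identity used above for radial profiles, $W(\underbar{U}) = \max_{r\ge 0} 2\pi r\sqrt{\underbar{U}(r)}$, can be sanity-checked numerically on a cigar-type profile $1/(\lambda r^2+\alpha)$, for which the supremum equals $2\pi/\sqrt{\lambda}$, approached as $r\to\infty$. This is only an illustrative numerical check with arbitrarily chosen parameters, not part of the argument.

```python
import numpy as np

# illustrative parameters (arbitrary positive values)
lam, alpha = 0.7, 0.3

# sample the profile U(r) = 1/(lam r^2 + alpha) on a large interval
r = np.linspace(1e-3, 1e4, 200000)
U = 1.0 / (lam * r**2 + alpha)

# width of the radial metric U (dx^2 + dy^2): sup_r 2 pi r sqrt(U(r))
W = np.max(2 * np.pi * r * np.sqrt(U))

# the supremum tends to 2 pi / sqrt(lam) as r -> infinity
assert abs(W - 2 * np.pi / np.sqrt(lam)) < 1e-3
```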
We will next provide a bound on the behavior of $\alpha(\tau)=
[\bar u(0,\tau)]^{-1}$, as $\tau \to \infty.$ In particular, we will prove . We begin with a simple consequence of Lemma \[lem-bl\].
\[lem-altau1\] Under the assumptions of Theorem \[thm-ick\] we have $$\label{eqn-altau}
\liminf_{\tau \to \infty} \frac{\alpha'(\tau)}{\alpha(\tau)} \geq
4\, \lambda_0$$ with $\lambda_0 = T/2$.
The proof of Lemma \[lem-altau1\] is the same as the proof of Lemma $3.5$ in [@DD2].
\[cor5\] Under the hypotheses of Theorem \[thm-ick\], we have $$\label{eqn-asa}
\alpha(\tau) \geq e^{2T \tau + o(\tau)}, \qquad
\mbox{as \,\, $\tau \to \infty$}.$$
The next Proposition will be crucial in establishing the outer region behavior of $u$.
\[prop-1\] Under the hypotheses of Theorem \[Mth1\], we have $$\label{eqn-asa4}
\lim_{\tau \to \infty} \frac{\log \alpha(\tau)}{\tau} = 2T.$$
See Proposition $3.7$ in [@DD2].
A consequence of Lemma \[cor-ick1\] and Proposition \[prop-1\] is the following result, which will be used in the next section.
\[cor-astv1\] Under the assumptions of Lemma \[lem-ick\] the rescaled solution $\tilde v$ defined by satisfies $$\lim_{\tau \to \infty} \tilde v(\xi,\theta,\tau) =0, \qquad \mbox{uniformly
on} \,\, (\xi,\theta) \in (-\infty, \xi^-] \times [0,2\pi]$$ for all $\xi^- < T$.
So far we have shown that $\lim_{\tau\to\infty}\frac{\log\alpha(\tau)}{\tau} = 2T$, so that $\bar{\lambda} := \frac 14 \, \lim_{\tau\to\infty}\frac{\log\alpha(\tau)}{\tau} = T/2$, and that $\lambda \ge T/2$. In the next theorem we will show that actually $\lambda = T/2$. Theorem \[thm-curvature-limit\] is an improvement of the results in [@DD2], since it leads to the uniqueness of a cigar soliton limit.
\[thm-curvature-limit\] $\lim_{\tau\to\infty}\bar{R}(0,\tau) = 2T$.
We will first prove the following lemma.
\[lemma-xi\] For every $\beta > 1$ and for every sequence $\tau_i\to\infty$ there is a sequence $s_i\in (\tau_i,\beta\tau_i)$ such that $\lim_{i\to\infty}\bar{R}(0,s_i) = 2T$.
By definition $$(\log\alpha(\tau))_{\tau} = \bar{R}(0,\tau) - \frac{2}{\tau}.$$ Therefore, by the mean value theorem, there is $s_i\in (\tau_i,\beta\tau_i)$ with $$\log\alpha(\beta\tau_i) - \log\alpha(\tau_i) = \left ( \bar{R}(0,s_i)-\frac{2}{s_i} \right ) (\beta - 1)\tau_i.$$ Since $\log \alpha(\tau) = 2T\tau + o(\tau)$, by Proposition \[prop-1\], we conclude $$\left ( \bar{R}(0,s_i) - \frac 2{s_i} \right ) (\beta - 1) =
\frac{(2T\beta\tau_i + o(\beta\tau_i)) - (2T\tau_i + o(\tau_i))}{\tau_i} = 2T(\beta - 1) + \frac{o(\tau_i)}{\tau_i}$$ which yields $$\bar{R}(0,s_i) = 2T + \frac 2{s_i} + \frac{o(\tau_i)}{(\beta - 1)\tau_i}$$ readily implying the Lemma.
By Lemma \[lem-bl\], we have $\lambda \ge T/2$. Assume there is a sequence $\tau_i\to\infty$ such that $\lim_{i\to\infty}\bar{R}(0,\tau_i) = 4\lambda$, where $4\lambda = 2T + \delta$ for some $\delta > 0$. We know that $\bar{R}(0,\tau) \le \tilde{C}$ for a uniform constant $\tilde{C}$. Choose $\beta > 1$ so that the following two conditions hold $$\label{equation-first}
\frac{1}{\beta\sqrt{2\tilde{C}}} > C(\beta - 1)$$ and $$\label{equation-second}
2T + \frac \delta2 > 2T \left (\frac{\beta}{1-C(\beta - 1)\sqrt{2T}} \right )^2$$ for some uniform constant $C$ to be chosen later. Notice that both (\[equation-first\]) and (\[equation-second\]) are possible by choosing $\beta > 1$ sufficiently close to $1$. By Lemma \[lemma-xi\] find a sequence $s_i\in (\tau_i,\beta\tau_i)$ so that $\lim_{i\to\infty}\bar{R}(0,s_i) = 2T$. Let $T/2 < t < T$. Then $R(x,t) \ge -\frac{2}{T} =: -E$. The Hamilton-Yau Harnack estimate (\[equation-harnack\]), applied to $t_i$ (where $\tau_i = \frac{1}{T-t_i}$) and $\bar t_i>t_i$ (where $\frac{1}{T-\bar t_i} = s_i$), yields $$\frac{1}{\sqrt{\bar{R}(0,\tau_i)+ \frac{E}{\tau_i^2}}} \ge
\frac{\tau_i}{s_i\sqrt{\bar{R}(0,s_i) + \frac{E}{s_i^2}}} - C\frac{s_i-\tau_i}{s_i}.$$ Notice that due to our choice of $\beta$ in (\[equation-first\]) we have $$\begin{aligned}
\frac{\tau_i}{s_i\sqrt{\bar{R}(0,s_i) + \frac{E}{s_i^2}}} &\ge& \frac{\tau_i}{s_i\sqrt{2\tilde{C}}} \\
&\ge& \frac{1}{\beta\sqrt{2\tilde{C}}} \ge C(\beta - 1) \ge C\frac{s_i-\tau_i}{s_i}.\end{aligned}$$ Therefore $$\begin{aligned}
\sqrt{\bar{R}(0,\tau_i) + \frac{E}{\tau_i^2}} &\le& \frac{1}{\frac{\tau_i}
{s_i\sqrt{\bar{R}(0,s_i) + \frac{E}{s_i^2}}} - C\frac{s_i-\tau_i}{s_i}} \\
&=& \frac{s_i\sqrt{\bar{R}(0,s_i)+\frac{E}{s_i^2}}}{\tau_i - C(s_i-\tau_i)
\sqrt{\bar{R}(0,s_i)+\frac{E}{s_i^2}}}.\end{aligned}$$ Denote by $A = \sqrt{\bar{R}(0,s_i) + \frac{E}{s_i^2}}$. Since the function $f(x) = \frac{Ax}{\tau_i - CA(x-\tau_i)}$, for $x\in [\tau_i,\beta\tau_i]$ is increasing, we conclude $$\begin{aligned}
\label{equation-opposite}
\sqrt{\bar{R}(0,\tau_i) + \frac{E}{\tau_i^2}} &\le& \frac{\beta\tau_i\sqrt{\bar{R}(0,s_i)+\frac{E}{s_i^2}}}{\tau_i -
C(\beta\tau_i-\tau_i)\sqrt{\bar{R}(0,s_i)+\frac{E}{s_i^2}}} \\
&=& \frac{\beta\sqrt{\bar{R}(0,s_i)+\frac{E}{s_i^2}}}{1 -
C(\beta - 1)\sqrt{\bar{R}(0,s_i)+\frac{E}{s_i^2}}}. \end{aligned}$$ Letting $i\to\infty$ in (\[equation-opposite\]) we get $$\sqrt{4\lambda} \le \frac{\beta\sqrt{2T}}{1-C(\beta-1)\sqrt{2T}}$$ which implies $$2T + \delta \le 2T \left (\frac{\beta}{{1-C(\beta-1)\sqrt{2T}}} \right )^2$$ contradicting our choice of $\beta$ in (\[equation-second\]).
Proof of Theorem \[Mth1\]
-------------------------
We finish this section with the proof of Theorem \[Mth1\] which easily follows from the results in Sections \[sec-sc\] and \[sec-fb\]. Take any sequence $\tau_k \to \infty$. Observe that by Theorem \[thm-curvature-limit\] $$\label{eqn-altk}
\lim_{k \to \infty} \frac{\alpha'(\tau_k)}{\alpha(\tau_k)} = 2T.$$ By the definitions of $\tilde u$ and $\bar u_k$ ( and respectively) we have $\tilde u(y,\tau_k) =
\bar u_k(y,0)$. By Theorem \[thm-ick\], we have $\bar u_k \to U_{\frac T2}$ and therefore $$\tilde
u(y,\tau_k) \to U_{\frac T2}(y,0) = \frac 1{\frac{T}{2} \, |y|^2
+1}.$$ The limit $U_{\frac T2}$ does not depend on the sequence $t_k\to T$ and the proof of Theorem \[Mth1\] is now complete.
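As a sanity check, one can verify symbolically that the one-parameter family $U_\lambda(x,t) = (\lambda|x|^2 + e^{4\lambda t})^{-1}$, which at $t=0$ reduces to the profile $U_{\frac T2}(y,0)$ above, solves $u_t = \Delta\log u$ and has curvature $R = -\Delta\log U_\lambda/U_\lambda$ equal to $4\lambda$ at the origin for all $t$. The normalization $e^{4\lambda t}$ is an assumption here, chosen so that the profile at $t=0$ matches the one in the text; the check is a sketch, not part of the proof.

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)
lam = sp.symbols('lam', positive=True)

# cigar-type soliton, normalized so that U(., 0) = 1/(lam r^2 + 1)
U = 1 / (lam * r**2 + sp.exp(4 * lam * t))

# radial Laplacian in the plane: f'' + f'/r
lapl_logU = sp.diff(sp.log(U), r, 2) + sp.diff(sp.log(U), r) / r

# U solves u_t = Delta log u
assert sp.simplify(sp.diff(U, t) - lapl_logU) == 0

# curvature R = -Delta log U / U equals 4*lam at the origin, for every t
R = sp.simplify(-lapl_logU / U)
assert sp.simplify(sp.limit(R, r, 0)) == 4 * lam
```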
Outer Region Asymptotic Behavior {#sec-orc}
================================
We assume, throughout this section, that $u$ is a positive, smooth, maximal solution of satisfying the assumptions of Theorem \[Mth2\] which vanishes at time $$T=\frac 1{4\pi} \int_{{\mathbb R}^2} u_0 \, dx.$$ As in the Introduction, we consider the solution $ v(\zeta,\theta,t) = r^2\, u(r,\theta,t)$, $\zeta =\log r$, of the equation in cylindrical coordinates. We next set $$\label{eqn-rbv1}
\bar v(\zeta,\theta, \tau) = \tau^2 \, v(\zeta,\theta, t), \qquad \tau = \frac 1{T-t}$$ and $$\label{eqn-rtv1}
\tilde v(\xi ,\tau) = \bar v (\tau \xi,\tau).$$ The function $\tilde v$ satisfies the equation $$\label{eqn-tilv}
\tau \, \tilde v_{\tau} = \frac 1{\tau} (\log \tilde v)_{\xi\xi} +\tau (\log \tilde v)_{\theta\theta} + \xi \, \tilde v_{\xi} + 2\tilde v.$$ Note that the curvature $R=-\Delta_c \log v/v$, is given in terms of ${\tilde v}$ by $$\label{eqn-tcurvature}
R(\tau \xi,\theta,t) =- \frac{( \log {\tilde v})_{\xi\xi} (\xi,\theta,t)+ \tau^2 ( \log {\tilde v})_{\theta\theta}(\xi,\theta,t)}{{\tilde v}}.$$ Moreover, the area of $\tilde v$ is constant, in particular $$\label{eqn-mtv2}
\int_{-\infty}^\infty \int_0^{2\pi} \tilde v(\xi,\theta, \tau)\, d\theta \, d\xi =4\pi , \qquad
\quad\forall \tau.$$
We shall show that $\tilde v(\cdot, \tau)$ converges, as $\tau
\to \infty$, to a $\theta$-independent steady state of equation , namely to a solution of the linear first order equation $$\label{eqn-V2}
\xi \, V_{\xi} + 2 V =0.$$ The area condition shall imply that $$\label{eqn-mV}
\int_{-\infty}^{\infty} V(\xi)\, d\xi =2.$$ Positive solutions of equation are of the form $$\label{eqn-rV2}
V(\xi) = \frac{\eta }{\xi^2}$$ where $\eta >0$ is any constant. These solutions become singular at $\xi=0$ and in particular are non-integrable at $\xi=0$, so that they do not satisfy the area condition . However, it follows from Corollary \[cor-astv1\] that $V$ must vanish in the interior region $ \xi < T$. We will show that while $\tilde v(\xi,\theta,\tau) \to 0$, as $\tau
\to \infty$ on $(-\infty, T)$, we have $\tilde v(\xi,\theta,\tau) \geq c
>0$, for $\xi > T$ and that actually $\tilde v(\xi,\theta,\tau)
\to 2\, T /\xi^2$, on $(T, \infty)$, as stated in Theorem \[Mth2\].
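The claims above about the steady states can be checked symbolically: $\xi V_\xi + 2V = 0$ has general positive solution $\eta/\xi^2$, which is non-integrable at $\xi = 0$, while imposing vanishing on $(-\infty, T)$ together with the area condition singles out $\eta = 2T$. A small sympy sketch:

```python
import sympy as sp

xi = sp.symbols('xi', positive=True)
eta, T = sp.symbols('eta T', positive=True)
V = sp.Function('V')

# general solution of  xi V' + 2 V = 0  is V = C1 / xi^2
sol = sp.dsolve(xi * V(xi).diff(xi) + 2 * V(xi), V(xi)).rhs
assert sp.diff(sol * xi**2, xi) == 0

# eta/xi^2 is non-integrable at xi = 0, so it cannot satisfy the area condition there
assert sp.integrate(eta / xi**2, (xi, 0, 1)) == sp.oo

# but if V vanishes on (-infty, T), the area condition  int V dxi = 2  forces eta = 2T
mass = sp.integrate(eta / xi**2, (xi, T, sp.oo))   # = eta / T
assert sp.solve(sp.Eq(mass, 2), eta) == [2 * T]
```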
The rest of the section is devoted to the proof of Theorem \[Mth2\]. We begin by showing the following properties of the rescaled solution $\tilde v$.
\[lem-ptv\] The rescaled solution $\tilde v$ given by - has the following properties:\
i. $\tilde v(\cdot,\tau) \leq C$, for a constant $C$ independent of $\tau$.\
ii. For any $\xi^- < T$, $ \tilde v(\cdot,\tau) \to 0$, as $\tau \to \infty$, uniformly on $(-\infty,\xi^-] \times [0,2\pi]$.\
iii. Let $\xi (\tau) = (\log \alpha(\tau))/2\tau$, with $\alpha(\tau) = [\tau^2 \, u(0,t)]^{-1}.$ Then, there exists $\tau_0 >0$ and a constant $\eta >0$, independent of $\tau$, such that $$\label{eqn-bbtilv}
\tilde v(\xi,\theta,\tau) \geq \frac \eta{\xi^2}, \qquad \mbox{on}\,\,
\xi \geq \xi (\tau),\, \, \tau \geq \tau_0.$$ In addition $$\label{eqn-xitau}
\xi(\tau) = T + o(1), \qquad \mbox{as} \,\, \tau \to \infty.$$ iv. $\tilde v(\xi,\theta,\tau)$ also satisfies the upper bound $$\tilde v(\xi,\theta,\tau) \leq \frac {C}{\xi^2}, \qquad \mbox{on}\,\,
\xi >0, \, \, \tau \geq \tau_0$$ for some constants $C >0$ and $\tau_0 >0$.
\(i) One can easily show using the maximum principle that $v(\zeta,\theta,t) \leq C/\zeta^2$, for $\zeta \geq s_0$ with $s_0$ sufficiently large and $C$ independent of $t$. This implies the bound $\tilde v(\xi,\theta,\tau) \leq C/\xi^2$, for $\xi \, \tau > s_0$. On the other hand, by Corollary \[cor-astv1\], we have $\tilde v (\xi,\theta,\tau) \leq C$, on $\xi < \xi^- <T$, with $C$ independent of $\tau$. Combining the above, the desired estimate follows.
\(ii) This is shown in Corollary \[cor-astv1\].
\(iii) We have shown in the previous section that the rescaled solution $\bar u(x,\tau) = \tau^2 \, u(x,t)$, $\tau=1/(T-t)$, defined by satisfies the asymptotics $\bar u(x,\tau) \approx 1/(\frac T2 \, |x|^2 +
\alpha(\tau))$, when $|x| \leq \sqrt{\alpha (\tau)}$. Hence $$\tilde v (\xi(\tau), \theta, \tau) = \bar v (\xi(\tau)\, \tau,\theta,\tau) \approx
\frac{e^{2\xi(\tau) \tau}}{\frac T2 \, e^{2\xi(\tau) \tau} + \alpha(\tau)} \approx \frac 1{\frac T2 +1}$$ if $\xi(\tau)=\frac{\log \alpha(\tau) }{2\tau}.$
Observe next that readily follows from Proposition \[prop-1\]. Hence, it remains to show $\tilde v \geq
\eta /\xi^2$, for $\xi \in [\xi(\tau), \infty)$, $\tau_0 \leq \tau < \infty$. To this end, we will compare $\tilde v$ with the subsolution $V_\eta(\xi,\theta) = {\eta}/{\xi^2}$ of equation . According to our claim above, there exists a constant $\eta
>0$, so that $$V_\eta(\xi(\tau),\theta) = \frac{\eta}{\xi(\tau)^2} \leq \tilde
v(\xi(\tau),\theta, \tau).$$ Moreover, by the growth condition , we can make $$\tilde v (\xi, \theta, \tau_0) > \frac {\eta}{\xi^2}, \qquad \mbox{on \,\, }
\xi \geq \xi (\tau_0)$$ by choosing $\tau_0 >0$ and $\eta$ sufficiently small. By the comparison principle, follows.
\(iv) Since $u_0$ is compactly supported and bounded, it follows that $u_0(r) \leq 2\, A/ (r^2\, \log^2 r)$, on $r>1$ for some $A >0$. Since $2(t+A)/(r^2 \, \log^2 r)$ is an exact solution of equation , it follows by the comparison principle that $u(r,t) \leq 2(t+A)/(r^2 \, \log^2 r)$, for $r >1$, which readily implies the desired bound on $\tilde v$, with $C = 2(A+T)$.
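The exactness claim used in part (iv), namely that $2(t+A)/(r^2\log^2 r)$ solves $u_t = \Delta\log u$ away from $r=1$, is easy to confirm symbolically:

```python
import sympy as sp

r, t, A = sp.symbols('r t A', positive=True)

# barrier from part (iv) of the Lemma
u = 2 * (t + A) / (r**2 * sp.log(r)**2)

# radial Laplacian of log u in the plane: f'' + f'/r
lapl_logu = sp.diff(sp.log(u), r, 2) + sp.diff(sp.log(u), r) / r

# u is an exact solution of u_t = Delta log u (away from r = 1)
assert sp.simplify(sp.diff(u, t) - lapl_logu) == 0
```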
\[lem-first-spherical\] For any compact set $K \subset (T,\infty)$, there is a constant $C(K)$ for which $$\label{equation-first-spherical}
\max_{\xi \in K} \left | \int_0^{2\pi}(\log\tilde{v})_{\xi} (\xi,\theta,\tau) \, d\theta \, \right | \le C(K), \qquad \forall \tau \ge 2/T.$$
We integrate in the $\theta$ variable and use the bounds $R \ge - {1}/{t} \geq - 2/T$, for $t \geq T/2$, and ${\tilde v}(\xi,\theta,\tau) \le C$ shown in Lemma \[lem-ptv\], to get $$\int_0^{2\pi} (\log\tilde{v} )_{\xi\xi}(\xi,\theta,\tau) \, d\theta \le C$$ for all $\tau = 1/(T-t) \geq 2/T$. We can now proceed as in the proof of Lemma $4.2$ in [@DD2] to show (\[equation-first-spherical\]).
To simplify the notation, we set $$R_c ( \zeta, \theta, t) = R(r, \theta,t), \qquad \zeta=\log r.$$
\[lem-inf\] For any compact set $K \subset (T,\infty)$, there is a constant $C(K)$ such that for any $\xi_0 \in K$ and $\gamma >0$ $$\min_{[\xi_0,\xi_0 + \gamma \, \frac{\log\tau}{\tau}]\times[0,2\pi]} R_c(\xi \tau,\theta,t)\le \frac{C(K) \, \tau}{\gamma \, \log\tau}.$$
Assume that for some $K$ and $\gamma$, $\min_{[\xi_0,\xi_0 + \gamma\, \frac{\log\tau}{\tau}] \times [0,2\pi]} R_c(\xi\tau,\theta,t) \ge \frac{M\tau}{\gamma \log \tau}$, for $M$ large. Then, it follows from and the lower bound $\tilde v \geq \eta/\xi^2$ shown in Lemma \[lem-ptv\], that for every $\xi \in [\xi_0,\xi_0 + \gamma\, \frac{\log\tau}{\tau}]$ we have $$\begin{aligned}
\int_0^{2\pi}(\log\tilde{v})_{\xi\xi}(\xi,\theta,\tau)\, d\theta &=& - \int_0^{2\pi}R_c(\xi\tau,\theta,t) \, \tilde v(\tau\xi,\theta,t) \, d\theta \\
&\le& - \frac{C}{\xi^2} \, \min_{[\xi_0,\xi_0 + \gamma \, \frac{\log\tau}{\tau}] \times[0,2\pi]} R_c(\xi\tau,\theta,t) \\
&\le& -C_1(K) \, \frac{M \tau}{ \gamma \log \tau}\end{aligned}$$ which combined with Lemma \[lem-first-spherical\] implies $$\begin{aligned}
-C(K) &\le& \int_0^{2\pi} (\log\tilde{v})_\xi(\xi,\theta,\tau) \, d\theta \\
&\le& \int_0^{2\pi} (\log\tilde{v})_\xi (\xi_0,\theta,\tau) \, d\theta
- C_1(K) \, \frac{\gamma \log\tau}{\tau} \frac{M\tau}{\gamma \log \tau} \\
&=& \int_0^{2\pi} (\log\tilde{v})_\xi (\xi_0,\theta,\tau) \, d\theta - C_1(K)\, M \le C(K) - C_1(K) \, M \end{aligned}$$ which is impossible if $M$ is chosen sufficiently large.
\[prop-bcurv\] For every $K \subset (T,\infty)$ compact, there is a constant $C(K)$ depending only on $K$, such that for any $\xi_0 \in K$ $$\max_{[\xi_0,\xi_0 + \frac{\log\tau}{\tau}] \times[0,2\pi]} R_c(\xi\tau,\theta,t)\le \frac{C(K)\, \tau}{\log\tau}.$$
Let $\xi_1\in K$, $\theta_1\in [0,2\pi]$ and $\tau_1$ be arbitrary. Choose $\xi_2$ such that $T < \xi_2 < \min K$ and $\tau_2 $ such that $\xi_1\tau_1 = \xi_2\tau_2$. Since $\xi_2 < \xi_1$, we have $\tau_2 > \tau_1$. Set $t_i= T - 1/{\tau_i}$, $i=1,2$. We next define the set $A_{\xi_2} = \{\xi: \,\, \xi_2 \le \xi \le \xi_2 + \gamma\frac{\log\tau_2}{\tau_2}\}$. Let $\xi_0 \in
A_{\xi_2}$ and $\theta_2\in [0,2\pi]$ be such that $$R_c(\xi_0\tau_2,\theta_2,t_2) = \min_{ (\xi,\theta) \in
A_{\xi_2}\times [0,2\pi] } \, R_c(\xi\tau_2,\theta,t_2)$$ and set $x_1 =
(e^{\xi_1\tau_1},\theta_1)$ and $x_2 = (e^{\xi_0\tau_2},\theta_2)$. Since $\xi_1 \tau_1 = \xi_2 \tau_2 \leq \xi_0 \tau_2$, then $|x_1| \leq |x_2|$. Denoting by ${\mathrm{dist}}_{t_1}(x_1,x_2)$ the distance with respect to the metric $g_{t_1} = u(\cdot, t_1) \, (dx^2 + dy^2)$, we have:
For any $0 < \gamma <1$, there is a constant $C=C(K,\gamma)$ so that $${\mathrm{dist}}_{t_1}(x_1,x_2) \le \frac{C(K,\gamma)}{\tau_1^{1-\gamma}}.$$
[*Proof of Claim.*]{} We have seen in the proof of Lemma \[lem-ptv\] that $u(x,t) \le \frac{C}{|x|^2\log^2|x|}$, for all $|x| \ge 1$ and all $t\in [0,T)$. If $\sigma$ is the euclidean segment connecting $x_1$ and $x_2$, estimating its length in the metric $g_{t_1}= u(\cdot, t_1) \, (dx^2 + dy^2)$ yields $$\begin{aligned}
\label{equation-dist}
{\mathrm{dist}}_{t_1}(x_1,x_2) &\le& \int_{\sigma} \sqrt{u}(\cdot,t_1)\,d\sigma \nonumber \\
&\le& C \, \frac{|e^{\xi_1\tau_1} - e^{\xi_0\tau_2}|}{e^{\xi_1\tau_1}\xi_1\tau_1} \nonumber \\
&\le& C\, \frac{e^{\xi_2\tau_2 + \gamma\log\tau_2} - e^{\xi_1\tau_1}}{e^{\xi_1\tau_1}\xi_1\tau_1} \nonumber \\
&\le& \frac{C}{\xi_1\tau_1}(e^{\gamma\log\tau_2} - 1) = \frac{C \, \xi_1^{\gamma -1}}{\xi_2^{\gamma}\tau_1^{1-\gamma}} - \frac C{\xi_1\tau_1} \nonumber \\
&\le& \frac{C \, \xi_1^{\gamma -1}}{\xi_2^{\gamma}\tau_1^{1-\gamma}} \le \frac{A(K,\gamma)}{\tau_1^{1-\gamma}}.\end{aligned}$$
To finish the proof of the Proposition, we first apply the Harnack estimate to obtain the inequality $$\frac{1}{\sqrt{R_c(\xi_1\tau_1,\theta_1,t_1)+E}} \ge \frac{1}{\sqrt{R_c(\xi_0\tau_2,\theta_2,t_2)+E}} - C\, (t_2-t_1) -
C\, \frac{{\mathrm{dist}}^2_{t_1}(x_1,x_2)}{t_2-t_1}.$$ By Lemma \[lem-inf\], using also that $\xi_1\tau_1 = \xi_2\tau_2$ (since $\tau_2 > \tau_1$, we have $\xi_1 > \xi_2$), we get
$$\begin{aligned}
\frac{1}{\sqrt{R_c(\xi_1\tau_1,\theta_1,t_1)+E}} &\ge& \frac{C(K,\gamma)\, \sqrt{\log\tau_2}}{\sqrt{\tau_2}} \, - \frac{C\, (\xi_1 - \xi_2)}{\xi_2\tau_2} \,
- \frac{C(K,\gamma)}{\tau_1^{1-2\gamma}} \, \frac {\tau_2} {(\tau_2-\tau_1)} \\
&=&\frac{C(K,\gamma)\, \sqrt{\log\tau_2}}{\sqrt{\tau_2}} \, - \frac{C_1(K,\gamma)}{\tau_2}
- \frac{C_2(K,\gamma)}{\tau_1^{1-2\gamma}}. \end{aligned}$$
In the last inequality we used that $$\frac {\tau_2} {(\tau_2-\tau_1)} = \frac {1} {(1-\tau_1/\tau_2)}
= \frac {1} {(1-\xi_2/\xi_1)} = \frac {\xi_1} {(\xi_1-\xi_2)}$$ and that $(\xi_1-\xi_2)/\xi_2$ and $\xi_1/(\xi_1-\xi_2)$ depend only on the set $K$.
Take $\gamma = \frac{1}{4}$. Using that $\tau_2/\tau_1=\xi_1/\xi_2$ depends only on $K$, we conclude the inequalities $$\begin{aligned}
\label{equation-curv1}
\frac{1}{\sqrt{R_c(\xi_1\tau_1,\theta_1,t_1)+E}} &\ge& \frac{\tilde C (K) \, \sqrt{\log\tau_2}}{\sqrt{\tau_2}} \nonumber \\
&\ge& \frac{\tilde C_1 (K) \sqrt{\log\tau_1}}{\sqrt{\tau_1}}\end{aligned}$$ for $\tau_1$ sufficiently large, depending only on $K$. Estimate (\[equation-curv1\]) yields the bound $$R_c(\xi_1\tau_1,\theta_1,t_1) \le \frac{C(K) \,\tau_1}{\log\tau_1}$$ finishing the proof of the Proposition.
\[cor-error\] Under the assumptions of Theorem \[Mth2\], we have $$\lim_{\tau \to \infty} \frac 1{\tau} \int_0^{2\pi} (\log {\tilde v})_{\xi\xi}(\xi,\theta,\tau) \, d\theta
=0$$ uniformly on compact subsets of $(T,\infty)$.
We begin by integrating in $\theta$ which gives $$\int_0^{2\pi} (\log {\tilde v})_{\xi\xi}(\xi,\theta,\tau)\, d\theta = - \int_0^{2\pi}
R_c(\xi \tau,\theta,t)\, {\tilde v}(\xi,\theta,\tau)\, d\theta, \quad \tau=\frac1{T-t}.$$ Let $K \subset (T,\infty)$ be compact. By the Aronson-Bénilan inequality and Proposition \[prop-bcurv\] $$- \frac 1t \leq R_c(\xi\, \tau,\theta,t) \leq \frac{C(K) \, \tau}{\log \tau}.$$ Since $\tilde v \leq C$ (by Lemma \[lem-ptv\]), we conclude $$\label{eqn-000}
\left | \frac 1\tau \int_0^{2\pi} (\log {\tilde v})_{\xi\xi}(\xi,\theta,\tau)\, d\theta
\right | \leq \frac{C(K)}{\log \tau}$$ from which the lemma directly follows.
We next introduce the new time variable $$s = \log \tau = - \log (T-t), \qquad s \geq -\log T.$$ To simplify the notation we still call $\tilde v(\xi,\theta, s)$ the solution $\tilde
v$ in the new time scale. Then, it is easy to compute that $\tilde v(\xi,\theta, s)$ satisfies the equation $$\label{eqn-tilvs}
\tilde v_s = e^{-s} \, (\log \tilde v)_{\xi\xi} + e^s (\log \tilde v)_{\theta\theta}+ \xi\, \tilde v_\xi + 2\, \tilde v.$$ For an increasing sequence of times $s_k \to \infty$, we let $$\tilde v_k (\xi,s) = \tilde v (\xi, s+s_k), \qquad - \log T-s_k <
s < \infty.$$ Each $\tilde v_k$ satisfies the equation $$\label{eqn-vkk}
(\tilde v_k)_s = e^{-(s+s_k)} (\log \tilde v_k)_{\xi\xi} + e^{s+s_k}(\log\tilde{v}_k)_{\theta\theta}
+ \xi \, (\tilde v_k)_{\xi} + {2\tilde v_k}$$ and the area condition $$\label{eqn-mcvk}
\int_{-\infty}^\infty\int_0^{2\pi} \tilde v_k(\xi,\theta,s)\, d\theta \, d\xi = 4\pi .$$ Defining the functions $$W_k(\eta,s) = \int_{\eta}^{\infty}
\int_0^{2\pi}\tilde{v}_k(\xi,\theta,s)\,d\theta\,d\xi, \quad \eta \in (T,\infty), \,
- \log T -s_k < s < \infty$$ we have:
\[prop-ock2\] Passing to a subsequence, $\{W_k\}$ converges uniformly on compact subsets of $\eta\in (T,\infty)$ to the time-independent steady state ${4\pi T}/{\eta}$. In addition, for any $p \ge 1$ and $\xi_0\in (T,\infty)$, the solution $\tilde v_k(\xi,\theta,s)$ of converges in $L^p([\xi_0,\infty)\times[0,2\pi])$ norm to ${2T}/{\xi^2}$.
We first integrate in $\theta$ and $\xi\in [\eta,\infty)$, for $\eta\in (T,\infty)$, to find that each $W_k$ satisfies the equation $$(W_k)_s = -\int_{\eta}^{\infty}\int_0^{2\pi}\frac{(\log \tilde v_k)_{\xi\xi}}{\tau_k(s)}\,d\theta\,d \xi
+ \int_{\eta}^{\infty}\int_0^{2\pi} \xi \, (\tilde{v}_k)_{\xi}\,d\theta\,d\xi + 2 \, W_k$$ with $\tau_k(s)= e^{s+s_k}$. Integrating by parts the second term yields $$\begin{aligned}
\int_{\eta}^{\infty}\int_0^{2\pi}\xi \, (\tilde{v}_k)_{\xi}\,d\theta\,d\xi &=&
-W_k(\eta,s) - \eta\int_0^{2\pi}\tilde{v}_k(\eta,\theta,s)\,d\theta + \int_0^{2\pi}\lim_{\xi\to\infty}\xi \tilde{v}_k(\xi,\theta,s)\, d\theta \\
&=& -W_k(\eta,s) - \eta\int_0^{2\pi}\tilde{v}_k(\eta,\theta,s)\,d\theta \end{aligned}$$ since due to our estimates on $\tilde{v}$ in Lemma \[lem-ptv\], we have $\lim_{\xi\to\infty} \xi \tilde{v}_k(\xi,\theta,s) = 0$, uniformly in $k$ and $\theta$. We conclude that $$(W_k)_s = - \int_{\eta}^{\infty}\int_0^{2\pi}\frac{(\log \tilde v_k)_{\xi\xi}}{\tau_k(s)}\,d\theta\,d \xi \,
+ W_k + \eta \, (W_k)_{\eta}.$$ Let $K \subset (T,\infty)$ be compact. Then, by $$\left | \int_{\eta}^{\infty}\int_0^{2\pi}\frac{(\log \tilde v_k)_{\xi\xi}}{\tau_k(s)}\,d\theta\,d \xi \, \right | \leq \frac{C(K)}{s+s_k}.$$ Also, by Lemma \[lem-ptv\] and Proposition \[prop-bcurv\], there exists a constant $C=C(K)$ for which the bounds $$\label{equation-uniform-est}
|W_k(\eta,s)| \le C, \quad |(W_k)_s(\eta,s)| \leq C, \quad |(W_k)_{\eta}(\eta,s)| \le C$$ hold, for $s\ge -\log T$. Hence, passing to a subsequence, $W_k(\eta,s)$ converges uniformly on compact subsets of $(T,\infty)\times {\mathbb R}$ to a solution $W$ of the equation $$W_s = \eta \, W_{\eta} + W = (\eta \, W)_{\eta} \qquad \mbox{on} \,\,\, (T,\infty)\times {\mathbb R}$$ with $$\lim_{\eta\to T} W(\eta,s) = 4\pi, \qquad s\in {\mathbb R}$$ and $$\lim_{\eta\to\infty}W(\eta,s) = 0, \qquad s\in {\mathbb R}.$$ As in [@DD2], one can show that $W$ is completely determined by its boundary values at $T$, and is the steady state $$W(\eta,s) = \frac{4\pi \, T}{\eta}, \qquad \eta > T, \,\, s \in {\mathbb R}.$$ To show the $L^p$ convergence, we first notice that by the comparison principle $$v(\zeta,t) \le \frac{2\, T}{(\zeta - \zeta_0)^2}, \qquad \zeta \ge \zeta_0,\,\, 0 < t < T$$ for $\zeta_0=\log \rho$, with $\rho$ denoting the radius of the support of $u_0$. This yields the bound $$\tilde{v}_k(\xi,\theta,s) \le \frac{2\, T}{(\xi - \zeta_0/\tau_k(s))^2}, \qquad \xi \ge T.$$ By the triangle inequality we have $$\begin{aligned}
\int_{\eta}^{\infty} \int_0^{2\pi} |\frac{2\, T}{\xi^2} - \tilde{v}_k |\, d\theta\,d\xi &\le&
\int_{\eta}^{\infty} \int_0^{2\pi} \left ( \frac{2\, T}{(\xi - \zeta_0/\tau_k(s))^2} - \tilde{v}_k \right )\,d\theta\,d\xi \\
&+& \int_{\eta}^{\infty}\int_0^{2\pi} \left (\frac{2\, T}{(\xi - \zeta_0/\tau_k(s))^2} - \frac{2T}{\xi^2} \right ) \, d\theta\,d\xi\end{aligned}$$ where the first integral on the right converges to zero, as $k \to \infty$, by the first part of the Lemma, and the second integral converges to zero since $\zeta_0/\tau_k(s) \to 0$. This gives us the desired $L^1$ convergence, which immediately implies the $L^p$ convergence, since $|\tilde{v}_k(\xi,\theta,s) - {2T}/{\xi^2}|$ is uniformly bounded on $[\xi_0,\infty)\times [0,2\pi]$, for $\xi_0 \ge T$ and $s\ge -\log T$.
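The steady-state computation in the proof can be double-checked symbolically: every $s$-independent solution of $W_s = (\eta W)_\eta$ is of the form $c/\eta$, and the $\theta$-integrated tail mass of the limit profile $2T/\xi^2$ over $(\eta,\infty)$ is $4\pi T/\eta$, consistent with the total rescaled area $4\pi$ in (\[eqn-mtv2\]). A sketch:

```python
import sympy as sp

eta, c = sp.symbols('eta c', positive=True)
xi, T = sp.symbols('xi T', positive=True)

# s-independent solutions of W_s = (eta W)_eta satisfy (eta W)' = 0,
# i.e. eta W = const, so W = c / eta
W = c / eta
assert sp.diff(eta * W, eta) == 0

# theta-integrated tail mass of the limit profile 2T/xi^2 over (eta, infty)
tail = sp.integrate(2 * sp.pi * 2 * T / xi**2, (xi, eta, sp.oo))
assert sp.simplify(tail - 4 * sp.pi * T / eta) == 0
```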
\[rem-pointwise\] The $L^p$ convergence in the previous Lemma, implies that there is a subsequence $k_l$ so that $\tilde{v}_{k_l} (\xi,\theta,s)
\to {2T}/{\xi^2}$ pointwise, almost everywhere on $(T,\infty)\times[0,2\pi]$.
Set $\tau_k(s) = e^{s+s_k}$. Since $$\frac{(\log\tilde{v})_{\xi\xi}(\xi, \theta,\tau)}{\tau} + \tau \, (\log\tilde{v})_{\theta\theta}(\xi,\theta,\tau)
= -\frac{R_c(\xi\, \tau,\theta,t)}{\tau} \, \tilde v(\xi,\theta,\tau),$$ we can rewrite (\[eqn-vkk\]) as $$\label{equation-rewrite}
(\tilde{v}_k)_s = -\frac{R_c }{\tau_k(s)}\, \tilde v_k + \xi \, (\tilde v_k)_{\xi} + {2 \, \tilde v_k}.$$ We divide the equation by $\tilde{v}_k$ and integrate it in $\theta$. Denoting by $Z_k(\xi,s) = \int_0^{2\pi} \log \tilde v_k(\xi,\theta,s)d\theta$ we get $$(Z_k)_s = -\int_0^{2\pi}\frac{R_c}{\tau_k(s)} \, d\theta + \xi (Z_k)_{\xi} + 4\pi.$$ Notice that by Proposition \[prop-bcurv\], we have $$\label{equation-curv-lim}
\left |\frac{R_c}{\tau_k(s)} \right | \le \frac{C(K)}{\log\tau_k(s)}$$ and that by Lemma \[lem-first-spherical\] $$|(Z_k)_{\xi}(\xi,s)| = \left |\int_0^{2\pi}(\log\tilde{v}_k)_{\xi}(\xi,\theta,s) \, d\theta
\right | \le C(K)$$ for $\xi\in K$, a compact subset of $(T,\infty)$ and $s\ge -s_k - \log T$. This also implies the bound $$|(Z_k)_s(\xi,s)| \le C(K).$$
\[prop-ock3\] Passing to a subsequence, $Z_k(\xi,s)$ converges uniformly on compact subsets of $(T,\infty) \times {\mathbb R}$ to a solution $Z$ of the equation $$\label{eqn-V10}
Z_s = \xi \, Z_{\xi} + 4\, \pi \qquad \mbox{on} \,\, (T, \infty) \times {\mathbb R}.$$
Let $ E \subset (T, \infty) \times {\mathbb R}$ be compact. Then according to the previous estimates, the sequence $Z_k$ is equicontinuous on $E$, hence passing to a subsequence it converges to a function $Z$. In addition, the estimate readily implies that $Z$ is a solution of the first order equation .
\[claim-right-thing\] The function $Z$ is given by $$\label{eqn-Z}
Z(\xi,s) = 2\pi \, \log \frac{2T}{\xi^2}, \qquad (\xi,s)\in (T,\infty)\times {\mathbb R}.$$
Since $\int_0^{2\pi}\log\tilde{v}_k(\xi,\theta,s) \, d\theta \to Z(\xi,s)$, uniformly in $\xi$ on compact subsets of $(T,\infty)$, then for any $A > 0$ we have $$\int_{\eta}^{\eta +A}\int_0^{2\pi}\log\tilde{v}_k(\xi,\theta,s)\,d\theta\,d\xi \to
\int_{\eta}^{\eta+A} Z(\xi,s)\,d\xi.$$ By Remark \[rem-pointwise\] and the dominated convergence theorem it follows that for every $A > 0$ we have $$\int_{\eta}^{\eta + A} 2\pi\, \log\frac{2T}{\xi^2}\, d\xi = \int_{\eta}^{\eta + A} Z(\xi,s)\,d\xi$$ implying that $Z$ is given by .
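One can also verify directly that the limit identified in Lemma \[claim-right-thing\] solves the transport equation of Proposition \[prop-ock3\], and that exponentiating its $\theta$-average recovers the outer profile $2T/\xi^2$ of Theorem \[Mth2\]:

```python
import sympy as sp

xi, s, T = sp.symbols('xi s T', positive=True)

# candidate limit from Lemma claim-right-thing (independent of s)
Z = 2 * sp.pi * sp.log(2 * T / xi**2)

# Z solves  Z_s = xi Z_xi + 4 pi
residual = sp.diff(Z, s) - (xi * sp.diff(Z, xi) + 4 * sp.pi)
assert sp.simplify(residual) == 0

# exponentiating the theta-average recovers the outer profile 2T/xi^2
assert sp.simplify(sp.exp(Z / (2 * sp.pi)) - 2 * T / xi**2) == 0
```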
We are finally in position to conclude the proof of Theorem \[Mth2\].
[*Proof of Theorem \[Mth2\].*]{} We begin by observing that by Lemma \[lem-ptv\] $$\tilde v(\xi,\theta,\tau) \to 0, \qquad \mbox{as} \,\, \tau \to \infty$$ uniformly on $(-\infty,\xi^-] \times [0,2\pi]$, for any $-\infty < \xi^- < T$.
To show the convergence on the outer region, observe that by Lemma \[prop-ock3\] and Lemma \[claim-right-thing\] $$\label{equation-uniform}
\int_0^{2\pi}\log\tilde{v}_k(\xi,\theta,s) \, d\theta \to 2\pi\log\frac{2T}{\xi^2}$$ uniformly on compact subsets of $(T, \infty) \times (-\infty,\infty).$ Set $$\underline{v}_k(\xi,s) = \min_{\theta\in [0,2\pi]}\tilde{v}_k(\xi,\theta,s) \quad \mbox{and} \quad \overline{v}_k(\xi,s) = \max_{\theta\in [0,2\pi]}\tilde{v}_k(\xi,\theta,s).$$ Let us recall that ${\mathrm{supp}} \, u_0 \subset B_\rho(0)$. By the monotonicity property of the solutions shown in Lemma \[lemma-monotonicity\], we have $$\begin{aligned}
\label{equation-squeeze}
2\pi \, \log \underline{v}_k(\xi,s) &\le& \int_0^{2\pi}\log\tilde{v}_k(\xi,\theta,s)\, d\theta
\le 2\pi\, \log\overline {v}_k(\xi,s) \nonumber \\
&\le& 2\pi\, \log\frac{e^{2\xi\tau_k(s)}}{(e^{\xi\tau_k(s)}-\rho)^2} +
2\pi \log\underline{v}_k(\xi + \frac{\log ( 1- \rho \, e^{-\xi\tau_k(s)})}{\tau_k(s)},s) \nonumber \\
&\le& 2\pi\log\frac{e^{2\xi\tau_k(s)}}{(e^{\xi\tau_k(s)}-\rho)^2} +
\int_0^{2\pi}\log\tilde{v}_k(\xi + \frac{\log (1 - \rho \, e^{-\xi\tau_k(s)})}{\tau_k(s)},\theta,s)\, d\theta.\end{aligned}$$ Combining (\[equation-uniform\]) and (\[equation-squeeze\]) yields that $$\label{equation-radial-uniform}
2\pi\log\overline{v}_k(\xi,s) \to 2\pi\log\frac{2T}{\xi^2} \quad \mbox{and} \quad 2\pi\log \underline{v}_k(\xi,s) \to 2\pi\log\frac{2T}{\xi^2}$$ uniformly on compact subsets of $(T,\infty)\times {\mathbb R}$. Since $$\underline{v}_k(\xi,s) \le \tilde{v}_k(\xi,\theta, s) \le \bar{v}_k(\xi,s)$$ the above readily implies that $\tilde v_k(\xi,\theta,s) \to {2T}/{\xi^2}$ uniformly on compact subsets of $(T,\infty)\times [0,2\pi] \times (-\infty,\infty)$. Since the limit is independent of the sequence $s_k \to \infty$, we conclude that $$\tilde v(\xi,\theta,\tau) \to \frac {2T}{\xi^2},
\qquad \mbox{as} \,\, \tau \to \infty$$ uniformly on compact subsets of $(T,\infty)\times [0,2\pi]$, which finishes the proof of Theorem \[Mth2\].
[99]{}
Angenent, S. The zero set of a solution of a parabolic equation, [*J. Reine Angew. Math.*]{} 390 (1988), 79–96.
Angenent, S., Knopf, D., Precise asymptotics of the Ricci flow neckpinch; preprint available at: www.ma.utexas.edu/$\sim$danknopf.
Aronson, D.G., Bénilan P., Régularité des solutions de l’équation de milieux poreux dans ${\bf R}^n$, [*C.R. Acad. Sci. Paris, 288*]{}, 1979, pp 103-105.
Aronson, D.G., Caffarelli,L.A., The initial trace of a solution of the porous medium equation, [*Transactions of the Amer. Math. Soc. 280*]{} (1983), 351–366.
Bertozzi, A.L., The mathematics of moving contact lines in thin liquid films, [*Notices Amer. Math. Soc. 45*]{} (1998), no. 6, pp 689–697.
Bertozzi, A.L., Pugh M., The lubrication approximation for thin viscous films: regularity and long-time behavior of weak solutions, [*Comm. Pure Appl. Math. 49*]{} (1996), no. 2, pp 85–123.
Cao, H.-D., Chow, B., Recent developments on the Ricci flow, [*Bull. Amer. Math. Soc. (N.S.) 36*]{} (1999), no. 1, pp 59–74.
Chow, B., The Ricci flow on the $2$-sphere. J. Differential Geom. 33 (1991), no. 2, 325–334.
Chow, B., On the entropy estimate for the Ricci flow on compact $2$-orbifolds. J. Differential Geom. 33 (1991), no. 2, 597–600.
de Gennes, P.G., Wetting: statics and dynamics, [*Reviews of Modern Physics, 57 No 3*]{}, 1985, pp 827-863.
Daskalopoulos,P., del Pino M.A., On a Singular Diffusion Equation, [*Comm. in Analysis and Geometry, Vol. 3*]{}, 1995, pp 523-542.
Daskalopoulos, P., del Pino M.A., Type II collapsing of maximal solutions to the Ricci flow in ${\mathbb R}^2$, to appear in Ann. Inst. H. Poincaré Anal. Non Linéaire.
Daskalopoulos, P., Hamilton, R., Geometric Estimates for the Logarithmic Fast Diffusion Equation, [*Comm. in Analysis and Geometry*]{}, 2004, to appear.
Daskalopoulos, P., Sesum, N., Eternal solutions to the Ricci flow on ${\mathbb R}^2$, preprint.
Esteban, J.R., Rodríguez, A., Vazquez, J.L., A nonlinear heat equation with singular diffusivity, [*Arch. Rational Mech. Analysis, 103*]{}, 1988, pp. 985-1039.
Galaktionov,V.A., Peletier, L.A., Vazquez, J.L., Asymptotics of the fast-diffusion equation with critical exponent; [*Siam. J. Math. Anal., 31*]{}(1999), 1157–1174.
Galaktionov, V.A., Vazquez, J.L., A stability technique for evolution partial differential equations. A dynamical systems approach, Progress in Nonlinear Differential Equations and their Applications, 56, Birkhäuser Boston, Inc., Boston, MA, 2004.
Hamilton, R., Yau, S-T, The Harnack estimate for the Ricci flow on a surface - Revisited, [*Asian J. Math.*]{}, Vol 1, No 3, pp. 418-421.
Hamilton, R., The Ricci flow on surfaces, [*Contemp. Math., 71*]{}, Amer. Math. Soc., Providence, RI, 1988, pp 237-262.
Hamilton, R. The formation of singularities in the Ricci flow, [*Surveys in differential geometry, Vol. II*]{} pp 7–136, Internat. Press, Cambridge, MA, 1995.
Hamilton, R., The Harnack estimate for the Ricci Flow, J. Differential Geometry [**37**]{} (1993) pp 225-243.
Herrero, M. and Pierre, M., The Cauchy problem for $u_t = \Delta u^m$ when $0<m<1$, [*Trans. Amer. Math. Soc., 291*]{}, 1985, pp. 145-158.
Hsu, S.-Y., Dynamics of solutions of a singular diffusion equation, [*Adv. Differential Equations*]{} [**7**]{} (2002), no. 1, 77–97.
Hsu, S.-Y., Asymptotic profile of solutions of a singular diffusion equation as $t\to\infty$, [*Nonlinear Anal.*]{} [**48**]{} (2002), no. 6, Ser. A: Theory Methods, 781–790.
Hsu, S.-Y., Large time behaviour of solutions of the Ricci flow equation on $R^2$, [*Pacific J. Math.*]{} [**197**]{} (2001), no. 1, 25–41.
Hsu, S.-Y., Asymptotic behavior of solutions of the equation $u_t=\Delta \log u$ near the extinction time, [*Advances in Differential Equations*]{}, [**8**]{}, No 2, (2003), pp 161–187.
Hui, K.-M., Singular limit of solutions of the equation $u_t=\Delta({u^m}/m)$ as $m\to0$, [*Pacific J. Math. 187*]{} (1999), no. 2, 297–316.
King, J.R., Self-similar behavior for the equation of fast nonlinear diffusion, [*Phil. Trans. R. Soc. London, A 343*]{}, (1993), pp 337–375.
Rodriguez, A., Vazquez, J.L., Esteban, J.R., The maximal solution of the logarithmic fast diffusion equation in two space dimensions, [*Adv. Differential Equations 2*]{} (1997), no. 6, pp 867–894.
Wu, L.-F., A new result for the porous medium equation derived from the Ricci flow, [*Bull. Amer. Math. Soc., 28*]{}, 1993, pp 90-94.
Wu, L.-F., The Ricci Flow on Complete ${\bf R}^2$, [*Communications in Analysis and Geometry, 1*]{}, 1993, pp 439-472.
[^1]: $*:$ Partially supported by the NSF grants DMS-01-02252, DMS-03-54639 and the EPSRC in the UK
|
{
"pile_set_name": "arxiv"
}
|
The EU is a political system with a unique structure and functioning, incomparable to anything that has existed before and far removed from any classical national or international model. In such a supranational union, which is neither a pure intergovernmental organization nor a true federal state, political institutions appear vague, somewhat obscure and hard to tell apart.
Are Iran and Saudi Arabia going to war? They are already fighting – by proxy – all over the region. Relations between Saudi Arabia and Iran quickly deteriorated in January 2016 following Riyadh’s execution of Shiite cleric Nimr al-Nimr, but their struggle for power dates back to Iran's Islamic Revolution in 1979. Tehran's influence today extends across a broad area of the Middle East, from Iran in the east to Lebanon in the west.
UNESCO’s Director-General, Irina Bokova and the Italian Minister for Foreign Affairs, Paolo Gentiloni signed in February 2016 in Rome an agreement on the establishment of a Task Force of cultural heritage experts in the framework of UNESCO’s global coalition “Unite for Heritage”. Under the agreement, UNESCO will be able to ask the Italian Government to make experts of the Task Force available for deployment for the conservation of cultural heritage in areas affected by crises.
In October 2016 John Sawers, a former MI6 chief, told BBC that the world was entering an era possibly “more dangerous” than the Cold War, as “we do not have that focus on a strategic relationship between Moscow and Washington”.
Lt. Gen. Eugeny Buzhinsky, head of PIR Centre, a Moscow Think Tank, did maintain: “If we talk about the last Cold War, we are currently somewhere between the erection of the Berlin Wall and the Cuban Missile Crisis but without the mechanisms to manage the confrontation”.
|
{
"pile_set_name": "pile-cc"
}
|
Instead of attaching a complete file, could you please create a diff of your changes against the original file? If possible we'd also prefer it submitted as a commit change to our gerrit instance, see https://wiki.documentfoundation.org/Development/gerrit but that's not strictly necessary if you're not familiar with git and development tools and such.
In any case we'll need your license agreement, apparently we don't have it on file, could you please send us a blanket statement that you contribute all your past and future patches under the MPLv2 and LGPLv3+ licenses? Best on the dev mailing list libreoffice@lists.freedesktop.org so we can link to it from https://wiki.documentfoundation.org/Development/Developers
Something like this does nicely:
All of my past & future contributions to LibreOffice may be
licensed under the MPLv2/LGPLv3+ dual license.
Best use Subject: <your full name> license statement
Sorry for the inconvenience and thank you for cooperating :-)
(In reply to kinfe from comment #0)
> this is modified VCL.xcu file including my language,Tigrigna, default fonts
Hi kinfe/Eike,
Any update regarding these files? In the past year have we merged any default fonts for Tigrigna language (these files or others) ?
As this bug has been dormant for over a year, I'll toss it into NEEDINFO and await further info. If the underlying issue has been resolved, let's close this bug.
Dear Bug Submitter,
This bug has been in NEEDINFO status with no change for at least
6 months. Please provide the requested information as soon as
possible and mark the bug as UNCONFIRMED. Due to regular bug
tracker maintenance, if the bug is still in NEEDINFO status with
no change in 30 days the QA team will close the bug as INSUFFICIENTDATA
due to lack of needed information.
For more information about our NEEDINFO policy please read the
wiki located here:
https://wiki.documentfoundation.org/QA/Bugzilla/Fields/Status/NEEDINFO
If you have already provided the requested information, please
mark the bug as UNCONFIRMED so that the QA team knows that the
bug is ready to be confirmed.
Thank you for helping us make LibreOffice even better for everyone!
Warm Regards,
QA Team
MassPing-NeedInfo-Ping-20170131
Dear Bug Submitter,
Please read this message in its entirety before proceeding.
Your bug report is being closed as INSUFFICIENTDATA due to inactivity and
a lack of information which is needed in order to accurately
reproduce and confirm the problem. We encourage you to retest
your bug against the latest release. If the issue is still
present in the latest stable release, we need the following
information (please ignore any that you've already provided):
a) Provide details of your system including your operating
system and the latest version of LibreOffice that you have
confirmed the bug to be present
b) Provide easy to reproduce steps – the simpler the better
c) Provide any test case(s) which will help us confirm the problem
d) Provide screenshots of the problem if you think it might help
e) Read all comments and provide any requested information
Once all of this is done, please set the bug back to UNCONFIRMED
and we will attempt to reproduce the issue. Please do not:
a) respond via email
b) update the version field in the bug or any of the other details
on the top section of our bug tracker
Warm Regards,
QA Team
MassPing-NeedInfo-20170328
|
{
"pile_set_name": "pile-cc"
}
|
---
author:
- 'Armeen Taeb[^1]'
- 'Arian Maleki[^2]'
- 'Christoph Studer[^3]'
- 'Richard G. Baraniuk'
bibliography:
- 'references2.bib'
title: Maximin Analysis of Message Passing Algorithms for Recovering Block Sparse Signals
---
Keywords: Group sparsity; group LASSO; approximate message passing; phase transition.
Introduction
============
Background
==========
Main results
============
Proofs of the main results
==========================
[^1]: Dept. of Electrical, Computer, and Energy Engineering, University of Colorado at Boulder.
[^2]: Dept. of Statistics, Columbia University.
[^3]: Dept. of Electrical and Computer Engineering, Rice University.
|
{
"pile_set_name": "arxiv"
}
|
---
abstract: 'In three-dimensional turbulent flows, the flux of energy from large to small scales breaks time symmetry. We show here that this irreversibility can be quantified by following the relative motion of several Lagrangian tracers. We find by analytical calculation, numerical analysis and experimental observation that the existence of the energy flux implies that, at short times, two particles separate temporally slower forwards than backwards, and the difference between forward and backward dispersion grows as $t^3$. We also find the geometric deformation of material volumes, surrogated by four points spanning an initially regular tetrahedron, to show sensitivity to the time-reversal with an effect growing linearly in $t$. We associate this with the structure of the strain rate in the flow.'
author:
- Jennifer Jucha
- Haitao Xu
- Alain Pumir
- Eberhard Bodenschatz
title: 'Time-symmetry breaking in turbulence'
---
In turbulent flows, far from boundaries, energy flows from the scale at which it is injected, $l_I$, to the scale where it is dissipated, $l_D$. For intense three-dimensional turbulence, $l_D \ll l_I$, and the energy flux, ${\epsilon}$, is from large to small scales [@frisch95]. As a consequence, time symmetry is broken, since the time reversal $t \rightarrow -t$ would also reverse the direction of the energy flux. Exploring the implications of this time asymmetry on the relative motion between fluid particles is the aim of this Letter.
The simplest problem in this context concerns the dispersion of two particles whose positions, $\mathbf{r}_1(t)$ and $\mathbf{r}_2(t)$, are separated by $\mathbf{R}(t) = \mathbf{r}_2(t) - \mathbf{r}_1(t)$. The growth of the mean squared separation, $\langle \mathbf{R}^2 (t) \rangle$, forwards ($t>0$) and backwards in time ($t<0$) is a fundamental question in turbulence research [@R26] and is also related to important problems such as turbulent diffusion and mixing [@S01; @SC09]. At long times, both for $t > 0$ and $t < 0$, it is expected that the distance between particles increases according to the Richardson prediction as $ \langle \mathbf{R}^2 (t) \rangle \approx g_{f,b} {\epsilon}|t|^3$ [@SC09], with two constants, $g_f$ and $g_b$, for forward and backward dispersion, respectively. The lack of direct evidence for the Richardson $t^3$ regime in well-controlled laboratory flows [@B06a] or in Direct Numerical Simulations (DNS) [@SC09; @BIC14] makes the determination of the constants $g_f$ and $g_b$ elusive, although it is expected that $g_b > g_f$ [@SC09; @SYB05; @B06].
In this Letter we show that [for short times]{} the flow irreversibility imposes a quantitative relation between forward and backward particle dispersion. For particle pairs, the energy flux through scales is captured by $$\left\langle \frac{d}{dt} \left[\mathbf{v}_2(t) - \mathbf{v}_1(t) \right]^2 \Big|_0 \right\rangle = - 4 {\epsilon},
\label{eq:flux_lag}$$ where $\mathbf{v}_{1}(t) $ and $\mathbf{v}_{2}(t)$ are the Lagrangian velocities of the particles and the average is taken over all particle pairs with the same initial separation, $ | \mathbf{R}(0) | =R_0$, in the inertial subrange ($l_D \ll R_0 \ll l_I$). Equation (\[eq:flux\_lag\]) is exact in the limit of very large Reynolds number [@MOA99; @FGV01; @PSC01] and can be seen as the Lagrangian version of the Kolmogorov 4/5-law [@frisch95]. For short times, Eq. (\[eq:flux\_lag\]) implies that backward particle dispersion is faster than the forward case, with $$\langle \mathbf{R}^2(-t) \rangle - \langle \mathbf{R}^2(t) \rangle = 4 {\epsilon}t^3 + {\mathcal{O}}(t^5).
\label{eq:diff_bac_for}$$ The $t^3$ power in Eq. (\[eq:diff\_bac\_for\]) is strongly reminiscent of the Richardson prediction, with the expectation that $g_b > g_f$ at longer times. The relation between the irreversibility predicted by Eq. (\[eq:diff\_bac\_for\]) and the one expected at longer times ($g_b > g_f$), however, remains to be established.
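As an illustrative aside (ours, not part of the original analysis), the short-time identity in Eq. (\[eq:diff\_bac\_for\]) can be checked symbolically. The sketch below, which assumes the sympy library is available, truncates the separation change at $\delta \mathbf{R}(t) = \mathbf{u}(0) t + \mathbf{a}(0) t^2/2$ and confirms that the backward-forward difference is $-2\,(\mathbf{u}\cdot\mathbf{a})\, t^3$ before averaging; the Lagrangian flux relation Eq. (\[eq:flux\_lag\]), i.e. $\langle \mathbf{u}\cdot\mathbf{a} \rangle = -2{\epsilon}$, then yields the $4{\epsilon}t^3$ asymmetry.

```python
import sympy as sp

t = sp.symbols('t')
# relative velocity and acceleration of a particle pair at t = 0
u = sp.Matrix(sp.symbols('u1 u2 u3'))
a = sp.Matrix(sp.symbols('a1 a2 a3'))

# separation change truncated at O(t^2): dR(t) = u t + a t^2/2
dR = u * t + a * t**2 / 2
dR2 = (dR.T * dR)[0, 0]                      # |dR(t)|^2

# backward minus forward squared separation change
diff_bf = sp.expand(dR2.subs(t, -t) - dR2)

# only the odd term survives: -2 (u . a) t^3
assert sp.simplify(diff_bf + 2 * (u.T * a)[0, 0] * t**3) == 0
```

Averaging over pairs and substituting $\langle \mathbf{u}\cdot\mathbf{a} \rangle = -2{\epsilon}$ recovers the $4{\epsilon}t^3$ difference of Eq. (\[eq:diff\_bac\_for\]); the neglected jerk term only contributes at ${\mathcal{O}}(t^5)$ and higher.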
Whereas the difference between backward and forward pair dispersion at short times is weak ($\propto t^3$), we found a strong manifestation of the time asymmetry when investigating multi-particle dispersion. The analysis of the deformation of an initially regular tetrahedron consisting of four tracer particles [@PSC00; @XOB08] reveals a stronger flattening of the shape forwards in time, but a stronger elongation backwards in time. We relate the observed time asymmetry in the shape deformation to a fundamental property of the flow [@Betchov56; @Siggia81; @Ashurst87; @Pumir13] by investigating the structure of the perceived rate of strain tensor based on the velocities of the four Lagrangian particles [@XPB11].
Our finding relies on analytical calculation, DNS, and data from 3D Lagrangian particle tracking in a laboratory flow. The experiments were conducted with a von Kármán swirling water flow. The setup consisted of a cylindrical tank with a diameter of $\unit{48.3}{\centi\meter}$ and a height of $\unit{60.5}{\centi\meter}$, with counterrotating impellers installed at the top and bottom. Its geometry is very similar to the one described in Ref. [@O06], but with a slightly different design of the impellers to weaken the global structure of the flow. At the center of the tank, where the measurements were performed, the flow is nearly homogeneous and isotropic. As tracers for the fluid motion, we used polystyrene microspheres of density $\rho=1.06\, \rho_{\text{water}} $ and a diameter close to the Kolmogorov length scale, $\eta$. We measured the trajectories of these tracers using Lagrangian particle tracking with sampling rates exceeding 20 frames per Kolmogorov time scale, $\tau_\eta$ [@O06a; @X08]. We obtained three data sets at $R_\lambda=270$, $350$ and $690$, with corresponding Kolmogorov scales $\eta=\unit{105}{\micro\meter}$, $\unit{66}{\micro\meter}$, and $\unit{30}{\micro\meter}$ and $\tau_\eta=\unit{11.1}{\milli\second}$, $\unit{4.3}{\milli\second}$, and $\unit{0.90}{\milli\second}$, respectively. The integral length scales of $L\approx\unit{5.5}{\centi\meter}$ for the first two and $L\approx\unit{7.0}{\centi\meter}$ for the last data set are both smaller than the size of the measurement volume, which is approximately $(\unit{8}{\centi\meter})^3$. Many independent, one-second recordings of $\sim 100$ particles were combined to generate sufficient statistics. For example, the $R_\lambda = 690$ dataset contains 555,479 particle trajectories lasting at least $20 \tau_\eta$. Our experimental results are compared to DNS data obtained from pseudo-spectral codes [@vosskuhle:2013; @li2008; @Y12].
To study the dispersion between two particles, it is more convenient to analyze the change in separation, $\delta \mathbf{R}(t) = \mathbf{R}(t) - \mathbf{R}(0)$, than the separation $\mathbf{R}(t)$ itself [@B50; @O06; @SC09]. We expand $\delta \mathbf{R}(t)$ in a Taylor series and average over many particle pairs with a fixed initial separation $ | \mathbf{R}(0)|=R_0$ to obtain $$\frac{\langle \delta \mathbf{R}(t)^2\rangle}{R_0^2} =
\frac{\langle \mathbf{u}(0)^2\rangle}{R_0^2} t^2
+ \frac{\left\langle \mathbf{u}(0) \cdot \mathbf{a}(0) \right\rangle}{R_0^2} t^3
+ {\mathcal{O}}(t^4) ,
\label{eq:evol_dR2}$$ where $\mathbf{u}(0)$ and $\mathbf{a}(0)$ are the relative velocity and acceleration between the two particles at time $t=0$. Using Eq. reduces the $t^3$ term in Eq. to $-2 (t/t_0)^3$, where $t_0 = (R_0^2/{\epsilon})^{1/3}$ is the (Kolmogorov) time scale characteristic of the motion of eddies of size $R_0$ [@frisch95]. Eq. can thus be expressed as $$\frac{\langle \delta \mathbf{R}(t)^2\rangle}{R_0^2} = \frac{\langle \mathbf{u}(0)^2\rangle}{({\epsilon}R_0)^{2/3}} \Bigl( \frac{t}{t_0} \Bigr)^2 - 2 \Bigl(\frac{t}{t_0} \Bigr)^3 + {\mathcal{O}}(t^4).
\label{eq:evol_dR2_nodim}$$ For short times, the dominant behavior is given by the $t^2$ term in Eq. [@B50], which is even in $t$, and thus reveals no asymmetry in time. The odd $t^3$ term is the first to break the $ t \rightarrow -t$ symmetry. This is better seen from the difference between the forward and backward dispersion, $$\begin{aligned}
\frac{\langle \delta \mathbf{R}(-t)^2- \delta \mathbf{R}(t)^2\rangle}{R_0^2}
& = -2 \frac{\left\langle \mathbf{u}(0) \cdot \mathbf{a}(0) \right\rangle}{R_0^2} t^3 + {\mathcal{O}}(t^5) \nonumber \\
&= 4 ({t}/{t_0})^3 + {\mathcal{O}}(t^5),
\label{eq:Rb_Rf}\end{aligned}$$ which is equivalent to Eq. . We note that the simple form of Eq. , which suggests that the evolution of $\langle \delta \mathbf{R}^2(t) \rangle$ depends on $(t/t_0)$ alone, is accurate only up to ${\mathcal{O}}(t/t_0)^3$. Not all higher-order terms in the Taylor expansion can be reduced to functions of $(t/t_0)$ [@F13].
To test Eq. , we identified particle pairs from our large set of experimental and numerical trajectories with a given initial separation $R_0$ and studied the evolution of $\delta \mathbf{R}(t)^2$, both forwards and backwards in time. One of the difficulties of reliably measuring $\langle \delta \mathbf{R}(t)^2 \rangle$ in experiments comes from the finite size of the measurement volume in which particles are tracked. The residence time of particle pairs in the measurement volume decreases with the separation velocity, inducing a bias. We analyze how this affects the results in the Appendix and show that the effect is weak. The very good agreement between experiments and DNS convinces us that the finite-volume bias does not alter our results.
![(color online). The difference between the backward and forward mean squared relative separation, $\langle \delta \mathbf{R}(-t)^2 - \delta \mathbf{R}(t)^2 \rangle$, compensated using Eq. . The symbols correspond to experiments: circles for $R_\lambda = 690$ ($R_0/\eta = 267,\,333,\,400$), stars for $R_\lambda = 350$ ($R_0/\eta = 152,\,182,\,212$), and squares for $R_\lambda = 270$ ($R_0/\eta = 95,\,114,\,133$). The lines correspond to DNS at $R_\lambda=300$ ($R_0/\eta=19,\,38,\,58,\,77,\,92,\,123$).[]{data-label="figThirdOrder"}](./figure1.eps){height="\picheight"}
Fig. \[figThirdOrder\] shows the difference, $\langle \delta \mathbf{R}^2(-t) - \delta \mathbf{R}^2(t) \rangle$, compensated by $- \frac{\left\langle \mathbf{u}(0) \cdot \mathbf{a}(0) \right\rangle}{2 R_0^2} t^3 $, using Eq. , obtained from both experiments and DNS at 4 different Reynolds numbers. The DNS, $R_\lambda = 300$ data consisted of $32,768$ particle trajectories in a statistically stationary turbulent flow [@vosskuhle:2013] over $\sim 4.5$ large-eddy turnover times, allowing particle pairs with a prescribed size to be followed for a long period of time. The data all show a clear plateau up to $t\approx t_0/10$, in complete agreement with Eq. . At longer times, both experimental and DNS data decrease rapidly towards zero without any sign of the plateau expected from the Richardson prediction, $$\frac{\langle \delta \mathbf{R}(-t)^2- \delta \mathbf{R}(t)^2\rangle }{R_0^2} =(g_b - g_f) \Bigl(\frac{t}{t_0} \Bigr)^3 .
\label{eqRichardson}$$ While the slightly faster decay of the experimental data for $t \gtrsim t_0$ could be due to a residual finite-volume bias, this should not affect the DNS data. Previous experiments at $R_\lambda = 172$ with initial separations in the range $4 \le R_0/\eta \le 28$ suggested a value of the difference of $(g_b - g_f) = 0.6 \pm 0.1$ [@B06]. Fig. \[figThirdOrder\] does not provide evidence for this value, although it does not rule out the existence of a plateau at a lower value of $(g_b - g_f)$. Note that Eq. predicts the time irreversibility caused by the energy flux to persist into the inertial range and remarkably to grow as $t^3$ as well. It is therefore tempting to draw an analogy between Eq. , which is exact and valid at short times, and the expected Richardson regime at longer times [@B12]. The fact that a plateau corresponding to $(g_b - g_f)$ would be substantially lower than the value of $4$ given by Eq. indicates that the connection between the short-time behavior, Eq. , and the longer-time behavior, Eq. , requires a deeper understanding.
The time irreversibility predicted by Eq. for particle pair separations grows slowly at small times, $\propto t^3$. We discuss below a stronger ($\propto t$) manifestation of the time irreversibility by analyzing the evolution of four particles initially forming a regular tetrahedron. Additionally, the motion of tetrahedra provides insight into the structure of a flow [@CPS99; @PSC00; @XOB08; @XPB11; @Pumir13] and in fact into the origin of the irreversibility observed in particle pair separation.
The geometry of a set of four points $({\mathbf}{x}_1, ... {\mathbf}{x}_4)$, i.e., a tetrahedron, can be effectively described by three vectors. The position of the tetrahedron is immaterial in a homogeneous flow. The shape tensor, $G_{ij} = \sum_a (x_{a,i} - x_{C,i})(x_{a,j} - x_{C,j})$, where $x_{a,i}$ is the $i^{th}$ component of ${\mathbf}{x}_a$, provides an effective description of the tetrahedron geometry. The radius of gyration of the tetrahedron, $R^2(t) = \text{tr}(\mathbf{G})=\frac14 \sum_{a<b} |{\mathbf}{x}_a(t) - {\mathbf}{x}_b(t)|^2$, is simply given by the trace of $\mathbf{G}$. The shape is described by the three eigenvalues $g_i$ of $G$, with $g_1\geq g_2 \geq g_3$. For a regular tetrahedron, where all edges have the same length, all three eigenvalues are equal. For $g_1 \gg g_2\approx g_3$, the tetrahedron is needle-like, while $g_1\approx g_2 \gg g_3$ represents a pancake-like shape.
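As a quick numerical sanity check (ours, assuming numpy), the sketch below builds a regular tetrahedron of edge length $R_0$, forms the shape tensor $\mathbf{G}$, and verifies that all three eigenvalues equal $R_0^2/2$ and that the radius of gyration satisfies $R^2 = \text{tr}(\mathbf{G}) = \frac14 \sum_{a<b} |{\mathbf}{x}_a - {\mathbf}{x}_b|^2$.

```python
import numpy as np
from itertools import combinations

R0 = 2.0  # edge length of the regular tetrahedron (arbitrary)

# regular tetrahedron: alternate corners of a cube, rescaled so edges have length R0
X = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
X *= R0 / (2 * np.sqrt(2))            # cube-corner edge length is 2*sqrt(2)

xc = X.mean(axis=0)                   # centroid
G = sum(np.outer(x - xc, x - xc) for x in X)   # shape tensor G_ij

# all three eigenvalues equal R0^2/2 for a regular tetrahedron
g = np.sort(np.linalg.eigvalsh(G))[::-1]
assert np.allclose(g, R0**2 / 2)

# radius of gyration: tr(G) equals (1/4) * (sum of squared edge lengths)
R2 = np.trace(G)
edges = sum(np.sum((X[a] - X[b])**2) for a, b in combinations(range(4), 2))
assert np.isclose(R2, edges / 4) and np.isclose(R2, 1.5 * R0**2)
```

This reproduces the initial condition $G_{ij}(0) = (R_0^2/2)\,\delta_{ij}$ used in the short-time expansion below.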
![(color online). Eigenvalues of the perceived rate-of-strain tensor, $\lambda_{0,i} t_0$, $(i=1,\,2,\,3)$, defined on tetrahedra with different sizes $R_0 /\eta$. Open symbols are from experiments at $R_\lambda= 690$ and $350$ and filled symbols from DNS at $R_\lambda=300$. The solid lines are the corresponding averages for $i=1$ (top), $2$ (middle), and $3$ (bottom).[]{data-label="figStrain"}](./figure2.eps){height="\picheight"}
The evolution of $\mathbf{G}$ can be conveniently written in the compact form [@Pumir13] $$\frac{{\mathrm{d}}}{{\mathrm{d}}t}\mathbf{G}(t) = \mathbf{M}(t) \mathbf{G}(t) + \mathbf{G}(t) \mathbf{M}^T(t) ,
\label{eq:dG_dt}$$ where $\mathbf{M}(t)$ is the perceived velocity gradient tensor that describes the turbulent flow field seen by the 4 points [@CPS99; @XPB11]. The perceived velocity gradient reduces to the usual velocity gradient when the tetrahedron becomes smaller than the Kolmogorov scale, $\eta$ [@Pumir13]. We solve Eq. for short times using a Taylor expansion around $t=0$ and taking $G_{ij}(0) = (R_0^2/2) \delta_{ij}$ as the initial condition, i.e., the tetrahedra are initially regular with edge lengths $R_0$. The solutions for the average size and shape are $$\begin{aligned}
\langle R^2(t) \rangle & = \frac{R_0^2}{2} \bigg[3 + 2 \text{tr} \langle \mathbf{S}_0^2\rangle t^2 \nonumber\\
& \quad + 2 \text{tr}\left( \frac23 \langle \mathbf{S}_0^3 \rangle +\langle\mathbf{S}_0 \mathbf{\dot{S}}_0 \rangle \right) t^3 + {\mathcal{O}}(t^4) \bigg]
\label{eqRadius}\end{aligned}$$ and $$\begin{aligned}
\langle g_i \rangle &= \frac{R_0^2}{2} \bigg[1 + 2 \langle \lambda_{0,i} \rangle t \nonumber\\
& \quad + \left( 2 \langle \lambda_{0,i}^2 \rangle + \langle \mathbf{\dot{S}}_{0,ii} \rangle \right) t^2 + {\mathcal{O}}(t^3) \bigg].
\label{eqEigen}\end{aligned}$$ At the orders considered, the evolution of the tetrahedron geometry depends only on the perceived rate-of-strain tensor, $\mathbf{S}_0 = \mathbf{S}(0) = \frac12 [ \mathbf{M}(0) +\mathbf{M}(0)^T]$, whose eigenvalues, $\lambda_{0,i}$, are sorted in decreasing order ($\lambda_{0,1} \ge \lambda_{0,2} \ge \lambda_{0,3}$), and on its time-derivative, $\mathbf{\dot{S}}_0 = \frac{{\mathrm{d}}}{{\mathrm{d}}t} \mathbf{S}(t)\big|_0$. In Eq. , all terms are in fact expressed in the eigenbasis of $\mathbf{S}_0$.
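The short-time behavior of Eq. (\[eqEigen\]) can be illustrated with a toy model (ours, with hypothetical strain eigenvalues; assumes numpy): for a constant symmetric $\mathbf{M} = \mathbf{S}_0$, Eq. (\[eq:dG\_dt\]) has the closed-form solution $\mathbf{G}(t) = e^{\mathbf{S}_0 t}\, \mathbf{G}(0)\, e^{\mathbf{S}_0 t}$, and the forward-backward asymmetry of the intermediate eigenvalue grows linearly with rate $2\lambda_{0,2} R_0^2$.

```python
import numpy as np

R0 = 1.0
lam = np.array([0.5, 0.2, -0.7])  # hypothetical traceless strain eigenvalues, lambda_2 > 0

def G(t):
    # constant symmetric M = S_0 in its own eigenbasis:
    # dG/dt = S G + G S^T  =>  G(t) = exp(S t) G(0) exp(S t) = (R0^2/2) exp(2 S t)
    return (R0**2 / 2) * np.diag(np.exp(2 * lam * t))

t = 1e-3
g_fwd = np.sort(np.linalg.eigvalsh(G(t)))[::-1]    # forward in time
g_bwd = np.sort(np.linalg.eigvalsh(G(-t)))[::-1]   # backward in time

# intermediate-eigenvalue asymmetry: g2(t) - g2(-t) ~ 2 lambda_2 R0^2 t > 0
asym = g_fwd[1] - g_bwd[1]
assert asym > 0
assert np.isclose(asym, 2 * lam[1] * R0**2 * t, rtol=1e-5)
```

In this toy model the largest eigenvalue forwards in time comes from $\lambda_{0,1}$ and backwards in time from $\lambda_{0,3}$, consistent with the eigenbasis argument below; real turbulent strain fields fluctuate, so this sketch only captures the leading-order trend.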
We first note that the radius of gyration, $R^2(t)$, can also be expressed as an average over the squares of the edge lengths of the tetrahedron. Thus, Eq. must be consistent with Eq. . This implies that $\text{tr} \langle \mathbf{S}_0^2\rangle = \frac{3}{2 R_0^2} \langle \mathbf{u}(0)^2\rangle$ and $\text{tr}\big( \frac23 \langle \mathbf{S}_0^3 \rangle _t+\langle\mathbf{S}_0 \mathbf{\dot{S}}_0 \rangle \big) = \frac32 \left\langle \mathbf{u}(0)\cdot\mathbf{a}(0) \right\rangle$, which we explicitly confirmed with our data. Furthermore, the incompressibility of the flow imposes that $\mathbf{M}$ (and hence $\mathbf{S}$) is traceless [on average]{}, which means that $\langle \lambda_{0,1} \rangle \geq 0$ and $\langle \lambda_{0,3} \rangle \leq 0$. The generation of small scales by turbulent flows, which plays a key role in the energy cascade, implies that the intermediate eigenvalue of the rate of strain tensor is positive [@Betchov56]. This property also applies to the [perceived]{} velocity gradient tensor in the inertial range [@Pumir13] (Fig. \[figStrain\]). Remarkably, our data suggest that $\langle \lambda_{0,i} \rangle t_0 \approx \text{const}$ over the range of Reynolds numbers and inertial scales covered here. For initially regular tetrahedra of edge length $R_0$, Eq. predicts that $\langle g_i (t) \rangle = \frac12 R_0^2$ at $t=0$ and that $\langle g_i (t) \rangle$ grows [linearly]{} as $R_0^2 \langle \lambda_{0,i} \rangle t$ for small $t$. The tetrahedra obtained experimentally and numerically at $R_\lambda=300$, however, are not strictly regular, but correspond to a set of 4 points whose relative distances are equal to within a fixed relative tolerance in the range $2.5 - 10 \%$. Fig. \[figShape\](a) shows that the linear behavior predicted by Eq. is observed when the tetrahedra are regular, as obtained using the Johns Hopkins University database [@li2008; @Y12] ($R_\lambda = 430$), or when the tolerance is reduced. 
The time asymmetry in this shape evolution, seen from the eigenvalues of $\mathbf{G}$ in Fig. \[figShape\], originates from the positive value of $\langle \lambda_{0,2} \rangle $. For regular tetrahedra, Eq. shows that in the eigenbasis of $\mathbf{S}_0$, the largest eigenvalue of $\mathbf{G}$ is $g_1$ for $t > 0$, and $g_3$ for $t < 0$. The difference between the largest eigenvalues at $t > 0$ (forwards in time) and at $t<0$ (backwards in time) is thus $R_0^2 \langle (\lambda_{0,1} + \lambda_{0,3}) t \rangle = - R_0^2 \langle \lambda_{0,2} t \rangle$. In fact, the difference between the backward and forward growth rates of the intermediate eigenvalue, $\langle g_2 \rangle$, shows an even stronger asymmetry: $$\langle g_{2}(t) - g_{2}(-t) \rangle /[R_0^2 (t/t_0)] = 2 \langle \lambda_{0,2} \rangle t_0 + {\mathcal{O}}(t^2).
\label{eq:g2diff}$$ The expected plateau of $2 \langle \lambda_{0,2} \rangle t_0$ is seen in Fig. \[figShape\](b) when the tetrads are regular, or when the tolerance on the initial edge lengths is reduced.
In summary, we have shown that the relative motion between several Lagrangian particles reveals the fundamental irreversibility of turbulent flows. At short times, the time asymmetry of two-particle dispersion grows as $t^3$, which is deduced from an identity derived from the Navier-Stokes equations in the large $R_\lambda$ limit that expresses the existence of a downscale energy cascade. Our study, however, leaves open the question of the existence of two different constants governing the dispersion forwards and backwards in time in the Richardson regime [@SYB05; @B06]. A stronger manifestation of the time asymmetry, $\propto t$, was observed by studying the shape deformation of sets of four points. This asymmetry can be understood from another fundamental property of turbulence, namely the existence of a positive intermediate eigenvalue of the rate-of-strain tensor [@Betchov56; @Pumir13]. Thus, remarkably, the manifestations of irreversibility are related to fundamental properties of the turbulent flow field.
The time-symmetry breaking revealed by multi-particle statistics is a direct consequence of the energy flux through spatial scales (see also [@FF13]). The very recently observed manifestation of irreversibility [@XPFB14] when following only a single fluid particle, where an intrinsic length scale is lacking, thus presents an interesting challenge to extend the analysis presented here. We expect that further insights into the physics of turbulence can be gained by analyzing the motion of tracer particles.
Appendix
========
The measurement volume in our experiment is finite and particles are thus only tracked for a finite time. The larger the relative velocity between two particles, $| \mathbf{u}(0)|$, the shorter they reside in the measurement volume [@B06; @LBOM07]. The experimentally measured mean squared displacement, $\langle \delta \mathbf{R}^2(t) \rangle_m$, determined by particle pairs which could be tracked up to time $t$, is smaller than the true value $\langle \delta \mathbf{R}^2(t) \rangle$ (see Fig. \[figBiasSketch\]).
To quantitatively analyze this effect, we parametrize the bias in $\langle \delta \mathbf{R}^2(t) \rangle_m$ due to the loss of particles with large relative motions by generalizing Eq. to $$\frac{\langle \delta \mathbf{R}(t)^2\rangle_m}{R_0^2} =
\frac{\langle \mathbf{u}(0)^2\rangle}{({\epsilon}R_0)^{2/3}} f_1(t) \Bigl( \frac{t}{t_0} \Bigr)^2 -
2 f_2(t) \Bigl(\frac{t}{t_0} \Bigr)^3 + {\mathcal{O}}(t^4).
\label{eqBias1}$$ In Eq. , the functions $f_1(t)$ and $f_2(t)$ express that the values of the relative velocities of particles staying in the measurement volume for a time $t$ is [*smaller*]{} than the relative velocity of all particle pairs (see Fig. \[figBiasSketch\]). From our experimental data, we find that $f_i(t)>0.9$ for $t/t_0 <0.2$. Additionally, we restrict ourselves to particle pairs that can be tracked in the interval $[-t , t]$, ensuring that $f_i(t) = f_i(-t)$. We thus find that the time asymmetry between backward and forward dispersion is $$\frac{\langle \delta \mathbf{R}(-t)^2- \delta \mathbf{R}(t)^2\rangle_m}{R_0^2} =4 f_2(t) \Bigl( \frac{t}{t_0} \Bigr)^3 +{\mathcal{O}}(t^5).
\label{eqBias2}$$ The bias in Eq. is due only to the $f_2(t)$ term, and not to the leading term in Eq. . Over the short time interval where Fig. \[figThirdOrder\] shows a plateau, the error due to $f_2(t)$ is smaller than $\sim 10\%$.
![(color online). The blue curve shows the ensemble average for an infinite volume, the red curve the average over a time dependent ensemble for a finite volume. Black curves show examples of single events from these ensembles, with the dashed part not accessible in the case of a finite measurement volume.[]{data-label="figBiasSketch"}](./figure4.eps){width="46.00000%"}
|
{
"pile_set_name": "arxiv"
}
|
Confederación Revolucionaria de Obreros y Campesinos
The Confederación Revolucionaria de Obreros y Campesinos (CROC) is a Mexican trade union confederation. It is one of the most important and influential trade unions in the history of Mexico.
It was founded in April 1952 during a congress convened by four workers' centrals. Until 1980 the CROC had 750,000 members, present in only 17 of the 31 states and the Federal District (Mexico City); in that year its statutes were amended to reorganize the union, replacing the rotating one-year presidency with a leadership headed by a National Secretary General (Secretario General del Comité Ejecutivo Nacional).
It currently has 4.5 million worker members across all 32 states of the country, as well as 17 national industrial confederations and some 3,600 affiliated unions holding 15,000 collective contracts.
External links
History of the Confederación Revolucionaria de Obreros y Campesinos (Revolutionary Confederation of Workers and Peasants)
Category:National trade union centers of Mexico
Category:World Federation of Trade Unions
Category:1952 establishments in Mexico
Category:Trade unions established in 1952
|
{
"pile_set_name": "wikipedia_en"
}
|
Robinsons ready to roll
Twins Tyrell and Tyree Robinson making their marks in football, basketball
A quick look at twin brothers Tyree and Tyrell Robinson (San Diego/Lincoln) dispels one big misconception: They're not identical. Tyree, older by two minutes, is taller by a full inch and likes to distinguish himself by wearing headbands. Tyrell is bulkier and less flashy with his wardrobe choices.
And when the dual-sport athletes take the football field, their differences continue to stack up.
|
{
"pile_set_name": "pile-cc"
}
|
import { Component, Inject, Input } from '@angular/core';
import { MediaObserver } from '@angular/flex-layout';
import { Observable } from 'rxjs';
import { map, startWith } from 'rxjs/operators';
import { API_BASE_URL } from '../../app.tokens';
import { Product } from '../../shared/services';
@Component({
selector: 'nga-product-suggestion',
styleUrls: [ './product-suggestion.component.scss' ],
templateUrl: './product-suggestion.component.html'
})
export class ProductSuggestionComponent {
@Input() products: Product[];
readonly columns$: Observable<number>;
readonly breakpointsToColumnsNumber = new Map([
[ 'xs', 2 ],
[ 'sm', 3 ],
[ 'md', 5 ],
[ 'lg', 2 ],
[ 'xl', 3 ],
]);
constructor(
@Inject(API_BASE_URL) private readonly baseUrl: string,
private readonly media: MediaObserver
) {
// MediaObserver replaces the deprecated ObservableMedia used in older
// versions of flex-layout. Its media$ stream does not emit an event for
// the initial breakpoint (e.g. xs), hence the startWith(3) default below.
this.columns$ = this.media.media$
.pipe(
map(mc => <number>this.breakpointsToColumnsNumber.get(mc.mqAlias)),
startWith(3)
);
}
urlFor(product: Product): string {
return `${this.baseUrl}/${product.imageUrl}`;
}
}
|
{
"pile_set_name": "github"
}
|
/*
* Copyright (c) 2017, 2019, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#include "precompiled.hpp"
#include "jfr/recorder/checkpoint/types/jfrTypeSetUtils.hpp"
#include "oops/instanceKlass.hpp"
#include "oops/oop.inline.hpp"
#include "oops/symbol.hpp"
static JfrSymbolId::CStringEntry* bootstrap = NULL;
JfrSymbolId::JfrSymbolId() :
_sym_table(new SymbolTable(this)),
_cstring_table(new CStringTable(this)),
_sym_list(NULL),
_cstring_list(NULL),
_sym_query(NULL),
_cstring_query(NULL),
_symbol_id_counter(1),
_class_unload(false) {
assert(_sym_table != NULL, "invariant");
assert(_cstring_table != NULL, "invariant");
bootstrap = new CStringEntry(0, (const char*)&BOOTSTRAP_LOADER_NAME);
assert(bootstrap != NULL, "invariant");
bootstrap->set_id(1);
_cstring_list = bootstrap;
}
JfrSymbolId::~JfrSymbolId() {
clear();
delete _sym_table;
delete _cstring_table;
delete bootstrap;
}
void JfrSymbolId::clear() {
assert(_sym_table != NULL, "invariant");
if (_sym_table->has_entries()) {
_sym_table->clear_entries();
}
assert(!_sym_table->has_entries(), "invariant");
assert(_cstring_table != NULL, "invariant");
if (_cstring_table->has_entries()) {
_cstring_table->clear_entries();
}
assert(!_cstring_table->has_entries(), "invariant");
_sym_list = NULL;
_symbol_id_counter = 1;
_sym_query = NULL;
_cstring_query = NULL;
assert(bootstrap != NULL, "invariant");
bootstrap->reset();
_cstring_list = bootstrap;
}
void JfrSymbolId::set_class_unload(bool class_unload) {
_class_unload = class_unload;
}
void JfrSymbolId::on_link(const SymbolEntry* entry) {
assert(entry != NULL, "invariant");
const_cast<Symbol*>(entry->literal())->increment_refcount();
assert(entry->id() == 0, "invariant");
entry->set_id(++_symbol_id_counter);
entry->set_list_next(_sym_list);
_sym_list = entry;
}
bool JfrSymbolId::on_equals(uintptr_t hash, const SymbolEntry* entry) {
assert(entry != NULL, "invariant");
assert(entry->hash() == hash, "invariant");
assert(_sym_query != NULL, "invariant");
return _sym_query == entry->literal();
}
void JfrSymbolId::on_unlink(const SymbolEntry* entry) {
assert(entry != NULL, "invariant");
const_cast<Symbol*>(entry->literal())->decrement_refcount();
}
static const char* resource_to_cstring(const char* resource_str) {
assert(resource_str != NULL, "invariant");
const size_t length = strlen(resource_str);
char* const c_string = JfrCHeapObj::new_array<char>(length + 1);
assert(c_string != NULL, "invariant");
strncpy(c_string, resource_str, length + 1);
return c_string;
}
void JfrSymbolId::on_link(const CStringEntry* entry) {
assert(entry != NULL, "invariant");
assert(entry->id() == 0, "invariant");
entry->set_id(++_symbol_id_counter);
const_cast<CStringEntry*>(entry)->set_literal(resource_to_cstring(entry->literal()));
entry->set_list_next(_cstring_list);
_cstring_list = entry;
}
static bool string_compare(const char* query, const char* candidate) {
assert(query != NULL, "invariant");
assert(candidate != NULL, "invariant");
const size_t length = strlen(query);
return strncmp(query, candidate, length) == 0;
}
bool JfrSymbolId::on_equals(uintptr_t hash, const CStringEntry* entry) {
assert(entry != NULL, "invariant");
assert(entry->hash() == hash, "invariant");
assert(_cstring_query != NULL, "invariant");
return string_compare(_cstring_query, entry->literal());
}
void JfrSymbolId::on_unlink(const CStringEntry* entry) {
assert(entry != NULL, "invariant");
  // free length + 1 bytes to match the allocation in resource_to_cstring()
  JfrCHeapObj::free(const_cast<char*>(entry->literal()), strlen(entry->literal()) + 1);
}
traceid JfrSymbolId::bootstrap_name(bool leakp) {
assert(bootstrap != NULL, "invariant");
if (leakp) {
bootstrap->set_leakp();
}
return 1;
}
traceid JfrSymbolId::mark(const Symbol* symbol, bool leakp) {
assert(symbol != NULL, "invariant");
return mark((uintptr_t)symbol->identity_hash(), symbol, leakp);
}
traceid JfrSymbolId::mark(uintptr_t hash, const Symbol* data, bool leakp) {
assert(data != NULL, "invariant");
assert(_sym_table != NULL, "invariant");
_sym_query = data;
const SymbolEntry& entry = _sym_table->lookup_put(hash, data);
if (_class_unload) {
entry.set_unloading();
}
if (leakp) {
entry.set_leakp();
}
return entry.id();
}
traceid JfrSymbolId::mark(uintptr_t hash, const char* str, bool leakp) {
assert(str != NULL, "invariant");
assert(_cstring_table != NULL, "invariant");
_cstring_query = str;
const CStringEntry& entry = _cstring_table->lookup_put(hash, str);
if (_class_unload) {
entry.set_unloading();
}
if (leakp) {
entry.set_leakp();
}
return entry.id();
}
/*
* jsr292 anonymous classes symbol is the external name +
* the identity_hashcode slash appended:
* java.lang.invoke.LambdaForm$BMH/22626602
*
* caller needs ResourceMark
*/
uintptr_t JfrSymbolId::unsafe_anonymous_klass_name_hash(const InstanceKlass* ik) {
assert(ik != NULL, "invariant");
assert(ik->is_anonymous(), "invariant");
const oop mirror = ik->java_mirror_no_keepalive();
assert(mirror != NULL, "invariant");
return (uintptr_t)mirror->identity_hash();
}
static const char* create_unsafe_anonymous_klass_symbol(const InstanceKlass* ik, uintptr_t hash) {
assert(ik != NULL, "invariant");
assert(ik->is_anonymous(), "invariant");
assert(hash != 0, "invariant");
char* anonymous_symbol = NULL;
const oop mirror = ik->java_mirror_no_keepalive();
assert(mirror != NULL, "invariant");
char hash_buf[40];
sprintf(hash_buf, "/" UINTX_FORMAT, hash);
const size_t hash_len = strlen(hash_buf);
const size_t result_len = ik->name()->utf8_length();
anonymous_symbol = NEW_RESOURCE_ARRAY(char, result_len + hash_len + 1);
ik->name()->as_klass_external_name(anonymous_symbol, (int)result_len + 1);
assert(strlen(anonymous_symbol) == result_len, "invariant");
strcpy(anonymous_symbol + result_len, hash_buf);
assert(strlen(anonymous_symbol) == result_len + hash_len, "invariant");
return anonymous_symbol;
}
bool JfrSymbolId::is_unsafe_anonymous_klass(const Klass* k) {
assert(k != NULL, "invariant");
return k->is_instance_klass() && ((const InstanceKlass*)k)->is_anonymous();
}
traceid JfrSymbolId::mark_unsafe_anonymous_klass_name(const InstanceKlass* ik, bool leakp) {
assert(ik != NULL, "invariant");
assert(ik->is_anonymous(), "invariant");
const uintptr_t hash = unsafe_anonymous_klass_name_hash(ik);
const char* const anonymous_klass_symbol = create_unsafe_anonymous_klass_symbol(ik, hash);
return mark(hash, anonymous_klass_symbol, leakp);
}
traceid JfrSymbolId::mark(const Klass* k, bool leakp) {
assert(k != NULL, "invariant");
traceid symbol_id = 0;
if (is_unsafe_anonymous_klass(k)) {
assert(k->is_instance_klass(), "invariant");
symbol_id = mark_unsafe_anonymous_klass_name((const InstanceKlass*)k, leakp);
}
if (0 == symbol_id) {
Symbol* const sym = k->name();
if (sym != NULL) {
symbol_id = mark(sym, leakp);
}
}
assert(symbol_id > 0, "a symbol handler must mark the symbol for writing");
return symbol_id;
}
JfrArtifactSet::JfrArtifactSet(bool class_unload) : _symbol_id(new JfrSymbolId()),
_klass_list(NULL),
_total_count(0) {
initialize(class_unload);
assert(_klass_list != NULL, "invariant");
}
static const size_t initial_class_list_size = 200;
void JfrArtifactSet::initialize(bool class_unload, bool clear /* false */) {
assert(_symbol_id != NULL, "invariant");
if (clear) {
_symbol_id->clear();
}
_symbol_id->set_class_unload(class_unload);
_total_count = 0;
// resource allocation
_klass_list = new GrowableArray<const Klass*>(initial_class_list_size, false, mtTracing);
}
JfrArtifactSet::~JfrArtifactSet() {
_symbol_id->clear();
delete _symbol_id;
// _klass_list will be cleared by a ResourceMark
}
traceid JfrArtifactSet::bootstrap_name(bool leakp) {
return _symbol_id->bootstrap_name(leakp);
}
traceid JfrArtifactSet::mark_unsafe_anonymous_klass_name(const Klass* klass, bool leakp) {
assert(klass->is_instance_klass(), "invariant");
return _symbol_id->mark_unsafe_anonymous_klass_name((const InstanceKlass*)klass, leakp);
}
traceid JfrArtifactSet::mark(uintptr_t hash, const Symbol* sym, bool leakp) {
return _symbol_id->mark(hash, sym, leakp);
}
traceid JfrArtifactSet::mark(const Klass* klass, bool leakp) {
return _symbol_id->mark(klass, leakp);
}
traceid JfrArtifactSet::mark(const Symbol* symbol, bool leakp) {
return _symbol_id->mark(symbol, leakp);
}
traceid JfrArtifactSet::mark(uintptr_t hash, const char* const str, bool leakp) {
return _symbol_id->mark(hash, str, leakp);
}
bool JfrArtifactSet::has_klass_entries() const {
return _klass_list->is_nonempty();
}
int JfrArtifactSet::entries() const {
return _klass_list->length();
}
void JfrArtifactSet::register_klass(const Klass* k) {
assert(k != NULL, "invariant");
assert(_klass_list != NULL, "invariant");
assert(_klass_list->find(k) == -1, "invariant");
_klass_list->append(k);
}
size_t JfrArtifactSet::total_count() const {
return _total_count;
}
|
{
"pile_set_name": "github"
}
|
export const environment = {
production: true
};
|
{
"pile_set_name": "github"
}
|
---
abstract: 'We present an atomistic self-consistent study of the electronic and transport properties of semiconducting carbon nanotube in contact with metal electrodes of different work functions, which shows simultaneous electron and hole doping inside the nanotube junction through contact-induced charge transfer. We find that the band lineup in the nanotube bulk region is determined by the effective work function difference between the nanotube channel and source/drain electrodes, while electron transmission through the SWNT junction is affected by the local band structure modulation at the two metal-nanotube interfaces, leading to an effective decoupling of interface and bulk effects in electron transport through nanotube junction devices.'
author:
- 'Yongqiang Xue$^{1,*}$ and Mark A. Ratner$^{2}$'
title: 'Electron transport in semiconducting carbon nanotubes with hetero-metallic contacts'
---
Devices based on single-wall carbon nanotubes (SWNTs) [@Dekker; @DeMc] have been progressing at a fast pace, e.g., the performance of carbon nanotube field-effect transistors (NTFET) is approaching that of the state-of-the-art silicon Metal-Oxide-Semiconductor field-effect transistors (MOSFET). [@AvFET; @DaiFET; @McFET] But a general consensus on the physical mechanisms and theoretical models has yet to emerge. A point of continuing controversy in NTFET has been the effect of Schottky barriers at the metal-SWNT interface. [@Barrier; @AvFET1] Since SWNTs are atomic-scale nanostructures in both the axial and the circumferential dimensions, any barrier that may form at the interface has a finite thickness and a finite width. [@AvFET; @XueNT; @AvFET2] In general a microscopic treatment of both the source/drain and gate field modulation effects will be needed to account faithfully for the atomistic nature of the electronic processes in NTFET.
Since the characteristics of the NTFETs depend sensitively on the gate geometry, [@AvFET] a thorough understanding of the Schottky barrier effect in the simpler two-terminal metal-SWNT-metal junction devices is essential in elucidating the switching effect caused by applying a finite gate voltage. [@XueNT] As a basic device building block, the two-terminal device is also of interest for applications in electromechanical and electrochemical sensors, where the conduction properties of the SWNT junctions are modulated by mechanical strain [@NTMe] or molecular adsorption respectively. [@NTCh] Previous works have considered symmetric SWNT junctions with different contact geometries. [@XueNT] Here we consider a SWNT in contact with metallic electrodes of different work functions. Such hetero-metallic junctions are of interest since: (1) The electrode work function difference leads to a contact potential and finite electric field (built-in field) across the junction at equilibrium. A self-consistent analysis of the hetero-metallic junction can shed light on the screening of the applied field by the SWNT channel and the corresponding band bending effect even at zero bias; (2) For SWNTs not intentionally doped, electron and hole doping can be induced simultaneously inside the channel by contacting with high and low work function metals; (3) Since the metallurgy of the metal-SWNT contact is different at the two interfaces, the asymmetric device structure may facilitate separating the interface effect on electron transport from the intrinsic property of the SWNT channel.
The hetero-metallic SWNT junction is shown schematically in Fig. \[xueFig1\], where the ends of an infinitely long SWNT wire are buried inside two semi-infinite metallic electrodes with different work functions. The embedded contact scheme is favorable for the formation of low-resistance contact. For simplicity, we assume the embedded parts of the SWNT are surrounded entirely by the metals with overall cylindrical symmetry around the SWNT axis. [@Note1] For comparison with previous work on symmetric SWNT junctions, we investigate $(10,0)$ SWNTs (with work function of $4.5$ eV [@Dekker]) in contact with gold (Au) and titanium (Ti) electrodes (with work functions of $5.1$ and $4.33$ eV respectively for polycrystalline materials [@CRC]). Choosing the electrostatic potential energy in the middle of the junction and far away from the cylindrical surface of the SWNT as the energy reference, the Fermi-level of the Au-SWNT-Ti junction is the negative of the average metal work functions $E_{F}=-4.715$ eV. The SWNT channel length investigated ranges from $L=2.0,4.1,8.4,12.6,16.9$ nm to $21.2$ nm, corresponding to number of unit cells of $5,10,20,30,40$ and $50$ respectively. We calculate the transport characteristics within the coherent transport regime, as appropriate for such short nanotubes. [@Phonon]
Using a Green’s function based self-consistent tight-binding (SCTB) theory, we analyze the Schottky barrier effect by examining the electrostatics, the band lineup and the transport characteristics of the hetero-metallic SWNT junction as a function of the SWNT channel length. The SCTB model is essentially the semi-empirical implementation of the self-consistent Matrix Green’s function method for *ab initio* modeling of molecular-scale devices, [@XueMol] which takes fully into account the three-dimensional electrostatics and the atomic-scale electronic structure of the SWNT junctions and has been described in detail elsewhere. [@XueNT; @Note2] The SCTB model starts with the semi-empirical Hamiltonian $H_{0}$ of the bare $(10,0)$ SWNT wire using the Extended Huckel Theory (EHT) with non-orthogonal ($sp$) basis sets $\phi_{m}(\vec r)$. [@Hoffmann88] We describe the interaction between the SWNT channel and the rest of the junction using matrix self-energy operators and calculate the density matrix $\rho_{ij}$ and therefore the electron density of the equilibrium SWNT junction from $$\begin{aligned}
\label{GE}
G^{R}
&=& \{ (E+i0^{+})S-H-\Sigma_{L}(E)-\Sigma_{L;NT}(E)
-\Sigma_{R}(E)-\Sigma_{R;NT}(E)\}^{-1}, \\
\rho &=& \int \frac{dE}{2\pi}\,\mathrm{Im}[G^{R}](E)f(E-E_{F}).\end{aligned}$$ Here $S$ is the overlap matrix and $f(E-E_{F})$ is the Fermi distribution in the electrodes. Compared to the symmetric SWNT junctions, here the Hamiltonian describing the SWNT channel, $H=H_{0}+\delta V[\delta \rho]+V_{ext}$, includes the contact potential $V_{ext}$ (taken as a linear voltage ramp here) in addition to the charge-transfer induced electrostatic potential change $\delta V$ ($\delta \rho$ is the density of transferred charge).
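As a minimal numerical illustration of the Green's-function equations above (not the self-consistent EHT implementation used in the paper), the retarded Green's function and equilibrium density matrix can be accumulated on an energy grid for a toy tight-binding chain. The wide-band end self-energies, the orthogonal basis ($S=\mathbb{1}$), the chain parameters, and the density-of-states convention $\rho = -(1/\pi)\int dE\,\mathrm{Im}[G^{R}]f$ (equivalent up to the normalization convention of the text) are all illustrative assumptions:

```python
import numpy as np

def equilibrium_density(H, sigma_L, sigma_R, energies, Ef, kT=0.025):
    """Accumulate the equilibrium density matrix
        rho = -(1/pi) Int dE  Im[G^R](E) f(E - Ef),
        G^R(E) = [(E + i0^+) I - H - Sigma_L(E) - Sigma_R(E)]^{-1},
    on a uniform energy grid (orthogonal basis, S = I)."""
    N = H.shape[0]
    rho = np.zeros((N, N))
    dE = energies[1] - energies[0]
    for E in energies:
        GR = np.linalg.inv((E + 1e-9j) * np.eye(N) - H - sigma_L(E) - sigma_R(E))
        # Fermi function written with tanh to avoid exp overflow far from Ef
        f = 0.5 * (1.0 - np.tanh((E - Ef) / (2.0 * kT)))
        rho += (-dE / np.pi) * GR.imag * f
    return rho
```

With the Fermi level placed far above the band, the trace of the resulting density matrix recovers the total number of sites (the sum rule), a quick consistency check on the sign and normalization.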
The calculated charge transfer per atom and electrostatic potential change along the cylindrical surface of the SWNT for the Au-SWNT-Ti junction are shown in Fig. (\[xueFig2\]). Previous works [@XueNT] have shown that by contacting to the high (low)-work function metal Au (Ti), hole (electron) doping is induced inside the SWNT channel. Here we find simultaneous electron and hole doping inside the SWNT channel for the hetero-metallic Au-SWNT-Ti junction (lower figure in Fig. \[xueFig2\](a)). Since the magnitude of hole doping inside the Au-SWNT-Au junction ($\approx -5.6 \times 10^{-4}$/atom) is much larger than that of the electron doping inside the Ti-SWNT-Ti junction ($\approx 3 \times 10^{-5}$/atom) due to the larger work function difference, the majority of the channel remains hole-doped inside the Au-SWNT-Ti junction. Due to the localized nature of interface bonding, the charge transfer pattern immediately adjacent to the Au(Ti)-SWNT interface remains similar to that of the Au-SWNT-Au (Ti-SWNT-Ti) junction both in magnitude and shape. The short-wavelength oscillation in the transferred charge inside the SWNT channel reflects the atomic-scale variation of charge density within the unit cell of the SWNT. [@XueNT; @Tersoff02]
The contact-induced doping affects the transport characteristics by modulating the electrostatic potential profile along the SWNT junction. We find that inside the SWNT channel, the built-in electric field is screened effectively by the delocalized $\pi$-electron of carbon. So the net electrostatic potential change along the cylindrical surface ($V_{ext}+\delta V[\delta \rho]$) is much more flat than the linear voltage ramp denoting the contact potential except close to the metal-SWNT interface (lower figure of Fig. \[xueFig2\](b)), where its shape remains qualitatively similar to that at the Au (Ti)-SWNT interface of the Au-SWNT-Au (Ti-SWNT-Ti) junction. Due to the confined cylindrical structure of the SWNT channel, the charge-transfer induced electrostatic potential change $\delta V$ decays rapidly in the direction perpendicular to the SWNT axis. This has led to a different physical picture of band bending in symmetric SWNT junctions. [@XueNT] In particular, the band lineup inside the SWNT channel has been found to depend mainly on the metal work function, while interaction across the metal-SWNT interface modulates the band structure close to the interface without affecting the band lineup scheme in the middle of the channel. Similar physical picture applies to the hetero-metallic SWNT junction, where we find that the band lineup in the middle of the Au-SWNT-Ti junction is essentially identical to that of the SWNT junction with symmetric contact to metals with work function of $4.715$ eV. This is examined through the local-density-of-states (LDOS) of the SWNT channel as a function of position along the SWNT axis in Figs. \[xueFig3\] and \[xueFig4\].
The coupling across the metal-SWNT interface and the corresponding strong local field variation immediately adjacent to the Ti-SWNT interface have a strong effect on the SWNT band structure there, which extends to $\sim 4$ nm away from the interface (Fig. \[xueFig3\](a)). The band structure modulation at the Au side is weaker. For the 40-unit cell (16.9 nm) SWNT, the band structure in the middle of the SWNT junction remains essentially unaffected. This is shown in Fig. \[xueFig4\], where we compare the LDOS of the Au-SWNT-Ti junction at the left end, the right end and the middle of the SWNT channel with the corresponding LDOS of the Au-SWNT-Au, Ti-SWNT-Ti junction and the bulk (infinitely long) SWNT wire respectively. Since the magnitude of the built-in electric field is smaller than the charge-transfer induced local field at the metal-SWNT interface, the LDOS at the two ends of the SWNT channel remain qualitatively similar to those of the symmetric SWNT junctions (Figs. \[xueFig4\](a) and \[xueFig4\](c)). Note that the LDOS plotted here has been energetically shifted so that the SWNT bands in the middle of the hetero-metallic junction line up with those of the symmetric SWNT junctions.
The above separation of band lineup scheme at the interface and in the interior of the SWNT junction implies that in NTFETs, the gate segments controlling the device interiors affect the device operation through effective modulation of the work function difference between the source/drain electrode and the bulk portion of the SWNT channel (applying a finite gate voltage to the SWNT bulk leads to an effective modulation of its work function relative to the source/drain electrodes), while the gate segments at the metal-SWNT interfaces affect the device operation by controlling charge injection into the device interior through local modulation of the SWNT band structure and Schottky barrier shapes including height, width and thickness, in agreement with recent lateral scaling analysis of gate-modulation effect [@AvFET3] and interface chemical treatment effect in Schottky barrier NTFETs. [@MolNT] Note that since the band structure modulation at the metal-SWNT interface can extend up to $\sim 4$ nm into the interior of the SWNT junction, it may be readily resolved using scanning nanoprobe techniques. [@NTSTM]
The Schottky barrier effect at the metal-SWNT interface can also be analyzed through the length-dependent conductance and current-voltage (I-V) characteristics of the Au-SWNT-Ti junction, which are calculated using the Landauer formula [@XueMol] $G=\frac{2e^{2}}{h}\int dE\, T(E)\bigl[-\frac{df}{dE}(E-E_{F})\bigr]=G_{Tu}+G_{Th}$ and $I=\int_{-\infty}^{+\infty}dE\, \frac{2e}{h}T(E,V)[f(E-(E_{F}+eV/2))-f(E-(E_{F}-eV/2))]=I_{Tu}+I_{Th}$, and separated into tunneling and thermal-activation contributions as $G_{Tu}=\frac{2e^{2}}{h}T(E_{F})$, $G_{Th}=G-G_{Tu}$ and $I_{Tu}=\frac{2e}{h}\int_{E_{F}-eV/2}^{E_{F}+eV/2} T(E,V)\,dE$, $I_{Th}=I-I_{Tu}$. [@XueNT; @XueMol03]
In general the transmission function is voltage-dependent due to the self-consistent screening of the source-drain field by the SWNT channel at each bias voltage. Since in the case of voltage dropping mostly across the interface, the transmission coefficient is approximately voltage-independent at low-bias, [@XueMol03] here we calculate the I-V characteristics using the equilibrium transmission coefficient instead of the full self-consistent calculation at each bias voltage. We find that the conductance of the Au-SWNT-Ti junction shows a transition from tunneling-dominated to thermal activation-dominated regime with increasing channel length, but the length where this occurs is longer than those of the symmetric Au/Ti-SWNT-Au/Ti junctions (Fig. \[xueFig5\](a)). This is partly due to the fact that the Fermi-level is closer to the mid-gap of the SWNT band inside the channel, partly due to the reduced transmission close to the valence-band edge (Fig. \[xueFig4\](d)) caused by the band structure modulation at the Ti-SWNT interface. Due to the finite number of conduction channels, the increase of the conductance with temperature is rather slow (Fig. \[xueFig5\](b)). [@XueNT] The relative contribution of tunneling and thermal-activation to the room-temperature I-V characteristics is shown in Figs. \[xueFig5\](c) and \[xueFig5\](d) for the 20- and 40-unit cell long (8.4 and 16.9 nm) SWNT respectively, where we see that thermal-activation contribution increases rapidly with bias voltage for the 20-unit cell SWNT junction while the thermal-activation contribution dominates the I-V characteristics at all bias voltages for the 40-unit cell SWNT.
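The tunneling/thermal-activation split used above is straightforward to reproduce numerically. The Python sketch below (our own illustration; the transmission function is supplied by the caller, energies are in eV with $e=1$, and the current is reported in units of $2e/h$) evaluates the Landauer current and its bias-window part:

```python
import numpy as np

def landauer_iv(T, Ef, V, kT=0.025, Egrid=None):
    """Landauer current I = Int dE T(E) [f(E - mu_L) - f(E - mu_R)]
    in units of 2e/h (e = 1), split into the bias-window ('tunneling')
    part I_tu = Int_{Ef-V/2}^{Ef+V/2} T(E) dE and the thermal-activation
    remainder I_th = I - I_tu.  T is a callable of energy (eV)."""
    if Egrid is None:
        Egrid = np.linspace(Ef - 2.0, Ef + 2.0, 20001)
    dE = Egrid[1] - Egrid[0]
    # Fermi function written with tanh to avoid exp overflow far from mu
    f = lambda E, mu: 0.5 * (1.0 - np.tanh((E - mu) / (2.0 * kT)))
    I_total = np.sum(T(Egrid) * (f(Egrid, Ef + V / 2) - f(Egrid, Ef - V / 2))) * dE
    window = np.abs(Egrid - Ef) <= V / 2
    I_tu = np.sum(T(Egrid[window])) * dE   # zero-temperature bias-window term
    return I_tu, I_total - I_tu
```

For an energy-independent transmission the total current equals the window term at any temperature, so the thermal-activation part vanishes; a finite $I_{Th}$ therefore signals transmission varying over a few $kT$ around the band edges, as in the text.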
In conclusion, we have presented an atomistic real-space analysis of Schottky barrier effect in the two-terminal SWNT junction with hetero-metallic contacts, which shows an effective decoupling of interface and bulk effects. Further analysis is needed that treat both the gate and source/drain fields self-consistently in the real space to achieve a thorough understanding of NTFETs.
Author to whom correspondence should be addressed. E-mail: yxue@uamail.albany.edu. URL: http://www.albany.edu/ yx152122. Dekker C 1999 *Phys. Today* [**52**]{}(5) 22 Bachtold A, Haley P, Nakanishi T and Dekker C 2001 *Science* [**294**]{} 1317 Avouris Ph, Appenzellaer J, Martel R and Wind S J 2003 *Proc. IEEE* [**91**]{} 1772 Javey A, Guo J, Wang Q, Lundstrom M and Dai H 2003 *Nature* [**424**]{} 654; Javey A, Guo J, Paulsson M, Wang Q, Mann D, Lundstrom M and Dai H 2004 *Phys. Rev. Lett.* [**92**]{} 106804 Yaish Y, Park J-Y, Rosenblatt S, Sazonova V, Brink M and McEuen P L 2004 *Phys. Rev. Lett.* [**92**]{} 46401 Xue Y and Datta S 1999 *Phys. Rev. Lett.* [**83**]{} 4844; Le[ó]{}nard F and Tersoff J 2000 *Phys. Rev. Lett.* [**84**]{} 4693; Odintsov A A 2000 *Phys. Rev. Lett.* [85]{} 150; Nakanishi T, Bachtold A and Dekker C 2002 *Phys. Rev. B* [**66**]{} 73307 Heinze S, Tersoff J, Martel R, Derycke V, Appenzeller J and Avouris Ph 2002 *Phys. Rev. Lett.* [**89**]{} 106801; Appenzeller J, Knoch J, Radosavljevi[ć]{} M and Avouris Ph 2004 *ibid.* [**92**]{} 226802 Xue Y and Ratner M A 2003 *Appl. Phys. Lett.* [**83**]{} 2429; Xue Y and Ratner M A 2004 *Phys. Rev. B* [**69**]{} 161402(R); Xue Y and Ratner M A 2004 *Phys. Rev. B* In Press (*Preprint* cond-mat/0405465) Appenzeller J, Radosavljevi[ć]{} M, Knoch J and Avouris Ph 2004 *Phys. Rev. Lett.* [**92**]{} 48301 Minot E D, Yaish Y, Sazonova V, Park J-Y, Brink M and McEuen P L 2003 *Phys. Rev. Lett.* [**90**]{} 156401; Cao J, Wang Q and Dai H 2003 *Phys.Rev. Lett.* [**90**]{} 157601 Chiu P-W, Kaempgen M, and Roth S 2004 *Phys. Rev. Lett.* [**92**]{} 246802; Chen G, Bandow S, Margine E R, Nisoli C, Kolmogorov A N, Crespi V H, Gupta R, Sumanasekera G U, Iijima S and Eklund P C 2003 *Phys. Rev. 
Lett.* [**90**]{} 257403 Experimentally low-resistance contacts can be obtained either by growing SWNT’s directly out of the predefined catalyst islands and subsequently covering the catalyst islands with metallic contact pads (Ref. ) or by using standard lithography and lift-off techniques with subsequent annealing at high-temperature (Ref. ). In both cases, the ends of the long SWNT wires are surrounded entirely by the metals with strong metal-SWNT surface chemical bonding, although the exact atomic structure of the metal-SWNT interface remains unclear. Contacts can also be formed by depositing SWNT on top of the predefined metallic electrodes and side-contacted to the surfaces of the metals (side-contact scheme), which corresponds to the weak coupling limit due to the weak van der Waals bonding in the side-contact geometry leading to high contact resistance (Ref. ). Other types of contact may also exist corresponding to intermediate coupling strength. The contact geometry chosen in this work thus serves as a simplified model of the low-resistance contact. A comprehensive study of the contact effects in SWNT junction devices is currently under way and will be reported in a future publication. 1994 *CRC Handbook of Chemistry and Physics* (CRC Press, Boca Raton) Yao Z, Kane C L and Dekker C 2000 *Phys. Rev. Lett.* [**84**]{} 2941; Park J-Y, Rosenblatt S, Yaish Y, Sazonova V, Ustunel H, Braig S, Arias T A, Brouwer P and McEuen P L 2004 *Nano Lett.* [**4**]{} 517 Xue Y, Datta S and Ratner M A 2002 *Chem. Phys.* [**281**]{} 151; See also Datta S 1995 *Electron Transport in Mesoscopic Systems* (Cambridge University Press, Cambridge) The SWNT-metal interaction arises from one discrete cylindrical shell of metal atoms, surrounded by the bulk metal and treated using the Green’s function method as detailed in Ref. . We use a SWNT-metal surface distance of $2.0(\AA)$, close to the average inter-atomic spacing in the SWNTs and metals. Hoffmann R 1988 *Rev. Mod. 
Phys.* [**60**]{} 601; Rochefort A, Salahub D R and Avouris Ph 1999 *J. Phys. Chem. B* [**103**]{} 641 Le[ó]{}nard F and Tersoff J 2002 *Appl. Phys. Lett.* [**81**]{} 4835 Wind S J, Appenzeller J, and Avouris Ph 2003 *Phys. Rev. Lett.* [**91**]{} 58301 Auvray S, Borghetti J, Goffman M F, Filoramo A, Derycke V, Bourgoin J P and Jost O 2004 *Appl. Phys. Lett.* [**84**]{} 5106 Freitag M, Radosavljevi[ć]{} M, Clauss W and Johnson A T 2000 *Phys. Rev. B* [**62**]{} R2307; Venema L C, Janssen J W, Buitelaar M R, Wild[ö]{}er J W G, Lemay S G, Kouwenhoven L P and Dekker C 2000 *Phys. Rev. B* [**62**]{} 5238 Xue Y and Ratner M A 2003 *Phys. Rev. B* [**68**]{} 115406; Xue Y and Ratner M A 2004 *Phys. Rev. B* [**69**]{} 85403
![\[xueFig1\] (Color online) (a) Schematic illustration of the Au-SWNT-Ti junction. The ends of the long SWNT wire are surrounded entirely by the semi-infinite electrodes, with only a finite segment being sandwiched between the electrodes (defined as the channel). Also shown is the coordinate system of the nanotube junction. (b) Schematic illustration of the band diagram in the Au-SWNT-Ti junction. The band alignment in the middle of the SWNT junction is determined by the average of the metal work functions. $W_{1(2)},E_{F}$ denote the work functions and Fermi-level of the bi-metallic junction. ](xueFig1.eps){height="4.0in" width="5.0in"}
![\[xueFig2\] Electrostatics of the Au-SWNT-Ti junction for SWNT channels of different lengths. (a) Upper figure shows the transferred charge per atom as a function of position along the SWNT axis for SWNT channel lengths of $2.0, 8.4, 12.6, 16.9$ and $21.2$ nm. Lower figure shows the magnified view of the transferred charge in the middle of the channel for the longest (21.2 nm) SWNT studied. (b) Upper figure shows the electrostatic potential change at the cylindrical surface of the 20-unit cell (8.4 nm) and 40-unit cell (16.9 nm) SWNTs studied. The dotted line denotes the linear voltage ramp $V_{ext}$ (contact potential) due to the work function difference of gold and titanium. The dashed line shows the charge-transfer induced electrostatic potential change $\delta V(\delta \rho)$. The solid line shows the net electrostatic potential change $V_{ext}+\delta V$. Lower figure shows the magnified view of the electrostatic potential change in the middle of the 40-unit cell SWNT junction. ](xueFig2-1.eps "fig:"){height="3.0in" width="5.0in"} ![\[xueFig2\] Electrostatics of the Au-SWNT-Ti junction for SWNT channels of different lengths. (a) Upper figure shows the transferred charge per atom as a function of position along the SWNT axis for SWNT channel lengths of $2.0, 8.4, 12.6, 16.9$ and $21.2$ nm. Lower figure shows the magnified view of the transferred charge in the middle of the channel for the longest (21.2 nm) SWNT studied. (b) Upper figure shows the electrostatic potential change at the cylindrical surface of the 20-unit cell (8.4 nm) and 40-unit cell (16.9 nm) SWNTs studied. The dotted line denotes the linear voltage ramp $V_{ext}$ (contact potential) due to the work function difference of gold and titanium. The dashed line shows the charge-transfer induced electrostatic potential change $\delta V(\delta \rho)$. The solid line shows the net electrostatic potential change $V_{ext}+\delta V$. Lower figure shows the magnified view of the electrostatic potential change in the middle of the 40-unit cell SWNT junction. ](xueFig2-2.eps "fig:"){height="3.0in" width="5.0in"}
![\[xueFig3\] (Color online) Local density of states (LDOS) as a function of position along the SWNT axis for a SWNT channel length of $16.9$ nm. We show the result when self-consistent SWNT screening of the built-in electric field is included in (a). For comparison we have also shown the result for the non self-consistent calculation in (b). The plotted LDOS is obtained by summing over the $10$ atoms of each carbon ring of the $(10,0)$ SWNT. Note that each cut along the energy axis at a given axial position gives the LDOS of the corresponding carbon ring and each cut along the position axis at a given energy gives the corresponding band shift. ](xueFig3-1.eps "fig:"){height="3.8in" width="5.0in"} ![\[xueFig3\] (Color online) Local density of states (LDOS) as a function of position along the SWNT axis for a SWNT channel length of $16.9$ nm. We show the result when self-consistent SWNT screening of the built-in electric field is included in (a). For comparison we have also shown the result for the non self-consistent calculation in (b). The plotted LDOS is obtained by summing over the $10$ atoms of each carbon ring of the $(10,0)$ SWNT. Note that each cut along the energy axis at a given axial position gives the LDOS of the corresponding carbon ring and each cut along the position axis at a given energy gives the corresponding band shift. ](xueFig3-2.eps "fig:"){height="3.8in" width="5.0in"}
![\[xueFig4\] Local-density-of-states (LDOS) and transmission versus energy (TE) characteristics of the 40-unit cell Au-SWNT-Ti junction. (a) LDOS at the 1st unit cell adjacent to the Au side (left end) of the Au-SWNT-Ti junction (solid line) and the LDOS at the corresponding location of the Au-SWNT-Au junction (dashed line). (b) LDOS in the middle unit cell of the Au-SWNT-Ti junction (solid line) and the LDOS of the bulk (10,0) SWNT (dashed line). (c) LDOS at the 1st unit cell adjacent to the Ti side (right end) of the Au-SWNT-Ti junction (solid line) and the LDOS at the corresponding location of the Ti-SWNT-Ti junction (dashed line). (d) TE characteristics of the Au-SWNT-Ti junction. ](xueFig4.eps){height="4.0in" width="5.0in"}
![\[xueFig5\] (a) Room temperature conductance of the Au-SWNT-Ti junction as a function of SWNT channel length. (b) Temperature dependence of the conductance of the 40-unit cell (16.9 nm) SWNT junction. The room temperature current-voltage characteristics of the 20- and 40-unit cell SWNT junctions are shown in (c) and (d) respectively. ](xueFig5.eps){height="4.0in" width="5.0in"}
///
/// Massively by HTML5 UP
/// html5up.net | @ajlkn
/// Free for personal and commercial use under the CCA 3.0 license (html5up.net/license)
///
/* Wrapper */

#wrapper {
    @include vendor('transition', 'opacity #{_duration(menu)} ease');
    position: relative;
    z-index: 1;
    overflow: hidden;

    > .bg {
        position: absolute;
        top: 0;
        left: 0;
        width: 100%;
        height: 100%;
        background-color: _palette(wrapper-bg);
        background-image: url('../../images/overlay.png'), linear-gradient(0deg, rgba(0,0,0,0.1), rgba(0,0,0,0.1)), url('../../images/bg.jpg');
        background-size: auto, auto, 100% auto;
        background-position: center, center, top center;
        background-repeat: repeat, no-repeat, no-repeat;
        background-attachment: scroll, scroll, scroll;
        z-index: -1;

        &.fixed {
            position: fixed;
            width: 100vw;
            height: 100vh;
        }
    }

    &.fade-in {
        &:before {
            @include vendor('pointer-events', 'none');
            @include vendor('transition', 'opacity 1s ease-in-out');
            @include vendor('transition-delay', '0.75s');
            background: _palette(invert, bg);
            content: '';
            display: block;
            height: 100%;
            left: 0;
            opacity: 0;
            position: fixed;
            top: 0;
            width: 100%;
        }

        body.is-loading & {
            &:before {
                opacity: 1;
            }
        }
    }

    @include orientation(portrait) {
        > .bg {
            background-size: auto, auto, auto 175%;
        }
    }
}
---
abstract: 'We present a fast and versatile method to calculate the characteristic spectrum $h_c$ of the gravitational wave background (GWB) emitted by a population of eccentric massive black hole binaries (MBHBs). We fit the spectrum of a reference MBHB with a simple analytic function and show that the spectrum of any other MBHB can be derived from this reference spectrum via simple scalings of mass, redshift and frequency. We then apply our calculation to a realistic population of MBHBs evolving via 3-body scattering of stars in galactic nuclei. We demonstrate that our analytic prescription satisfactorily describes the signal in the frequency band relevant to pulsar timing array (PTA) observations. Finally we model the high frequency steepening of the GWB to provide a complete description of the features characterizing the spectrum. For typical stellar distributions observed in massive galaxies, our calculation shows that 3-body scattering alone is unlikely to affect the GWB in the PTA band and a low frequency turnover in the spectrum is caused primarily by high eccentricities.'
author:
- |
Siyuan Chen,$^1$[^1] Alberto Sesana$^1$[^2] and Walter Del Pozzo$^{1,2}$\
$^1$School of Physics & Astronomy, University of Birmingham, Birmingham, B15 2TT, UK\
$^2$Dipartimento di Fisica “Enrico Fermi”, Università di Pisa, Pisa I-56127, Italy
bibliography:
- 'bibliography.bib'
date: 'Accepted XXX. Received YYY; in original form ZZZ'
title: Efficient computation of the gravitational wave spectrum emitted by eccentric massive black hole binaries in stellar environments
---
\[firstpage\]
black hole physics – gravitational waves – galaxies: kinematics and dynamics – methods: analytical
Introduction {#sec:Introduction}
============
It is now well established that most (potentially all) massive galaxies harbour massive black holes (MBHs) in their centre [see @KormendyHo:2013 and references therein]. In the standard hierarchical cosmological model [@WhiteRees:1978], present-day galaxies grow in mass and size by accreting cold gas from the cosmic web [@2009Natur.457..451D] and by merging with other galaxies [@1993MNRAS.264..201K]. In a favoured scenario in which MBHs are ubiquitous up to high redshift, following the merger of two galaxies, the central MBHs hosted in their nuclei sink to the centre of the merger remnant, eventually forming a bound binary system [@BegelmanBlandfordRees:1980]. The binary orbit shrinks because of energy and angular momentum exchange with the surrounding environment of stars and cold gas [see @DottiSesanaDecarli:2012 for a recent review], to the point at which gravitational wave (GW) emission takes over, efficiently bringing the pair to coalescence. Since galaxies are observed to merge quite frequently and the observable Universe encompasses several billions of them, a sizeable cosmological population of MBHBs is expected to be emitting GWs at any time [@SesanaVecchioColacino:2008 hereinafter SVC08].
At nHz frequencies, their signal is going to be captured by pulsar timing arrays [PTAs @FosterBacker:1990]. Passing GWs leave an imprint in the times of arrival of ultra-stable millisecond pulsars. By cross correlating data from an ensemble of millisecond pulsars (i.e. from a PTA), this signature can be confidently identified [@HellingsDowns:1983]. Because pulsars are timed on a weekly basis ($\Delta{t}=1$week) over a period ($T$) of many years (almost 30yr for some of them), PTAs are sensitive to GWs in the frequency window $[1/T,1/(2\Delta{t})]\approx [1{\rm nHz},1\mu{\rm Hz}]$.
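As a quick numerical illustration of the window quoted above, the sketch below evaluates $[1/T,\,1/(2\Delta t)]$ for the stated cadence and baseline (the function name and defaults are ours, for illustration only):

```python
# Minimal sketch: the PTA frequency window [1/T, 1/(2*dt)] for an
# observation baseline T and observing cadence dt (values from the text).

YEAR_S = 365.25 * 24 * 3600.0   # one Julian year in seconds
WEEK_S = 7 * 24 * 3600.0        # one week in seconds

def pta_band(T_years=30.0, cadence_s=WEEK_S):
    """Return (f_min, f_max) in Hz for a PTA timed every `cadence_s`
    seconds over a baseline of `T_years` years."""
    T = T_years * YEAR_S
    return 1.0 / T, 1.0 / (2.0 * cadence_s)

f_min, f_max = pta_band()
# f_min is about 1 nHz and f_max just under 1 microhertz, matching the
# [1 nHz, 1 muHz] window quoted in the text.
```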
The European Pulsar Timing Array [EPTA @2016MNRAS.458.3341D], the Parkes Pulsar Timing Array [PPTA @2016MNRAS.455.1751R] and the North American Nanohertz Observatory for Gravitational Waves [NANOGrav @2015ApJ...813...65T] have made considerable advances in increasing the sensitivity of their datasets. The first data release of the International Pulsar Timing Array [IPTA @VerbiestEtAl_IPTA1stData:2016] is paving the way towards an effective combination of all PTA observations into a dataset that has the potential to detect GWs within the next ten years [@2015MNRAS.451.2417R; @2016ApJ...819L...6T]. Moreover, powerful new telescopes such as the SKA pathfinder MeerKAT in South Africa [@2009arXiv0910.2935B] and the 500-m FAST in China [@2011IJMPD..20..989N] will be online in the next couple of years, boosting the odds of GW detection with their exquisite timing capabilities.
The frequency spectrum of the overall GW signal is given by the superposition of all sources emitting at a given frequency. Because of the abundance of MBHBs in the Universe this has been generally described as a stochastic GW background (GWB), characterized, in the case of circular, GW-driven binaries, by a power-law spectrum $h_c\propto f^{-2/3}$ [@2001astro.ph..8028P]. However, two facts have become clear in the past decade. Firstly, to get to the PTA band, MBHBs need to efficiently interact with their stellar environment, which potentially has a double effect on the shape of the GW spectrum. If, at the lower end of the PTA sensitivity window, MBHBs shrink because of interaction with their environment more efficiently than because of GW emission, then the spectrum is attenuated, even showing a potential turnover [@KocsisSesana:2011; @Sesana:2013CQG; @2014MNRAS.442...56R; @2016arXiv160601900K]. Moreover, both scattering of ambient stars and interaction with a circumbinary disk tend to excite the binary eccentricity [@1996NewA....1...35Q; @2005ApJ...634..921A; @2011MNRAS.415.3033R]. This also results in a loss of power at low frequency (eccentric binaries evolve faster and generally emit at higher frequencies), potentially leading to a turnover in the spectrum [@EnokiNagashima:2007; @HuertaEtAl:2015]. Secondly, for $f>10$nHz, the bulk of the signal is provided by sparse massive and/or nearby sources, and cannot be described simply as a stochastic signal. This was first noticed by SVC08, who came to the conclusion that at high frequency the GW signal will be dominated by sparse, individually resolvable sources, leaving behind a stochastic GWB at a level falling below the nominal $f^{-2/3}$ power law.
With the constant improvement of their timing capabilities, PTAs are placing increasingly stringent limits on the amplitude of the expected GWB [@ShannonEtAl_PPTAgwbg:2015; @LentatiEtAl_EPTAgwbg:2015; @ArzoumanianEtAl_NANOGRAV9yrData:2016], and detection is possible within the next decade. One crucial question is then: what astrophysics do we learn from a GWB detection with PTA? This question has been sparsely tackled by a number of authors [see, e.g., @Sesana:2013CQG] but the answers have been mostly qualitative. A full assessment of what we can learn from PTA detection will stem from a combination of all the measurements PTA will be able to make, including: amplitude and shape of the unresolved GW signal, possible non Gaussianity and non-stationarity, statistics and properties of individually resolvable sources. With this long-term goal in mind, a first step is to investigate what information can be retrieved from the [*amplitude and shape*]{} of the GWB.
As part of the common effort of the EPTA collaboration [@2016MNRAS.458.3341D] to detect GWs with pulsar timing, in this paper, we derive the expected spectrum of a GWB for a generic population of eccentric MBHBs evolving in typical stellar environments. Expanding on the work of [@MiddletonEtAl:2016], the goal is to define a model that links the MBHB mass function and eccentricity distribution to the shape of the GWB spectrum. In particular, we find that the astrophysical properties of the MBHBs are reflected in two features of the spectrum. The efficiency of environmental coupling and the MBHB eccentricity might cause a low frequency flattening (or even a turnover) in the spectrum. The shape of the MBHB mass function affects the statistics of bright sources at high frequency, causing a steepening of the unresolved level of the GWB. We develop an efficient (mostly analytical) formalism to generate GW spectra given a minimal number of parameters defining the MBHB mass function, the efficiency of environmental coupling and the eccentricity distribution. In a companion paper we will show how the formalism developed here is suitable to an efficient exploration of the model parameter space, allowing, for the first time, a quantitative estimate of the MBHB population parameters from a putative GWB measurement.
The paper is organized as follows. In Section \[sec:Model\], we derive a versatile and quick analytic approximation to the shape of a GW spectrum produced by eccentric GW driven binaries. In Section \[sec:Coupling\] we study the evolution of eccentric MBHBs in stellar environments with properties constrained by observations of massive spheroids. We derive typical transition frequencies at which GWs take over, coalescence timescales, and we construct a simplified but robust framework to include the scattering-driven phase in the computation of the GW spectrum. Section \[sec:Population\] reports the main results of our investigation. By employing a range of MBHB populations, we demonstrate that our quick approximation is applicable in the PTA frequency window, with little dependence on the detailed properties of the stellar environment. Moreover, we derive a fast way to compute the high frequency steepening of the spectrum to account for the small number statistics of massive, high frequency MBHBs. We discuss our results and describe future applications of our findings in Section \[sec:Conclusions\].
Analytical modelling of the GW spectrum {#sec:Model}
=======================================
The GWB generated by a population of eccentric binaries was first investigated by [@EnokiNagashima:2007] and more recently by [@HuertaEtAl:2015]. In this section we follow the same approach and review their main results. Following [@2001astro.ph..8028P], the characteristic strain $h_c(f)$ of the GW spectrum produced by a population of cosmological MBHBs can be written as
$$h_c^2(f) = \frac{4G}{\pi c^2 f} \int_{0}^{\infty} dz \int_{0}^{\infty} d{\cal M} \frac{d^2n}{dzd{\cal M}} \frac{dE}{df_r}.
\label{eq:hc}$$
Here, $d^2n/dzd{\cal M}$ defines the comoving differential number density (i.e. number of systems per Mpc$^3$) of merging MBHBs per unit redshift and unit chirp mass ${\cal M}=(M_1M_2)^{3/5}/(M_1+M_2)^{1/5}$ – where $M_1>M_2$ are the masses of the binary components – and the [*observed*]{} GW frequency at Earth $f$ is related to the [*emitted*]{} frequency in the source frame $f_r$ via $f_r=(1+z)f$. The evaluation of equation (\[eq:hc\]) involves a double integral in mass and redshift, generally to be performed numerically, and the computation of the energy spectrum $dE/df_r$. For an eccentric MBHB, this is given by a summation of harmonics as: $$\frac{dE}{df_r} = \sum_{n=1}^{\infty} \frac{1}{n} \frac{dE_n}{dt} \frac{dt}{de_n} \frac{de_n}{df_n},
\label{eq:dEdf}$$ where now $f_n=f_r/n$ is the restframe [*orbital*]{} frequency of the binary for which the $n$-th harmonic has an observed frequency equal to $f$ and $e_n$ is the eccentricity of the binary at that orbital frequency. We used the chain rule of differentiation to highlight the role of the eccentricity. The first differential term on the rhs of equation (\[eq:dEdf\]) is the luminosity of the $n$-th GW harmonic given by $$\frac{dE_n}{dt} = \frac{32}{5} \frac{G^{7/3}}{c^5} \mathcal{M}^{10/3} (2\pi f_n)^{10/3} g_n(e_n)
\label{eq:dEdt}$$ where $$\begin{split}
& g_n(e) = \\ & \frac{n^4}{32} \Big[\Big(J_{n-2}(ne)-2eJ_{n-1}(ne)+\frac{2}{n}J_n(ne)+2eJ_{n+1}(ne)-J_{n+2}(ne)\Big)^2 \\ & +(1-e^2)\Big(J_{n-2}(ne)-2J_n(ne)+J_{n+2}(ne)\Big)^2 + \frac{4}{3n^2} J_n^2(ne)\Big],
\end{split}$$ and $J_n$ is the $n$-th Bessel function of the first kind. The other two differential terms describe the evolution of the binary frequency and eccentricity with time, and for an eccentric MBHB driven by GW emission only, are given by $$\begin{aligned}
\frac{df_n}{dt} & = \frac{96}{5} (2\pi)^{8/3} \frac{G^{5/3}}{c^5} \mathcal{M}^{5/3} f_n^{11/3} F(e_n) \label{eq:dfdt}
\\
\frac{de_n}{dt} & = -\frac{1}{15} (2\pi)^{8/3} \frac{G^{5/3}}{c^5} \mathcal{M}^{5/3} f_n^{8/3} G(e_n)
\label{eq:dedt}\end{aligned}$$ where $$\begin{aligned}
F(e) & = \frac{1+(73/24)e^2+(37/96)e^4}{(1-e^2)^{7/2}}
\\
G(e) & = \frac{304e+121e^3}{(1-e^2)^{5/2}}.\end{aligned}$$ By plugging (\[eq:dEdt\]), (\[eq:dfdt\]) and (\[eq:dedt\]) into expression (\[eq:dEdf\]), and the result into equation (\[eq:hc\]), one obtains [@HuertaEtAl:2015] $$\begin{split}
h_c^2(f) = & \frac{4G}{\pi c^2 f} \int_{0}^{\infty} dz \int_{0}^{\infty} d{\cal M}\frac{d^2n}{dzd{\cal M}} \\ & \frac{{\cal M}^{5/3}(\pi G)^{2/3}}{3(1+z)^{1/3} f^{1/3}}\sum_{n=1}^{\infty}\frac{g_n(e_n)}{F(e_n)(n/2)^{2/3}}.
\label{eq:hcgw}
\end{split}$$
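For readers who want to evaluate the harmonic sum above directly, the stdlib-only sketch below implements the ingredient functions $g_n(e)$, $F(e)$ and $G(e)$; a numerically integrated Bessel function stands in for `scipy.special.jv`, which we do not assume is available. As a sanity check, for a circular binary only the $n=2$ harmonic radiates, $g_2(0)=1$, and the sum collapses to 1, recovering the standard $f^{-2/3}$ spectrum:

```python
import math

# Sketch of the ingredients of the harmonic sum in the text: g_n(e)
# (relative GW power in the n-th harmonic), F(e) and G(e).

def bessel_j(n, x, steps=2000):
    """J_n(x) for integer n via the integral representation
    J_n(x) = (1/pi) * Int_0^pi cos(n t - x sin t) dt (trapezoid rule)."""
    h = math.pi / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

def g(n, e):
    """g_n(e) of the text (Peters & Mathews harmonic power function)."""
    J = lambda m: bessel_j(m, n * e)
    a = (J(n - 2) - 2 * e * J(n - 1) + (2.0 / n) * J(n)
         + 2 * e * J(n + 1) - J(n + 2))
    b = J(n - 2) - 2 * J(n) + J(n + 2)
    return (n ** 4 / 32.0) * (a * a + (1 - e * e) * b * b
                              + (4.0 / (3.0 * n * n)) * J(n) ** 2)

def F(e):
    return (1 + (73.0 / 24) * e ** 2 + (37.0 / 96) * e ** 4) / (1 - e ** 2) ** 3.5

def G(e):
    return (304 * e + 121 * e ** 3) / (1 - e ** 2) ** 2.5

# Sanity check: for e = 0 the sum g_n / (F(e_n) (n/2)^(2/3)) reduces to
# the n = 2 term and equals 1, i.e. the circular f^(-2/3) spectrum.
s = sum(g(n, 0.0) / (F(0.0) * (n / 2.0) ** (2.0 / 3.0)) for n in range(1, 6))
```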
We note that Equation (\[eq:hcgw\]) is strictly valid only if the merger happens at a fixed redshift. However, we will see later that the typical merger timescale, $t_c$, of MBHBs can be Gyrs (cf Equation (\[eq:tcoal\]) and figure \[sec:fttcsingle\]), which is comparable to the cosmic expansion time $t_{\rm Hubble}$. Despite this fact, what actually matters is only the last phase of the MBHB inspiral, when the GW power is emitted in the PTA band. Let us consider an optimistic PTA able to probe frequencies down to $\approx 1$nHz (i.e. observing for 30 years). If binaries are circular, then they start to emit in the PTA band only when their orbital frequency is $f_{\rm orb}=0.5$nHz. For typical MBHBs of ${\cal M}>3\times 10^{8}{{\rm M}_\odot}$ [which are those dominating the GWB, see e.g. @SesanaVecchioColacino:2008], the coalescence time at that point is $\tilde{t}_c<0.15$Gyr. The bulk of the PTA signal comes from $z<1.5$ [@2015MNRAS.447.2772R; @2016ApJ...826...11S], where the typical cosmic expansion time is already $t_{\rm Hubble}(z)>1$Gyr. This is almost an order of magnitude larger than $\tilde{t}_c$, which we also stress becomes much shorter with increasing MBHB masses. On the other hand, if binaries are very eccentric, they start to emit significant GW power in the PTA band when their orbital frequency is much lower than the minimum frequency probed by the array. Figure \[fig:speccompare\] shows that, if $e=0.9$, considering only the power emitted since $f_{\rm orb}=0.1$nHz provides a good approximation to the overall spectrum from $f \approx 1$nHz onwards. Although $f_{\rm orb}$ is much lower in this case, eccentric binaries coalesce much faster (see again Equation (\[eq:tcoal\])). For typical MBHBs of ${\cal M}>3\times 10^{8}{{\rm M}_\odot}$ with $e=0.9$, the coalescence time at that point is $\tilde{t}_c<10$Myr. Therefore, $\tilde{t}_c\ll t_{\rm Hubble}$ becomes a better approximation with increasing eccentricity, and Equation (\[eq:hcgw\]) generally provides a good approximation to the GWB.
In practice, equation (\[eq:hcgw\]) is evaluated numerically. For each integration element, the sum in the expression has to be computed by solving numerically equations (\[eq:dfdt\]) and (\[eq:dedt\]) to evaluate $e_n$ at each of the orbital frequencies $f_n$ contributing to the spectrum observed at frequency $f$, and by then computing the appropriate $g_n(e_n)$ function. This procedure is extremely cumbersome and time consuming. [@2009PhRvD..80h4001Y] proposed an analytical approximation for $e(f)$ that helps in speeding up the computation. However, it is accurate only for $e<0.9$, and even then one is left with the computation of the $n$ harmonics and the evaluation of the Bessel functions. Note that the GW energy spectrum of a binary with eccentricity $e$ peaks at the $n_p\approx(1-e)^{-3/2}$ harmonic, with still significant contributions at $n\sim 10n_p$ [@2010PhRvD..82j7501B]. For a MBHB with $e=0.9$ this implies the computation of several hundreds of harmonics.
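The harmonic bookkeeping above is easy to quantify; the snippet below evaluates the quoted estimate $n_p\approx(1-e)^{-3/2}$, with significant power out to $\sim 10\,n_p$:

```python
# Peak harmonic of the GW energy spectrum of an eccentric binary,
# n_p ~ (1 - e)^(-3/2), as quoted in the text.

def peak_harmonic(e):
    return (1.0 - e) ** -1.5

n_p = peak_harmonic(0.9)   # roughly 32 for e = 0.9
# With significant contributions out to ~10 n_p, an e = 0.9 binary
# indeed requires summing several hundred harmonics.
```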
Fitting formula and scaling properties {#sec:fit}
--------------------------------------
![Characteristic amplitude spectrum for different eccentricities calculated with $n = 12500$ harmonics computed with no lower limit on $f_n$.[]{data-label="fig:specint"}](images/specint){width="45.00000%"}
Our first goal is to find an efficient and accurate way to numerically calculate $h_c^2(f)$. Although the double integral might be solvable analytically for a suitable form of $d^2n/dzd{\cal M}$, a numerical evaluation is generally required. We therefore concentrate on the computation of the single integral element. We thus consider a reference system with a unity number density per Mpc$^3$ characterized by a selected chirp mass and redshift. This corresponds to setting $$\frac{d^2n}{dzd{\cal M}}=\delta({\cal M}-{\cal M}_0)\delta(z-z_0)/ \text{Mpc}^3.$$ Equation (\[eq:hcgw\]) then becomes $$\begin{split}
h_{c,0}^2(f) & = \frac{4G^{5/3} \text{Mpc}^{-3}}{3\pi^{1/3} c^2 f^{4/3}} \frac{{\cal M}_0^{5/3}}{(1+z_0)^{1/3}}\sum_{n=1}^{\infty}\frac{g_n(e_n)}{F(e_n)(n/2)^{2/3}}
\end{split}
\label{eq:hc0}$$ To fully define the system we need to specify an initial MBHB eccentricity $e_0$ at a reference orbital frequency $f_0$, so that the eccentricity $e_n=e_n(n,f_0,e_0)$ can be evaluated for the appropriate $n$-th harmonic at the orbital frequency $f_n=f(1+z_0)/n$ via equations (\[eq:dfdt\]) and (\[eq:dedt\]).
We study the behaviour of equation (\[eq:hc0\]) by taking a fiducial binary with ${\cal M}_0=4.16\times10^8{{\rm M}_\odot}$, $z_0=0.02$, $f_0=0.1$nHz and different eccentricities $e_0=0.3, 0.6, 0.9, 0.99$. Results are shown in figure \[fig:specint\]. Obviously, since the binary circularizes because of GW emission, at high frequency all the spectra eventually sit on the same power law. Moreover, the spectra look self-similar, as also noted by [@HuertaEtAl:2015]. This property allows the spectra to be shifted on the $f^{-2/3}$ diagonal, given an analytic fitting expression for one reference spectrum. Self similarity has to be expected because equations (\[eq:dfdt\]) and (\[eq:dedt\]) combine to give [@EnokiNagashima:2007] $$\frac{f}{f_0}=\left(\frac{1-e_0^2}{1-e^2}\left(\frac{e}{e_0}\right)^{12/19}\left(\frac{1+\frac{121}{304}e^2}{1+\frac{121}{304}e_0^2}\right)^{870/2299}\right)^{-3/2}.$$ This means that the eccentricity evolution is just a function of the frequency ratio $f/f_0$ and there is no intrinsic scale in the problem. Any inspiral will thus pass through any given eccentricity at some frequency during the process. A reference binary with $e_0=0.9$ at $f_0=10^{-10}$Hz is simply an earlier stage in the evolution of a binary with a smaller $e$ at a higher $f$, see figure \[fig:specshift\]. Therefore, the spectrum of a binary with a different initial eccentricity $e_t$ specified at a different initial frequency $f_t$ can be simply obtained by shifting the spectrum of the reference binary. What one needs to know is by how much the spectrum has to be shifted along the $f^{-2/3}$ diagonal. To answer this question we identify a reference point of the spectrum. The obvious choice is the peak frequency defined by [@HuertaEtAl:2015]. 
They showed that the deviation of the spectrum of an eccentric binary, defined by fixing the eccentricity $e$ at a given orbital frequency $f$, with respect to its circular counterpart peaks at a frequency $f_p$ given by[^3] $$\frac{f_p}{f} = \frac{1293}{181} \Big[\frac{e^{12/19}}{1-e^2}\big(1+\frac{121e^2}{304}\big)^{870/2299}\Big]^{3/2} \label{eq:fpeak}$$
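Equation (\[eq:fpeak\]) is straightforward to evaluate numerically; a small sketch:

```python
# The peak-to-orbital frequency ratio of equation (fpeak): f_p / f as a
# function of the eccentricity e specified at orbital frequency f.

def fpeak_over_f(e):
    return (1293.0 / 181.0) * (
        e ** (12.0 / 19.0) / (1.0 - e ** 2)
        * (1.0 + 121.0 * e ** 2 / 304.0) ** (870.0 / 2299.0)
    ) ** 1.5

# The ratio grows steeply with eccentricity (f_p ~ 90 f at e = 0.9) and
# vanishes as e -> 0, where the spectrum shows no peaked deviation from
# the circular power law.
```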
![Analytical spectral shift. The upper panel shows the eccentricity evolution over frequency for a fiducial spectrum characterized by the initial conditions $(e_0 = 0.9, \ f_0 = 10^{-10}$Hz$)$ (blue) and a generic spectrum characterized by $(e_t, \ f_t)$ (green). The lower panel shows the respective GW spectra (again, blue for fiducial and green for generic) and the steps involved in the shifting. The two vertical dashed lines mark the ’peak frequencies’ defined in [@HuertaEtAl:2015], the horizontal arrow shifts the fiducial spectrum by $f_{p,t}/f_{p,0}$ (black dashed spectrum), and the vertical arrow moves it up by a factor $(f_{p,t}/f_{p,0})^{-2/3}$, as described in the main text.[]{data-label="fig:specshift"}](images/specshift){width="45.00000%"}
Let us consider two spectra as shown in figure \[fig:specshift\]. The first one is a reference spectrum $h_{c,0}(f)$ defined by $e_0=0.9$ at $f_0=10^{-10}$Hz; the second one is a generic spectrum $h_c(f)$ characterized by a generic value of $e_t$ at a transition frequency $f_t$ typically different from $f_0$. By feeding these inputs into equation (\[eq:fpeak\]) we directly get the two peak frequencies $f_{p,0}$ and $f_{p,t}$ respectively, marked in the lower panel of the figure. We want to compute $h_c(f)$ from $h_{c,0}(f)$. It is clear that the peak frequency at $f_{p,0}$ has to shift to $f_{p,t}$, therefore $h_c(f)$ has to correspond to $h_{c,0}(f')$ where $f'=f(f_{p,0}/f_{p,t})$. However, this transformation just shifts the spectrum horizontally. To get to $h_c(f)$ we still need to multiply $h_{c,0}(f')$ by a factor $(f_{p,t}/f_{p,0})^{-2/3}$. The total shift has therefore the form $$h_c(f) = h_{c,0}\Big(f\frac{f_{p,0}}{f_{p,t}}\Big)\left(\frac{f_{p,t}}{f_{p,0}}\right)^{-2/3}.
\label{eq:hshift}$$ In fact, it is easy to verify that by applying equation (\[eq:hshift\]) to any of the spectra in figure \[fig:specint\] all the other spectra are recovered.
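The shift of equation (\[eq:hshift\]) amounts to a one-liner; the check below verifies that a pure $f^{-2/3}$ power law is invariant under it, which is precisely why spectra can be slid along that diagonal (the reference spectrum is passed in as a generic callable):

```python
# The self-similar shift of equation (hshift): slide a reference
# spectrum hc0 (any callable) along the f^(-2/3) diagonal given the
# reference and target peak frequencies f_p0 and f_pt.

def shift_spectrum(hc0, f, fp0, fpt):
    return hc0(f * fp0 / fpt) * (fpt / fp0) ** (-2.0 / 3.0)

# Consistency check: a pure f^(-2/3) power law maps onto itself under
# the shift, for any choice of f_p0 and f_pt.
powerlaw = lambda f: f ** (-2.0 / 3.0)
```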
All we need then is a suitable fit for a reference MBHB. For this, we take the reference case $f_0=10^{-10}$Hz and $e_0=0.9$ and, based on the visual appearance of the spectrum, we fit a trial analytic function of the form $$h_{c,{\rm fit}}(f) = a_0 \bar{f}^{a_1} e^{-a_2 \bar{f}}+b_0 \bar{f}^{b_1} e^{-b_2 \bar{f}}+c_0 \bar{f}^{-c_1} e^{-c_2/\bar{f}}
\label{eq:hcfit}$$ where $a_i, b_i, c_i$ are constants to be determined by the fit and $\bar{f}=f/(10^{-8}{\rm Hz})$. We find that setting $$\begin{aligned}
a_0&= 7.27\times 10^{-14}\,\,\,\,\,\,\,\,\,\, & a_1&=0.254 & a_2&=0.807\\
b_0&= 1.853\times 10^{-12}\,\,\,\,\,\,\,\,\,\, & b_1&=1.77 & b_2&=3.7\\
c_0&= 1.12\times 10^{-13}\,\,\,\,\,\,\,\,\,\, & c_1&=0.676 & c_2&=0.6\end{aligned}$$ reproduces the spectrum within a maximum error of 1.5% in log-amplitude (i.e. 3.5% in amplitude), as shown in figure \[fig:specfit\]. The figure also compares the analytical fit presented in this paper with that of [@HuertaEtAl:2015]: the low frequency shape (to the left of the peak) is recovered more accurately by equation (\[eq:hcfit\]).
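With the coefficients above, equation (\[eq:hcfit\]) can be evaluated directly; a sketch for the reference binary (${\cal M}_0=4.16\times10^8\,{\rm M}_\odot$, $z_0=0.02$, $e_0=0.9$ at $f_0=10^{-10}$ Hz):

```python
import math

# The three-term fit of equation (hcfit), with the best-fit
# coefficients quoted in the text.

FIT_A = (7.27e-14, 0.254, 0.807)
FIT_B = (1.853e-12, 1.77, 3.7)
FIT_C = (1.12e-13, 0.676, 0.6)

def hc_fit(f_hz):
    """h_c,fit(f) of equation (hcfit); f_hz is the observed frequency."""
    fb = f_hz / 1e-8                    # \bar{f} = f / 10^-8 Hz
    a0, a1, a2 = FIT_A
    b0, b1, b2 = FIT_B
    c0, c1, c2 = FIT_C
    return (a0 * fb ** a1 * math.exp(-a2 * fb)
            + b0 * fb ** b1 * math.exp(-b2 * fb)
            + c0 * fb ** (-c1) * math.exp(-c2 / fb))
```

At high frequency the two exponentially damped terms die off and the third term dominates, giving a local logarithmic slope of $-c_1=-0.676$, close to the circular $-2/3$ law, as expected from figure \[fig:specint\].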
![Gravitational wave spectrum $h_c(f)$ for the reference binary described in the text, computed by summing $n = 12500$ harmonics (dashed line) compared to the best fit $h_{c,{\rm fit}}(f)$ with an analytic function of the form given by equation (\[eq:hcfit\]) (solid line) and by [@HuertaEtAl:2015] (dotted line). The lower panel shows the difference ${\rm log}_{10}h_{c,{\rm fit}}-{\rm log}_{10}h_{c}$ as a function of frequency.[]{data-label="fig:specfit"}](images/specfit){width="45.00000%"}
With this fitting formula in hand, equation (\[eq:hshift\]) readily enables the analytical evaluation of the spectrum for any desired pair of reference values $f_t$, $e_t=e(f_t)$ (note that those can be functions of the MBHB parameters, e.g. its chirp mass, or of the environment in which the binary evolves, as we will see in Section \[sec:Coupling\]). Moreover, equation (\[eq:hc0\]) shows that the spectrum of a binary with different chirp mass and redshift can be simply obtained by multiplying $h_{c,{\rm fit}}^2(f)$ by $({\mathcal{M}}/{\mathcal{M}_0})^{5/3}$ and $(({1+z})/({1+z_0}))^{-1/3}$, respectively. Therefore, the overall spectrum of the MBHB population can be generated from $h_{c,{\rm fit}}(f)$ as $$\begin{split}
h_c^2(f) = & \int_{0}^{\infty} dz \int_{0}^{\infty} d{\cal M} \frac{d^2n}{dzd{\cal M}} h_{c,{\rm fit}}^2\Big(f\frac{f_{p,0}}{f_{p,t}}\Big) \\ & \Big(\frac{f_{p,t}}{f_{p,0}}\Big)^{-4/3} \Big(\frac{\mathcal{M}}{\mathcal{M}_0}\Big)^{5/3} \Big(\frac{1+z}{1+z_0}\Big)^{-1/3},
\label{eq:hcanalytic}
\end{split}$$ where the ratio $f_{p,0}/f_{p,t}$ is calculated by means of equation (\[eq:fpeak\]).
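Numerically, equation (\[eq:hcanalytic\]) becomes a weighted sum over a (${\cal M}$, $z$) grid. The sketch below is purely illustrative and all names are ours: each `grid` cell carries a hypothetical number-density weight approximating $(d^2n/dzd{\cal M})\,dz\,d{\cal M}$ together with a peak-frequency ratio, and `hc_fit_ref` stands for any fitted reference spectrum:

```python
# Illustrative discretization of equation (hcanalytic). Each grid cell
# is (chirp mass, redshift, f_pt / f_p0, weight); hc_fit_ref is a
# callable returning the reference spectrum h_c,fit(f).

def population_hc(f, grid, hc_fit_ref, M0=4.16e8, z0=0.02):
    """Characteristic strain h_c(f) summed over a population grid."""
    total = 0.0
    for M, z, fp_ratio, w in grid:
        # shift the reference spectrum along the f^(-2/3) diagonal ...
        h = hc_fit_ref(f / fp_ratio) * fp_ratio ** (-2.0 / 3.0)
        # ... then rescale h_c^2 by the mass and redshift factors
        total += (w * h * h * (M / M0) ** (5.0 / 3.0)
                  * ((1.0 + z) / (1.0 + z0)) ** (-1.0 / 3.0))
    return total ** 0.5
```

A single cell at the reference mass and redshift with unit weight and no shift returns the reference spectrum itself, which provides a quick self-test of the bookkeeping.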
Range of applicability {#sec:applicability}
----------------------
The assumption behind the above derivation is that the dynamics of the MBHB is purely driven by GW emission, i.e., its evolution is defined by equations (\[eq:dfdt\]) and (\[eq:dedt\]) formally back to arbitrarily low frequencies. This of course cannot be true in practice; the question is whether the derivation provides a good approximation in the frequency range relevant to PTA detection.
![Characteristic amplitude spectrum for different eccentricities calculated with $n = 12500$ harmonics where only frequencies $f_n \geq 10^{-10}$ Hz contribute (dashed lines) compared to the spectrum computed with no limitations on $f_n$ (solid lines). The lower panel shows the difference ${\rm log}_{10}h_{c,{\rm fit}}-{\rm log}_{10}h_{c}$ as a function of frequency for the different cases, and it is always $<0.1$ for $f>1\,$nHz.[]{data-label="fig:speccompare"}](images/speccompare){width="45.00000%"}
MBHBs are driven by coupling with their environment up to a certain [*transition orbital frequency*]{}, $f_t$. At lower frequencies the evolution is faster than what is predicted by GW emission only and the eccentricity does not indefinitely grow to approach $e=1$. If the lowest frequency probed by PTA is $f_{\rm min}$ (which is $1/T$, where $T$ is the observation time, as defined in the introduction), then a necessary requirement for the applicability of equation (\[eq:hcanalytic\]) is $f_t<f_{\rm min}$. This is, however, not a sufficient condition because for an eccentric MBHB population, the spectrum at $f_{\rm min}$ is defined by the contribution of binaries emitting at $f_n<f_{\rm min}$ satisfying the requirement $f_n=f_{\rm min}(1+z)/n$ for some $n$. If $f_t=f_{\rm min}$, and therefore the binary evolves faster and is less eccentric at $f_n<f_{\rm min}$, then the contribution of the $n$-th harmonics of systems emitting at $f_n$ is smaller, affecting the overall spectrum at $f_{\rm min}$ and above.
To investigate the impact of this fact on the spectrum we consider the same reference binaries with transition frequency $f_t=f_0=0.1$nHz and $e_t=e_0=0.3, 0.6, 0.9, 0.99$, but now assuming they [*form*]{} at $f_t$, i.e., discarding the contribution of lower frequencies to the computation of the spectrum. The result is compared to the full spectrum in figure \[fig:speccompare\]. As expected, the absence of binaries at $f<f_t$ partially suppresses the signal observed at $f>f_t$. However, three things should be noticed: i) the suppression is relevant only up to $f\sim 10f_t$, ii) the effect is small for highly eccentric binaries – this is because for large $e$ the gravitational wave strain $h_c$ is dominated by the first, rather than the second, harmonic (see figure 4 in [@2016ApJ...817...70T]) – and iii) this is the most pessimistic case, since for a realistic orbital evolution, binaries also emit at $f<f_t$, but their contribution to the spectrum at $f>f_t$ is smaller due to the faster evolution and lower eccentricity. Therefore, our approximation should hold in the PTA band as long as the typical transition frequency $f_t$ is a few times smaller than $f_{\rm min}$. In the next section we will show that for a typical MBHB population driven by scattering of stars this is indeed generally the case.
Binaries in stellar environments {#sec:Coupling}
================================
Following galaxy mergers, MBHBs sink to the centre because of dynamical friction [@1943ApJ....97..255C], eventually forming a bound pair when the mass in stars and gas enclosed in their orbit is of the order of the binary mass. For MBHBs with $M=M_1+M_2>10^8{{\rm M}_\odot}$, relevant to PTA, this occurs at an orbital separation of a few parsecs, and the corresponding GW emission is well outside the PTA band. At this point dynamical friction becomes highly inefficient, and further hardening of the binary proceeds via exchange of energy and angular momentum with the dense gaseous and stellar environment [see @DottiSesanaDecarli:2012 and references therein].
The bulk of the PTA GW signal is produced by MBHBs hosted in massive galaxies (generally spheroids) at redshift $<1$. [@Sesana:2013] and [@2015MNRAS.447.2772R] further showed that the vast majority of the signal comes from ’red’ systems, featuring old stellar populations and only a modest amount of cold gas. This fact does not immediately imply that MBHBs cannot be driven by interaction with cold gas in the form of a massive circumbinary disk. After all, because of the observed MBH-host galaxy relations [see, e.g. @KormendyHo:2013], even a mere 1% of the galaxy baryonic mass in cold gas is still much larger than the MBHB mass, and therefore sufficient to form a circumbinary disk with mass comparable to the binary, if concentrated in the very centre of the galaxy. On the other hand, the relative fraction of observed bright quasars declines dramatically at $z<1$ [e.g. @2007ApJ...654..731H], implying that accretion of large amounts of cold gas, and hence a scenario in which MBHBs evolve in massive circumbinary disks, is probably not the norm. We therefore concentrate here on MBHBs evolving via interaction with stars.
[@SesanaKhan:2015] have shown that, following the merger of two stellar bulges, the evolution of the bound MBHBs can be approximately described by the scattering experiment formalism developed by [@1996NewA....1...35Q]. In Quinlan’s work, the binary semimajor axis evolution follows the simple equation $$\frac{da}{dt}=-\frac{HG\rho a^2}{\sigma},
\label{eq:dadt}$$ where $\rho$ is a fiducial stellar background density and $\sigma$ the characteristic value of the Maxwellian distribution describing the velocity of the stars. $H$ is a dimensionless constant (empirically determined by the scattering experiments) of order $15-20$, largely independent of the MBHB mass ratio $q=M_2/M_1$ and eccentricity $e$. [@SesanaKhan:2015] found that equation (\[eq:dadt\]) is applicable to post-merger stellar distributions provided that $\sigma$ is the typical velocity dispersion of the stellar bulge and $\rho$ is the average stellar density at the MBHB influence radius, $\rho_i=\rho(r_i)$, defined approximately as the radius enclosing a stellar mass twice the total MBHB mass $M=M_1+M_2$. In the stellar-dynamics jargon, this corresponds to a situation where the MBHB ’loss-cone’ is full at the binary influence radius. By using different methods, [@2015ApJ...810...49V] came to similar conclusions, stressing, however, that in the long term the MBHB hardening rate tends to drop compared to equation (\[eq:dadt\]), a hint that the loss-cone might not be kept full in the long term. The evolution of the Keplerian orbital frequency $f_K$ of the MBHB can therefore be written as: $$\frac{df_K}{dt}=\frac{df_K}{dt}\Big{|}_* + \frac{df_K}{dt}\Big{|}_{gw},
\label{eq:fcombined}$$ where $$\begin{aligned}
\frac{df_K}{dt}\Big{|}_* & = \frac{3}{2 (2\pi)^{2/3}} \frac{H\rho_i}{\sigma} G^{4/3} M^{1/3} f_K^{1/3},
\label{eq:fstar}
\\
\frac{df_K}{dt}\Big{|}_{gw} & = \frac{96}{5} (2\pi)^{8/3} \frac{G^{5/3}}{c^5} \mathcal{M}^{5/3} f_K^{11/3} F(e).
\label{eq:fgw}\end{aligned}$$ Equation (\[eq:fstar\]) is readily obtained from equation (\[eq:dadt\]) by using Kepler’s law, and equation (\[eq:fgw\]) is the standard GW frequency evolution already seen in the previous section. It is easy to show that at low frequency stellar hardening dominates and GW takes over at a transition frequency that can be calculated by equating the two evolution terms to obtain: $$\begin{split}
f_{t} & =
(2\pi)^{-1} \Big(\frac{5H\rho_i}{64\sigma F(e)}\Big)^{3/10} \frac{c^{3/2}}{G^{1/10}} \frac{(1+q)^{0.12}}{q^{0.06}} \mathcal{M}^{-2/5}\\
& \approx 0.56\pi^{-1} \Big(\frac{5H\rho_i}{64\sigma F(e)}\Big)^{3/10} \frac{c^{3/2}}{G^{1/10}} \mathcal{M}^{-2/5} \\
& = 0.356\, {\rm nHz}\, \left(\frac{1}{F(e)}\frac{\rho_{i,100}}{\sigma_{200}}\right)^{3/10}\mathcal{M}_9^{-2/5}
\label{eq:ft}
\end{split}$$ where $\rho_{i,100}=\rho_i/(100\,{{\rm M}_\odot}{\rm pc}^{-3})$, $\sigma_{200}=\sigma/(200\,{\rm km\,s}^{-1})$, $\mathcal{M}_9=\mathcal{M}/(10^9\,{{\rm M}_\odot})$ and we assume $H=16$ in the last line. We notice that in the mass ratio range $0.1<q<1$, which by far dominates the PTA GW signal [see, e.g. figure 1 in @2012MNRAS.420..860S], the function $(1+q)^{0.12}/q^{0.06}$ falls in the range $[1.08,1.15]$. Therefore, in the last two lines of equation (\[eq:ft\]) we neglected the mass ratio dependence by substituting $(1+q)^{0.12}/q^{0.06}=1.12$. A fair estimate of the MBHB coalescence timescale is provided by the evolution timescale at the transition frequency, $t_c=f_t\,(dt/df)|_{f_t}$. By using equations (\[eq:ft\]) and (\[eq:fstar\]) one obtains
$$\begin{split}
t_c & = \frac{5}{96} (2\pi)^{-8/3} \frac{c^5}{G^{5/3}} \mathcal{M}^{-5/3} f_t^{-8/3} F(e)^{-1}\\
& = \frac{2}{3} \frac{c}{G^{7/5}} \Big(\frac{\sigma}{H\rho_i}\Big)^{4/5} \Big(\frac{5}{64F(e)}\Big)^{1/5} \frac{q^{0.16}}{(1+q)^{0.32}} \mathcal{M}^{-3/5}\\
& \approx 0.5 \frac{c}{G^{7/5}} \Big(\frac{\sigma}{H\rho_i}\Big)^{4/5} \Big(\frac{5}{64F(e)}\Big)^{1/5} \mathcal{M}^{-3/5}\\
& = 0.136 \ {\rm Gyr} \ F(e)^{-1/5}\left(\frac{\sigma_{200}}{\rho_{i,100}}\right)^{4/5}\mathcal{M}_9^{-3/5}
\label{eq:tcoal}
\end{split}$$
where, once again, we omitted mild $q$ dependences in the last approximation by substituting $q^{0.16}/(1+q)^{0.32}=0.75$ ($0.67 < q^{0.16}/(1+q)^{0.32} < 0.8$ for $0.1<q<1$). For an operational definition of $f_t$ and $t_c$, we need to define $\rho_i$ and $\sigma$. The density profile of massive spheroidals is well captured by the Dehnen density profile family [@Dehnen:1993], which takes the form $$\rho(r) = \frac{(3-\gamma)M_* a}{4\pi} r^{-\gamma} (r+a)^{\gamma-4}$$ where $0.5 < \gamma < 2$ determines the inner slope of the stellar density distribution, $M_*$ is the total mass of the bulge in stars, and $a$ is its characteristic radius. The influence radius $r_i$ of the MBHB is then set by the condition $$2M = \int_0^{r_i} 4\pi r^2 \rho(r) dr
\label{eq:ricondition}$$ which gives $$r_i = \frac{a}{(2M/M_*)^{1/(\gamma-3)}-1}.$$ Inserting $r_i$ back into the Dehnen profile gives $$\rho_i \approx \frac{(3-\gamma)M_*}{4\pi a^3} \Big(\frac{2M}{M_*}\Big)^{\gamma/(\gamma-3)}$$ where we used the fact that $2M \ll M_*$. It is possible to reduce the number of effective parameters $M, M_*, a, \gamma, \sigma$ by employing empirical relations connecting pairs of them, valid for stellar spheroids. In particular we use the $a-M_*$ relation of [@DabringhausenHilkerKroupa:2008], and the $M-\sigma$ and $M-M_*$ relations of [@KormendyHo:2013]: $$\begin{aligned}
a & = 239 \,\text{pc}\, (2^{1/(3-\gamma)}-1) \Big(\frac{M_*}{10^9{{\rm M}_\odot}}\Big)^{0.596}
\label{eq:ascale}
\\
\sigma & = 261\, \text{km s}^{-1}\, \Big(\frac{M}{10^9{{\rm M}_\odot}}\Big)^{0.228}
\label{eq:msigma}
\\
M_* & = 1.84\times 10^{11}\,{{\rm M}_\odot}\, \Big(\frac{M}{10^9{{\rm M}_\odot}}\Big)^{0.862}.
\label{eq:mbulge}\end{aligned}$$ This allows us to express $\rho_i$ as a function of $M$ and $\gamma$ only, in the form $$\rho_i = 0.092 \,{{\rm M}_\odot}\text{pc}^{-3} \,{\cal F}(\gamma) \Big(\frac{M}{10^9 M_\odot}\Big)^{{\cal G}(\gamma)},
\label{eq:rhoi}$$ where $$\begin{aligned}
{\cal F}(\gamma) & = \frac{(3-\gamma) 92^{\gamma/(3-\gamma)}}{(2^{1/(3-\gamma)}-1)^3}\nonumber\\
{\cal G}(\gamma) & = -0.68-0.138\frac{\gamma}{3-\gamma}\nonumber\end{aligned}$$ Equations (\[eq:msigma\]) and (\[eq:rhoi\]) are expressed as a function of $M$. However, we notice from equation (\[eq:ft\]) that $f_t\propto M^{-0.3-0.041\gamma/(3-\gamma)}$. Since ${\cal M}=Mq^{3/5}/(1+q)^{6/5}$, if $0.1<q<1$, then $2.32 {\cal M}<M< 3.57{\cal M}$. It is easy to show that for $0.5<\gamma<2$, by substituting $M=2.9{\cal M}$, equation (\[eq:ft\]) returns $f_t$ within 10% of the correct value when $0.1<q<1$.
Finally, equation (\[eq:fcombined\]) defines only the frequency evolution of the MBHB. For a complete description of the system, tracking of the eccentricity evolution is also required. Both scattering experiments and N-body simulations have shown that MBHB-star interactions tend to increase $e$. The increase is generally mild for equal mass binaries and the eccentricity at the transition frequency largely depends on the initial eccentricity at the moment of binary pairing. Because of this mild evolution at $f_K<f_t$ and in order to keep the problem simple, we approximate the eccentricity evolution of the MBHB as: $$\frac{de}{dt}=
\begin{cases}
0\,\,\,\,\,\,\,\, {\rm if}\,\,\,\,\,\,\,\, f_K<f_t\\
-\frac{1}{15} (2\pi)^{8/3} \frac{G^{5/3}}{c^5} \mathcal{M}^{5/3} f_K^{8/3} G(e)\,\,\,\,\,\,\,\, {\rm if}\,\,\,\,\,\,\,\, f_K>f_t
\label{eq:ecombined}
\end{cases}$$
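Equations (\[eq:fcombined\]) and (\[eq:ecombined\]) form a small coupled ODE system that can be integrated directly. In the illustrative Python sketch below (our own code, not from the paper), $G(e)$ is taken to be the standard Peters eccentricity function $e(304+121e^2)/(1-e^2)^{5/2}$, and the GW term is written with the $(2\pi)^{8/3}$ prefactor appropriate for the Keplerian orbital frequency, consistent with the closed form of equation (\[eq:ft\]):

```python
import numpy as np
from scipy.integrate import solve_ivp

G_N, c, Msun, pc = 6.674e-11, 2.998e8, 1.989e30, 3.086e16  # SI units

def Fe(e):  # Peters enhancement factor
    return (1 + 73/24*e**2 + 37/96*e**4) / (1 - e**2)**3.5

def Ge(e):  # Peters eccentricity-decay function (assumed convention)
    return e * (304 + 121*e**2) / (1 - e**2)**2.5

def evolve(Mc=1e9*Msun, q=1.0, e0=0.9, f0=1e-11,
           H=16.0, rho=100*Msun/pc**3, sigma=200e3):
    """Integrate eqs. (fcombined) and (ecombined); e is frozen while stars dominate."""
    M = Mc * (1 + q)**1.2 / q**0.6          # total mass from chirp mass

    def rhs(t, y):
        f, e = y
        df_star = 1.5*(2*np.pi)**(-2/3) * H*rho/sigma * G_N**(4/3) * M**(1/3) * f**(1/3)
        df_gw = 96/5*(2*np.pi)**(8/3) * G_N**(5/3)/c**5 * Mc**(5/3) * f**(11/3) * Fe(e)
        de = (-(2*np.pi)**(8/3)/15 * G_N**(5/3)/c**5 * Mc**(5/3) * f**(8/3) * Ge(e)
              if df_gw > df_star else 0.0)   # eccentricity frozen for f_K < f_t
        return [df_star + df_gw, de]

    stop = lambda t, y: y[0] - 1e-8         # terminate at f_K = 10 nHz
    stop.terminal = True
    return solve_ivp(rhs, [0.0, 1e17], [f0, e0], events=stop,
                     method='LSODA', rtol=1e-6, atol=[1e-22, 1e-10])
```

Starting a fiducial ${\cal M}=10^9{{\rm M}_\odot}$, $e_0=0.9$ binary at $0.01\,$nHz, the system decouples at sub-nHz frequencies and reaches $10\,$nHz largely circularized.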
Results: Gravitational wave spectra calculation {#sec:Population}
===============================================
Dynamics of MBHBs: transition frequency and coalescence time {#sec:fttcsingle}
------------------------------------------------------------
Before going into the computation of the GW spectrum, we can take a look at how the transition frequency $f_t$ and coalescence timescale $t_c$ change as a function of ${\cal M}$ and $e_t$. In the following we consider four selected models, representative of a range of physical possibilities having a major impact on the MBHB dynamics. Results are shown in figure \[fig:ftandtc\]. The top panel shows a model with $\gamma=1$ and $\rho_i$ given by equation (\[eq:rhoi\]). We consider this as our default model, because most of the PTA signal is expected to come from MBHBs hosted in massive elliptical galaxies with relatively shallow density profiles. The GW signal is generally dominated by MBHBs with ${\cal M}>3\times 10^8{{\rm M}_\odot}$, which are therefore our main focus. At low $e_t$ those systems have $f_t<0.3$nHz and coalescence timescales in the range $1.5-4$Gyr. For $e_t=0.9$, $f_t$ is ten times lower; nonetheless, $t_c$ is roughly an order of magnitude shorter, by virtue of the $F(e)$ factor appearing in equation (\[eq:tcoal\]). The effect of a steeper density profile is shown in the second row of plots in figure \[fig:ftandtc\], where we now assume $\gamma=1.5$. The effect of a steeper inner power law is to make the stellar distribution more centrally concentrated, thus enhancing $\rho_i$. This makes stellar hardening more efficient, shifting $f_t$ upwards by a factor of $\approx 1.3$ and making $t_c$ a factor of $\approx 2$ shorter (a shallower profile with $\gamma=0.5$ would have an opposite effect of the same magnitude). We recognize that $\rho_i$ given by equation (\[eq:rhoi\]) relies on a number of scaling relations that are constructed on a limited sample of local, non-merging galaxies. We therefore also explore the effect of a bias in some of those relations. For example, merging galaxies might be more centrally concentrated, and we explore this possibility by arbitrarily reducing the typical scale radius $a$ by a factor of two compared to equation (\[eq:ascale\]).
The effect is shown in the third row of panels of figure \[fig:ftandtc\], assuming $\gamma=1$, and is very similar to (slightly larger than) the effect of the steeper ($\gamma=1.5$) density profile shown in the second row. Finally, it has been proposed that the MBH-galaxy relations might be biased high because of selection effects in the targeted galaxy samples. [@2016MNRAS.460.3119S] propose that the typical MBH mass might in fact be a factor $\approx 3$ lower than what is implied by equations (\[eq:msigma\]) and (\[eq:mbulge\]). We therefore explore a model featuring $\gamma=1$ but with the MBH mass decreased by a factor of three for given galaxy properties. Results are shown in the bottom panels of figure \[fig:ftandtc\]. For a given MBHB mass, this model implies just a minor change in $\rho_i$ and $\sigma$, with negligible effects on $f_t$ and $t_c$ compared to the fiducial model.
GW spectra of fiducial MBHBs
----------------------------
The GW spectrum generated by a MBHB evolving in a fiducial stellar background can now be computed by evaluating $dE/df_r$ in equation (\[eq:dEdf\]), where the frequency and eccentricity evolution of the pair are now given by equations (\[eq:fcombined\]) and (\[eq:ecombined\]), instead of equations (\[eq:dfdt\]) and (\[eq:dedt\]), and the system is defined by the transition frequency $f_t$, given by equation (\[eq:ft\]), at which $e_t$ must be specified. We consider a fiducial MBHB with ${\cal{M}}=10^9{{\rm M}_\odot}$ at $z=0.02$, and compare the full spectrum including stellar scattering to our approximated formula given by equation (\[eq:hcfit\]) and appropriately re-scaled as described in section \[sec:fit\].
Results are shown in figure \[fig:specsingle\] for all the environment models of figure \[fig:ftandtc\]. In this and the following plots, solid lines are spectra computed via the analytic fit of equation (\[eq:hcanalytic\]), whereas dashed lines are spectra that include stellar scattering driving the binary evolution at low frequency. We start by discussing the outcome of our fiducial model with $\gamma=1$, as a function of $e_t$, which is shown in the left plot. For circular binaries $f_t\approx0.2$nHz, well below the minimum PTA frequency $f_{\rm min}=1$nHz appropriate for a PTA baseline of 30yrs, achievable within 2030. By increasing $e_t$, $f_t$ is pushed to lower values, eventually becoming irrelevant. Obviously, the real spectrum diverges from our analytic fit at $f<f_t$. Moreover, for moderately eccentric binaries ($e_t=0.5$ panel) the two spectra differ significantly up to almost $f=1$ nHz. This is mostly because the presence of the stellar environment ’freezes’ the eccentricity to 0.5 at $f<f_t$; the real spectrum at $f\gtrsim f_t$ is missing the contribution from the very eccentric phase at $f<f_t$ that occurs when the environment is not taken into account and the binary is evolved back in time assuming GW emission only. The problem becomes less severe for larger values of $e_t$: even though the presence of the environment freezes the binary eccentricity, $e_t$ is large enough that most of the relevant contribution from the higher harmonics emitted at low frequencies is retained. Most importantly, in all cases, at all $f>f_{\rm min}=1$nHz, our analytical fit perfectly describes the emitted spectrum. The right plot in figure \[fig:specsingle\] shows the spectrum assuming $e_t=0.9$ for the four different environment models outlined in the previous subsection. Again, we notice that in all cases the GW spectrum is well described by our fitting formula in the relevant PTA frequency range, and the peak of the spectrum is only mildly affected (within a factor of two) by the different host models.
Stochastic background from a cosmic MBHB population
---------------------------------------------------
Having studied the signal generated by a fiducial system, we now turn to the computation of the overall GW spectrum expected from a cosmological population of MBHBs. To do this, we simply need to specify the distribution $d^2n/dzd{\cal M}$. We consider two population models:
- [*model-NUM*]{}: the $d^2n/dzd{\cal M}$ population is numerically constructed on the basis of a semi-analytic galaxy formation model implemented on the Millennium simulation [@SpringelEtAl_MilleniumSim:2005], as described in [@SesanaVecchioVolonteri:2009]. In particular, we use a model implementing the $M_{\rm BH}-M_{\rm bulge}$ relation of [@2003ApJ...589L..21M], with accretion occurring [*before*]{} the final MBHB coalescence on both MBHs in the pair.
- [*model-AN*]{}: employs a parametrised population function of the form [@MiddletonEtAl:2016] $$\frac{d^2 n}{dz d \log_{10} \mathcal{M}} = \dot{n}_0 \Big(\frac{\mathcal{M}}{10^7 M_\odot}\Big)^{-\alpha} e^{-\mathcal{M}/\mathcal{M}_*} (1+z)^\beta e^{-z/z_*} \frac{dt_r}{dz} \label{eq:pop},$$ where $t_r$ is time measured in the source reference frame and $$\frac{dt_r}{dz} = \frac{1}{H_0 (1+z) (\Omega_M(1+z)^3 + \Omega_k(1+z)^2 + \Omega_\Lambda)^{1/2}}
\label{eq:dtrdz}$$ Based on loose cosmological constraints [see @MiddletonEtAl:2016 for details], parameters lie in the range $\dot{n}_0 \in [10^{-20},10^3] \ \text{Mpc}^{-3} \text{Gyr}^{-1}, \ \alpha \in [-3,3], \ \mathcal{M}_* \in [10^6,10^{11}] \ M_\odot, \ \beta \in [-2,7], \ z_* \in [0.2,5]$. $H_0 = 70\,\text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}$ is the Hubble constant and $\Omega_M = 0.3, \ \Omega_k = 0, \ \Omega_\Lambda = 0.7$ are the cosmological density parameters. We specialize our calculation to a fiducial mass function with $\log_{10} \dot{n}_0 = -4, \ \alpha = 0, \ \mathcal{M}_* = 10^8 M_\odot, \ \beta = 2, \ z_* = 2$.
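The analytic rate density of equation (\[eq:pop\]) and the Jacobian of equation (\[eq:dtrdz\]) are straightforward to evaluate; a minimal Python sketch with the fiducial parameters (function names are ours):

```python
import numpy as np

H0 = 70e3 / 3.086e22            # Hubble constant in s^-1 (70 km/s/Mpc)
OM, OK, OL = 0.3, 0.0, 0.7      # cosmological density parameters
GYR = 3.156e16                  # seconds per Gyr

def dtr_dz(z):
    """Eq. (dtrdz): source-frame time per unit redshift, in seconds."""
    E = np.sqrt(OM*(1+z)**3 + OK*(1+z)**2 + OL)
    return 1.0 / (H0 * (1+z) * E)

def d2n_dz_dlogM(z, Mc, n0dot=1e-4, alpha=0.0, Mstar=1e8, beta=2.0, zstar=2.0):
    """Eq. (pop): mergers per Mpc^3 per dex in chirp mass per unit z (Mc in Msun)."""
    return (n0dot * (Mc/1e7)**(-alpha) * np.exp(-Mc/Mstar)
            * (1+z)**beta * np.exp(-z/zstar) * dtr_dz(z)/GYR)
```

With $\dot{n}_0$ in $\text{Mpc}^{-3}\,\text{Gyr}^{-1}$, the $dt_r/dz$ factor is converted to Gyr so that the result is a comoving density per unit redshift.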
![Same as the left half of figure \[fig:specsingle\], but the signal has now been integrated over the [*model-NUM*]{} MBHB population described in the text.[]{data-label="fig:specpopnum"}](images/specpopnum10){width="0.95\columnwidth"}
[0.45]{} [*model-NUM*]{} {width="0.95\columnwidth"}
[0.45]{} [*model-AN*]{} {width="0.95\columnwidth"}
\
To construct the spectrum we still need to specify a reference eccentricity $e_t$ at a reference binary orbital frequency $f_{t}$. Assuming MBHBs evolving in stellar bulges, we take $f_{t}$ from equation (\[eq:ft\]), assuming the four environment models presented in Section \[sec:fttcsingle\]. As for $e_t$, we make the simplifying assumption that, regardless of redshift, mass and environment, all MBHBs share the same eccentricity at the transition frequency. We take $e_t=0.01, 0.5, 0.9, 0.99$.
For each model, $h_c(f)$ is computed either via equations (\[eq:hc\],\[eq:dEdf\],\[eq:dEdt\],\[eq:fcombined\],\[eq:ecombined\]), i.e., by solving the binary evolution numerically – including the stellar driven phase – and summing up all the harmonics, or via equation (\[eq:hcanalytic\]), i.e., by employing our fitting spectrum for GW driven binaries defined by $f_{t},e_t$.
Results are presented in figures \[fig:specpopnum\] and \[fig:specpopcomp\]. Figure \[fig:specpopnum\] shows the impact of $e_t$ on the spectrum. We notice that in the true spectrum (the dashed lines), changing the population from almost circular to highly eccentric shifts the peak of the spectrum by more than one order of magnitude. As already described, our model does not represent well the low frequency turnover for small $e_t$; however, in all cases the GW signal is well described by equation (\[eq:hcanalytic\]) in the relevant PTA frequency band ($f>1\,$nHz), and the factor of $\approx 10$ peak shift in the eccentricity range $0.5<e_t<0.99$ is fairly well captured. As anticipated, typical turnover frequencies due to three body scattering are at sub-nHz scales, and a flattening (and eventually a turnover) in the GW spectrum is observable only if MBHBs have relatively high eccentricities at the transition frequency. Figure \[fig:specpopcomp\] shows the impact of changing the physical parameters describing the efficiency of stellar driven MBHB hardening. Those parameters are fixed to fiducial values in our model, but can in principle have an impact on the spectrum of the signal. When directly compared to figure \[fig:specpopnum\], the left panel, showing [*model-NUM*]{}, clarifies that none of those parameters affects the signal to a level comparable to $e_t$. The reason is that essentially all of them cause a change in $\rho_i$, and the $f_t$ dependence on $\rho_i$ is extremely mild ($f_t\propto\rho_i^{3/10}$). For example, shrinking the characteristic galaxy radius $a$ by a factor of two is equivalent to increasing $\rho_i$ by a factor of eight, which still results in a shift of $f_t$ by less than a factor of two. In the right set of panels we see that the same applies to [*model-AN*]{}. However, there is a striking difference of almost an order of magnitude in the location of the peak. This is because [*model-NUM*]{} and [*model-AN*]{} have very different underlying MBHB mass functions.
This means that the GWB is dominated by MBHBs with different typical masses, which decouple at different $f_t$. So even if the underlying MBHB dynamics and the eccentricity at transition $e_t$ are the same, the resulting peak frequency can be significantly shifted. It is therefore clear that the location of the GWB spectrum turnover is sensitive to both $e_t$ and the parameters defining the MBHB cosmological mass function, and much less sensitive to the details of the stellar hardening process. This also means, however, that in the absence of additional features in the spectrum, the determination of $e_t$ is highly degenerate with the shape of the MBHB mass function.
Removal of individual sources
-----------------------------
![Comparison of the spectra of a population of binaries with different parameters for [*model-AN*]{}. Parameters in the plot are specified in the sequence $\{{\rm log}_{10}(\dot{n}_0),\beta,z_*,\alpha,{\rm log}_{10}({\cal M}_*),e_t\}$. The solid lines represent the spectrum including the drop of the upper mass limit in the high frequency regime; the dashed lines represent the spectrum with no mass limit change.[]{data-label="fig:specmdrop"}](images/specmdrop){width="45.00000%"}
Interestingly, as mentioned in the introduction, another feature appearing in the GW spectrum at high frequencies has been pointed out by SVC08, and it depends on the shape of the cosmic MBHB mass function. Let us consider circular binaries. In an actual observation, the GW signal generated by a cosmic population of MBHBs in a given observed frequency bin is given by the sum over all MBHBs emitting at that frequency. This is related to the cosmic density of merging binaries via standard cosmology transformations: $$\frac{d^2n}{dzd \log_{10} \mathcal{M}}= \frac{d^3 N}{dz d \log_{10} \mathcal{M} df} \frac{df}{df_r} \frac{df_r}{dt_r} \frac{dt_r}{dz} \frac{dz}{dV_c}$$ where [@Hogg:1999] $$\begin{aligned}
\frac{dV_c}{dz} & = \frac{4\pi c}{H_0} \frac{D_M^2}{(\Omega_M(1+z)^3 + \Omega_k(1+z)^2 + \Omega_\Lambda)^{1/2}},
\\
D_M & = \frac{c}{H_0} \int_0^z \frac{dz'}{(\Omega_M(1+z')^3 + \Omega_k(1+z')^2 + \Omega_\Lambda)^{1/2}},
\\
\frac{df_r}{dt_r} & = \frac{96}{5} (\pi)^{8/3} \frac{G^{5/3}}{c^5} \mathcal{M}^{5/3} f_r^{11/3},
\\
\frac{df}{df_r} & = \frac{1}{1+z},\end{aligned}$$ and $dt_r/dz$ is given by equation (\[eq:dtrdz\]). The number of sources emitting in a given observed frequency bin of width $\Delta{f}=1/T$ is therefore given by: $$N_{\Delta{f}}=\int_{f-\Delta f/2}^{f+\Delta f/2} \int_0^{\infty} \int_{0}^{\infty} \frac{d^3 N}{df dz d \log_{10} \mathcal{M}}\, d\log_{10}\mathcal{M}\, dz\, df.
\label{eq:Nf}$$ Each chirp mass and redshift bin contributes to $h_c$ by an amount proportional to ${\cal M}^{5/6}/(1+z)^{1/6}$ (see, e.g., equation (\[eq:hc0\])). Therefore, it is possible to rank systems in order of decreasing contribution to the GWB. Because of the very weak dependence on redshift – $1<(1+z)^{1/6}<1.3$ for the range $0<z<5$ considered in our models – we simplify the problem by integrating over $z$ and ranking systems based on mass only. It is easy to show that $d^2N/{dfd \log_{10} \mathcal{M}}$ is a steeply decreasing function of mass, and is in general $\ll 1$ for the most massive systems when $f>10$nHz. This means that the contribution to the GWB coming from those massive sources at that frequency is in fact given by ’less than one source’. Since the actual GW signal is given by a discrete population of sources, having less than one source in a given frequency bin means that in a typical realization of the Universe that source may or may not be there with a given probability. For the practical purpose of the GWB computation, the contribution from those systems at those frequencies is actually not there, at least not in the form of a stochastic GWB (we refer the reader to SVC08 for a rigorous mathematical treatment of this issue).
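The cosmological Jacobians entering equation (\[eq:Nf\]) translate directly into code; the sketch below (flat cosmology, our own function names) evaluates the comoving distance and volume element:

```python
import numpy as np
from scipy.integrate import quad

H0, OM, OL = 70.0, 0.3, 0.7      # km/s/Mpc; flat cosmology (Omega_k = 0)
DH = 2.998e5 / H0                # Hubble distance c/H0, Mpc

def Ez(z):
    return np.sqrt(OM*(1+z)**3 + OL)

def D_M(z):
    """Comoving distance (Mpc)."""
    return DH * quad(lambda x: 1.0/Ez(x), 0.0, z)[0]

def dVc_dz(z):
    """Comoving volume element dVc/dz (Mpc^3 per unit redshift)."""
    return 4.0*np.pi * DH * D_M(z)**2 / Ez(z)
```

For the adopted cosmology, $D_M(z=1)\approx 3.3$ Gpc, so the volume element grows rapidly with redshift.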
One can therefore assume that in each bin $\Delta{f}$ the most massive sources, integrating to one in number, do not contribute to the GWB. The value $\bar{M}$ corresponding to this condition is implicitly given by imposing
$$\begin{split}
1 = & \int_{\bar{M}}^{\infty} \int_{f-\Delta f/2}^{f+\Delta f/2} \int_0^{z_{\rm max}} \frac{d^3 N}{df dz d\log_{10} \mathcal{M}}\, dz\, df\, d\log_{10}\mathcal{M}
\\
= & \dot{n}_0 \int_{\bar{M}}^{\infty} \Big(\frac{\mathcal{M}}{10^7 M_\odot}\Big)^{-\alpha} e^{-\mathcal{M}/\mathcal{M}_*} \mathcal{M}^{-5/3} d\log_{10} \mathcal{M}
\\
& \int_0^{z_{\rm max}} (1+z)^{\beta+1} e^{-z/z_*} \frac{dV_c}{dz} dz \int_{f-\Delta f/2}^{f+\Delta f/2} \frac{dt_r}{df_r} \mathcal{M}^{5/3} df
\end{split}
\label{eq:Mmax}$$
where in the last equality we substituted the analytical merger rate density given by equation (\[eq:pop\]). Given an observation time $T$, the frequency spectrum is divided into bins $\Delta{f}=1/T$. $h_c(f)$ is therefore calculated at the centroid of each frequency bin by substituting the upper limit $\bar{M}$, defined by equation (\[eq:Mmax\]), into equation (\[eq:hc\]). Note that in equation (\[eq:Mmax\]) the mass and frequency integrals are analytic, and only the redshift integral has to be evaluated numerically.
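A possible numerical evaluation of condition (\[eq:Mmax\]) is sketched below (illustrative code with our own function names; for simplicity we integrate over mass and redshift on plain grids rather than using the analytic forms). The expected number of sources above a trial chirp mass in each observed bin is computed; scanning the trial mass downwards until the count reaches unity gives $\bar M(f)$:

```python
import numpy as np
from scipy.integrate import quad

G_N, c_SI, Msun = 6.674e-11, 2.998e8, 1.989e30   # SI
GYR = 3.156e16
H0, OM, OL = 70.0, 0.3, 0.7                      # km/s/Mpc; flat cosmology
DH = 2.998e5 / H0                                # Hubble distance, Mpc

def Ez(z):
    return np.sqrt(OM*(1+z)**3 + OL)

def dVc_dz(z):
    """Comoving volume element, Mpc^3 per unit redshift."""
    DM = DH * quad(lambda x: 1.0/Ez(x), 0.0, z)[0]
    return 4.0*np.pi * DH * DM**2 / Ez(z)

def rate_density(z, Mc, n0dot=1e-4, alpha=0.0, Mstar=1e8, beta=2.0, zstar=2.0):
    """Eq. (pop) without the dt_r/dz factor: Mpc^-3 Gyr^-1 per dex (fiducial values)."""
    return n0dot * (Mc/1e7)**(-alpha) * np.exp(-Mc/Mstar) * (1+z)**beta * np.exp(-z/zstar)

def residence_Gyr(f, df, z, Mc):
    """Rest-frame time a circular GW-driven binary spends crossing the observed bin."""
    k = 96/5 * np.pi**(8/3) * G_N**(5/3) / c_SI**5 * (Mc*Msun)**(5/3)
    f1, f2 = (f - df/2)*(1+z), (f + df/2)*(1+z)
    return 3.0/(8.0*k) * (f1**(-8/3) - f2**(-8/3)) / GYR

def N_above(logMbar, f, df, zmax=5.0, nm=60, nz=60):
    """Expected number of binaries in the bin with log10(Mc/Msun) > logMbar."""
    lm = np.linspace(logMbar, 11.0, nm)
    zz = np.linspace(1e-3, zmax, nz)
    total = 0.0
    for z in zz:
        y = rate_density(z, 10**lm) * residence_Gyr(f, df, z, 10**lm)
        mass_int = 0.5*np.sum((y[1:] + y[:-1]) * np.diff(lm))   # trapezoid in log10 Mc
        total += mass_int * dVc_dz(z) * (zz[1] - zz[0])
    return total
```

By construction the count is monotonically decreasing in the trial mass, so a simple bisection on `logMbar` also works.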
Examples of the GW spectrum obtained including the $\bar{M}$ cut-off are shown in figure \[fig:specmdrop\] for two different mass functions assuming [*model-AN*]{}. Note that the spectrum is significantly reduced only at $f>10$nHz. This justifies a posteriori our assumption of circular GW driven binaries: at such high frequencies even MBHBs that were very eccentric at $f_t$ have become almost circular because of GW backreaction. The figure illustrates that a detection of both spectral features (low frequency turnover and high frequency steepening) might help break degeneracies between $e_t$ and the MBHB mass function. The two displayed models have very different $e_t$ (0.2 vs 0.9), but also quite different mass functions, so that the GWB turnover occurs around the same frequency. If the signal can be detected up to $f\approx 10^{-7}$Hz, however, differences in the high frequency slope might help pin down the MBHB mass function and disentangle it from $e_t$. In a companion paper [@2017MNRAS.468..404C] we explore the feasibility of this approach and its implications for astrophysical inference.
Discussion and conclusions {#sec:Conclusions}
==========================
In this paper we developed a semi-analytical model that allows the fast computation of the stochastic GWB from a population of eccentric GW driven MBHBs. The spectrum computation does not directly take into account any coupling of the MBHB with its stellar and gaseous environment, and therefore cannot provide a trustworthy description of the GW signal at all frequencies. The coupling enters the calculation only by setting the characteristic binary population eccentricity $e_t$ at the transition (or decoupling) frequency $f_t$. We showed, however, that in the plausible astrophysical scenario of MBHBs driven by three body scattering of ambient stars, $f_t<1$nHz (for MBHBs with ${\cal M}>10^8{{\rm M}_\odot}$, which dominate the PTA signal), and $1\,$nHz is a plausible lower limit for future PTA efforts. Therefore, environmental coupling only affects the direct computation of the GW signal in a frequency range that is likely inaccessible to current and near future PTAs, justifying our strategy. Our simple semi-analytic model therefore provides a quick and accurate way to construct the GWB from a population of eccentric MBHBs evolving in stellar environments [*in the frequency range relevant for PTA*]{} (see figure \[fig:specpopnum\]).
Compared to the standard $f^{-2/3}$ power-law, the GWB shows two prominent spectral features: i) a low frequency turnover defined by the coupling with the environment and the typical eccentricity of the MBHBs at the transition frequency (figure \[fig:specpopnum\]), and ii) a high frequency steepening due to small number statistics affecting the most massive MBHBs contributing to the GW signal at high frequency (figure \[fig:specmdrop\], see SVC08).
We consider stellar driven MBHBs and we employ, for the first time in a PTA related investigation, realistic density profiles appropriate for massive elliptical galaxies (which are the typical hosts of PTA sources). For example, both [@Sesana:2013CQG] and [@2014MNRAS.442...56R] used a simplistic double power-law model matching an isothermal sphere at $r>r_i$, with $r_i$ defined by equation (\[eq:ricondition\]). That model is more centrally concentrated and results in much higher $\rho_i$ and $f_t$ than found in the present study. We find that in density profiles that are typical for massive ellipticals, MBHBs can coalesce on timescales of a few Gyr or less (depending on mass and eccentricity), and the typical transition frequency (from stellar driven to GW driven binaries) is located well below $1\,$nHz. Therefore, an observed turnover in the GWB spectrum in the PTA relevant frequency range is likely to be due to high eccentricities rather than to coupling with the environment. In particular, we find that a low frequency bending is likely to be seen for $e_t>0.5$, whereas a proper turnover is characteristic of MBHB populations with $e_t>0.9$. These findings are robust against a variety of plausible host galaxy models; i.e., the properties of the stellar environment affect the location of the bending/turnover of the spectrum only within a factor of two among the cases examined here. This latter point deserves some further consideration. All the physical parameters describing the environment of the MBHB affect the location of $f_t$ through $\rho_i$: essentially it is the density at the influence radius of the binary (together with the MBHB mass and eccentricity) that determines $f_t$. Although for a range of astrophysically plausible scenarios the typical $\rho_i$ for a given MBHB is found to vary within a factor of ten, it might be worth considering the possibility of more extreme scenarios.
This can be easily incorporated into our treatment as a free multiplicative parameter on $\rho_i$, and we plan to expand our model in this direction in future investigations.
This has a number of interesting consequences in terms of astrophysical inference from PTA observations. Firstly, for $e_t<0.5$ no low frequency signature in the GWB spectrum is likely to be seen, making it impossible to distinguish circular from mildly eccentric MBHBs [*on the basis of the GWB spectral shape only*]{}. Secondly, because of the $(\rho_i/\sigma)^{3/10}$ dependence in equation (\[eq:ft\]), it will be difficult to place strong constraints on the stellar environment of MBHBs via PTA observations. Lastly, a turnover in the PTA band would be indicative of a highly eccentric ($e_t>0.9$) MBHB population. The turnover frequency depends on both $e_t$ and the MBHB mass function (through the ${\cal M}$ dependence of $f_t$); therefore the detection of a low frequency turnover alone might not place strong constraints on the typical MBHB eccentricity. The high frequency steepening, on the other hand, generally occurs at $f>10\,$nHz, where MBHBs have mostly circularized; it therefore depends exclusively on the MBHB mass function. A measurement of such a steepening can thus constrain the MBHB mass function and break the degeneracy between mass function and eccentricity affecting the location of the low frequency turnover.
Looking at the prospects of performing astrophysical inference from PTA data, our model has several advantages. First, it directly connects the relevant astrophysical parameters of the MBHB population to the shape of the GWB. As mentioned above, in this first paper we keep the MBHB mass function and eccentricity at decoupling as free parameters, arguing that other factors affecting the dynamics likely have a minor impact on the signal. Those can, however, be incorporated in our scheme as additional free parameters, if needed. This will eventually allow us to perform astrophysical inference from PTA measurements exploiting a model that self-consistently includes all the relevant physics defining the MBHB population. This improves upon the ’proof of principle’ type of analysis performed in [@ArzoumanianEtAl_NANOGRAV9yrData:2016], where limits on different model ingredients were placed by adding them individually to the model. For example, by assuming a standard $f^{-2/3}$ power-law, limits were placed on the MBH-host relation. Then a prior on the amplitude was assumed and an ad-hoc broken power law was constructed to put constraints on environmental coupling. Finally, the latter was put aside and eccentricity was added to the model to be constrained separately. Although this is a useful exercise, eventually all ingredients have to be considered at the same time to be meaningfully constrained, and our modelling takes a step in this direction. Second, the model is mostly analytical, involving only a few numerical integrals. The most computationally expensive operations, namely the integration of the MBHB orbital evolution and the summation over all the harmonics of the GW signal, are captured by the simple fitting formula given in equation (\[eq:hcfit\]), together with its simple scaling properties. Therefore, for a given set of parameters $\{{\rm log}_{10}(\dot{n}_0),\beta,z_*,\alpha,{\rm log}_{10}({\cal M}_*),e_t\}$ the GWB can be numerically computed within a few ms. 
This makes the model suitable for large parameter space exploration via parallel Markov Chain Monte Carlo or Nested Sampling searches. In a companion paper [@2017MNRAS.468..404C] we explore this possibility and demonstrate which MBHB parameters can be constrained from PTA observations, and with what accuracy.
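To illustrate how cheap a single spectrum evaluation can be, the sketch below implements a generic broken power-law characteristic strain with a low frequency turnover. The functional form, amplitude, turnover frequency $f_t$ and bending slope `kappa` are illustrative assumptions of a kind commonly used in the PTA literature, not the actual fitting formula of equation (\[eq:hcfit\]):

```python
import numpy as np

def hc_turnover(f, A=1e-15, f_t=1e-9, kappa=10.0 / 3.0, f_yr=1.0 / 3.15576e7):
    """Characteristic strain of a GWB following the standard f^(-2/3) power
    law at high frequency, suppressed below a turnover frequency f_t.
    All frequencies are in Hz; A is the amplitude at f = 1/yr.

    Illustrative parameterization only -- NOT the paper's fitting formula.
    """
    f = np.asarray(f, dtype=float)
    power_law = A * (f / f_yr) ** (-2.0 / 3.0)
    # The (f_t/f)^kappa term bends the spectrum over below f_t and
    # becomes negligible well above it.
    return power_law / np.sqrt(1.0 + (f_t / f) ** kappa)

# Evaluating the whole PTA band is a single vectorized call:
f = np.logspace(-10, -7, 50)
hc = hc_turnover(f)
```

Well above $f_t$ the curve tracks the pure $f^{-2/3}$ power law; well below it the strain is strongly suppressed, which is the spectral signature discussed in the text.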
Before closing we stress again that we consider the [*shape of the GWB only*]{}. Additional information about the MBHB population will be encoded in the statistical nature of this background (whether, for example, it is non-Gaussian, anisotropic, or non-stationary) and in the properties of individually resolvable sources. A comprehensive account of astrophysical inference from PTA observations will necessarily have to take all this combined information into account, and our current investigation is only the first step in this direction.
acknowledgements {#acknowledgements .unnumbered}
================
We acknowledge the support of our colleagues in the European Pulsar Timing Array. A.S. is supported by a University Research Fellowship of the Royal Society.
\[lastpage\]
[^1]: E-mail:
[^2]: E-mail:
[^3]: Note that $f_p$ does not coincide with the peak of the characteristic amplitude, as also clear from figure \[fig:specshift\].
|
{
"pile_set_name": "arxiv"
}
|
<import src="../../../common/head.wxml"/>
<import src="../../../common/foot.wxml"/>
<view class="container">
<template is="head" data="{{title: 'sendMessage'}}"/>
<view class="page-body">
<view class="weui-cells__title">发送内容(以下字段可自由适配)</view>
<view class="weui-cells weui-cells_after-title">
<view class="weui-cell weui-cell_input">
<view class="weui-cell__hd">
<view class="weui-label">实例字段</view>
</view>
<view class="weui-cell__bd">
<input class="weui-input" type="text" placeholder="请输入"></input>
</view>
</view>
<view class="weui-cell weui-cell_input">
<view class="weui-cell__hd">
<view class="weui-label">实例字段</view>
</view>
<view class="weui-cell__bd">
<input class="weui-input" type="text" placeholder="请输入"></input>
</view>
</view>
</view>
<view class="weui-cells">
<view class="weui-cell weui-cell_input">
<view class="weui-cell__hd">
<view class="weui-label">跳转链接</view>
</view>
<view class="weui-cell__bd">
<input class="weui-input" type="text" placeholder="请输入" value="{{shareData.path}}"></input>
</view>
</view>
</view>
<view class="btn-area">
<button type="primary">发送模板消息</button>
</view>
</view>
<template is="foot"/>
</view>
|
{
"pile_set_name": "github"
}
|
Buddha's Lost Children
Buddha's Lost Children is a 2006 documentary film by Dutch director Mark Verkerk. The feature film tells the story of Khru Bah Neua Chai Kositto, a Buddhist monk who has dedicated his life to orphaned children in the Golden Triangle area of Thailand. The film opened in Dutch cinemas in September 2006.
Awards
The film won the International Documentary Grand Jury Prize (2006) at the Los Angeles AFI Fest, the Jury Award for Documentary (2007) at the Newport Beach Film Festival, the Best Global Insight Film (2007) at the Jackson Hole Film Festival, the David L. Wolper Best Documentary Award (2007) at the Napa Sonoma Valley Film Festival, the City of Rome Award (2006) at the Asiaticafilmmediale in Rome, the Crystal Film (2006) at the Netherlands Film Festival, and the Silver Dove (2006) at Dok Leipzig.
External links
Category:2006 films
Category:Dutch films
Category:Thai-language films
Category:Documentary films about Buddhism
Category:Dutch documentary films
Category:Documentary films about orphanages
Category:2000s documentary films
|
{
"pile_set_name": "wikipedia_en"
}
|
---
abstract: 'The standing wave nodes of nonradial oscillations on a neutron star crust will drift with a definite angular velocity around the rotational pole due to the rotation of the neutron star. This is called the nonradial oscillation node precession of neutron stars. This article estimates the precession velocity and points out that, to first order approximation, it depends only on the star’s rotation velocity and the angular order $l$ of the spherical harmonic. If we suppose that oscillations affect the particles’ escape from the polar cap of a neutron star, so that the antinode and node areas of the standing waves have different radiative intensity, several notable conclusions follow from reviewing the observations of pulsars, which are taken to be neutron stars. For example, the drifting subpulse period $P_{3}$ can be obtained from the width of subpulses and the order $l$; larger drift velocities may produce the peak structure of average pulse profiles; and the dissimilar radiation between neighboring periods generated by the drift provides a reasonable explanation of the interpulses found in some pulsars.'
author:
- |
[Haochen Li]{}\
[Physics Department, Washington University]{}\
[St. Louis, MO 63143]{}
date: 'March 28, 2001'
title: The Nonradial Oscillation Node Precession of Neutron Stars
---
Introduction
============
####
Boriakoff (1976) detected quasi-periodic micropulsations within the subpulses of PSR 2016+28, and was inclined to interpret them as nonradial oscillations of neutron stars. In the pulsar polar cap model (Radhakrishnan and Cooke 1969) the radio pulse is produced by the coherent radiation of particles escaping from a certain surface area of the star (the polar cap) along the magnetic field lines. Because of the high particle velocity, the radiation is emitted in a narrow cone, the axis of which coincides with the velocity vector of the particles, which is tangential to the magnetic field lines. Since these are periodically distorted by the star’s vibration, the radiation cone will periodically change direction, switching the radio-pulse illumination of the observer on and off (modulation). Van Horn (1980) pointed out that rotating, magnetized neutron stars can support a rich variety of oscillation modes and first suggested a possible association of subpulse drift with torsional oscillations. The specific conditions of neutron stars are considered in calculating the first order approximation of the frequency split of the torsional oscillations in Section 2. Using this result, we discuss phenomena such as drifting subpulses, average pulse profiles and interpulses of pulsars in Section 3. Section 4 is the summary.
Theory of Neutron Star Oscillation Node Precession
==================================================
####
Ruderman (1968) first pointed out torsional oscillation modes of neutron star crusts. Hansen and Cioffi (1980) calculated the periods of these modes for a range of stellar models and found that those associated with fundamental modes have periods of around 20 ms. We can use this result to estimate the lowest frequency of torsional oscillation of neutron stars as$$\omega_{0}={\frac{2\pi}{20\,{\rm ms}}}=100\pi\, s^{-1}.$$ The rotation angular velocity of a neutron star can be taken as $2\pi\, s^{-1}$, so the ratio is $$\epsilon={\frac{\Omega}{\omega_{0}}}=0.02.$$ We see that although the angular velocity of neutron star rotation is much larger than that of common stars, it is still small compared with the frequency of self-oscillation. This suggests that the oscillation node precession theory established for other celestial bodies can also be applied to neutron stars (and also because the torsional oscillation is relatively insensitive to sphere models, see Van Horn 1980, and its results are simple). That is, we can treat the rotation as a perturbation when solving the sphere oscillation equations, just as Ledoux (1951) did for gaseous stars and MacDonald and Ness (1961) did for the Earth; the frequency of free oscillation of the sphere crust is then the sum of the undisturbed frequency and the perturbation frequency:$$\omega=\omega_{0}+\omega^{1}.$$ To first order approximation for torsional oscillations,$$\omega^{1}={\frac{m}{l(l+1)}}\Omega,$$ where $l$ and $m$ are integers denoting the angular orders of the spherical harmonic. As the theory of the oscillation of stars (Ledoux 1951) and of the Earth (MacDonald and Ness 1961) has noted, each value of $m$ has two travelling waves associated with it. In the case of the Earth one wave travels eastward, and its rate of travel is decreased by the angular velocity of the Earth's rotation; the other travels westward, and its rate is faster. 
The waves corresponding to neighboring values of $m$ have a relative angular velocity $${\frac{\Omega}{l(l+1)}}.$$ The combined effect is to produce a standing-wave pattern that, for a given value of $m$, moves westward with the angular velocity $${\frac{\Omega}{l(l+1)}}$$ of its nodes, which is well known in seismology as the node precession of oscillations. This is the result we use below, following the attempt that Van Horn (1980) made to connect the torsional oscillations of rotating neutron stars with the observed phenomena of pulsars.
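The estimates above reduce to a few lines of arithmetic. The sketch below encodes the rotational frequency split and the node precession rate, using the representative values from the text (rotation rate $2\pi\,s^{-1}$, fundamental torsional period 20 ms):

```python
import math

OMEGA = 2.0 * math.pi            # neutron-star rotation rate, rad/s (text value)
OMEGA0 = 2.0 * math.pi / 0.020   # lowest torsional frequency: 20 ms period -> 100*pi rad/s

def split_frequency(m, l, omega0=OMEGA0, Omega=OMEGA):
    """First-order rotationally split frequency: omega = omega0 + m*Omega/(l(l+1))."""
    return omega0 + m * Omega / (l * (l + 1))

def node_precession_rate(l, Omega=OMEGA):
    """Westward angular velocity of the standing-wave nodes: Omega/(l(l+1))."""
    return Omega / (l * (l + 1))

# The small parameter justifying the perturbative treatment:
epsilon = OMEGA / OMEGA0  # = 0.02
```

For $l=2$ the node pattern precesses at $\Omega/6$, i.e. one sixth of the stellar rotation rate, which is the scale exploited in Section 3.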
Discussion
==========
Drifting Subpulses
------------------
####
We suppose that the nodes and antinodes of the standing wave correspond respectively to those of the subpulse radiation pattern, i.e., that drifting subpulses reflect the node precession. Then the number of degrees of subpulse drift in one rotational period of a pulsar (Manchester and Taylor 1977) is $$D_{\phi}={\frac{\Omega}{l(l+1)}} P_{1}={\frac{360}{l(l+1)}},$$ where $P_{1}$ is the pulsar rotational period and 360 degrees of longitude equal one pulsar period. When $D_{\phi}$ is smaller than the width of subpulses (as the drifting subpulse observations indeed show; see Manchester and Taylor 1977), we obtain the subpulse drifting-band spacing $$P_{3}={\frac{\frac{P_{2}}{P_{1}}\times360}{D_{\phi}}}={\frac{l(l+1)}{\frac{P_{1}}{P_{2}}}}$$ (in units of $P_{1}$), where $P_{2}$ is the subpulse period (converted from degrees of longitude). We calculated the values of $P_{3}$ for several pulsars using the observational data from Van Horn (1980) and Wright and Fowler (1981). The results are listed in Table 1. Note that these are obtained with larger values of $l$, with which the values of $P_{3}$ increase. Several nearby values are listed in the table for comparison. The differences between theoretical and observational values are probably due to errors and to neglecting the coupling of several values of $l$. The different values of $P_{3}$ in one pulsar are attributed to mode switching between different $l$.
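The two formulas above are easy to check numerically. The sketch below evaluates them for PSR 1919+21, using the Table 1 values $P_1/P_2=89$ and $l=19$:

```python
def drift_per_period_deg(l):
    """Subpulse drift per rotation period, in degrees of longitude: 360/(l(l+1))."""
    return 360.0 / (l * (l + 1))

def p3_theory(p1_over_p2, l):
    """Drifting-band spacing P3 = l(l+1)/(P1/P2), in units of P1."""
    return l * (l + 1) / p1_over_p2

# PSR 1919+21 from Table 1: P1/P2 = 89, l = 19 -> P3 ~ 4.3 (observed: 4.2)
p3_1919 = p3_theory(89, 19)
```

The same call reproduces the other Table 1 entries, e.g. `p3_theory(21, 20)` gives 20 for PSR 1944+17, matching the observed value.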
Average Pulses
--------------
####
Theoretically we have no reason to believe that the drifting pace of subpulses is always small. For convenience we define the drifting rate as $$V={\frac{\frac{P_{1}}{P_{2}}}{l(l+1)}},$$ which represents the drift space (in units of $P_{2}$) in each rotational period of a pulsar. Because an integer drift space (integer $V$) cannot be observed, the practically observed rate $V'$ depends on the fractional part of $V$. For example, if $V=3/2$ or $1/2$, then $V'=1/2$; if $V=5/3$, then $V'=2/3$ or $-1/3$ (the minus sign represents the opposite drift direction). Here it is implied that ${\frac{P_{1}}{P_{2}}}$ is an integer, which follows from the fact that it represents the number of nodes of the standing wave along the longitude of the sphere. When $l=1$ or $2$ (the fundamental modes on which oscillation is most likely), it is easy to determine that $V'$ will frequently take values of $1/2$, $1/6$, $2/6$ (the same as $4/6$), etc. Unlike the smaller drifting pace discussed in Section 3.1, these values of $V'$ are too great to be detected as what we commonly call drift (we do not know whether PSR 2303+30, listed in Table 1, belongs to these small-$l$ modes). But in this situation subpulses will appear more frequently at several fixed positions in the general radiation window. Average pulse profiles simulated by a computer program through the addition of a great many drifting periods show peak structures, as displayed in Fig. 1.
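The reduction from $V$ to the observed rate $V'$ can be sketched with exact rational arithmetic:

```python
from fractions import Fraction

def drift_rate(p1_over_p2, l):
    """Drift space per rotation: V = (P1/P2)/(l(l+1)), in units of P2."""
    return Fraction(p1_over_p2, l * (l + 1))

def observed_rate(V):
    """Observed drift rate V': the fractional part of V, since an integer
    drift space is unobservable. Returned in [0, 1); V' - 1 gives the
    equivalent drift in the opposite direction."""
    return V - (V.numerator // V.denominator)

# The worked example from the text: V = 5/3 gives V' = 2/3, or -1/3
# when read as drift in the opposite direction.
v_prime = observed_rate(Fraction(5, 3))
```

Exact fractions avoid the floating-point ambiguity of values like $1/6$ and $2/6$ when many rotation periods are accumulated.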
Interpulses
-----------
####
If the pulses of pulsars embody the node precession of standing waves around the longitude of neutron stars, then according to the observed drifting pace $V'$ discussed above, there must be circumstances in which neighboring periods of pulses have different observational pictures, especially when the standing wavelength is longer than the general radiation window. For instance, once we see a pulse which is in fact a fraction of the standing wavelength (a node or its vicinity), then in the next period we see the antinode or its vicinity ($V'=1/2$ is very common); therefore pulses of different intensity occur alternately along with the integral periods of rotation. This gives a natural explanation of interpulses (Manchester and Taylor 1977). The weaker pulse will surely be inclined toward the nearby node (or antinode) area, which should have stronger radiation. That is why the separation between neighboring pulses is not exactly 180 degrees (Manchester and Taylor 1977). If this is true, it means that the real periods of the interpulse pulsars are only half of those we believe now.
Summary
=======
####
The free oscillation of spheres has been shown by theory and observation to be a very common phenomenon in the world of stars and planets. Many prior works have supposed that this happens on neutron stars, which have such great density and such rapid rotation (Cheng and Ruderman 1980; Harding and Tademaru 1981; McDermott, Van Horn, and Hansen 1988; Cordes, Weisberg, and Hankins 1990). Although the mechanism by which radiation is affected by oscillation has not been clearly discussed (which is obviously a very important problem), the theory gives a very natural explanation of drifting subpulse phenomena, the generalization of which reasonably gives clear pictures of pulsars' fundamental observational facts such as average pulse profiles, interpulses, etc. The theory also has great potential for the explanation of mode changing, micropulses, and glitches, which the author may discuss later.
####
It should be pointed out that although the theoretical values of $P_{3}$ obtained in Section 3.1 using larger $l$ are in good agreement with the observational values (see Table 1), this does not mean that the actual oscillation orders are always high. The lower modes (small $l$) are not considered in the calculation of $P_{3}$ because the larger scale drift contributes little observational effect (as can be seen from the discussion of Section 3.2). Actually it is most probable that more than one mode of oscillation is simultaneously sustained on the star crust, and the observed phenomena are only the coupling of these modes. In mode switching, the dominant precession rate changes sequentially. Further work is needed to determine the relationship between $l$, $m$ and the width of subpulses, so that we can know exactly the parameters of the star's oscillation and gain more detailed knowledge about a given pulsar.
####
I wish to thank Xinji Wu and Xiaofei Chen of Peking University for the helpful discussions.
Boriakoff, V. 1976, Ap.J. Lett., [**208**]{}, L43.
Cheng, A. F., and Ruderman, M. A. 1980, Ap.J., [**235**]{}, 576.
Cordes, J. M., Weisberg, J. M., and Hankins, T. H. 1990, A.J., [**100**]{}, 1882.
Hansen, C. J., and Cioffi, D. F. 1980, Ap.J., [**238**]{}, 740.
Harding, A. K., and Tademaru, E. 1981, Ap.J., [**243**]{}, 597.
Ledoux, P. 1951, Ap.J., [**114**]{}, 373.
MacDonald, G. J. F., and Ness, N. F. 1961, J.Geophys.Res., [**66**]{}, 1865.
McDermott, P. N., Van Horn, H. M., and Hansen, C. J. 1988, Ap.J., [**325**]{}, 725.
Manchester, R. N., and Taylor, J. H. 1977, Pulsars (San Francisco: Freeman).
Radhakrishnan, V., and Cooke, D. J. 1969, Ap.Lett., [**3**]{}, 225.
Ruderman, M. A. 1968, Nature, [**218**]{}, 1128.
Van Horn, H. M. 1980, Ap.J., [**236**]{}, 899.
Wright, G. A. E., and Fowler, L. A. 1981, in IAU Symposium 95, Pulsars, ed. W. Sieber and R. Wielebinski, p. 211.
Fig. 1. Sketch maps of average pulses formed by large-pace drifting. The abscissa is longitude and the ordinate is intensity; the numbers have only relative meaning. 1) is a single pulse, illustrating our assumption that there are only two Gaussian subpulses in one general pulse window, which matches the observational facts; 2)-3) are both average pulses with $V'=1$, differing in their original positions; 4)-6) have $V'=1/2$, $2/6$, $1/20$ respectively. The number of added periods is $10^{4}$ in all cases; the figures remain stable for larger numbers of added periods.
Table 1. The periods of drifting subpulses.
  PSR       $P_{1}$(s)   $P_{2}$(ms)   $P_{1}/P_{2}$   $l$   $P_{3}$(Theory)   $P_{3}$(Observation)
  --------- ------------ ------------- --------------- ----- ----------------- ----------------------
  1944+17   0.440        21            21              19    18
                                                       20    20                20
                                                       21    22
  0031-07   0.943        55            17              8     4.2
                                                       9     5.3
                                                       10    6.5               4.5
                                                       11    7.8               6.8
                                                       12    9.2               12.5
                                                       13    11
                                                       14    12
                                                       15    14
  0943+10   1.097        26            42              8     1.7               2.11
                                                       9     2.1               or
                                                       10    2.6               1.90
  0809+74   1.292        50            26              16    10
                                                       17    12                11.0
                                                       18    13
  1919+21   1.337        15            89              18    3.8
                                                       19    4.3               4.2
                                                       20    4.7
  0301+19   1.387        24            58              18    5.9
                                                       19    6.6               6.4
                                                       20    7.2
  2303+30   1.575        15            105             13    1.7
                                                       14    2.0               $\approx2$
                                                       15    2.3
  1237+25   1.38         41.0          33.7            8     2.14
                                                       9     2.67              $2.8\pm0.1$
                                                       10    3.26
|
{
"pile_set_name": "arxiv"
}
|
---
author:
- |
John Smith,$^{1\ast}$ Jane Doe,$^{1}$ Joe Scientist$^{2}$\
\
\
\
\
\
bibliography:
- 'scibib.bib'
title: 'A simple [*Science*]{} Template'
---
This document presents a number of hints about how to set up your [*Science*]{} paper in LaTeX. We provide a template file, `scifile.tex`, that you can use to set up the LaTeX source for your article. An example of the style is the special `{sciabstract}` environment used to set up the abstract you see here.
Introduction {#introduction .unnumbered}
============
In this file, we present some tips and sample mark-up to assure your LaTeX file of the smoothest possible journey from review manuscript to published [*Science*]{} paper. We focus here particularly on issues related to style files, citation, and math, tables, and figures, as those tend to be the biggest sticking points. Please use the source file for this document, `scifile.tex`, as a template for your manuscript, cutting and pasting your content into the file at the appropriate places.
[*Science*]{}’s publication workflow relies on Microsoft Word. To translate LaTeX files into Word, we use an intermediate MS-DOS routine [@tth] that converts the TeX source into HTML. The routine is generally robust, but it works best if the source document is clean LaTeX without a significant freight of local macros or `.sty` files. Use of the source file `scifile.tex` as a template, and calling [*only*]{} the `.sty` and `.bst` files specifically mentioned here, will generate a manuscript that should be eminently reviewable, and yet will allow your paper to proceed quickly into our production flow upon acceptance [@use2e].
Formatting Citations {#formatting-citations .unnumbered}
====================
Citations can be handled in one of three ways. The most straightforward (albeit labor-intensive) would be to hardwire your citations into your LaTeX source, as you would if you were using an ordinary word processor. Thus, your code might look something like this:
> However, this record of the solar nebula may have been
> partly erased by the complex history of the meteorite
> parent bodies, which includes collision-induced shock,
> thermal metamorphism, and aqueous alteration
> ({\it 1, 2, 5--7\/}).
Compiled, the last two lines of the code above, of course, would give notecalls in [*Science*]{} style:
> …thermal metamorphism, and aqueous alteration ([*1, 2, 5–7*]{}).
Under the same logic, the author could set up his or her reference list as a simple enumeration,
> {\bf References and Notes}
>
> \begin{enumerate}
> \item G. Gamow, {\it The Constitution of Atomic Nuclei
> and Radioactivity\/} (Oxford Univ. Press, New York, 1931).
> \item W. Heisenberg and W. Pauli, {\it Zeitschr.\ f.\
> Physik\/} {\bf 56}, 1 (1929).
> \end{enumerate}
yielding
> [**References and Notes**]{}
>
> 1. G. Gamow, [*The Constitution of Atomic Nuclei and Radioactivity*]{} (Oxford Univ. Press, New York, 1931).
>
> 2. W. Heisenberg and W. Pauli, [*Zeitschr. f. Physik*]{} [**56**]{}, 1 (1929).
>
That’s not a solution that’s likely to appeal to everyone, however — especially not to users of BibTeX [@inclme]. If you are a BibTeX user, we suggest that you use the `Science.bst` bibliography style file and the `scicite.sty` package, both of which are downloadable from our author help site. [**While you can use BibTeX to generate the reference list, please don’t submit your .bib and .bbl files; instead, paste the generated .bbl file into the .tex file, creating the `{thebibliography}` environment.**]{} You can also generate your reference lists directly by using `{thebibliography}` at the end of your source document; here again, you may find the `scicite.sty` file useful.
Whatever you use, be very careful about how you set up your in-text reference calls and notecalls. In particular, observe the following requirements:
1. Please follow the style for references outlined at our author help site and embodied in recent issues of [*Science*]{}. Each citation number should refer to a single reference; please do not concatenate several references under a single number.
2. The reference numbering continues from the main text to the Supplementary Materials (e.g. this main text has references 1-3; the numbering of references in the Supplementary Materials should start with 4).
3. Please cite your references and notes in text [*only*]{} using the standard LaTeX `\cite` command, not another command driven by outside macros.
4. Please separate multiple citations within a single `\cite` command using commas only; there should be [*no space*]{} between reference keynames. That is, if you are citing two papers whose bibliography keys are `keyname1` and `keyname2`, the in-text cite should read `\cite{keyname1,keyname2}`, [*not*]{} `\cite{keyname1, keyname2}`.
Failure to follow these guidelines could lead to the omission of the references in an accepted paper when the source file is translated to Word via HTML.
Handling Math, Tables, and Figures {#handling-math-tables-and-figures .unnumbered}
==================================
Following are a few things to keep in mind in coding equations, tables, and figures for submission to [*Science*]{}.
#### In-line math. {#in-line-math. .unnumbered}
The utility that we use for converting from LaTeX to HTML handles in-line math relatively well. It is best to avoid using built-up fractions in in-line equations, and going for the more boring “slash” presentation whenever possible — that is, for `$a/b$` (which comes out as $a/b$) rather than `$\frac{a}{b}$` (which compiles as $\frac{a}{b}$). Please do not code arrays or matrices as in-line math; display them instead. And please keep your coding as TeX-y as possible — avoid using specialized math macro packages like `amstex.sty`.
#### Tables. {#tables. .unnumbered}
The HTML converter that we use seems to handle simple tables generated using the LaTeX `{tabular}` environment reasonably well. For very complicated tables, you may want to consider generating them in a word processing program and including them as a separate file.
#### Figures. {#figures. .unnumbered}
Figure callouts within the text should not be in the form of LaTeX references, but should simply be typed in — that is, `(Fig. 1)` rather than `\ref{fig1}`. For the figures themselves, treatment can differ depending on whether the manuscript is an initial submission or a final revision for acceptance and publication. For an initial submission and review copy, you can use the LaTeX `{figure}` environment and the `\includegraphics` command to include your PostScript figures at the end of the compiled file. For the final revision, however, the `{figure}` environment should [*not*]{} be used; instead, the figure captions themselves should be typed in as regular text at the end of the source file (an example is included here), and the figures should be uploaded separately according to the Art Department’s instructions.
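The two treatments described above can be sketched as follows (the file name `fig1.eps` and the caption text are placeholders):

```latex
% Initial submission / review copy only (see above): figures may be
% included at the end of the compiled file.
\begin{figure}
  \centering
  \includegraphics[width=0.9\textwidth]{fig1.eps} % placeholder file name
  \caption{Caption text typed here for the review copy.}
\end{figure}
% Final revision: omit the {figure} environment entirely, type the
% caption as ordinary text at the end of the source file, and upload
% the figure files separately.
```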
What to Send In {#what-to-send-in .unnumbered}
===============
What you should send to [*Science*]{} will depend on the stage your manuscript is in:
- [**Important:**]{} If you’re sending in the initial submission of your manuscript (that is, the copy for evaluation and peer review), please send in [*only*]{} a PDF version of the compiled file (including figures). Please do not send in the TeX source, `.sty`, `.bbl`, or other associated files with your initial submission. (For more information, please see the instructions at our Web submission site.)
- When the time comes for you to send in your revised final manuscript (i.e., after peer review), we require that you include source files and generated files in your upload. [**The .tex file should include the reference list as an itemized list (see “Formatting citations” for the various options). The bibliography should not be in a separate file.**]{} Thus, if the name of your main source document is `ltxfile.tex`, you need to include:
- `ltxfile.tex`.
- `ltxfile.aux`, the auxiliary file generated by the compilation.
- A PDF file generated from `ltxfile.tex`.
Acknowledgments {#acknowledgments .unnumbered}
===============
Include acknowledgments of funding, any patents pending, where raw data for the paper are deposited, etc.
Supplementary materials {#supplementary-materials .unnumbered}
=======================
Materials and Methods\
Supplementary Text\
Figs. S1 to S3\
Tables S1 to S4\
References *(4-10)*
[**Fig. 1.**]{} Please do not use figure environments to set up your figures in the final (post-peer-review) draft, do not include graphics in your source code, and do not cite figures in the text using LaTeX`\ref` commands. Instead, simply refer to the figure numbers in the text per [*Science*]{} style, and include the list of captions at the end of the document, coded as ordinary paragraphs as shown in the `scifile.tex` template file. Your actual figure files should be submitted separately.
|
{
"pile_set_name": "arxiv"
}
|
LibreOffice Draw
LibreOffice Draw is a free and open source vector graphics editor. It is one of the applications included in the LibreOffice office suite, developed by The Document Foundation.
LibreOffice Draw can be used to create complicated figures using shape tools, straight and curved tools, polygon tools, among other features.
Like the other components of LibreOffice, Draw can be run on Linux, MacOS and Microsoft Windows.
LibreOffice Draw natively uses Open Document Format for Office Applications (ODF) (.odg graphics extension) which is an international standard file format established by the Organization for the Advancement of Structured Information Standards (OASIS).
Features
Draw can be used to make flowcharts, technical drawings, brochures, posters, photo galleries and albums.
Draw includes a spellchecker, autocorrect, thesaurus, hyphenation mode and color replacing. It has a gallery of shapes and drawings. It also supports macro execution with Java and extensions, and has configurable XML filter settings.
Import and export function
Import, edit, export PDFs
Import Microsoft Visio .vsd files
Import Microsoft Publisher .pub files.
Import from OTT, STW, OTH, OTS, STC, OTP, STI, OTG, STD and VOR formats
Export to BMP, EPS, GIF, JPEG, PNG, SVG, WMF, as well as create HTML, XHTML, PDF and SWF files
Reception
In a 2014 review, Elena Opris wrote in Softpedia, "The Good: LibreOffice's best features are applicable in all modules. Draw sports numerous options and configuration parameters for defining each part of the graphical plan as well as for the general elements in the interface (such as toolbars and keyboard shortcuts). The styles and formatting presets simplify the entire layout designing process. The document recovery feature usually comes to the rescue when experiencing system crashes." Opris noted, "The Bad: The program often takes a while to paste pictures as well as to load some features. It froze several times during our evaluation when inserting files with unsupported formats, which eventually led us to restarting Draw."
Writing in December 2017 for It's FOSS, Ankush Das, said, "LibreOffice Draw module is one of the best open source alternatives to Microsoft Visio. With the help of it, you can either choose to make a quick sketch of an idea or a complex professional floor plan for presentation. Flowcharts, organization charts, network diagrams, brochures, posters, and what not! All that without even requiring to spend a penny."
PAT Research described Draw in a 2018 review, "LibreOffice Draw is a powerful office and flowchart software that provides a clean interface and tools that enable users to unleash their creativity and enhance their productivity".
GoFree wrote, "it is simply amazing that a free vector graphics editor like this can deliver such professional results."
References
External links
Category:Free vector graphics editors
Category:LibreOffice
|
{
"pile_set_name": "wikipedia_en"
}
|
<?xml version="1.0" encoding="utf-8"?>
<packages>
<package id="Dapper" version="1.50.4-alpha1-00070" targetFramework="net452" />
<package id="Dapper.Contrib" version="1.50.0" targetFramework="net452" />
<package id="Dapper.Extension" version="1.0.0.1" targetFramework="net452" />
<package id="EntityFramework" version="6.1.3" targetFramework="net452" />
<package id="SyntacticSugar" version="2.4.1" targetFramework="net452" />
</packages>
|
{
"pile_set_name": "github"
}
|
---
abstract: 'Using semi-empirical isochrones, we find the age of the Taurus star-forming region to be 3-4 Myr. Comparing the disc fraction in Taurus to young massive clusters suggests discs survive longer in this low density environment. We also present a method of photometrically de-reddening young stars using $iZJH$ data.'
---
Introduction
============
Taurus is a low-density star-forming region containing primarily low-mass stars and so represents an ideal laboratory for studying the environmental effects on circumstellar disc lifetimes [@Kenyon2008 (Kenyon et al. 2008)]. To investigate the impact of the low-density environment on the discs in Taurus, we used the Wide-Field Camera (WFC) on the 2.5m Isaac Newton Telescope (INT) on La Palma to obtain $griZ$ photometry of 40 fields in Taurus. Our fields are focused on the densest regions not covered by the Sloan Digital Sky Survey. The resultant INT-WFC survey mainly covers the L1495, L1521 and L1529 clouds.
We have augmented the WFC data with near-infrared $JHK$ data from 2MASS [@Cutri2003 (Cutri et al. 2003)]. To determine the age of Taurus we compare the semi-empirical isochrones discussed in [@Bell2013; @Bell2014 Bell et al. (2013, 2014)] to the observed colour-magnitude diagrams (CMDs). For a brief description of these isochrones see Bell et al. (these proceedings).
De-reddening
============
The extinction in Taurus is spatially variable across the different clouds, and so we require a method of de-reddening the stars individually. We have found that in an $i$-$Z$, $J$-$H$ colour-colour diagram the reddening vectors are almost perpendicular to the theoretical stellar sequence (Fig.\[fig:izjh\_age\]), whose position is almost independent of age, and so we can de-redden stars using photometry alone. We construct a grid of models over a range of ages (1 to 10 Myr) and binary mass ratios (single star to equal mass binary). We adopt the reddening law from [@Fitzpatrick1999 Fitzpatrick (1999)], apply it to the atmospheric models and fold the result through sets of filter responses to derive reddening coefficients in each photometric system.
[0.45]{} ![**Left:** $i$-$Z$, $J$-$H$ diagram for Taurus members. Asterisks are Class II sources, open circles are Class III sources. Overlaid as solid lines are a 2 and 4Myr isochrone. The dashed lines are reddening vectors in this colour space. **Right:** $r$, $r$-$i$ diagram for Taurus members identified as Class III. Isochrones of 1, 4 and 10Myr are overlaid. Asterisks indicate the position of a theoretical star with mass 0.75M$_\odot$. The black dashed line shows a reddening vector for A$_V$ = 1 mag. []{data-label="fig:izjh_age"}](izjh.eps "fig:"){width="\textwidth" height="5.5cm"} \[fig:izjh\_ccd\]
[0.45]{} ![**Left:** $i$-$Z$, $J$-$H$ diagram for Taurus members. Asterisks are Class II sources, open circles are Class III sources. Overlaid as solid lines are a 2 and 4Myr isochrone. The dashed lines are reddening vectors in this colour space. **Right:** $r$, $r$-$i$ diagram for Taurus members identified as Class III. Isochrones of 1, 4 and 10Myr are overlaid. Asterisks indicate the position of a theoretical star with mass 0.75M$_\odot$. The black dashed line shows a reddening vector for A$_V$ = 1 mag. []{data-label="fig:izjh_age"}](age.eps "fig:"){width="\textwidth" height="5.5cm"} \[fig:age\]
It is well known that for a fixed value of E(B-V), extinction in a given filter will vary with [$T_{\mathrm{eff}}$]{} (see e.g. [@Bell2013 Bell et al. 2013]). To account for this we use extinction tables to redden the isochrones for a grid of E($B$-$V$) and [$T_{\mathrm{eff}}$]{} values, and compare the reddened model grid to the data. We adopt a Bayesian approach and marginalise over binary mass, age and [$T_{\mathrm{eff}}$]{}. We take the extinction values from the model with the highest likelihood, and use this to de-redden the star.
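A minimal numerical sketch of this grid-based marginalisation is given below. It is illustrative only: the grid ranges, the placeholder model colours, the photometric uncertainties and the Gaussian likelihood form are assumptions standing in for the actual reddened-isochrone pipeline.

```python
import numpy as np

# Hypothetical model grid: predicted (i-Z, J-H) colours for each
# (age, E(B-V), Teff) combination. Random placeholder values stand in
# for the reddened isochrone colours used in the real analysis.
ages = np.linspace(1.0, 10.0, 10)        # Myr
ebvs = np.linspace(0.0, 2.0, 41)         # mag
teffs = np.linspace(3000.0, 6000.0, 31)  # K

rng = np.random.default_rng(0)
model = rng.normal(0.3, 0.2, size=(ages.size, ebvs.size, teffs.size, 2))

obs = np.array([0.35, 0.40])    # observed (i-Z, J-H) of one star
sigma = np.array([0.05, 0.05])  # assumed photometric uncertainties

# Gaussian likelihood at every grid point ...
chi2 = (((model - obs) / sigma) ** 2).sum(axis=-1)
like = np.exp(-0.5 * chi2)

# ... marginalised over age and Teff, leaving the extinction dimension,
# from which the highest-likelihood E(B-V) is adopted for de-reddening.
post_ebv = like.sum(axis=(0, 2))
best_ebv = ebvs[np.argmax(post_ebv)]
```

The same pattern extends to the binary-mass-ratio axis by adding one more grid dimension and including it in the marginalisation.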
Taurus
======
Plotting the de-reddened Taurus members in the $r$, $r$-$i$ CMD, we notice that a significant fraction of the Class II objects appear much fainter than the primary locus. This is likely an accretion effect, and if we were to fit for the age of these members we would derive an age that is erroneously old. To avoid this effect, we fit only the Class III sources. We note that those Class II sources that are not scattered below the sequence lie coincident with the Class III sources, and thus we believe the age derived from the Class III sources alone will be representative of the overall age. We plot our de-reddened Taurus members in an $r$, $r$-$i$ CMD to fit for the age (Fig.\[fig:izjh\_age\]). We find that isochrones of 3-4 Myr (older than is commonly quoted in the literature) trace the observed stellar sequence well. To ensure consistency with the [@Bell2013 Bell et al. (2013)] age scale we compare the position of a theoretical star with a mass of 0.75 M$_\odot$ to the middle of the observed sequence. We find consistency with the overall isochrone fitting, with an age of 3-4 Myr still providing a good fit. With a robust age for Taurus we then examined the disc fraction. Taurus has a disc fraction of 69% [@Luhman2010 (Luhman et al. 2010)]. If we compare this to the other clusters in [@Bell2013 Bell et al. (2013)], which are on the same age scale, we find that Taurus has the largest disc fraction in the sample, significantly higher than the group of young (2 Myr), massive clusters, suggesting that discs may have survived longer in the low density environment present in Taurus.
Bell et al. 2013, *MNRAS*, 434, 806
Bell et al. 2014, *MNRAS*, 445, 3496
Cutri et al. 2003, *2MASS All Sky Catalog of Point Sources*
Fitzpatrick 1999, *PASP*, 111, 63
Kenyon et al. 2008, *Handbook of Star Forming Regions, Vol. 1*, p. 405
Luhman et al. 2010, *ApJS*, 186, 111
|
{
"pile_set_name": "arxiv"
}
|
/*
* TupleTypeUtil.java
*
* This source file is part of the FoundationDB open source project
*
* Copyright 2015-2019 Apple Inc. and the FoundationDB project authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.apple.foundationdb.record.metadata;
import com.apple.foundationdb.record.provider.foundationdb.FDBRecordVersion;
import com.apple.foundationdb.tuple.Tuple;
import com.google.protobuf.ByteString;
import com.google.protobuf.Descriptors;
import com.google.protobuf.ProtocolMessageEnum;
import javax.annotation.Nonnull;
import javax.annotation.Nullable;
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;
/**
* Utility class for dealing with {@link Tuple} types. In theory, these methods should live in
* {@link com.apple.foundationdb.tuple.TupleHelpers TupleHelpers} except that they use some Protobuf specific things
* like the {@link ByteString} class, and {@code TupleHelpers} is defined in the
* <a href="https://javadoc.io/doc/org.foundationdb/fdb-extensions/">fdb-extensions</a> sub-project
* which does not (and probably should not) take Protobuf as a dependency.
*/
class TupleTypeUtil {
@Nonnull
private static final BigInteger BIG_INT_MAX_LONG = BigInteger.valueOf(Long.MAX_VALUE);
@Nonnull
private static final BigInteger BIG_INT_MIN_LONG = BigInteger.valueOf(Long.MIN_VALUE);
/**
* Normalize a list of values so that it can be checked for equality with other lists sharing
* the same {@link Tuple} representation. In other words, it should be the case that:
*
* <pre> {@code
* toTupleEquivalentList(list1).equals(toTupleEquivalentList(list2))
* == Arrays.equals(Tuple.fromList(toTupleAppropriateList(list1)).pack(), Tuple.fromList(toTupleAppropriateList(list2)).pack())
* }</pre>
*
* <p>
* for any two lists {@code list1} and {@code list2}.
* </p>
*
* @param values the list of values to normalize
* @return a new list containing the normalized elements of {@code values}
*/
@Nonnull
static List<Object> toTupleEquivalentList(@Nonnull List<?> values) {
List<Object> tupleEquivalentList = new ArrayList<>(values.size());
for (Object o : values) {
tupleEquivalentList.add(toTupleEquivalentValue(o));
}
return tupleEquivalentList;
}
/**
* Normalize a value so that it compares equal to anything with the same {@link Tuple} representation.
* The value that is returned cannot necessarily be packed by a {@code Tuple} (for example,
* a <code>byte[]</code> is returned as a {@link ByteString}), but it does implement {@link Object#equals(Object)}
* and {@link Object#hashCode()}, so the value can be used in hash-based data structures like
* {@link java.util.HashSet HashSet}s and {@link java.util.HashMap HashMap}s. In other words, it should
* be the case that:
*
* <pre> {@code
* Objects.equals(toTupleEquivalentValue(value1), toTupleEquivalentValue(value2))
* == Arrays.equals(Tuple.from(value1).pack(), Tuple.from(value2).pack())
* }</pre>
*
* <p>
* for any two values {@code value1} and {@code value2}.
* </p>
*
* <p>
* This will only return {@code null} if {@link #toTupleAppropriateValue(Object)} would return {@code null}
* on the same input. If the object is already in a normalized form, the same object is returned.
* </p>
*
* @param obj the value to normalize
* @return a value that has the same representation when {@link Tuple}-encoded
*/
@Nullable
static Object toTupleEquivalentValue(@Nullable Object obj) {
if (obj == null || obj instanceof Key.Evaluated.NullStandin) {
return null;
} else if (obj instanceof List<?>) {
List<?> list = (List<?>)obj;
return toTupleEquivalentList(list);
} else if (obj instanceof Tuple) {
return toTupleEquivalentList(((Tuple)obj).getItems());
} else if (obj instanceof byte[]) {
return ByteString.copyFrom((byte[]) obj);
} else if ((obj instanceof Byte) || (obj instanceof Short) || (obj instanceof Integer)) {
return ((Number)obj).longValue();
} else if (obj instanceof BigInteger) {
BigInteger bigInt = (BigInteger)obj;
if (bigInt.compareTo(BIG_INT_MIN_LONG) >= 0 && bigInt.compareTo(BIG_INT_MAX_LONG) <= 0) {
return bigInt.longValue();
} else {
return bigInt;
}
} else if (obj instanceof ProtocolMessageEnum) {
return (long)((ProtocolMessageEnum)obj).getNumber();
} else if (obj instanceof Descriptors.EnumValueDescriptor) {
return (long)((Descriptors.EnumValueDescriptor)obj).getNumber();
} else if (obj instanceof FDBRecordVersion) {
return ((FDBRecordVersion)obj).toVersionstamp(false);
} else {
return obj;
}
}
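    // Illustrative example (not part of the original class): an Integer and a
    // Long with the same numeric value normalize to equal objects, mirroring
    // the fact that they have identical Tuple encodings:
    //   Objects.equals(toTupleEquivalentValue(7), toTupleEquivalentValue(7L))
    // holds, since both are normalized to the Long value 7, just as
    // Tuple.from(7).pack() and Tuple.from(7L).pack() produce the same bytes.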
/**
* Convert a list of values into items that can all be stored within a {@link Tuple}.
*
* @param values a list of values
* @return a new list with {@link Tuple}-encodable versions of the elements of {@code values}
*/
@Nonnull
static List<Object> toTupleAppropriateList(@Nonnull List<?> values) {
List<Object> tupleAppropriateList = new ArrayList<>(values.size());
for (Object o : values) {
tupleAppropriateList.add(toTupleAppropriateValue(o));
}
return tupleAppropriateList;
}
/**
* Convert a value into a type that can be stored within a {@link Tuple}.
*
* @param obj the value to convert
* @return the value converted to some {@link Tuple}-encodable type
*/
@Nullable
static Object toTupleAppropriateValue(@Nullable Object obj) {
if (obj instanceof Key.Evaluated.NullStandin) {
return null;
} else if (obj instanceof ByteString) {
return ((ByteString) obj).toByteArray();
} else if (obj instanceof List) {
return toTupleAppropriateList((List<?>) obj);
// Following two are both Internal.EnumLite, so could use that, too.
} else if (obj instanceof ProtocolMessageEnum) {
return ((ProtocolMessageEnum) obj).getNumber();
} else if (obj instanceof Descriptors.EnumValueDescriptor) {
return ((Descriptors.EnumValueDescriptor) obj).getNumber();
} else if (obj instanceof FDBRecordVersion) {
return ((FDBRecordVersion) obj).toVersionstamp(false);
} else {
return obj;
}
}
private TupleTypeUtil() {
}
}
|
{
"pile_set_name": "github"
}
|
I soon realised that Kathy and I had settled at the periphery of the rules and the order, separated categorically from the mystics and their task; we existed like stray animals sheltered in a monastery.
|
{
"pile_set_name": "pile-cc"
}
|
Debbie Gregory DNP, RN
Dr. Debbie Gregory is a national leader in healthcare design, innovation, and transformation. As a nurse executive and interior designer, Dr. Gregory is passionate about “Intentional Design” that aligns People, Place, and Process. She creates and transforms environments into functional ecosystems using complex systems science and strategic thinking.
Dr. Gregory holds a Doctorate of Nursing Practice in Health Innovation and Leadership from the University of Minnesota and a bachelor's degree in nursing from Vanderbilt University. Currently, she serves as Senior Clinical Consultant for the Technology Planning Group at Smith, Seckman, Reid, Inc., a national engineering firm.
In today’s healthcare environment, clinical transformation and innovation are essential in navigating and reengineering the care delivery model of the future. She serves as a liaison and visionary between the clinical community, the design and construction community, and the IT/engineering community to interpret and enhance the clinical operations and functionality of the healthcare environment. She develops strategies for operational and financial improvements designed to advance clinical excellence, improve quality of care, patient experience, and overall patient outcomes.
She is a frequent presenter at national conferences and has authored many articles in national publications. Dr. Gregory provides educational summits that bring healthcare leaders, technology experts, and visionaries together to discuss the future of the healthcare delivery model and the integration of technology.
She is the co-founder and past president of the Nursing Institute for Healthcare Design (NIHD) and current President of the Nursing Institute for Healthcare Design Foundation. NIHD is an international not for profit organization created from a need, an idea, and a passion to engage and include clinicians at the design table to improve the healthcare environment.
|
{
"pile_set_name": "pile-cc"
}
|
'use strict';
angular.module("ngLocale", [], ["$provide", function($provide) {
var PLURAL_CATEGORY = {ZERO: "zero", ONE: "one", TWO: "two", FEW: "few", MANY: "many", OTHER: "other"};
function getDecimals(n) {
n = n + '';
var i = n.indexOf('.');
return (i == -1) ? 0 : n.length - i - 1;
}
function getVF(n, opt_precision) {
var v = opt_precision;
if (undefined === v) {
v = Math.min(getDecimals(n), 3);
}
var base = Math.pow(10, v);
var f = ((n * base) | 0) % base;
return {v: v, f: f};
}
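  // For example (illustrative): getVF(1.5) yields {v: 1, f: 5} — one visible
  // fraction digit whose value is 5 — while getVF(1, 2) yields {v: 2, f: 0},
  // so pluralCat(1) is "one" but pluralCat(1.5) falls through to "other".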
$provide.value("$locale", {
"DATETIME_FORMATS": {
"AMPMS": [
"Dinda",
"Dilolo"
],
"DAY": [
"Lumingu",
"Nkodya",
"Nd\u00e0ay\u00e0",
"Ndang\u00f9",
"Nj\u00f2wa",
"Ng\u00f2vya",
"Lubingu"
],
"MONTH": [
"Ciongo",
"L\u00f9ishi",
"Lus\u00f2lo",
"M\u00f9uy\u00e0",
"Lum\u00f9ng\u00f9l\u00f9",
"Lufuimi",
"Kab\u00e0l\u00e0sh\u00ecp\u00f9",
"L\u00f9sh\u00eck\u00e0",
"Lutongolo",
"Lung\u00f9di",
"Kasw\u00e8k\u00e8s\u00e8",
"Cisw\u00e0"
],
"SHORTDAY": [
"Lum",
"Nko",
"Ndy",
"Ndg",
"Njw",
"Ngv",
"Lub"
],
"SHORTMONTH": [
"Cio",
"Lui",
"Lus",
"Muu",
"Lum",
"Luf",
"Kab",
"Lush",
"Lut",
"Lun",
"Kas",
"Cis"
],
"fullDate": "EEEE d MMMM y",
"longDate": "d MMMM y",
"medium": "d MMM y HH:mm:ss",
"mediumDate": "d MMM y",
"mediumTime": "HH:mm:ss",
"short": "d/M/y HH:mm",
"shortDate": "d/M/y",
"shortTime": "HH:mm"
},
"NUMBER_FORMATS": {
"CURRENCY_SYM": "FrCD",
"DECIMAL_SEP": ",",
"GROUP_SEP": ".",
"PATTERNS": [
{
"gSize": 3,
"lgSize": 3,
"maxFrac": 3,
"minFrac": 0,
"minInt": 1,
"negPre": "-",
"negSuf": "",
"posPre": "",
"posSuf": ""
},
{
"gSize": 3,
"lgSize": 3,
"maxFrac": 2,
"minFrac": 2,
"minInt": 1,
"negPre": "-",
"negSuf": "\u00a4",
"posPre": "",
"posSuf": "\u00a4"
}
]
},
"id": "lu-cd",
"pluralCat": function (n, opt_precision) { var i = n | 0; var vf = getVF(n, opt_precision); if (i == 1 && vf.v == 0) { return PLURAL_CATEGORY.ONE; } return PLURAL_CATEGORY.OTHER;}
});
}]);
|
{
"pile_set_name": "github"
}
|
Maroš Ferenc
Maroš Ferenc (born 19 February 1981 in Prešov) is a Slovak football goalkeeper who currently plays for 1. FC Tatran Prešov.
References
Category:1981 births
Category:Living people
Category:Slovak footballers
Category:Association football goalkeepers
Category:1. FC Tatran Prešov players
Category:AS Trenčín players
Category:MEAP Nisou players
Category:MFK Zemplín Michalovce players
Category:FC Eindhoven players
Category:Slovak Super Liga players
Category:Sportspeople from Prešov
|
{
"pile_set_name": "wikipedia_en"
}
|
Upper Lusatian Heath and Pond Landscape
The Upper Lusatian Heath and Pond Landscape (also ... District or ... Lake District) is a natural region in Saxony. It runs from a line between Wittichenau and Kamenz for roughly 60 kilometres in an east-west direction as far as the River Neisse. Its width between the bordering natural regions of the Upper Lusatian Gefilde and Eastern Upper Lusatia to the south and the Muskau Heath and Upper Lusatian Mining Region to the north is between 15 and 20 kilometres.
The landscape is a transition zone between the hilly southern part of Upper Lusatia and Lower Lusatia. Its central part takes in the Upper Lusatian Heath and Pond Landscape Biosphere Reserve, whose core zone is a nature reserve.
The region is part of the Saalian glaciation meltwater valley or urstromtal. Valley sands close to the groundwater level at heights of between 130 and 150 metres alternate with stretches of valley bottom over 500 metres wide and only a few metres lower. Dry areas lie next to waterlogged or even boggy areas.
Almost 10% of the area is made up of 335 ponds, which makes the Upper Lusatian Heath and Lake District the largest economically utilized pond region in Europe.
Part of the original landscape was destroyed by the brown coal open-cast mine around the Boxberg Power Station, however the pits left behind have been flooded and now form a new part of the countryside.
The potential natural vegetation (pnV) is birch and oak-pine woods, with alder and ash woods in the water meadows.
References
External links
Large scale grouping, subdivisions and overview map as part of the Upper Lusatia-Lower Silesia Planning Region, retrieved 11 March 2012
Category:Natural regions of Saxony
Category:Upper Lusatia
Category:Ponds of Germany
|
{
"pile_set_name": "wikipedia_en"
}
|
Volunteer Services
Volunteer Services
As Charleston Area Medical Center volunteers, our mission is to serve as support for patients, families and hospital staff, and to provide a caring, comforting and courteous environment.
Volunteers at CAMC bring their unique personalities and skills to our hospital. They range in age from 15 to 99. Our ranks are made up of men and women; students and retirees; homemakers and business people. Last year, 334 volunteers contributed over 36,000 hours to our hospitals and Cancer Center.
We are looking for volunteers who exemplify CAMC's core values of respect, integrity, stewardship, quality, service with compassion and safety. These volunteers will help us with our mission of "striving to provide the best health care to every patient, every day."
|
{
"pile_set_name": "pile-cc"
}
|
GT300
The GT300 may refer to:
A Super GT car category
The GT300 family of graphics processors from Nvidia
|
{
"pile_set_name": "wikipedia_en"
}
|
Gary Wade Finley
Gary Wade Finley Jr. is a multiple-time champion at Huntsville (AL) Speedway. Finley won the 1989 NASCAR Charlotte/Daytona Dash Series championship, and won the series' Daytona 200 event that year.
Finley arrived at Nashville Speedway USA in 1999 to compete in the track's NASCAR SuperTruck division. The speedway was promoted by Alabama's Bob Harmon. At season's end, Finley claimed both the rookie title and the championship for the 1999 season.
Finley is married to Misty Rose Finley and, for the 2016 season, heads up the teams of his two sons. Garrett Finley drives in the open-wheel modified division and actively competes at Huntsville (AL) Speedway. The younger son, Austin, competes at Fairgrounds Speedway Nashville in the limited late model division; the 2016 season is his rookie season.
References
Category:Living people
Category:Year of birth missing (living people)
|
{
"pile_set_name": "wikipedia_en"
}
|
---
address:
- 'Department of Mathematics, University of Michigan, East Hall, 525 East University Avenue, Ann Arbor, MI 48109-1109, USA'
- 'Department of Mathematics, University of Illinois at Chicago, 851 S. Morgan St., M/C. 249, Chicago, IL 60607-7045, USA'
- 'Department of Mathematics, Harvard University, 1 Oxford Street, Cambridge, MA 02138, USA'
author:
- Tommaso de Fernex
- Lawrence Ein
- Mircea Mustaţǎ
title: Bounds for log canonical thresholds with applications to birational rigidity
---
Introduction {#introduction .unnumbered}
============
Let $X$ be a smooth algebraic variety, defined over an algebraically closed field of characteristic zero, and let $V \subset X$ be a proper closed subscheme. Our main goal in this paper is to study an invariant of the pair $(X,V)$, called the log canonical threshold of $X$ along $V$, and denoted by $\operatorname{lc}(X,V)$. Interest in bounds for log canonical thresholds is motivated by techniques that have recently been developed in higher dimensional birational geometry. In this paper, we study this invariant using intersection theory, degeneration techniques and jet schemes.
A natural question is how does this invariant behave under basic operations such as restrictions and projections. Restriction properties have been extensively studied in recent years, leading to important results and conjectures. In the first section of this paper, we investigate the behavior under projections, and we prove the following result (see Theorem \[thm1\] for a more precise statement):
\[thm1-intro\] With the above notation, suppose that $V$ is Cohen-Macaulay, of pure codimension $k$, and let $f : X \to Y$ be a proper, dominant, smooth morphism of relative dimension $k-1$, with $Y$ smooth. If $f|_V$ is finite, then $$\operatorname{lc}(Y,f_*[V]) \leq \frac{k! \. \operatorname{lc}(X,V)^k}{k^k},$$ and the inequality is strict if $k\geq 2$. Moreover, if $V$ is locally complete intersection, then $$\operatorname{lc}(Y,f_*[V]) \leq \frac{\operatorname{lc}(X,V)^k}{k^k}.$$
Examples show that these bounds are sharp. The proof of the above theorem is based on a general inequality relating the log canonical threshold of a fractional ideal of the form $h^{-b}\. \a$, and the colength of $\a$. Here $\a$ is a zero dimensional ideal in the local ring of $X$ at some (not necessarily closed) point, $b\in{\mathbb Q}_+$, and $h$ is the equation of a smooth divisor. We prove this inequality in the second section (see Theorem \[l(a)-e(a)\]), using a degeneration to monomial ideals. It generalizes a result from [@DEM], which was the case $b=0$.
In the third section, we give lower bounds for the log canonical threshold of affine subschemes defined by homogeneous equations of the same degree. We prove the following
Let $V\subset X=\A^n$ be a subscheme defined by homogeneous equations of degree $d$. Let $c=\operatorname{lc}(\A^n, V)$, and let $Z$ be the non log terminal locus of $(\A^n, c\. V)$. If $e=\operatorname{codim}(Z,\A^n)$, then $$\operatorname{lc}(\A^n,V) \ge \frac{e}d.$$ Moreover, we have equality if and only if the following holds: $Z$ is a linear subspace, and if $\pi : \A^n\longrightarrow\A^n/Z$ is the projection, then there is a subscheme $V'\subset\A^n/Z$ such that $V=\pi^{-1}(V')$, $\operatorname{lc}(\A^n/Z,V')=e/d$, and the non log terminal locus of $(\A^n/Z,(e/d)\. V')$ is the origin.
The proof of this result is based on the characterization of the log canonical threshold via jet schemes from [@Mu2]. In the particular case when $V$ is the affine cone over a projective hypersurface with isolated singularities, the second assertion in the above result proves a conjecture of Cheltsov and Park from [@CP].
In the last section we apply the above bounds in the context of birational geometry. In their influential paper [@IM], Iskovskikh and Manin proved that a smooth quartic threefold is what is nowadays called birationally superrigid; in particular, every birational automorphism is regular, and the variety is not rational. There has been a lot of work to extend this result to other Fano varieties of index one, in particular to smooth hypersurfaces of degree $N$ in $\P^N$, for $N>4$. The case $N=5$ was done by Pukhlikov in [@Pu2], and the cases $N=6,7,8$ were proven by Cheltsov in [@Ch2]. Moreover, Pukhlikov showed in [@Pu5] that a general hypersurface as above is birationally superrigid, for every $N>4$. We use our results to give an easy and uniform proof of birational superrigidity for arbitrary smooth hypersurfaces of degree $N$ in $\P^N$ when $N$ is small.
\[thm3\_introd\] If $X\subset{\mathbb P}^N$ is a smooth hypersurface of degree $N$, and if $4\leq N\leq 12$, then $X$ is birationally superrigid.
Based on previous ideas of Corti, Pukhlikov proposed in [@Pu1] a proof of the birational rigidity of every smooth hypersurface of degree $N$ in $\P^N$, for $N\geq 6$. Unfortunately, at the moment there is a gap in his arguments (see Remark \[gap\] below). Despite this gap, the proof proposed in [@Pu1] contains many remarkable ideas, and it seems likely that a complete proof could be obtained in the future along those lines. In fact, the outline of the proof of Theorem \[thm3\_introd\] follows his method, and our contribution mainly consists in simplifying and solidifying his argument.
Acknowledgements {#acknowledgements .unnumbered}
----------------
We are grateful to Steve Kleiman and Rob Lazarsfeld for useful discussions. Research of the first author was partially supported by MURST of Italian Government, National Research Project (Cofin 2000) “Geometry of Algebraic Varieties”. Research of the second author was partially supported by NSF Grant DMS 02-00278. The third author served as a Clay Mathematics Institute Long-Term Prize Fellow while this research has been done.
Singularities of log pairs under projections
============================================
Let $X$ be a smooth algebraic variety, defined over an algebraically closed field of characteristic zero, and let $V \subset X$ be a proper subscheme. For any rational number $c > 0$, we can consider the pair $(X,c\. V)$. The usual definitions in the theory of singularities of pairs, for which we refer to [@Ko], extend to this context. In particular, we say that an irreducible subvariety $C \subset X$ is a center of non log canonicity (resp. non log terminality, non canonicity, non terminality) for $(X,c\. V)$ if there is at least one divisorial valuation of $K(X)$, with center $C$ on $X$, whose discrepancy along $(X,c\.V)$ is $<-1$ (resp. $\le -1$, $<0$, $\le 0$). We will denote by $\operatorname{lc}(X,V)$ the log canonical threshold of the pair $(X,V)$, i.e., the largest $c$ such that $(X,c\. V)$ is log canonical. We will occasionally consider also pairs of the form $(X,c_1\. V_1-c_2\.V_2)$, where $V_1$, $V_2\subset X$ are proper subschemes of $X$. The definition of (log) terminal and canonical pairs extends in an obvious way to this setting.
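For a basic illustration of these invariants, recall the standard example of the cuspidal cubic: if $V\subset\A^2$ is defined by $x^2+y^3=0$, then the monomial valuation with weights $(3,2)$ on $(x,y)$ has log discrepancy $3+2=5$ over $\A^2$ and takes the value $6$ on $x^2+y^3$, and this valuation computes the threshold, so that $$\operatorname{lc}(\A^2,V)=\frac{5}{6},$$ whereas $\operatorname{lc}(X,V)=1$ whenever $V$ is a smooth hypersurface in $X$.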
We fix now the set-up for this section. Let $f : X \to Y$ be a smooth and proper morphism onto a smooth algebraic variety $Y$. We assume that $V\subset X$ is a pure dimensional, Cohen-Macaulay closed subscheme, such that $\dim V = \dim Y - 1$, and such that the restriction of $f$ to $V$ is finite. If $[V]$ denotes the cycle associated to $V$, then its push-forward $f_*[V]$ determines an effective Cartier divisor on $Y$. We set $\operatorname{codim}(V,X)=k$.
\[thm1\] With the above notation, let $C \subset X$ be an irreducible center of non log terminality for $(X,c\. V)$, for some $c>0$. Then $f(C)$ is a center of non log terminality (even non log canonicity, if $k\geq 2$) for the pair $$\label{gen_formula}
\( Y, \frac{k! \. c^k}{k^k} \. f_*[V] \).$$ Moreover, if $V$ is locally complete intersection (l.c.i. for short) then $f(C)$ is a center of non log terminality for the pair $$\label{lci_formula}
\( Y, \frac{c^k}{k^k} \. f_*[V] \).$$
Let $k$ and $n$ be two positive integers with $n > k$, and let $R = K[x_k,\dots,x_n]$. We take $X = \P^{k-1}_R = \operatorname{Proj}R[x_0,\dots,x_{k-1}]$, $Y = \operatorname{Spec}R$, and let $f$ be the natural projection from $X$ to $Y$. For any $t>0$, let $V_t$ be the subscheme of $X$ defined by the homogeneous ideal $(x_1,\dots,x_k)^t$. Note that $\operatorname{lc}(X,V_t) = k/t$, and that if $c=k/t$, then $V_1$ is a center of non log terminality for $(X,c\. V_t)$. Since $l(\O_{V_t,V_1})=\binom{k+t-1}{k}$, we see that $$\lim_{t\to\infty}\frac{k! \. c^k/k^k}{\operatorname{lc}(Y,f_*[V_t])}
=\lim_{t\to\infty}\frac{t(t+1)\ldots(t+k-1)}{t^k}=1,$$ so the bound in (\[gen\_formula\]) is sharp (at least asymptotically).
To prove sharpness in the l.c.i. case, let $W_t\subset X$ be the complete intersection subscheme defined by $(x_1^t,\dots,x_k^t)$. This time $l(\O_{W_t,W_1}) = t^k$, and $\operatorname{lc}(Y,f_*[W_t]) = 1/t^k = \operatorname{lc}(X,W_t)^k/k^k$.
By hypothesis, there is a proper birational morphism $\n : W \to X$, where $W$ can be chosen to be smooth, and a smooth irreducible divisor $E$ on $W$, such that $\n(E) = C$, and such that the discrepancy of $(X,c\. V)$ at $E$ is $$\label{eq1}
a_E(X,c\. V) \le -1.$$ The surjection $f$ induces an inclusion of function fields $f^* : K(Y) \inj K(X)$. Let $R_E:=\O_{W,E}\subset K(X)$ be the discrete valuation ring associated to the valuation along $E$, and let $R = (f^*)^{-1}R_E$. Note that $R$ is a non-trivial discrete valuation ring.
$R$ corresponds to a divisorial valuation.
It is enough to show that the transcendence degree of the residue field of $R$ over the ground field is $\dim Y-1$ (see [@KM], Lemma 2.45). This follows from [@ZS], VI.6, Corollary 1.
The lemma implies that there is a proper birational morphism $\g : Y' \to Y$ and an irreducible divisor $G$ on $Y'$ such that $R=\O_{Y',G}$. By Hironaka’s theorem, we may assume that both $Y'$ and $G$ are smooth, and moreover, that the union between $G$ and the exceptional locus of $\g$ has simple normal crossings. Since the center of $R_E$ on $X$ is $C$, we deduce that $R$ has center $f(C)$ on $Y$, so $\g(G) = f(C)$.
Consider the fibered product $X' = Y' \times_Y X$. We may clearly assume that $\n$ factors through the natural map $\f : X' \to X$. Therefore we have the following commutative diagram: $$\xymatrix{
W \ar[r]^{\e} & X' \ar[d]_g \ar[r]^{\f} & X \ar[d]^f \\
&Y' \ar[r]^{\g} & Y,
}$$ where $\phi\circ\eta=\nu$. Note that $X'$ is a smooth variety, $g$ is a smooth, proper morphism, and $\e$ and $\f$ are proper, birational morphisms. Let $V' = \f^{-1}(V)$ be the scheme theoretic inverse image of $V$ in $X'$, i.e., the subscheme of $X'$ defined by the ideal sheaf $I_V \. \O_{X'}$.
\[lem1\] $V'$ is pure dimensional, $\operatorname{codim}(V',X')=k$, and $\f^*[V]$ is the class of $[V']$. Moreover, if $V$ is l.c.i., then so is $V'$.
Note that both $\gamma$ and $\phi$ are l.c.i. morphisms, because they are morphisms between smooth varieties. The pull-back in the statement is the pull-back by such a morphism (see [@Fulton], Section 6.6). Recall how this is defined. We factor $\gamma$ as $\gamma_1\circ\gamma_2$, where $\gamma_1 : Y'\times Y\longrightarrow Y$ is the projection, and $\gamma_2 : Y'\hookrightarrow Y'\times Y$ is the graph of $\gamma$. By pulling-back, we get a corresponding decomposition $\phi=\phi_1\circ\phi_2$, with $\phi_1$ smooth, and $\phi_2 : X'\hookrightarrow Y'\times X$ a regular embedding of codimension $\dim Y'$. Then $\phi^*[V]=\phi_2^!([Y'\times V])$.
Since $f|_V$ is finite and $V' = Y'\times_Y V$, $g|_{V'}$ is also finite. Moreover, since $g(V')$ is a proper subset of $Y'$, we see that $\dim V' \le \dim Y'-1$. On the other hand, $V'$ is locally cut in $Y'\times V$ by $\dim\,Y'$ equations, so that every irreducible component of $V'$ has dimension at least $\dim\,V$. Therefore $V'$ is pure dimensional, and $\dim\,V'=\dim\,V$.
Since $Y'\times V$ is Cohen-Macaulay, this also implies that $\phi_2^!([Y'\times V])$ is equal to the class of $[V']$, by Proposition 7.1 in [@Fulton]. This proves the first assertion. Moreover, if $V$ is l.c.i., then it is locally defined in $X$ by $k$ equations. The same is true for $V'$, hence $V'$ is l.c.i., too.
We will use the following notation for multiplicities. Suppose that $W$ is an irreducible subvariety of a variety $Z$. Then the multiplicity of $Z$ along $W$ is denoted by $e_WZ$ (we refer to [@Fulton], Section 4.3, for definition and basic properties). If $\alpha=\sum_in_i[T_i]$ is a pure dimensional cycle on $Z$, then $e_W\alpha:=\sum_in_ie_WT_i$ (if $W\not\subseteq T_i$, then we put $e_WT_i=0$). Note that if $W$ is a prime divisor, and if $D$ is an effective Cartier divisor on $Z$, then we have $e_W[D]={\rm ord}_W(D)$, where $[D]$ is the cycle associated to $D$, and ${\rm ord}_W(D)$ is the coefficient of $W$ in $[D]$. As we work on smooth varieties, from now on we will identify $D$ with $[D]$.
Let $F = \e(E)$. Note that by construction, we have $g(F)=G$. Since $F\subseteq V'$, and $g|_{V'}$ is finite, and $\dim\,G=\dim\,V'$, it follows that $F$ is an irreducible component of $V'$, hence $\operatorname{codim}(F, X')=k$. We set $a = e_F(K_{X'/X})$.
To simplify the statements, we put $$\d =
\begin{cases}
1 &\text{if $V$ is l.c.i.,} \\
k! &\text{otherwise.}
\end{cases}$$
\[lem2\] We have $$\operatorname{ord}_G(\g^* f_*[V]) \geq \frac{(a + 1)k^k}{\d c^k},$$ and the inequality is strict in the case $\delta=k!$, if $k\geq 2$.
Since $\f$ and $\g$ are l.c.i. morphisms of the same relative dimension, it follows from [@Fulton], Example 17.4.1, and Lemma \[lem1\] that $g_*[V']$ and $\g^*f_*[V]$ are linearly equivalent, as divisors on $Y'$. As the two divisors are equal outside the exceptional locus of $\g$, we deduce from the Negativity Lemma (see [@KM], Lemma 3.39) that also their $\g$-exceptional components must coincide. This gives $g_*[V'] = \g^*f_* [V]$.
In particular, ${\rm ord}_G(\g^*f_*[V])$ is greater than or equal to the coefficient of $F$ in $[V']$. Lemma \[lem1\] implies $$\operatorname{ord}_G(\g^* f_*[V]) \geq l(\O_{V',F}),$$ so that it is enough to show that $$\label{lem2-eq}
l(\O_{V',F}) \geq \frac{(a + 1)k^k}{\d c^k},$$ and that the inequality is strict in the case $\delta=k!$, if $k\geq 2$.
By replacing $W$ with a higher model, we may clearly assume that $\n^{-1}(V)$ is an effective divisor on $W$. If $I_V\subseteq\O_X$ is the ideal defining $V$, then we put $\operatorname{ord}_E(I_V):=\operatorname{ord}_E\n^{-1}(V)$. It follows from (\[eq1\]) that we have $$-1\geq \operatorname{ord}_E(K_{W/X}) - c\.\operatorname{ord}_E(I_V) = \operatorname{ord}_E(K_{W/X'}) -
(c\.\operatorname{ord}_E(I_{V'})-\operatorname{ord}_E(K_{X'/X})).$$ Therefore $F$ is a center of non log terminality for the pair $(X',c\. V' - K_{X'/X})$. Since $g(F)=G$ is a divisor on $Y'$, it follows that $F$ cannot be contained in the intersection of two distinct $\phi$-exceptional divisors. Hence the support of $K_{X'/X}$ is smooth at the generic point of $F$. Then (\[lem2-eq\]) follows from Theorem \[l(a)-e(a)\] below (note that the length of a complete intersection ideal coincides with its Samuel multiplicity).
We continue the proof of Theorem \[thm1\]. Note that $\operatorname{ord}_G K_{Y'/Y} \leq e_F(g^* K_{Y'/Y})$. Since $K_{X'/X} = g^* K_{Y'/Y}$ (see [@Hartshorne], Proposition II 8.10), we deduce $$\operatorname{ord}_G K_{Y'/Y} \leq a.$$ In conjunction with Lemma \[lem2\], this gives $$a_G\left(Y,\frac{\delta c^k}{k^k}f_*[V]\right)=
\operatorname{ord}_G \( K_{Y'/Y} - \frac{\d c^k}{k^k}\. \g^* f_*[V] \) \leq -1.$$ Moreover, this inequality is strict in the case when $\d=k!$, if $k\geq 2$. This completes the proof of Theorem \[thm1\].
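Explicitly, the concluding estimate is obtained by substituting the two bounds above: $$\operatorname{ord}_G \Big( K_{Y'/Y} - \frac{\delta c^k}{k^k}\cdot \gamma^* f_*[V] \Big) \leq a - \frac{\delta c^k}{k^k}\cdot\frac{(a + 1)k^k}{\delta c^k} = a-(a+1)=-1.$$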
We refer to [@Pu1] for a result on the canonical threshold of complete intersection subschemes of codimension 2, via generic projection.
Multiplicities of fractional ideals
===================================
In this section we extend some of the results of [@DEM], as needed in the proof of Theorem \[thm1\]. More precisely, we consider the following set-up. Let $X$ be a smooth variety, $V\subset X$ a closed subscheme, and let $Z$ be an irreducible component of $V$. We denote by $n$ the codimension of $Z$ in $X$, and by $\a \subset\O_{X,Z}$ the image of the ideal defining $V$. Let $H \subset X$ be a prime divisor containing $Z$, such that $H$ is smooth at the generic point of $Z$. We consider the pair $$(X, V-b\cdot H),$$ for a given $b\in{\mathbb Q}_+$.
\[l(a)-e(a)\] With the above notation, suppose that for some $\mu\in{\mathbb Q}_+^*$, $(X,\frac{1}{\mu}(V-b\cdot H))$ is not log terminal at the generic point of $Z$. Then $$\label{l(a)}
l(\O_{X,Z}/\a)\geq\frac{n^n \m^{n-1}(\m + b)}{n!},$$ and the inequality is strict if $n\geq 2$. Moreover, if $e(\a)$ denotes the Samuel multiplicity of $\O_{X,Z}$ along $\a$, then $$\label{e(a)}
e(\a) \ge n^n \m^{n-1}(\m+b).$$
For $n=2$, inequality (\[e(a)\]) gives a result of Corti from [@Co2]. On the other hand, if $b=0$, then the statement reduces to Theorems 1.1 and 1.2 in [@DEM].
We see that (\[l(a)\]) implies (\[e(a)\]) as follows. If we apply the first formula to the subscheme $V_t\subseteq X$ defined by $\a^t$, to $\mu_t=\mu t$, and to $b_t=bt$, we get $$l(\O_{X,Z}/\a^t)\geq\frac{n^n\mu^{n-1}(\mu+b)}{n!}t^n.$$ Dividing by $t^n$ and passing to the limit as $t \to \infty$ gives (\[e(a)\]).
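Here we use the defining asymptotics of the Samuel multiplicity, namely $e(\a)=\lim_{t\to\infty}n!\,l(\O_{X,Z}/\a^t)/t^n$, so that $$e(\a)\geq\lim_{t\to\infty}\frac{n!}{t^n}\cdot\frac{n^n\mu^{n-1}(\mu+b)}{n!}\,t^n=n^n\mu^{n-1}(\mu+b).$$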
In order to prove (\[l(a)\]), we proceed as in [@DEM]. Passing to the completion, we obtain an ideal $\^\a$ in $\^ \O_{X,Z}$. We identify $\^ \O_{X,Z}$ with $K[[x_1,\dots,x_n]]$ via a fixed isomorphism, where $K$ is the residue field of $\O_{X,Z}$. Moreover, we may choose the local coordinates so that the image of an equation $h$ defining $H$ in $\O_{X,Z}$ is $x_n$. Since $\^\a$ is zero dimensional, we can find an ideal $\bb \subset R = K[x_1,\dots,x_n]$, which defines a scheme supported at the origin, and such that $\^\bb = \^\a$.
If $V'$, $H'\subset{\mathbb A}^n$ are defined by $\bb$ and $x_n$, respectively, then $({\mathbb A}^n,\frac{1}{\mu}(V'-b\. H'))$ is not log terminal at the origin. We write $\mu = r/s$, for some $r,s \in \N$, and we may clearly assume that $sb\in\N$. Consider the ring $S = K[x_1,\dots,x_{n-1},y]$, and the inclusion $R\subseteq S$ which takes $x_n$ to $y^r$. This determines a cyclic covering of degree $r$ $$M := \operatorname{Spec}S \to N := {\mathbb A}^n=\operatorname{Spec}R,$$ with ramification divisor defined by $(y^{r-1})$.
For any ideal $\cc\subset R$, we put $\tilde\cc:=\cc S$. If $W$ is the scheme defined by $\cc$, then we denote by $\widetilde{W}$ the scheme defined by $\tilde\cc$. In particular, if $H''\subset M$ is defined by $(y)$, then $\widetilde{H'}=rH''$. It follows from [@ein1], Proposition 2.8 (see also [@Laz], Section 9.5.E) that $(N,\frac{1}{\mu}(V'-b\.H'))$ is not log terminal at the origin in $N$ if and only if $(M,\frac{1}{\mu}\cdot\widetilde{V'}-(sb+r-1)H'')$ is not log terminal at the origin in $M$.
We write the rest of the proof in the language of multiplier ideals, for which we refer to [@Laz]. We use the formal exponential notation for these ideals. If $\tilde\bb$ is the ideal defining $\widetilde{V'}$, then the above non log terminality condition on $M$ can be interpreted as saying that $$\label{J}
y^{bs+r-1} \not \in \J(\tilde \bb^{1/\mu}).$$
We choose a monomial order in $S$, with the property that $$x_1 > \dots > x_{n-1} > y^{bs+r-1}.$$ This induces flat deformations to monomial ideals (see [@Eisenbud], Chapter 15). For an ideal $\dd \subseteq S$, we write the degeneration as $\dd_t \to \dd_0$, where $\dd_t \cong \dd$ for $t \ne 0$ and $\dd_0 =: \operatorname{in}(\dd)$ is a monomial ideal.
We claim that $$\label{in(J)}
y^{bs+r-1} \not \in \operatorname{in}(\J(\tilde \bb^{1/\mu})).$$ Indeed, suppose that $y^{bs+r-1} \in \operatorname{in}(\J(\tilde \bb^{1/\mu}))$. Then we can find an element $f \in \J(\tilde \bb^{1/\mu})$ such that $\operatorname{in}(f) = y^{bs+r-1}$. Because of the particular monomial order we have chosen, $f$ must be a polynomial in $y$ of degree $bs+r-1$. On the other hand, $\J(\tilde \bb^{1/\mu})$ defines a scheme which is supported at the origin (or empty), since so does $\tilde \bb$. We deduce that $y^i\in\J(\tilde\bb^{1/\mu})$, for some $i\leq bs+r-1$, which contradicts (\[J\]).
\[in(J(c))vJ(in(c))\] For every ideal $\dd \subseteq S$, and every $c\in{\mathbb Q}_+^*$, we have $$\operatorname{in}(\J(\dd^c)) \supseteq \J(\operatorname{in}(\dd)^c).$$
Consider the family $\pi : \MM = \A^n \times T \to T$, with $T = \A^1$, and the ideal $\DDD\subset\O_{\MM}$ corresponding to the degeneration of $\dd$ described above. If $U$ is the complement of the origin in $T$, then there is an isomorphism $$(\pi^{-1}(U), \DDD\vert_{\pi^{-1}(U)})\simeq ({\mathbb A}^n\times U,
{\rm pr}_1^{-1}\dd).$$
Via this isomorphism we have $\J(\pi^{-1}(U),\DDD^c)
\simeq {\rm pr}_1^{-1}(\J(\dd^c))$. Since the family degenerating to the initial ideal is flat, we deduce easily that $$\J(\MM,\DDD^c)\cdot \O_{\pi^{-1}(0)}\subseteq\operatorname{in}(\J(\dd^c)).$$ On the other hand, the Restriction Theorem (see [@Laz]) gives $$\J(\operatorname{in}(\dd)^c)=\J((\DDD\vert_{\pi^{-1}(0)})^c)\subseteq\J(\MM,\DDD^c)\cdot
\O_{\pi^{-1}(0)}.$$ If we put together the above inclusions, we get the assertion of the lemma.
Note that the monomial order on $S$ induces a monomial order on $R$, and that $\widetilde{\operatorname{in}(\bb)}=\operatorname{in}(\tilde\bb)$. Indeed, the inclusion $\widetilde{\operatorname{in}(\bb)}\subseteq\operatorname{in}(\tilde\bb)$ is obvious, and the corresponding subschemes have the same length $r\cdot l(R/\bb)$.
On the other hand, Lemma \[in(J(c))vJ(in(c))\] and (\[in(J)\]) give $$y^{bs+r-1} \not \in \J(\operatorname{in}(\tilde \bb)^{1/\mu}).$$ Applying again Proposition 2.8 in [@ein1], in the other direction, takes us back in $R$: we deduce that $(N, \frac{1}{\mu}(W-b\cdot H'))$ is not log terminal at the origin, where $W\subset N$ is defined by $\operatorname{in}(\bb)$. Since $l(\O_{X,Z}/\a)=l(R/\bb)=l(R/\operatorname{in}(\bb))$, we have reduced the proof of (\[l(a)\]) to the case when $\a$ is a monomial ideal. In this case, we have in fact a stronger statement, which we prove in the lemma below; therefore the proof of Theorem \[l(a)-e(a)\] is complete.
The following is the natural generalization of Lemma 2.1 in [@DEM].
\[monomial\] Let $\a$ be a zero dimensional monomial ideal in the ring $R = K[x_1,\dots,x_n]$, defining a scheme $V$. Let $H_i$ be the hyperplane defined by $x_i=0$. We consider $\mu\in{\mathbb Q}_+^*$ and $b_i\in{\mathbb Q}$, such that $\mu\geq\max_i\{b_i\}$. If the pair $({\mathbb A}^n,
\frac{1}{\mu}(V+\sum_ib_iH_i))$ is not log terminal, then $$l(R/\a)\geq\frac{n^n}{n!} \. \prod_{i=1}^n (\m - b_i),$$ and the inequality is strict if $n\geq 2$.
We use the result in [@ELM] which gives the condition for a monomial pair, with possibly negative coefficients, to be log terminal. This generalizes the formula for the log canonical threshold of a monomial ideal from [@Ho]. It follows from [@ELM] that $(X,\frac{1}{\mu}(V+\sum_ib_iH_i))$ is not log terminal if and only if there is a facet of the Newton polytope associated to $\a$ such that, if $\sum_i u_i/a_i = 1$ is the equation of the hyperplane supporting it, then $$\sum_{i=1}^n \frac{\m- b_i}{a_i}\leq 1.$$ Applying the inequality between the arithmetic mean and the geometric mean of the set of nonnegative numbers $\{(\m - b_i)/a_i\}_i$, we deduce $$\prod_i a_i \geq n^n \. \prod_i (\m - b_i).$$ We conclude using the fact that $n! \. l(R/\a)\geq \prod_i a_i$, and the inequality is strict if $n\geq 2$ (see, for instance, Lemma 1.3 in [@DEM]).
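For clarity, the arithmetic mean–geometric mean step reads $$1\geq\sum_{i=1}^n\frac{\mu-b_i}{a_i}\geq n\Big(\prod_{i=1}^n\frac{\mu-b_i}{a_i}\Big)^{1/n},$$ which, after raising to the $n$th power and clearing denominators, gives $\prod_ia_i\geq n^n\cdot\prod_i(\mu-b_i)$.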
Log canonical thresholds of affine cones
========================================
In this section we give a lower bound for the log canonical threshold of a subscheme $V\subset\A^n$, cut out by homogeneous equations of the same degree. The bound involves the dimension of the non log terminal locus of $(\A^n,c\cdot V)$, where $c=\operatorname{lc}(\A^n,V)$. Moreover, we characterize the case when we have equality. In the particular case when $V$ is the affine cone over a projective hypersurface with isolated singularities, this proves a conjecture of Cheltsov and Park from [@CP].
The main ingredient we use for this bound is a formula for the log canonical threshold in terms of jet schemes, from [@Mu2]. Recall that for an arbitrary scheme $W$, of finite type over the ground field $k$, the $m$th jet scheme $W_m$ is again a scheme of finite type over $k$ characterized by $${\rm Hom}({\rm Spec}\,A, W_m)\simeq{\rm Hom}({\rm Spec}\,A[t]/(t^{m+1}),
W),$$ for every $k$-algebra $A$. Note that $W_m(k)=
{\rm Hom}({\rm Spec}\,k[t]/(t^{m+1}), W),$ and in fact, we will be interested only in the dimensions of these spaces. For the basic properties of the jet schemes, we refer to [@Mu1] and [@Mu2].
\[ingred\][([@Mu2], 3.4)]{} If $X$ is a smooth, connected variety of dimension $n$, and if $V\subset X$ is a subscheme, then the log canonical threshold of $(X,V)$ is given by $$\operatorname{lc}(X,V)=n-\sup_{m\in\N}\frac{\dim\,V_m}{m+1}.$$ Moreover, there is $p\in\N$, depending on the numerical data given by a log resolution of $(X,V)$, such that $\operatorname{lc}(X,V)=n-(\dim\,V_m)/(m+1)$ whenever $p\mid (m+1)$.
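For instance, if $V=\{0\}\subset\A^n$, then every $V_m$ is a reduced point, and the formula gives $\operatorname{lc}(\A^n,V)=n-0=n$. If $V\subset\A^1$ is defined by $x^d$, then a jet $g\in k[t]/(t^{m+1})$ lies in $V_m$ if and only if $g^d=0$, that is, $\operatorname{ord}_t(g)\geq\lceil (m+1)/d\rceil$; hence $\dim V_m=(m+1)-\lceil (m+1)/d\rceil$, and $$\operatorname{lc}(\A^1,V)=1-\sup_m\Big(1-\frac{\lceil (m+1)/d\rceil}{m+1}\Big)=\frac{1}{d},$$ the supremum being attained whenever $d\mid (m+1)$.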
For every $W$ and every $m\geq 1$, there are canonical projections $\phi^W_m:W_m\longrightarrow W_{m-1}$ induced by the truncation homomorphisms $k[t]/(t^{m+1})\longrightarrow k[t]/(t^m)$. By composing these projections we get morphisms $\pi^W_m:W_m
\longrightarrow W$. When there is no danger of confusion, we simply write $\phi_m$ and $\pi_m$.
If $W$ is a smooth, connected variety, then $W_m$ is smooth, connected, and $\dim\,W_m=(m+1)\dim\,W$, for all $m$. It follows from definition that taking jet schemes commutes with open immersions. In particular, if $W$ has pure dimension $n$, then $\pi_m^{-1}(W_{\rm reg})$ is smooth, of pure dimension $(m+1)n$.
Recall that the non log terminal locus of a pair is the union of all centers of non log terminality. In other words, its complement is the largest open subset over which the pair is log terminal. Theorem \[ingred\] easily gives a description via jet schemes of the non log terminal locus of a pair which is log canonical, but is not log terminal. Suppose that $(X, V)$ is as in the theorem, and let $c=\operatorname{lc}(X,V)$. We say that an irreducible component $T$ of $V_m$ (for some $m$) computes $\operatorname{lc}(X,V)$ if $\dim(T)=(m+1)(n-c)$. Note that basic results on jet schemes show that for every irreducible component $T$ of $V_m$, the projection $\pi_m(T)$ is closed in $V$ (see [@Mu1]). It follows from Theorem \[ingred\] that if $W$ is an irreducible component of $V_m$ that computes the log canonical threshold of $(X, V)$ then $\pi_m(W)$ is contained in the non log terminal locus of $(X,c\cdot V)$ (see also [@ELM]).
For future reference, we record here two lemmas. For $x \in \R$, we denote by $[x]$ the largest integer $p$ such that $p \le x$.
\[fiber\][([@Mu1], 3.7)]{} If $X$ is a smooth, connected variety of dimension $n$, $D\subset X$ is an effective divisor, and $x\in D$ is a point with $e_xD=q$, then $$\dim(\pi^D_m)^{-1}(x)\leq mn-[m/q],$$ for every $m\in\N$.
In fact, the only assertion we will need from Lemma \[fiber\] is that $\dim\,(\pi^D_m)^{-1}(x)\leq mn-1$, if $m\geq q$, which follows easily from the equations describing the jet schemes (see [@Mu1]).
\[semicont\][([@Mu2] 2.3)]{} Let $\Phi : {\mathcal W}\longrightarrow S$ be a family of schemes, and let us denote the fiber $\Phi^{-1}(s)$ by ${\mathcal W}_s$. If $\tau:S\longrightarrow{\mathcal W}$ is a section of $\Phi$, then the function $$f(s)=\dim(\pi_m^{{\mathcal W}_s})^{-1}(\tau(s))$$ is upper semi-continuous on the set of closed points of $S$, for every $m\in\N$.
The following are the main results in this section.
\[lower\_bound1\] Let $V\subset\A^n$ be a subscheme whose ideal is generated by homogeneous polynomials of degree $d$. Let $c=\operatorname{lc}(\A^n,V)$, and let $Z$ be the non log terminal locus of $(\A^n, c\cdot V)$. If $e=\operatorname{codim}(Z,\A^n)$, then $c\geq e/d$.
\[equality\_case1\] With the notation in the previous theorem, $c=e/d$ if and only if $V$ satisfies the following three properties:
1. $Z = L$ is a linear subspace of codimension $e$.
2. $V$ is the pull back of a closed subscheme $V'\subset\A^n/L$, which is defined by homogeneous polynomials of degree $d$ and such that $\operatorname{lc}(\A^n/L, V') =e/d$.
3. The non log terminal locus of $(\A^n/L, e/d\cdot V')$ is just the origin.
If $\pi_m:V_m\longrightarrow V$ is the canonical projection, then we have an isomorphism $$\label{isom}
\pi_m^{-1}(0)\simeq V_{m-d}\times\A^{n(d-1)},$$ for every $m\geq d-1$ (we put $V_{-1}=\{0\}$). Indeed, for a $k$-algebra $A$, an $A$-valued point of $\pi_m^{-1}(0)$ is a ring homomorphism $$\phi:k[X_1,\ldots,X_n]/(F_1,\ldots,F_s)\longrightarrow
A[t]/(t^{m+1}),$$ such that $\phi(X_i)\in(t)$ for all $i$. Here $F_1,\ldots, F_s$ are homogeneous equations of degree $d$, defining $V$. Therefore we can write $\phi(X_i)=tf_i$, and $\phi$ is a homomorphism if and only if the classes of $f_i$ in $A[t]/(t^{m+1-d})$ define an $A$-valued point of $V_{m-d}$. But $\phi$ is uniquely determined by the classes of $f_i$ in $A[t]/(t^m)$, so this proves the isomorphism in equation (\[isom\]).
By Theorem \[ingred\], we can find $p$ such that $$\dim\,V_{pd-1}=pd(n-c).$$ Let $W$ be an irreducible component of $V_{pd-1}$ computing $\operatorname{lc}(X,V)$, so $\dim\,W=pd(n-c)$ and $\pi_{pd-1}(W) \subset
Z$. By our hypothesis, $\dim\pi_{pd-1}(W)\leq n-e$. Therefore Lemma \[semicont\] gives $$\label{inequality1}
pd(n-c)=\dim\,W\leq\dim\pi_{pd-1}^{-1}(0)+n-e=
\dim V_{(p-1)d-1}+(d-1)n+n-e,$$ where the last equality follows from (\[isom\]). Another application of Theorem \[ingred\] gives $$\label{inequality2}
\dim\,V_{(p-1)d-1}\leq (p-1)d(n-c).$$ Using this and (\[inequality1\]), we get $c\geq e/d$.
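Explicitly, (\[inequality1\]) and (\[inequality2\]) combine as $$pd(n-c)\leq (p-1)d(n-c)+(d-1)n+n-e,$$ which simplifies to $d(n-c)\leq dn-e$, that is, $c\geq e/d$.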
We use the notation in the above proof. Since $c=e/d$, we see that in both equations (\[inequality1\]) and (\[inequality2\]) we have, in fact, equalities. The equality in (\[inequality2\]) shows that $\dim V_{(p-1)d-1}=(p-1)d(n-c)$, so we may run the same argument with $p$ replaced by $p-1$. Continuing in this way, we see that we may suppose that $p=1$. In this case, the equality in (\[inequality1\]) shows that for some irreducible component $W$ of $V_{d-1}$, with $\dim W=dn-e$, we have $\dim \pi_{d-1}(W)=n-e$. It follows that if $Z_1:=\pi_{d-1}(W)$, then $Z_1$ is an irreducible component of $Z$.
Fix $x\in Z_1$. If ${\rm mult}_xF\leq d-1$, for some degree $d$ polynomial $F$ in the ideal of $V$, then Lemma \[fiber\] would give $\dim\,\pi_{d-1}^{-1}(x)\leq (d-1)n-1$. This would imply $\dim\,W\leq n-e+(d-1)n-1$, a contradiction. Therefore we must have ${\rm mult}_xF\geq d$, for every such $F$.
Recall that we have degree $d$ generators of the ideal of $V$, denoted by $F_1,\ldots,F_s$. Let $L_i = \{x \in \A^n | {\rm mult}_xF_i =d\}$, for $i\leq s$. By the Bézout theorem, $L_i$ is a linear space. If $L= \bigcap_{i=1}^s L_i$, then $Z_1 \subset L$. On the other hand, by blowing-up along $L$, we see that $L$ is contained in the non log terminal locus of $(\A^n, c\cdot V)$. Therefore $Z_1 =L$. Let $z_1, ..., z_e$ be the linear forms defining $L$. Then each $F_i$ is a homogeneous polynomial of degree $d$ in $z_1, ..., z_e$. This shows that $V$ is the pull back of a closed subscheme $V'\subset\A^n/L$, defined by $F_1,..., F_s$. Since the projection map $\pi: \A^n
\longrightarrow \A^n/L$ is smooth and surjective, we see that $\operatorname{lc}(\A^n/L, V') = \operatorname{lc}(\A^n, V)$ and that the non log terminal locus of $(\A^n, \frac{e}{d}\cdot V)$ is just the pull-back of the corresponding locus for the pair $(\A^n/L, e/d\cdot V')$. Note that the non log terminal locus of $(\A^n/L, e/d\cdot V')$ is defined by a homogeneous ideal. By dimension considerations, we conclude that this locus consists just of the origin, so $Z= L$.
Conversely, if $V$ is the pull back of a closed subscheme from $\A^n/L$ as described in the theorem, one checks that $\operatorname{lc}(\A^n, V) = e/d$ and that the corresponding non log terminal locus is just $L$.
Let $V'$ be a closed subscheme of $\P^{n-1}$ defined by degree $d$ homogeneous polynomials $F_1,\ldots, F_s$, and let $V$ be the closed subscheme in $\A^n$ defined by the same set of polynomials. Let $c =\operatorname{lc}(\P^{n-1}, V')$, and let $Z'$ be the non log terminal locus of $(\P^{n-1}, c\cdot V')$. Suppose that the codimension of $Z'$ in $\P^{n-1}$ is $e$.
\[proj\_case\] With the above notation, $\operatorname{lc}(\P^{n-1}, V') \ge e/d$. Moreover, if we have equality, then $V'$ is the cone over a scheme in some $\P^{e-1}$.
Note that $$\operatorname{lc}(\P^{n-1}, V') = \operatorname{lc}(\A^n-\{0\}, V-\{0\}) \ge
\operatorname{lc}(\A^n, V).$$ Now the first assertion follows from Theorem \[lower\_bound1\].
If $\operatorname{lc}(\P^{n-1}, V') = e/d$, then $\operatorname{lc}(\A^n, V) = e/d$ and the non log terminal locus of $(\A^n, \frac{e}{d}\cdot V)$ is a linear space $L$ of codimension $e$. If $z_1,..., z_e$ are the linear forms defining $L$, then each $F_i$ is a homogeneous polynomial of degree $d$ in $z_1, ..., z_e$. Therefore $V'$ is the cone with center $L$ over the closed subscheme of $\P^{e-1}$ defined by $F_1,\ldots,F_s$.
In [@CP], Cheltsov and Park studied the log canonical threshold of singular hyperplane sections of smooth, projective hypersurfaces. If $X\subset{\mathbb P}^n$ is a smooth hypersurface of degree $d$, and if ${V}
=X\cap H$, for a hyperplane $H$, then they have shown that $$\label{ineq_CP}
\operatorname{lc}(X,{V})\geq\min\{(n-1)/d,1\}.$$ It follows from Theorem \[ingred\] that $\operatorname{lc}(X, V) = \operatorname{lc}(\P^{n-1}, V)$. Since, as is well known, ${V}$ has isolated singularities, applying the first assertion in Corollary \[proj\_case\] recovers the result in [@CP].
Cheltsov and Park have conjectured in their setting that if $d\geq n$, then equality holds in (\[ineq\_CP\]) if and only if ${V}$ is a cone. They have shown that their conjecture would follow from the Log Minimal Model Program. The second assertion in Corollary \[proj\_case\] proves, in particular, their conjecture.
Application to birational rigidity
==================================
Using the bounds on log canonical thresholds from the previous sections, we prove now the birational rigidity of certain Fano hypersurfaces. We recall that a Mori fiber space $X$ is called [*birationally superrigid*]{} if any birational map $\f : X \rat X'$ to another Mori fiber space $X'$ is an isomorphism. For the definition of Mori fiber space and for another notion of rigidity, we refer to [@Co2]. Note that Fano manifolds having Néron-Severi group of rank 1 are trivially Mori fiber spaces. Birational superrigidity is a very strong condition: it implies that $X$ is not rational, and that $\operatorname{Bir}(X) = \operatorname{Aut}(X)$. Note that if $X$ is a smooth hypersurface of degree $N$ in $\P^N$ ($N \ge 4$), then $X$ has no nonzero vector fields. Therefore if $X$ is birationally superrigid, then the birational invariant $\operatorname{Bir}(X)$ is a finite group.
The following theorem is the main result of this section.
\[X\_N\] For any integer $4 \le N \le 12$, every smooth hypersurface $X = X_N \subset \P^N$ of degree $N$ is birationally superrigid.
The case $N=4$ of the above theorem is due to Iskovskikh and Manin (see [@IM]). The case $N=5$ was proven by Pukhlikov in [@Pu2], while the cases $N=6,7,8$ were established by Cheltsov in [@Ch2]. Birational superrigidity of smooth hypersurfaces of degree $N$ in $\P^N$ (for $N \ge 5$) was conjectured by Pukhlikov in [@Pu5], where the result is established under a suitable regularity condition on the equation defining the hypersurface. We remark that there is an attempt due to Pukhlikov in [@Pu1] to prove the general case (for $N \ge 6$). Despite a gap in the proof (see the remark below), we believe that the method therein could eventually yield the result. In fact, the proof given below for Theorem \[X\_N\] follows his method, and our contribution is mainly in simplifying and solidifying his argument.
\[gap\] The following gives a counterexample to Corollary 2 in [@Pu1]. Let $Q\subset{\mathbb P}^4$ be a cone over a twisted cubic, and let $\pi_a: Q\longrightarrow R=\pi_a(Q)$ be the projection from an arbitrary point $a\in{\mathbb P}^4\setminus Q$; note that $R$ is the cone over a singular plane cubic. If $p$ is the vertex of $Q$, then the restriction of $\pi_a$ to any punctured neighbourhood of $p$ in $Q$ can not preserve multiplicities, as $q=\pi_a(p)$ lies on a one dimensional component of the singular locus of $R$.
Before proving the above theorem, we recall the following result, due to Pukhlikov:
\[pu1\][([@Pu1], Proposition 5)]{} Let $X \subset \P^N$ be a smooth hypersurface, and let $Z$ be an effective cycle on $X$, of pure codimension $k < \frac 12 \dim X$. If $m \in \N$ is such that $Z \equiv m \.c_1(\O_X(1))^k$, then $\dim \{ x \in Z \mid e_xZ > m \} < k$.
\[pu1\_rmk\] Because we have assumed $k<\frac 12 \dim X$, the existence of $m$ as in the proposition follows from Lefschetz Theorem. One can check that the proof of Proposition \[pu1\] extends to the case $k = \frac 12 \dim X$, if we assume that such $m$ exists. Note also that the statement is trivially true if $k > \frac 12 \dim X$.
We need first a few basic properties which allow us to control multiplicities when restricting to general hyperplane sections, and when projecting to lower dimensional linear subspaces. The following proposition must be well known, but we include a proof for the convenience of the readers. We learned this proof, which simplifies our original arguments, from Steve Kleiman.
\[int\_mult\] Let $Z\subset\P^n$ be an irreducible projective variety. If $H \subset Z$ is a general hyperplane section, then $e_pH = e_pZ$ for every $p \in H$.
As observed by Whitney (e.g., see [@Kl], page 219), at any point $p \in Z$, the fiber over $p$ of the conormal variety of $Z$, viewed as a linear subspace of $(\P^n)^*$, contains the dual variety of every component of the embedded projective tangent cone $C_pZ$ of $Z$ at $p$. A hyperplane section $H$ of $Z$ satisfies $e_pH = e_pZ$ if the hyperplane meets $C_pZ$ properly. Therefore, this equality holds for every point $p$ in $H$ whenever $H$ is cut out by a hyperplane not in the dual variety of $Z$.
In the next two propositions, we consider a (possibly reducible) subvariety $Z \subset \P^{n+s}$, of pure dimension $n-1$, for some $n \ge 2$ and $s\geq 1$, and take a general linear projection $\p : \P^{n+s} \setminus \LL \to \P^n$. Here $\LL$ denotes the center of projection, which is an $(s-1)$ dimensional linear space. We put $T = \p(Z)$ and $g = \p|_Z : Z \to T$. It is easy to see that since $\LL$ is general, $g$ is a finite birational map. For convenience, we put $\dim(\emptyset)=-1$.
\[proj\_mult1\] With the above notation, consider the set $$\D = \Big\{q \in T \mid e_q T >
\sum_{p \in g^{-1}(q)} e_p Z \Big\}.$$ If the projection is chosen with suitable generality, then $\operatorname{codim}(\D,{\mathbb P}^n) \ge 3$.
Note that $e_qT \ge \sum e_pZ$ for every $q \in T$, the sum being taken over all points $p$ over $q$. Moreover, for a generic projection, every irreducible component of $Z$ is mapped to a distinct component of $T$. Therefore, by the linearity of the multiplicity, we may assume that $Z$ is irreducible.
Let $\D' \subset T$ be the set of points $q$ such that, for some $p$ over $q$, the intersection of the $s$ dimensional linear space $\ov{\LL q}$ with the embedded projective tangent cone $C_pZ$ of $Z$ at $p$ is at least one dimensional. We claim that $\operatorname{codim}(\D',{\mathbb P}^n) \geq 3$. Indeed, it follows from the theorem on generic flatness that there is a stratification $Z=Z_1 \sqcup \dots \sqcup Z_t$ by locally closed subsets such that, for every $1 \le j \le t$, the incidence set $$I_j = \{ (p,x) \in Z_j\times\P^{n+s} \mid x \in C_pZ \}$$ is a (possibly reducible) quasi-projective variety of dimension no more than $2 \dim Z= 2n-2$. Let $\operatorname{pr}_1$ and $\operatorname{pr}_2$ denote the projections of $I_j$ to the first and to the second factor, respectively. It is clear that the set of those $y\in{\mathbb P}^{n+s}$ for which $\dim \operatorname{pr}_2^{-1}(y)=\tau$ has dimension at most $\max\{2n-2-\tau,-1\}$, for every $\tau\in{\mathbb N}$. Since $\LL$ is a general linear subspace of dimension $s-1$, it intersects a given $d$ dimensional closed subset in a set of dimension $\max\{d-n-1,-1\}$. Hence $\dim \operatorname{pr}_2^{-1}(\LL)\leq n-3$, and therefore $\dim(\operatorname{pr}_1(\operatorname{pr}_2^{-1}(\LL))) \le n-3$. As this is true for every $j$, we deduce $\operatorname{codim}(\D',{\mathbb
P}^n)\geq 3$. Thus, in order to prove the proposition, it is enough to show that $\D \subseteq \D'$.
For a given point $p \in Z$, let $L_p \subset \P^{n+s}$ be an $(s+1)$ dimensional linear subspace passing through $p$. Let $\mm_p$ be the maximal ideal of $\O_{Z,p}$, and let $\PP \subset \O_{Z,p}$ be the ideal locally defining $L_p \cap Z$. If $L_p$ meets the tangent cone $C_pZ$ of $Z$ at $p$ properly, then the linear forms defining $L_p$ generate the ideal of the exceptional divisor of the blow up of $Z$ at $p$. Therefore $e(\mm_p)=e(\PP)$.
Consider now some $q \in T \setminus \D'$. Let $L_q \subset \P^n$ be a general line passing through $q$, and let $\QQ \subset \O_{T,q}$ be the ideal generated by the linear forms vanishing along $L_q$. We denote by $L$ the closure of $\p^{-1}(L_q)$ in $\P^{n+s}$. For every $p \in g^{-1}(q)$, let $\PP \subset \O_{Z,p}$ be the ideal generated by the linear forms vanishing along $L$. Since $L_q$ is general and $q \not \in \D'$, we may assume that $L$ intersects $C_pZ$ properly, hence $e(\mm_p) = e(\PP)$. On the other hand, if $\mm_q$ is the maximal ideal of $\O_{T,q}$, then $\QQ\subseteq\mm_q$, which gives $$\PP=\QQ\cdot\O_{Z,p} \subseteq \mm_q\cdot\O_{Z,p}\subseteq\mm_p.$$ Therefore $e(\mm_p)=e(\mm_q\cdot\O_{Z,p})$ for every $p$ as above, hence $q\not\in\D$, by [@Fulton], Example 4.3.6.
\[proj\_mult2\] With the notation in Proposition \[proj\_mult1\], consider the set $$\S = \S(Z,\p):= \{q \in T \mid \text{$g^{-1}(q)$ has at least 3
distinct points} \}.$$ If the projection is sufficiently general, then $\operatorname{codim}(\S,{\mathbb P}^n) \geq 3$.
We have $\operatorname{codim}(\S,{\mathbb P}^n)\geq 3$ if and only if $\S \cap P = \emptyset$ for every general plane $P\subset\P^n$. Pick one general plane $P$, let $P' \;(\cong \P^{s+2})$ be the closure of $\p^{-1}(P)$ in $\P^{n+s}$, and let $\p'$ be the restriction of $\p$ to $P' \setminus \LL$. If $Z' = Z \cap P'$, then $Z'$ is a (possibly reducible) curve, and its multisecant variety is at most two dimensional (see, for example, [@FOV], Corollary 4.6.17). Note that $\LL$ is general in $P'$. Indeed, choosing the center of projection $\LL$ general in $\P^{n+s}$, and then picking $P$ general in $\P^n$ is equivalent to first fixing a general $(s+2)$-plane $P'$ in $\P^{n+s}$ and then choosing $\LL$ general in $P'$. Therefore we conclude that $\S \cap P$, which is the same as $\S(Z',\p')$, is empty.
By adjunction, $\O_X(-K_X)\simeq\O_X(1)$. Let $\f : X \rat X'$ be a birational map from $X$ to a Mori fiber space $X'$, and assume that $\f$ is not an isomorphism. By the Noether-Fano inequality (see [@Co1] and [@Is], or [@Ma]), we find a linear subsystem $\H \subset |\O_X(r)|$, with $r \geq 1$, whose base scheme $B$ has codimension $\ge 2$, and such that the pair $(X,\frac 1r\.B)$ is not canonical. We choose $c<\frac{1}{r}$, such that $(X,c\cdot B)$ is still not canonical, and let $C \subset X$ be a center of non canonicity for $(X,c \.B)$. Note that $C$ is a center of non canonicity also for the pairs $(X,c \. D)$ and $(X,c \. V)$, where $V = D \cap D'$ and $D$, $D' \in \H$ are two general members. Applying Proposition \[pu1\] for $Z=D$ and $k=1$, we see that the multiplicity of $D$ is $\leq r$ on an open subset whose complement has dimension zero. On this open subset $(X,c\.D)$ is canonical (see, for example, [@Ko] 3.14.1). Therefore $C = p$, a point of $X$.
Let $Y$ be a general hyperplane section of $X$ containing $p$. Then $p$ is a center of non log canonicity for $(Y,c \. B|_Y)$. Note that $Y$ is a smooth hypersurface of degree $N$ in $\P^{N-1}$. Let $\p : \P^{N-1} \setminus \LL \to \P^{N-3}$ be a general linear projection, where the center of projection $\LL$ is a line. We can assume that the restriction of $\p$ to each irreducible component of $V|_Y$ is finite and birational. Note that $\p_*[V|_Y]$ is a divisor in $\P^{N-3}$ of degree $Nr^2$. If $\tilde Y = \operatorname{Bl}_{\LL \cap Y} Y$, then we get a morphism $f : \tilde Y \to \P^{N-3}$. If we choose $\LL$ general enough, then we can find an open set $U \subset \P^{N-3}$, containing the image $q$ of $p$, such that $f$ restricts to a smooth (proper) morphism $f^{-1}(U) \to U$. Applying Theorem \[thm1\], we deduce that the pair $$\label{pair}
\(\P^{N-3}, \frac{c^2}4 \. \p_*[V|_Y] \)$$ is not log terminal at $q$.
We claim that $$\label{dim_bound}
\dim \{y \in \pi(V|_Y) \mid e_y(\p_*[V|_Y]) > 2r^2 \}
\le \max \{ N-6,0 \}.$$ Indeed, by Propositions \[proj\_mult2\] and \[proj\_mult1\], the map $\operatorname{Supp}([V|_Y]) \to \operatorname{Supp}(\p_*[V|_Y])$ is at most 2 to 1 and preserves multiplicities outside a set, say $\D \cup \S$, of dimension $\le \max\{N-6,-1\}$. This implies that, for each $y$ outside the set $\D \cup \S$, $e_y(\p_*[V|_Y]) = \sum e_x([V|_Y])$, where the sum is taken over the points $x$ over $y$, and this sum involves at most two non-zero terms. Then (\[dim\_bound\]) follows from the fact that, by Propositions \[pu1\] and \[int\_mult\] (see also Remark \[pu1\_rmk\]), the set of points $x$ for which $e_x[V|_Y] > r^2$ is at most zero dimensional.
Note that the pair (\[pair\]) is log terminal at every point $y$ where $e_y(\p_*[V|_Y]) \le 4r^2$. If $4 \le N \le 6$, we deduce that the pair is log terminal outside a zero dimensional closed subset. In this case, Corollary \[proj\_case\] gives $c^2/4 \ge (N-3)/(Nr^2)$. Since $c < 1/r$, this implies $N < 4$, a contradiction. If $7 \le N \le 12$, then we can only conclude that the pair (\[pair\]) is log terminal outside a closed subset of codimension at least $3$. This time the same corollary gives $c^2/4 \ge 3/(Nr^2)$, which implies $N > 12$. This again contradicts our assumptions, so the proof is complete.
[dFEM]{}
I. A. Cheltsov, On a smooth four-dimensional quintic, (Russian) Mat. Sb. **191** (2000), 139–160; translation in Sb. Math. **191** (2000), 1399–1419.
I. Cheltsov and J. Park, Log canonical thresholds and generalized Eckardt points, Mat. Sb. **193** (2002), 149–160.
A. Corti, Factoring birational maps of threefolds after Sarkisov, J. Algebraic Geom. **4** (1995), 223–254.
A. Corti, Singularities of linear systems and $3$-fold birational geometry, in *Explicit birational geometry of $3$-folds*, 259–312, Cambridge Univ. Press, Cambridge, 2000.
T. de Fernex, L. Ein and M. Mustaţǎ, Multiplicities and log canonical threshold, preprint 2002, to appear in J. Algebraic Geom.
L. Ein, Multiplier ideals, vanishing theorem and applications, in *Algebraic Geometry, Santa Cruz 1995*, volume **62** of Proc. Symp. Pure Math. Amer. Math. Soc., 1997, 203–219.
L. Ein, R. Lazarsfeld and M. Mustaţǎ, Contact loci in arc spaces, preprint 2002.
D. Eisenbud, Commutative algebra with a view toward algebraic geometry, Grad. Texts in Math. **150**, Springer, New York, 1995.
H. Flenner, L. O’Carroll and W. Vogel, *Joins and Intersections*, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 1999.
W. Fulton, *Intersection Theory*, second ed., Springer-Verlag, Berlin, 1998.
R. Hartshorne, *Algebraic Geometry*, Graduate Texts in Mathematics, No. 52, Springer-Verlag, New York, 1977.
J. Howald, Multiplier ideals of monomial ideals, Trans. Amer. Math. Soc. **353** (2001), 2665–2671.
V. A. Iskovskikh, Birational rigidity and Mori theory, Uspekhi Mat. Nauk **56**:2 (2001) 3–86; English transl., Russian Math. Surveys **56**:2 (2001), 207–291.
V. A. Iskovskikh and Yu. I. Manin, Three-dimensional quartics and counterexamples to the Lüroth problem, Mat. Sb. **86** (1971), 140–166; English transl., Math. USSR-Sb. **15** (1972), 141–166.
S. Kleiman, Tangency and duality, Proceedings of the 1984 Vancouver Conference in Algebraic Geometry, 163–225, CMS Conf. Proc. **6**, Amer. Math. Soc., Providence, RI, 1986.
J. Kollár, Singularities of pairs, in *Algebraic Geometry, Santa Cruz 1995*, volume **62** of Proc. Symp. Pure Math. Amer. Math. Soc., 1997, 221–286.
J. Kollár, S. Mori, *Birational Geometry of Algebraic Varieties*, Cambridge Tracts in Mathematics, Cambridge University Press, Cambridge, 1998.
R. Lazarsfeld, *Positivity in Algebraic Geometry*, book in preparation.
K. Matsuki, *Introduction to the Mori Program*, Universitext, Springer-Verlag, New York, 2002.
M. Mustaţǎ, Jet schemes of locally complete intersection canonical singularities, with an appendix by David Eisenbud and Edward Frenkel, Invent. Math. **145** (2001), 397–424.
M. Mustaţǎ, Singularities of pairs via jet schemes, J. Amer. Math. Soc. **15** (2002), 599–615.
A. V. Pukhlikov, Birational automorphisms of a four-dimensional quintic, Invent. Math. **87** (1987), 303–329.
A. V. Pukhlikov, Birational automorphisms of Fano hypersurfaces, Invent. Math. **134** (1998), 401–426.
A. V. Pukhlikov, Birationally rigid Fano hypersurfaces, preprint 2002, arXiv:math.AG/0201302.
O. Zariski and P. Samuel, *Commutative Algebra, Vol. II*, Van Nostrand, Princeton, 1960.
---
abstract: 'We study thin interpolating sequences $\{\lambda_n\}$ and their relationship to interpolation in the Hardy space $H^2$ and the model spaces $K_\Theta = H^2 \ominus \Theta H^2$, where $\Theta$ is an inner function. Our results, phrased in terms of the functions that do the interpolation as well as Carleson measures, show that under the assumption that $\Theta(\lambda_n) \to 0$ the interpolation properties in $H^2$ are essentially the same as those in $K_\Theta$.'
address:
- |
Pamela Gorkin, Department of Mathematics\
Bucknell University\
Lewisburg, PA USA 17837
- |
Brett D. Wick, School of Mathematics\
Georgia Institute of Technology\
686 Cherry Street\
Atlanta, GA USA 30332-0160
author:
- 'Pamela Gorkin$^\dagger$'
- 'Brett D. Wick$^\ddagger$'
title: Thin Sequences and Their Role in Model Spaces and Douglas Algebras
---
[^1]
[^2]
Introduction and Motivation
===========================
A sequence $\{\lambda_j\}_{j=1}^\infty$ is an [interpolating sequence]{.nodecor} for $H^\infty$, the space of bounded analytic functions, if for every $w\in\ell^\infty$ there is a function $f\in H^\infty$ such that $$f(\lambda_j) = w_j, ~\mbox{for all}~ j\in{\mathbb{N}}.$$ Carleson’s interpolation theorem says that $\{\lambda_j\}_{j=1}^\infty$ is an interpolating sequence for $H^\infty$ if and only if $$\label{Interp_Cond}
\delta = \inf_{j}\delta_j:=\inf_j \left\vert B_j(\lambda_j)\right\vert=\inf_{j}\prod_{k \ne j} \left|\frac{\lambda_j - \lambda_k}{1 - \overline{\lambda}_j \lambda_k}\right| > 0,$$ where $$B_j(z):=\prod_{k\neq j}\frac{-\overline{\lambda_k}}{{\ensuremath{\left\vert\lambda_k\right\vert}}}\frac{z-\lambda_k}{1-\overline{\lambda}_kz}$$ denotes the Blaschke product vanishing on the set of points $\{\lambda_k:k\neq j\}$.
In this paper, we consider sequences that (eventually) satisfy a stronger condition than (\[Interp\_Cond\]). A sequence $\{\lambda_j\}\subset{\mathbb{D}}$ is *thin* if $$\lim_{j\to\infty}\delta_j:=\lim_{j\to\infty}\prod_{k\neq j}\left\vert\frac{\lambda_j-\lambda_k}{1-\overline{\lambda}_k\lambda_j}\right\vert=1.$$
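For intuition, here is a standard example (ours, not taken from this paper): radial sequences that approach the boundary superexponentially fast are thin. The key elementary estimate, for real points $0<\lambda_j<\lambda_k<1$, is

```latex
% Elementary identity and bound for 0 < \lambda_j < \lambda_k < 1:
1-\frac{\lambda_k-\lambda_j}{1-\lambda_j\lambda_k}
  =\frac{(1-\lambda_k)(1+\lambda_j)}{1-\lambda_j\lambda_k}
  \le\frac{2\,(1-\lambda_k)}{1-\lambda_j}.
% Hence if (1-\lambda_{n+1})/(1-\lambda_n) \to 0, the quantities
% 1-\rho(\lambda_j,\lambda_k) have tails summing to 0 as j \to \infty,
% whence \delta_j = \prod_{k\neq j}\rho(\lambda_j,\lambda_k) \to 1.
```

For instance, $\lambda_n = 1 - e^{-n^2}$ gives a thin sequence, while the geometric sequence $\lambda_n = 1 - 2^{-n}$ is interpolating but not thin, since consecutive points remain at a fixed pseudohyperbolic distance.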
Thin sequences are of interest not only because the functions solving the interpolation problem for a thin interpolating sequence admit good norm bounds, but also because thin sequences are interpolating for a very small algebra: the algebra $QA = VMO \cap H^\infty$, where $VMO$ is the space of functions on the unit circle with vanishing mean oscillation [@W].
Continuing work in [@CFT] and [@GPW], we are interested in understanding these sequences in different settings. This will require two definitions that are motivated by the work of Shapiro and Shields, [@SS], in which they gave the appropriate conditions for a sequence to be interpolating for the Hardy space $H^2$.
Considering more general Hilbert spaces will require the introduction of reproducing kernels: In a reproducing kernel Hilbert space $\mathcal{H}$ (see [@AM p. 17]) we let $K_{\lambda_n}$ denote the kernel corresponding to the point $\lambda_n$; that is, for each function in the Hilbert space we have that $f(\lambda_n)=\left\langle f, K_{\lambda_n}\right\rangle_{\mathcal{H}}$. If we have an $\ell^2$ sequence $a = \{a_n\}$, we define $$\|a\|_{N, \ell^2} = \left(\sum_{j \ge N} |a_j|^2\right)^{1/2}.$$ The concepts of interest are the following.
A sequence $\{\lambda_n\}\subset\Omega \subseteq \mathbb{C}^n$ is said to be [*an eventual $1$-interpolating sequence for a reproducing kernel Hilbert space $\mathcal{H}$*]{}, denoted $EIS_{\mathcal{H}}$, if for every $\varepsilon > 0$ there exists $N$ such that for each $\{a_n\} \in \ell^2$ there exists $f_{N, a} \in \mathcal{H}$ with $$f_{N, a}(\lambda_n) {\ensuremath{\left\|K_{\lambda_n}\right\|}}_{\mathcal{H}}^{-1}=f_{N, a}(\lambda_n) K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} = a_n ~\mbox{for}~ n \ge N ~\mbox{and}~ \|f_{N, a}\|_{\mathcal{H}} \le (1 + \varepsilon) \|a\|_{N, \ell^2}.$$ A sequence $\{\lambda_n\}$ is said to be a [*strong asymptotic interpolating sequence for $\mathcal{H}$*]{}, denoted $AIS_{\mathcal{H}}$, if for all $\varepsilon > 0$ there exists $N$ such that for all sequences $\{a_n\} \in \ell^2$ there exists a function $G_{N, a} \in \mathcal{H}$ such that $\|G_{N, a}\|_\mathcal{H} \le \|a\|_{N,\ell^2}$ and $$\|\{G_{N, a}(\lambda_n) K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} - a_n\}\|_{N, \ell^2} < \varepsilon \|a\|_{N, \ell^2}.$$
Given a (nonconstant) inner function $\Theta$, we are interested in these sequences in model spaces; for an inner function $\Theta$ we define the model space by $K_\Theta = H^2 \ominus \Theta H^2$. The reproducing kernel in $K_\Theta$ for $\lambda_0 \in \mathbb{D}$ is $$K_{\lambda_0}^\Theta(z) = \frac{1 - \overline{\Theta(\lambda_0)}{\Theta(z)}}{1 - \overline{\lambda_0}z}$$ and the normalized reproducing kernel is $$k_{\lambda_0}^\Theta(z) = \sqrt{\frac{1 - |\lambda_0|^2}{1 - |\Theta(\lambda_0)|^2}} K_{\lambda_0}^\Theta(z).$$ Finally, note that $$K_{\lambda_0}(z) = K_{\lambda_0}^\Theta(z) + \overline{\Theta(\lambda_0)}\,\Theta(z) K_{\lambda_0}(z).$$ We let $P_{K_\Theta}$ denote the orthogonal projection of $H^2$ onto $K_\Theta$.
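As a quick numerical sanity check of the last identity (an illustration only; the inner function $\Theta(z)=z^2$ and the sample points are our own choices, not from the paper), one can evaluate both sides directly:

```python
# Numerical check of K_{l0} = K_{l0}^Theta + Theta * conj(Theta(l0)) * K_{l0}
# for the inner function Theta(z) = z^2 and sample points in the unit disk.

def szego_kernel(lam, z):
    """Reproducing kernel of H^2 at lam, evaluated at z."""
    return 1.0 / (1.0 - lam.conjugate() * z)

def model_kernel(theta, lam, z):
    """Reproducing kernel of K_Theta at lam, evaluated at z."""
    return (1.0 - theta(lam).conjugate() * theta(z)) / (1.0 - lam.conjugate() * z)

theta = lambda z: z * z  # a simple inner function (a finite Blaschke product)

for lam, z in [(0.5 + 0.1j, 0.3 - 0.2j), (0.7j, -0.4 + 0.25j)]:
    lhs = szego_kernel(lam, z)
    rhs = model_kernel(theta, lam, z) \
        + theta(z) * theta(lam).conjugate() * szego_kernel(lam, z)
    assert abs(lhs - rhs) < 1e-12  # the two sides agree up to rounding
```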
We consider thin sequences in these settings as well as in Douglas algebras: Letting $L^\infty$ denote the algebra of essentially bounded measurable functions on the unit circle, a Douglas algebra is a closed subalgebra of $L^\infty$ containing $H^\infty$. It is a consequence of work of Chang and Marshall that a Douglas algebra $\mathcal{B}$ is equal to the closed algebra generated by $H^\infty$ and the conjugates of the interpolating Blaschke products invertible in $\mathcal{B}$, [@C; @M].
In this paper, we continue work started in [@GM] and [@GPW] investigating the relationship between thin sequences, $EIS_{\mathcal{H}}$ and $AIS_{\mathcal{H}}$ where $\mathcal{H}$ is a model space or the Hardy space $H^2$. In Section \[HSV\], we consider the notion of eventually interpolating and asymptotic interpolating sequences in the model space setting. We show that in reproducing kernel Hilbert spaces of analytic functions on domains in $\mathbb{C}^n$, these two are the same. Given results in [@GPW], this is not surprising and the proofs are similar to those in the $H^\infty$ setting. We then turn to our main result of that section. If we have a Blaschke sequence $\{\lambda_n\}$ in $\mathbb{D}$ and assume that our inner function $\Theta$ satisfies $|\Theta(\lambda_n)| \to 0$, then a sequence $\{\lambda_n\}$ is an $EIS_{K_\Theta}$ sequence if and only if it is an $EIS_{H^2}$ sequence (and therefore $AIS_{K_\Theta}$ sequence if and only if it is an $AIS_{H^2}$). In Section \[CMMS\] we rephrase these properties in terms of the Carleson embedding constants on the model spaces. Finally, in Section \[asip\_algebra\], we recall the definition of Douglas algebras and show that appropriate definitions and conditions are quite different in that setting.
Preliminaries
=============
Recall that a sequence $\{x_n\}$ in ${\mathcal{H}}$ is [*complete*]{} if $~\mbox{Span}\{x_n: n \ge 1\} = \mathcal{H}$, and [*asymptotically orthonormal*]{} ($AOS$) if there exists $N_0$ such that for all $N \ge N_0$ there are positive constants $c_N$ and $C_N$ such that $$\begin{aligned}
\label{thininequality}
c_N \sum_{n \ge N} |a_n|^2 \le \left\|\sum_{n \ge N} a_n x_n\right\|^2_{{\mathcal{H}}} \le C_N \sum_{n \ge N} |a_n|^2,\end{aligned}$$ where $c_N \to 1$ and $ C_N \to 1$ as $N \to \infty$. If we can take $N_0 = 1$, the sequence is said to be an $AOB$; this is equivalent to being $AOS$ and a Riesz sequence. Finally, the Gram matrix corresponding to $\{x_j\}$ is the matrix $G = \left(\langle x_n, x_m \rangle\right)_{n, m \ge 1}$.
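To see the Gram matrix concretely (an illustration with invented data, not a computation from the paper), note that for the normalized Szegő kernels one has $\langle k_{\lambda_n}, k_{\lambda_m}\rangle = \sqrt{(1-|\lambda_n|^2)(1-|\lambda_m|^2)}\,/\,(1-\overline{\lambda}_n\lambda_m)$; for a rapidly thinning radial sequence the Gram matrix is a small perturbation of the identity, in line with condition (3) of the proposition below:

```python
import math

# Gram matrix of normalized Szego kernels for a real radial sequence:
#   <k_n, k_m> = sqrt((1 - lam_n^2)(1 - lam_m^2)) / (1 - lam_n * lam_m).

def gram(lams):
    return [[math.sqrt((1 - a * a) * (1 - b * b)) / (1 - a * b) for b in lams]
            for a in lams]

lams = [1 - 10.0 ** (-n * n) for n in (1, 2, 3)]  # 0.9, 0.9999, 1 - 1e-9
G = gram(lams)

# Diagonal entries equal 1; off-diagonal entries are small, so G = I + K
# with K small -- the Gram-matrix picture of an asymptotically orthonormal
# (indeed thin) sequence.
off = max(abs(G[i][j]) for i in range(3) for j in range(3) if i != j)
assert all(abs(G[i][i] - 1.0) < 1e-12 for i in range(3))
assert off < 0.1
```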
It is well known that if $\{\lambda_n\}$ is a Blaschke sequence with simple zeros and corresponding Blaschke product $B$, then $\{k_{\lambda_n}\}$, where $$k_{\lambda_n}(z)=\frac{(1-\left\vert \lambda_n\right\vert^2)^{\frac{1}{2}}}{(1-\overline{\lambda_n}z)},$$ is a complete minimal system in $K_B$ and we also know that $\{\lambda_n\}$ is interpolating if and only if $\{k_{\lambda_n}\}$ is a Riesz basis. The following beautiful theorem provides the connection to thin sequences.
\[Volberg\] The following are equivalent:
1. $\{\lambda_n\}$ is a thin interpolating sequence;
2. The sequence $\{k_{\lambda_n}\}$ is a complete $AOB$ in $K_B$;
3. There exist a separable Hilbert space $\mathcal{K}$, an orthonormal basis $\{e_n\}$ for $\mathcal{K}$ and $U, K: \mathcal{K} \to K_B$, $U$ unitary, $K$ compact, $U + K$ invertible, such that $$(U + K)(e_n) = k_{\lambda_n} \text{ for all } n \in {\mathbb{N}}.$$
In [@F]\*[Section 3]{} and [@CFT]\*[Proposition 3.2]{}, the authors note that [@V]\*[Theorem 3]{} implies the following.
\[propCFT\] Let $\{x_n\}$ be a sequence in ${\mathcal{H}}$. The following are equivalent:
1. $\{x_n\}$ is an AOB;
2. There exist a separable Hilbert space $\mathcal{K}$, an orthonormal basis $\{e_n\}$ for $\mathcal{K}$ and $U, K: \mathcal{K} \to \mathcal{H}$, $U$ unitary, $K$ compact, $U + K$ left invertible, such that $$(U + K)(e_n) = x_n;$$
3. The Gram matrix $G$ associated to $\{x_n\}$ defines a bounded invertible operator of the form $I + K$ with $K$ compact.
We also have the following, which we will use later in this paper.
\[prop5.1CFT\] If $\{\lambda_n\}$ is a sequence of distinct points in $\mathbb{D}$ and $\{k_{\lambda_n}^\Theta\}$ is an $AOS$, then $\{\lambda_n\}$ is a thin interpolating sequence.
\[theorem5.2CFT\] Suppose $\sup_{n \ge 1} |\Theta(\lambda_n)| < 1$. If $\{\lambda_n\}$ is a thin interpolating sequence, then either
\(i) $\{k_{\lambda_n}^\Theta\}_{n\ge1}$ is an $AOB$ or
\(ii) there exists $p \ge 2$ such that $\{k_{\lambda_n}^\Theta\}_{n \ge p}$ is a complete $AOB$ in $K_\Theta$.
Hilbert Space Versions {#HSV}
======================
Asymptotic and Eventual Interpolating Sequences {#asip}
-----------------------------------------------
Let $\mathcal{H}$ be a reproducing kernel Hilbert space of analytic functions over a domain $\Omega\subset{\mathbb{C}}^n$ with reproducing kernel $K_\lambda$ at the point $\lambda \in \Omega$. We define two properties that a sequence $\{\lambda_n\}\subset \Omega$ can have.
A sequence $\{\lambda_n\}\subset\Omega$ is an [eventual $1$-interpolating sequence for $\mathcal{H}$]{.nodecor}, denoted $EIS_{\mathcal{H}}$, if for every $\varepsilon > 0$ there exists $N$ such that for each $\{a_n\} \in \ell^2$ there exists $f_{N, a} \in \mathcal{H}$ with $$f_{N, a}(\lambda_n) {\ensuremath{\left\|K_{\lambda_n}\right\|}}_{\mathcal{H}}^{-1}=f_{N, a}(\lambda_n) K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} = a_n ~\mbox{for}~ n \ge N ~\mbox{and}~ \|f_{N, a}\|_{\mathcal{H}} \le (1 + \varepsilon) \|a\|_{N, \ell^2}.$$
A sequence $\{\lambda_n\}\subset\Omega$ is a [strong asymptotic interpolating sequence for $\mathcal{H}$]{.nodecor}, denoted $AIS_{\mathcal{H}}$, if for all $\varepsilon > 0$ there exists $N$ such that for all sequences $\{a_n\} \in \ell^2$ there exists a function $G_{N, a} \in \mathcal{H}$ such that $\|G_{N, a}\|_\mathcal{H} \le \|a\|_{N,\ell^2}$ and $$\|\{G_{N, a}(\lambda_n) K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} - a_n\}\|_{N, \ell^2} < \varepsilon \|a\|_{N, \ell^2}.$$
We now wish to prove Theorem \[EISiffASI\] below. The proof, which is a modification of the proof of the open-mapping theorem, also yields a proof of the following proposition.
\[Banachspace\]
Let $X$ and $Y$ be Banach spaces and let $T: X \to Y$ be a bounded operator and $\varepsilon > 0$. If $$\sup_{\|y\| = 1} \inf_{\|x\| \le 1} \|Tx - y\| < \varepsilon < 1,$$ then for all $y \in Y$, there exists $x \in X$ such that $\|x\| \le \frac{1}{1 - \varepsilon} \|y\|$ and $Tx = y$.
Theorem \[EISiffASI\] follows from Proposition \[Banachspace\], but doing so requires dealing with several technicalities that obfuscate the underlying ideas, and so we present a direct proof of our desired implication. When we turn to Banach algebras, the corresponding implication (in Theorem \[main\_algebra\]) will be a direct consequence of Proposition \[Banachspace\]. We thank the referee for pointing out Proposition \[Banachspace\] to us.
\[EISiffASI\] Let $\mathcal{H}$ be a reproducing kernel space of analytic functions over the domain $\Omega\subset\mathbb{C}^n$ with reproducing kernel at the point $\lambda$ given by $K_\lambda$. Then $\{\lambda_n\}$ is an $EIS_{\mathcal{H}}$ sequence if and only if $\{\lambda_n\}$ is an $AIS_{\mathcal{H}}$.
If a sequence is an $EIS_{\mathcal{H}}$, then it is trivially $AIS_{\mathcal{H}}$, for given $\varepsilon > 0$ we may take $G_{N, a} = \frac{f_{N, a}}{(1 + \varepsilon)}$.
For the other direction, suppose $\{\lambda_n\}$ is an $AIS_{\mathcal{H}}$ sequence. Let $\varepsilon > 0$, $N := N(\varepsilon)$, and $\{a_j\}:=\{a_{j}^{(0)}\}$ be any sequence. First choose $f_0 \in \mathcal{H}$ so that for $n \ge N$ we have $$\|\{K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} f_0(\lambda_n) - a_{n}^{(0)}\}\|_{N, \ell^2} < \frac{\varepsilon}{1+\varepsilon} \|a\|_{N, \ell^2}$$ and $$\|f_0\|_{\mathcal{H}} \le \|a\|_{N,\ell^2}.$$ Now let $a_{n}^{(1)} = a_{n}^{(0)} - K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} f_0(\lambda_n)$. Note that $\|a^{(1)}\|_{N, \ell^2} < \frac{\varepsilon}{1+\varepsilon} \|a\|_{N, \ell^2}$. Since we have an $AIS_{\mathcal{H}}$ sequence, we may choose $f_1$ such that for $n \ge N$ we have $$\|\{f_1(\lambda_n)K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} - a_{n}^{(1)}\}\|_{N, \ell^2} < \frac{\varepsilon}{1+\varepsilon} \|a^{(1)}\|_{N, \ell^2} < \left(\frac{\varepsilon}{1+\varepsilon}\right)^2\|a\|_{N, \ell^2},$$ and $$\|f_1\|_{\mathcal{H}} \le \|a^{(1)}\|_{N, \ell^2}<\left(\frac{\varepsilon}{1+\varepsilon}\right)\|a\|_{N,\ell^2}.$$ In general, we let $$a_{j}^{(k)} = -f_{k - 1}(\lambda_j)K_{\lambda_j}(\lambda_j)^{-\frac{1}{2}} + a_{j}^{(k-1)}$$ so that $$\|a^{(k)}\|_{N, \ell^2} \le \frac{\varepsilon}{1+\varepsilon} \|a^{(k - 1)}\|_{N, \ell^2} \le \left(\frac{\varepsilon}{1+\varepsilon}\right)^2 \|a^{(k-2)}\|_{N, \ell^2} \le \cdots \le \left(\frac{\varepsilon}{1+\varepsilon}\right)^k \|a\|_{N, \ell^2}$$ and $$\|f_k\|_{\mathcal{H}} \le \|a^{(k)}\|_{N, \ell^2}<\left(\frac{\varepsilon}{1+\varepsilon}\right)^k\|a\|_{N,\ell^2}.$$ Then consider $f(z) = \sum_{k = 0}^\infty f_k(z)$.
Since $f_k(\lambda_j) = \left(a_{j}^{(k)} - a_{j}^{(k+1)}\right)K_{\lambda_j}(\lambda_j)^{\frac{1}{2}}$ and $a_{j}^{(k)} \to 0$ as $k \to \infty$, we have for each $j \ge N$, $$f(\lambda_j) = a_{j}^{(0)} K_{\lambda_j}(\lambda_j)^{\frac{1}{2}} = a_jK_{\lambda_j}(\lambda_j)^{\frac{1}{2}}.$$ Further $\|f\|_{\mathcal{H}}\le \sum_{k = 0}^\infty \left(\frac{\varepsilon}{1+\varepsilon}\right)^{k} \|a\|_{N, \ell^2} = \frac{1}{1 - \frac{\varepsilon}{1+\varepsilon}} \|a\|_{N, \ell^2}=(1+\varepsilon)\|a\|_{N, \ell^2}$. This proves that $\{\lambda_n\}$ is an $EIS_{\mathcal{H}}$ sequence.
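The iteration in the proof above is a general principle: a procedure that solves a problem only approximately, with uniform relative error strictly less than $1$, can be bootstrapped to solve it exactly by applying it repeatedly to the residual, with the corrections decaying geometrically. A toy sketch of this correction scheme (with an invented "approximate solver"; none of the names come from the paper):

```python
# Upgrade an approximate solver of T x = y (here T = identity, solved only up
# to relative error eps) to an exact solver by iterating on the residual --
# the role played by the functions f_0, f_1, f_2, ... in the proof above.

def approx_solve(y, eps=0.25):
    # Pretend we can only solve T x = y up to relative error eps:
    # return a deliberately perturbed answer.
    return [(1.0 - eps) * yi for yi in y]

def exact_solve(y, eps=0.25, tol=1e-12):
    x = [0.0] * len(y)
    residual = list(y)
    while max(abs(r) for r in residual) > tol:
        step = approx_solve(residual, eps)
        x = [xi + si for xi, si in zip(x, step)]
        # New residual of T x = y; since T = identity, it is residual - step.
        residual = [ri - si for ri, si in zip(residual, step)]
    return x

x = exact_solve([1.0, -2.0, 0.5])
assert max(abs(a - b) for a, b in zip(x, [1.0, -2.0, 0.5])) < 1e-9
```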
The Hardy and Model Spaces
--------------------------
We let $\Theta$ denote a nonconstant inner function and apply Theorem \[EISiffASI\] to the reproducing kernel Hilbert space $K_{\Theta}$. We also include statements and results about Carleson measures. Given a non-negative measure $\mu$ on ${\mathbb{D}}$, we call the (possibly infinite) constant $${\mathcal{C}}(\mu) = \sup_{f \in H^2, f \neq 0} \frac{\|f\|^2_{L^2({\mathbb{D}}, \mu)}}{\|f\|^2_2}$$ the Carleson embedding constant of $\mu$ on $H^2$, and $${\mathcal{R}}(\mu) = \sup_{z\in\mathbb{D}} \frac{\|k_z\|_{L^2({\mathbb{D}}, \mu)}}{\|k_z\|_2}=\sup_{z} \|k_z\|_{L^2({\mathbb{D}}, \mu)}$$ the embedding constant of $\mu$ on the normalized reproducing kernels $k_z$ of $H^2$. It is well-known that ${\mathcal{C}}(\mu)\approx {\mathcal{R}}(\mu)$ [@MR2417425; @nikolski].
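For a discrete measure $\mu = \sum_k (1-|\lambda_k|^2)\,\delta_{\lambda_k}$ of the kind appearing in the theorem below, $\|k_z\|_{L^2(\mu)}^2$ is an explicit sum, so the embedding constant can be probed numerically. A small sketch (our own illustration with an invented real sequence, not data from the paper):

```python
# ||k_z||^2 in L^2(mu) for the discrete measure mu = sum_k (1-lam_k^2) d_{lam_k},
# where k_z is the normalized Szego kernel: k_z(w) = sqrt(1-z^2)/(1-z*w)
# for real z, w in (-1, 1).

def embedding_sq(z, lams):
    return sum((1 - lam * lam) * (1 - z * z) / (1 - z * lam) ** 2
               for lam in lams)

lams = [1 - 10.0 ** (-n * n) for n in (1, 2, 3, 4)]

# At z = lam_k, the k-th term contributes exactly 1, and for a rapidly
# thinning sequence the remaining terms are tiny -- the behavior behind the
# condition C(mu_N) -> 1 in the theorem below.
val = embedding_sq(lams[1], lams)
assert 1.0 <= val < 1.1
```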
\[main\] Let $\{\lambda_n\}$ be an interpolating sequence in $\mathbb{D}$ and let $\Theta$ be an inner function. Suppose that $\kappa:=\sup_{n} \left\vert \Theta(\lambda_n)\right| < 1$. The following are equivalent:
1. $\{\lambda_n\}$ is an $EIS_{H^2}$ sequence\[eish2\];
2. $\{\lambda_n\}$ is a thin interpolating sequence\[thin1\];
3. \[aob\] Either
1. $\{k_{\lambda_n}^\Theta\}_{n\ge1}$ is an $AOB$, or
2. there exists $p \ge 2$ such that $\{k_{\lambda_n}^\Theta\}_{n \ge p}$ is a complete $AOB$ in $K_\Theta$;
4. $\{\lambda_n\}$ is an $AIS_{H^2}$ sequence\[Aish2\];
5. The measure $$\mu_N = \sum_{k \ge N} (1 - |\lambda_k|^2)\delta_{\lambda_k}$$ is a Carleson measure for $H^2$ with Carleson embedding constant ${\mathcal{C}}(\mu_N)$ satisfying ${\mathcal{C}}(\mu_N) \to 1$ as $N \to \infty$\[C1\];
6. The measure $$\nu_N = \sum_{k \ge N}\frac{(1 - |\lambda_k|^2)}{\delta_k} \delta_{\lambda_k}$$ is a Carleson measure for $H^2$ with embedding constant ${\mathcal{R}}(\nu_N)$ on reproducing kernels satisfying ${\mathcal{R}}(\nu_N) \to 1$\[C2\].
Further, (7) and (8) are equivalent to each other and imply each of the statements above. If, in addition, $\Theta(\lambda_n) \to 0$, then (1)-(8) are equivalent.
7. $\{\lambda_n\}$ is an $EIS_{K_\Theta}$ sequence\[eis\];
8. $\{\lambda_n\}$ is an $AIS_{K_\Theta}$ sequence\[ais\].
The equivalence between (1) and (4) is contained in Theorem \[EISiffASI\]. Similarly, this applies to (7) and (8). In [@GPW]\*[Theorem 4.5]{}, the authors prove that (1), (2) and (4) are equivalent. The equivalence between (2), (5), and (6) is contained in [@GPW]. That (2) implies (3) is Theorem \[theorem5.2CFT\]. That (3) implies (2) also follows from results in [@CFT], for if a sequence is an $AOB$ for some $p \ge 2$ it is an $AOS$ for $p \ge 2$ and hence thin by Proposition \[prop5.1CFT\] for $p \ge 2$. This is, of course, the same as being thin interpolating. Thus, we have the equivalence of statements (1), (2), (3), (4), (5), and (6), as well as the equivalence of (7) and (8).\
Now we show that (1) and (7) are equivalent under the hypothesis that $\Theta(\lambda_n)\to 0$.\
(7) $\Rightarrow$ (1). Suppose that $\{\lambda_n\}$ is an $EIS_{K_\Theta}$ sequence. We will prove that this implies it is an $EIS_{H^2}$ sequence, establishing (1).\
Let $\varepsilon>0$ be given. Choose $\varepsilon^\prime < \varepsilon$ and let $N_1 = N(\varepsilon^\prime)$ be chosen according to the definition of $\{\lambda_n\}$ being an $EIS_{K_\Theta}$ sequence. Recall that $$\kappa_m = \sup_{n \ge m} |\Theta(\lambda_n)| \to 0,$$ so we may assume that we have chosen $N_1$ so large that $$\frac{1 + \varepsilon^\prime}{(1 - \kappa_{N_1}^2)^{1/2}} < 1 + \varepsilon.$$ Define $\{\tilde{a}_n\}$ to be $0$ if $n < N_1$ and $\tilde{a}_n=a_n \left(1-{\ensuremath{\left\vert\Theta(\lambda_n)\right\vert}}^2\right)^{-\frac{1}{2}}$ for $n \ge N_1$. Then $\{\tilde{a}_n\} \in \ell^2$. Select $f_a\in K_\Theta\subset H^2$ so that
$$f_a(\lambda_n) \left(\frac{1-{\ensuremath{\left\vert\Theta(\lambda_n)\right\vert}}^2}{1-{\ensuremath{\left\vert\lambda_n\right\vert}}^2}\right)^{-\frac{1}{2}} = \tilde{a}_n =
a_n \left(1-{\ensuremath{\left\vert\Theta(\lambda_n)\right\vert}}^2\right)^{-\frac{1}{2}} \, \textrm{ if } n \ge N_1$$ and $$\|f_a\| \le (1 + \varepsilon^\prime) \|\tilde{a}\|_{N_1, \ell^2} \le \frac{(1 + \varepsilon^\prime)}{(1 - \kappa_{N_1}^2)^{1/2}}\|a\|_{N_1, \ell^2} < (1 + \varepsilon) \|a\|_{N_1, \ell^2}.$$ Since $f_a\in K_\Theta$, we have that $f_a\in H^2$, and canceling out the common factor yields that $f_a(\lambda_n)(1-{\ensuremath{\left\vert\lambda_n\right\vert}}^2)^{-\frac{1}{2}}=a_n$ for all $n\geq N_1$. Thus $\{\lambda_n\}$ is an $EIS_{H^2}$ sequence as claimed.\
(1) $\Rightarrow$ (7). Suppose that $\Theta(\lambda_n) \to 0$ and $\{\lambda_n\}$ is an $EIS_{H^2}$ sequence; equivalently, that $\{\lambda_n\}$ is thin. We want to show that the sequence $\{\lambda_n\}$ is an $EIS_{K_\Theta}$ sequence. First we present some observations.\
First, looking at the definition, we see that we may assume that $\varepsilon > 0$ is small, for any choice of $N$ that works for small $\varepsilon$ also works for larger values.\
Second, if $f\in H^2$ and we let $\tilde{f}=P_{K_\Theta}f$, then we have that ${\ensuremath{\left\|\tilde{f}\right\|}}_2\leq {\ensuremath{\left\|f\right\|}}_2$ since $P_{K_\Theta}$ is an orthogonal projection. Next, we have $P_{K_\Theta} = P_+ - \Theta P_+ \overline{\Theta}$, where $P_+$ is the orthogonal projection of $L^2$ onto $H^2$, so letting $T_{\overline{\Theta}}$ denote the Toeplitz operator with symbol $\overline{\Theta}$ we have $$\label{Toeplitz}
\tilde{f}(z)=f(z)-\Theta(z)T_{\overline{\Theta}}(f)(z).$$ In what follows, $\kappa_m := \sup_{n \ge m}|\Theta(\lambda_n)|$ and recall that we assume that $\kappa_m \to 0$.\
Since $\{\lambda_n\}$ is an $EIS_{H^2}$ sequence, there exists $N_1$ such that for any $a\in\ell^2$ there exists a function $f_0\in H^2$ such that $$f_0(\lambda_n)=a_n\left(\frac{1 - |\Theta(\lambda_n)|^2}{1-{\ensuremath{\left\vert\lambda_n\right\vert}}^2}\right)^\frac{1}{2}~\mbox{for all}~n\geq N_1$$ and $${\ensuremath{\left\|f_0\right\|}}_{2}\leq (1+\varepsilon){\ensuremath{\left\|\{a_k (1 - |\Theta(\lambda_k)|^2)^{\frac{1}{2}}\}\right\|}}_{N_1,\ell^2} \le (1 + \varepsilon){\ensuremath{\left\|a \right\|}}_{N_1,\ell^2}.$$ Here we have applied the $EIS_{H^2}$ property to the sequence $\{a_k(1-\left\vert \Theta(\lambda_k)\right\vert^2)^{\frac{1}{2}}\}\in\ell^2$. By we have that $$\begin{aligned}
\tilde{f}_0(\lambda_k) & = & f_0(\lambda_k)-\Theta(\lambda_k) T_{\overline{\Theta}}(f_0)(\lambda_k)\\
& = & a_k(1 - |\Theta(\lambda_k)|^2)^\frac{1}{2}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{-\frac{1}{2}}-\Theta(\lambda_k) T_{\overline{\Theta}}(f_0)(\lambda_k)\quad\forall k\geq N_1 \end{aligned}$$ and ${\ensuremath{\left\|\tilde{f}_0\right\|}}_2\leq{\ensuremath{\left\|f_0\right\|}}_2\leq (1+\varepsilon){\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}$. Rearranging the above, for $k \ge N_1$ we have $$\begin{aligned}
{\ensuremath{\left\vert\tilde{f}_0(\lambda_k)(1 - |\Theta(\lambda_k)|^2)^{-\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{\frac{1}{2}}-a_k\right\vert}} & = & {\ensuremath{\left\vert\Theta(\lambda_k) T_{\overline{\Theta}}(f_0)(\lambda_k)(1 - |\Theta(\lambda_k)|^2)^{-\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{\frac{1}{2}}\right\vert}}\\
& \leq & \kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}} {\ensuremath{\left\|f_0\right\|}}_2\\
&\leq& (1+\varepsilon) \kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}} {\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}.\end{aligned}$$ We claim that $\{a^{(1)}_n\}=\{\tilde{f}_0(\lambda_n)(1 - |\Theta(\lambda_n)|^2)^{-\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_n\right\vert}}^2)^{\frac{1}{2}} - a_n\}\in\ell^2$ and that there is a constant $N_2$ depending only on $\varepsilon$ and the Carleson measure given by the thin sequence $\{\lambda_n\}$ such that $$\label{a1} {\ensuremath{\left\|a^{(1)}\right\|}}_{N_2,\ell^2}\leq (1+\varepsilon)^2\kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}} {\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}.$$
Since the sequence $\{\lambda_n\}$ is thin and consists of distinct points, it generates an $H^2$ Carleson measure with norm at most $(1+\varepsilon)$; that is, we have the existence of $N_2 \ge N_1$ such that $\kappa_{N_2}(1 - \kappa_{N_2}^2)^{-\frac{1}{2}} \le \kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}}$ and $$\begin{aligned}
{\ensuremath{\left\|a^{(1)}\right\|}}_{N_2,\ell^2} & = & \left(\sum_{k\geq N_2} {\ensuremath{\left\vert\Theta(\lambda_k)\right\vert}}^2 {\ensuremath{\left\vert T_{\overline{\Theta}}(f_0)(\lambda_k)\right\vert}}^2 (1 - |\Theta(\lambda_k)|^2)^{-1}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)\right)^{\frac{1}{2}}\\
& \leq & (1+\varepsilon) \kappa_{N_2}(1 - \kappa_{N_2}^2)^{-\frac{1}{2}} {\ensuremath{\left\|T_{\overline{\Theta}}f_0\right\|}}_2 \nonumber\\
& \leq &(1+\varepsilon) \kappa_{N_2}(1 - \kappa_{N_2}^2)^{-\frac{1}{2}} {\ensuremath{\left\|f_0\right\|}}_2\nonumber\\
& \leq & (1+\varepsilon)^2 \kappa_{N_1} (1 - \kappa_{N_1}^2)^{-\frac{1}{2}} {\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}\nonumber<\infty,\end{aligned}$$ completing the proof of the claim.
We will now iterate these estimates and ideas. Let $\widetilde{a^{(1)}_n}=-\frac{a^{(1)}_n}{(1 + \varepsilon)^2\kappa_{N_1} (1 - \kappa_{N_1}^2)^{-\frac{1}{2}} }$ for $n \ge N_2$ and $\widetilde{a^{(1)}_n} = 0$ otherwise. Then from (\[a1\]) we have that ${\ensuremath{\left\|\widetilde{a^{(1)}}\right\|}}_{N_1,\ell^2} = {\ensuremath{\left\|\widetilde{a^{(1)}}\right\|}}_{N_2,\ell^2} \le {\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}$. Since $\{\lambda_n\}$ is an $EIS_{H^2}$ we may choose $f_1\in H^2$ with $$f_1(\lambda_n)=\widetilde{a_n^{(1)}}(1 - |\Theta(\lambda_n)|^2)^{\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_n\right\vert}}^2)^{-\frac{1}{2}}~\mbox{for all}~n\geq N_1$$ and, letting $\widetilde{f}_1 = P_{K_\Theta}(f_1)$, we have $${\ensuremath{\left\|\tilde{f}_1\right\|}}_{2}\leq{\ensuremath{\left\|f_1\right\|}}_{2}\leq (1+\varepsilon){\ensuremath{\left\|\widetilde{a^{(1)}}\right\|}}_{N_1,\ell^2}\leq (1+\varepsilon){\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}.$$ As above, $$\begin{aligned}
\widetilde{f}_1(\lambda_k) & = & f_1(\lambda_k)-\Theta(\lambda_k) T_{\overline{\Theta}}(f_1)(\lambda_k)\\
& = & \widetilde{a_k^{(1)}}(1 - |\Theta(\lambda_k)|^2)^{\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{-\frac{1}{2}}-\Theta(\lambda_k) T_{\overline{\Theta}}(f_1)(\lambda_k)\quad\forall k\geq N_1. \end{aligned}$$ And, for $k \ge N_1$ we have $$\begin{aligned}
{\ensuremath{\left\vert\tilde{f}_1(\lambda_k)(1 - |\Theta(\lambda_k)|^2)^{-\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{\frac{1}{2}}-\widetilde{a_k^{(1)}}\right\vert}} & = & {\ensuremath{\left\vert\Theta(\lambda_k) T_{\overline{\Theta}}(f_1)(\lambda_k)(1 - |\Theta(\lambda_k)|^2)^{-\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{\frac{1}{2}}\right\vert}}\\
& \leq & \kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}} {\ensuremath{\left\|f_1\right\|}}_2\\
& \leq & (1+\varepsilon)\kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}} {\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}.\end{aligned}$$ Using the definition of $\widetilde{a^{(1)}}$, for $k \ge N_2$ one arrives at $$\begin{aligned}
{\ensuremath{\left\vert\left((1+\varepsilon)^2\kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}} \tilde{f}_1(\lambda_k)+\tilde{f}_0(\lambda_k)\right)(1 - |\Theta(\lambda_k)|^2)^{-\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{\frac{1}{2}}-a_k\right\vert}}\\
\leq (1+\varepsilon)^3\kappa_{N_1}^2 (1 - \kappa_{N_1}^2)^{-1}{\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}.\end{aligned}$$ We continue this procedure, constructing sequences $a^{(j)}\in \ell^2$ and functions $\tilde{f}_j\in K_{\Theta}$ such that $${\ensuremath{\left\|a^{(j)}\right\|}}_{N_1,\ell^2}\leq (1+\varepsilon)^{2j}\left(\frac{\kappa_{N_1}}{(1 - \kappa_{N_1}^2)^{\frac{1}{2}}}\right)^j{\ensuremath{\left\|a\right\|}}_{N_1,\ell^2},$$ $$\left\vert \frac{(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{\frac{1}{2}}}{(1 - |\Theta(\lambda_k)|^2)^{\frac{1}{2}}} \left(\sum_{l=0}^{j} (1+\varepsilon)^{2l}\left(\frac{\kappa_{N_1}}{(1 - \kappa_{N_1}^2)^{\frac{1}{2}}}\right)^{l}\tilde{f}_l(\lambda_k)\right) -a_k \right\vert\leq \left(1+\varepsilon\right)^{2j+1}\left(\frac{\kappa_{N_1}}{(1 - \kappa_{N_1}^2)^{\frac{1}{2}}}\right)^{j+1}\left\Vert a\right\Vert_{N_1,\ell^2},$$ and $${\ensuremath{\left\|\tilde{f}_j\right\|}}_2\leq (1+\varepsilon){\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}~\mbox{for all}~j\in {\mathbb{N}}.$$ Define $$F=\sum_{j = 0}^{\infty} (1+\varepsilon)^{2j} \left(\frac{\kappa_{N_1}}{(1 - \kappa_{N_1}^2)^{\frac{1}{2}}}\right)^j \tilde{f}_j.$$ Then $F\in K_{\Theta}$ since each $\tilde{f}_j\in K_{\Theta}$ and, since $\kappa_m \to 0$, we may assume that $$(1 + \varepsilon)^2\left(\frac{\kappa_{N_1}}{(1 - \kappa_{N_1}^2)^{\frac{1}{2}}}\right) < 1.$$ So,
$${\ensuremath{\left\|F\right\|}}_2\leq \frac{(1+\varepsilon)}{1 - (1+\varepsilon)^2\left(\frac{\kappa_{N_1}}{\left(1 - \kappa_{N_1}^2\right)^{\frac{1}{2}}}\right)} {\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}.$$ For this $\varepsilon$, consider $\varepsilon_M < \varepsilon$ with $\frac{(1+\varepsilon_M)}{1 - (1+\varepsilon_M)^2\left(\frac{\kappa_{N_M}}{\left(1 - \kappa_{N_M}^2\right)^{\frac{1}{2}}}\right)}<1+\varepsilon$. Then, using the process above, we obtain $F_M$ satisfying $F_M \in K_\Theta, \|F_M\|_2 \le (1 + \varepsilon) \|a\|_{M, \ell^2}$ and $F_M(\lambda_n)\|K_{\lambda_n}\|^{-1}_\mathcal{H} = a_n$ for $n \ge M$. Taking $N(\varepsilon) = M$, we see that $F_M$ satisfies the exact interpolation conditions, completing the proof of the theorem.
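The final norm bound on $F$ is just a geometric series estimate: the $j$-th correction has norm at most $(1+\varepsilon)^{2j}\big(\kappa_{N_1}/(1-\kappa_{N_1}^2)^{1/2}\big)^j(1+\varepsilon)\|a\|_{N_1,\ell^2}$, and summing over $j$ gives the closed form above. The following numerical sanity check of that summation is ours, with illustrative parameter values, and is not part of the paper:

```python
def correction_bound(eps, kappa, a_norm, terms=50):
    """Compare the truncated sum of the correction norms
    (1+eps)^(2j) * (kappa / sqrt(1 - kappa^2))^j * (1+eps) * a_norm
    with the closed-form geometric bound used for ||F||_2."""
    r = (1 + eps)**2 * kappa / (1 - kappa**2) ** 0.5
    assert r < 1, "the proof arranges r < 1 by taking kappa small"
    partial = sum(r**j for j in range(terms)) * (1 + eps) * a_norm
    closed = (1 + eps) / (1 - r) * a_norm
    return partial, closed
```

For small $\kappa$ the ratio $r$ is tiny, so the truncated sum is already indistinguishable from the closed form.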
We present an alternate method to prove the equivalence between $(1)$ and $(7)$. As noted above, by Theorem \[EISiffASI\] it is true that $(7)\Leftrightarrow (8)$ and thus it suffices to prove that $(1)\Rightarrow (8)\Leftrightarrow (7)$. Let $\varepsilon>0$ be given. Select a sequence $\{\delta_N\}$ with $\delta_N\to 0$ as $N\to\infty$. Since $(1)$ holds, for large $N$ and $a\in \ell^2$ it is possible to find $f_N\in H^2$ so that $$f_N(\lambda_n)(1-\left\vert\lambda_n\right\vert^2)^{\frac{1}{2}}=\left\Vert a\right\Vert_{N,\ell^2}^{-1} a_n\quad n\geq N$$ with $\left\Vert f_N\right\Vert_{2}\leq 1+\delta_N$. Now observe that we can write $f_N=h_N+\Theta g_N$ with $h_N\in K_\Theta$. Since $h_N$ and $\Theta g_N$ are orthogonal projections of $f_N$ onto subspaces of $H^2$, we also have that $\left\Vert h_N\right\Vert_{2}\leq 1+\delta_N$ and similarly for $g_N$.
By the properties of the functions above we have that: $$h_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}=f_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}-\Theta(\lambda_n)g_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}.$$ Hence, one deduces that $$\begin{aligned}
\left\Vert \left\{h_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}-\frac{a_n}{\left\Vert a\right\Vert_{N,\ell^2}}\right\}\right\Vert_{N,\ell^2} & \leq & \left\Vert \left\{f_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}-\frac{a_n}{\left\Vert a\right\Vert_{N,\ell^2}}\right\}\right\Vert_{N,\ell^2}\\
& & + \left\Vert \left\{\Theta(\lambda_n)g_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}\right\}\right\Vert_{N,\ell^2}\\
& \leq & \left\Vert \left\{\frac{a_n}{\left\Vert a\right\Vert_{N,\ell^2}}\left(\left(\frac{1}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}-1\right)\right\}\right\Vert_{N,\ell^2}\\
& & +\frac{\sup_{m\geq N}\left\vert\Theta(\lambda_m)\right\vert}{(1-\kappa_N^2)^{\frac{1}{2}}} \left\Vert \left\{g_N(\lambda_n)\left(1-\left\vert \lambda_n\right\vert^2\right)^{\frac{1}{2}}\right\}\right\Vert_{N,\ell^2}.\end{aligned}$$ Now for $x$ sufficiently small and positive we have that $\frac{1}{\sqrt{1-x}}-1=\frac{1-\sqrt{1-x}}{\sqrt{1-x}}\lesssim \frac{x}{\sqrt{1-x}}$. Applying this with $x=\sup_{m\geq N} \left\vert\Theta(\lambda_m)\right\vert$ gives that: $$\left\Vert \left\{h_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}-\frac{a_n}{\left\Vert a\right\Vert_{N,\ell^2}}\right\}\right\Vert_{N,\ell^2} \leq \frac{\sup_{m\geq N}\left\vert\Theta(\lambda_m)\right\vert}{(1-\kappa_N^2)^{\frac{1}{2}}} \left(1+\left\Vert \left\{g_N(\lambda_n)\left(1-\left\vert \lambda_n\right\vert^2\right)^{\frac{1}{2}}\right\}\right\Vert_{N,\ell^2}\right).$$ Define $H_N=(1+\delta_N)^{-1} \left\Vert a\right\Vert_{N,\ell^2} h_N$, and then we have $H_N\in K_{\Theta}$ and $\left\Vert H_N\right\Vert_{2}\leq \left\Vert a\right\Vert_{N,\ell^2}$. Using the last estimate and adding and subtracting the quantity $\frac{a_n}{(1+\delta_N)}$ yields that: $$\begin{aligned}
\left\Vert \left\{H_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}-a_n\right\}\right\Vert_{N,\ell^2} \leq & &
\\ \left(\frac{\sup_{m\geq N}\left\vert\Theta(\lambda_m)\right\vert}{(1+\delta_N)(1-\kappa_N^2)^{\frac{1}{2}}} \left(1+\left\Vert \left\{g_N(\lambda_n)\left(1-\left\vert \lambda_n\right\vert^2\right)^{\frac{1}{2}}\right\}\right\Vert_{N,\ell^2}\right)+\delta_N\right)\left\Vert a\right\Vert_{N,\ell^2}.\end{aligned}$$ Note that the quantity: $$\left(\frac{\sup_{m\geq N}\left\vert\Theta(\lambda_m)\right\vert}{(1+\delta_N)(1-\kappa_N^2)^{\frac{1}{2}}} \left(1+\left\Vert \left\{g_N(\lambda_n)\left(1-\left\vert \lambda_n\right\vert^2\right)^{\frac{1}{2}}\right\}\right\Vert_{N,\ell^2}\right)+\delta_N\right)\lesssim \delta_N+\sup_{m\geq N}\left\vert\Theta(\lambda_m)\right\vert.$$ Here we have used that the sequence $\{\lambda_n\}$ is by hypothesis an interpolating sequence and hence: $\left\Vert \left\{g_N(\lambda_n)\left(1-\left\vert \lambda_n\right\vert^2\right)^{\frac{1}{2}}\right\}\right\Vert_{N,\ell^2}\lesssim \left\Vert g_N\right\Vert_{2}\leq 1+\delta_N$. Since by hypothesis we have that $\delta_N+\sup_{m\geq N}\left\vert\Theta(\lambda_m)\right\vert\to 0$ as $N\to\infty$, it is possible to make this less than the given $\varepsilon>0$, and hence we get a function $H_N$ satisfying the properties for $\{\lambda_n\}$ to be $AIS_{K_\Theta}$.
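The elementary estimate $\frac{1}{\sqrt{1-x}}-1\lesssim \frac{x}{\sqrt{1-x}}$ used in the proof above in fact holds with constant $1$ on all of $[0,1)$, since it is equivalent to $1-\sqrt{1-x}\le x$, i.e. to $\sqrt{1-x}\ge 1-x$. A quick numerical check of this (ours, purely illustrative):

```python
import math

def lhs(x):
    # 1/sqrt(1 - x) - 1
    return 1.0 / math.sqrt(1.0 - x) - 1.0

def rhs(x):
    # x / sqrt(1 - x)
    return x / math.sqrt(1.0 - x)

# lhs(x) <= rhs(x) is equivalent to 1 - sqrt(1 - x) <= x, which holds on
# [0, 1) because sqrt(1 - x) >= 1 - x there.
checks = [lhs(k / 1000.0) <= rhs(k / 1000.0) for k in range(1000)]
```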
The proof above also gives an estimate on the norm of the interpolating function in the event that $\sup_n |\Theta(\lambda_n)| \le \kappa < 1$, but $(1 + \varepsilon)$ is no longer the best estimate.
Carleson Measures in Model Spaces {#CMMS}
---------------------------------
From Theorem \[main\], and , we have a Carleson measure statement for thin sequences in the Hardy space $H^2$. In this section, we obtain an equivalence in model spaces.
We now consider the embedding constants in the case of model spaces. As before, given a positive measure $\mu$ on ${\mathbb{D}}$, we denote the (possibly infinite) constant
$${\mathcal{C}}_{\Theta}(\mu) = \sup_{f \in K_{\Theta}, f \neq 0} \frac{\|f\|^2_{L^2({\mathbb{D}}, \mu)}}{\|f\|^2_2}$$ as the Carleson embedding constant of $\mu$ on $K_{\Theta}$ and $${\mathcal{R}}_{\Theta}(\mu) = \sup_{z} \|k^{\Theta}_z\|_{L^2({\mathbb{D}}, \mu)}^2$$ as the embedding constant of $\mu$ on the reproducing kernel of $K_\Theta$ (recall that the kernels $k^{\Theta}_z$ are normalized). It is known that for general measure $\mu$ the constants ${\mathcal{R}}_{\Theta}(\mu)$ and ${\mathcal{C}}_{\Theta}(\mu)$ are not equivalent, [@NV]. The complete geometric characterization of the measures for which ${\mathcal{C}}_{\Theta}(\mu)$ is finite is contained in [@LSUSW]. However, we always have that $${\mathcal{R}}_\Theta(\mu) \le {\mathcal{C}}_\Theta(\mu).$$ For $N > 1$, let $$\sigma_N = \sum_{k \ge N} \left\Vert K_{\lambda_k}^{\Theta}\right\Vert^{-2}\delta_{\lambda_k}=\sum_{k \ge N} \frac{1-\left\vert \lambda_k\right\vert^2}{1-\left\vert \Theta(\lambda_k)\right\vert^2}\delta_{\lambda_k}.$$ Note that for each $f \in K_{\Theta}$ $$\label{munorm}
\| f\|^2_{L^2({\mathbb{D}}, \sigma_N)} = \sum_{k=N}^\infty \frac{(1 - |\lambda_k|^2)}{(1-\left\vert \Theta(\lambda_k)\right\vert^2)} |f(\lambda_k)|^2 = \sum_{k=N}^\infty |\langle f, k^{\Theta}_{\lambda_k}\rangle|^2,$$ and therefore we see that $$\label{e:CETests}
1 \le {\mathcal{R}}_\Theta(\sigma_N) \le {\mathcal{C}}_\Theta(\sigma_N).$$
By working in a restricted setting and imposing a condition on $\{\Theta(\lambda_n)\}$ we have the following.
\[thm:Carleson\] Suppose $\Lambda = \{\lambda_n\}$ is a sequence in $\mathbb{D}$ and $\Theta$ is a nonconstant inner function such that $\kappa_m := \sup_{n \ge m}|\Theta(\lambda_n)|\to 0$. For $N > 1$, let $$\sigma_N = \sum_{k \ge N} \left\Vert K_{\lambda_k}^{\Theta}\right\Vert^{-2}\delta_{\lambda_k}=\sum_{k \ge N} \frac{1-\left\vert \lambda_k\right\vert^2}{1-\left\vert \Theta(\lambda_k)\right\vert^2}\delta_{\lambda_k}.$$ Then the following are equivalent:
1. $\Lambda$ is a thin sequence;
2. $ {\mathcal{C}}_\Theta(\sigma_N) \to 1$ as $N \to \infty$;
3. $ {\mathcal{R}}_\Theta(\sigma_N) \to 1$ as $N \to \infty$.
We have $(2)\Rightarrow (3)$ by testing on the function $f=k_z^{\Theta}$ for all $z\in\mathbb{D}$, which is nothing more than .
We next focus on $(1)\Rightarrow(2)$. Let $f \in K_{\Theta}$ and let the sequence $a$ be defined by $a_j = \left\|K_{\lambda_j}^{\Theta}\right\|^{-1}f(\lambda_j)$. By , $\left\|a\right\|_{N, \ell^2}^2 = \left\| f\right\|^2_{L^2({\mathbb{D}}, \sigma_N)}$, and since $\{k_{\lambda_j}^{\Theta}\}$ is an $AOB$, there exists $C_N$ such that $$\begin{aligned}
\left\|a\right\|_{N,\ell^2}^2 & = \sum_{j \ge N} \left\|K_{\lambda_j}^\Theta\right\|^{-2}_{K_{\Theta}} |f(\lambda_j)|^2 = \left\langle f, \sum_{j \ge N} a_j k_{\lambda_j}^\Theta \right\rangle_{K_{\Theta}} \le \left\| f\right\|_{2} \left\|\sum_{j \ge N} a_j k_{\lambda_j}^\Theta\right\|_{K_{\Theta}} \le C_N \left\| f\right\|_{2} \left\|a\right\|_{N,\ell^2}.\end{aligned}$$ By (1) and [@CFT]\*[Theorem 5.2]{}, we know that $C_N \to 1$ and since we have established that $\|f\|_{L^2({\mathbb{D}}, \sigma_N)} \le C_N \|f\|_2$, (1) $\Rightarrow$ (2) follows.
An alternate way to prove this is to use Theorem \[main\], $(2)\Rightarrow(5)$, and the hypothesis on $\Theta$, since it is then possible to show that $\frac{\mathcal{C}_{\Theta}(\sigma_N)}{\mathcal{C}(\mu_N)}\to 1$. Indeed, given $\varepsilon>0$, we have that $1\leq\mathcal{C}(\mu_M)$ for all $M$, and since $\{\lambda_n\}$ is thin there exists an $N$ such that $\mathcal{C}(\mu_M)<1+\varepsilon$ for all $M\geq N$. Hence, $1\leq \mathcal{C}(\mu_M)<1+\varepsilon$ for all $M\geq N$. These facts easily lead to: $$\frac{1}{1+\varepsilon}\leq\frac{\mathcal{C}_{\Theta}(\sigma_M)}{\mathcal{C}(\mu_M)}.$$ Further, since $\Theta$ tends to zero on the sequence $\{\lambda_n\}$ there is an integer, which without loss of generality we may take to be $N$, so that $\frac{1}{1-\left\vert \Theta(\lambda_n)\right\vert^2}<1+\varepsilon$ for all $n\geq N$. From this we deduce that $$\frac{\mathcal{C}_{\Theta}(\sigma_M)}{\mathcal{C}(\mu_M)}< (1+\varepsilon)\frac{\sup\limits_{f\in K_{\Theta},\, \left\Vert f\right\Vert_2\leq 1} \sum_{m\geq M} (1-\left\vert \lambda_m\right\vert^2)\left\vert f(\lambda_m)\right\vert^2}{\mathcal{C}(\mu_M)}\leq (1+\varepsilon).$$ In the last estimate we used that $K_\Theta\subset H^2$, so the supremum appearing in the numerator is always at most the expression in the denominator. Combining the estimates, we have for $M\geq N$ that $$\frac{1}{1+\varepsilon}\leq \frac{\mathcal{C}_{\Theta}(\sigma_M)}{\mathcal{C}(\mu_M)}<1+\varepsilon,$$ which yields the conclusion that the ratio tends to $1$ as $M\to \infty$.
Now consider $(3)\Rightarrow (1)$ and compute the quantity $\mathcal{R}_\Theta(\sigma_N)$. In what follows, we let $\Lambda_N$ denote the tail of the sequence, $\Lambda_N=\{\lambda_k: k\geq N\}$. Note that for $\left\vert a\right\vert, \left\vert b\right\vert\leq 1$ we have $\left\vert 1-\overline{a}b\right\vert\geq 1-\left\vert a\right\vert$. Using this estimate we see that: $$\begin{aligned}
\sup_{z\in\mathbb{D}} \|k^{\Theta}_z\|_{L^2({\mathbb{D}}, \sigma_N)}^2 & = & \sup_{z\in\mathbb{D}} \sum_{k\geq N} \frac{(1-\left\vert \lambda_k\right\vert^2)}{(1-\left\vert \Theta(\lambda_k)\right\vert^2)} \frac{(1-\left\vert z\right\vert^2)}{(1-\left\vert \Theta(z)\right\vert^2)}\frac{\left\vert 1-\Theta(z)\overline{\Theta(\lambda_k)}\right\vert^2}{\left\vert 1-z\overline{\lambda_k}\right\vert^2}\\
& \geq & \sup_{z\in\mathbb{D}} \sum_{k\geq N} \frac{(1-\left\vert \lambda_k\right\vert^2)(1-\left\vert z\right\vert^2)}{\left\vert 1-z\overline{\lambda_k}\right\vert^2} \frac{(1-\left\vert \Theta(z)\right\vert)(1-\left\vert \Theta(\lambda_k)\right\vert)}{(1-\left\vert \Theta(z)\right\vert^2)(1-\left\vert \Theta(\lambda_k)\right\vert^2)}\\
& = & \sup_{z\in\mathbb{D}} \sum_{k\geq N} \frac{(1-\left\vert \lambda_k\right\vert^2)(1-\left\vert z\right\vert^2)}{\left\vert 1-z\overline{\lambda_k}\right\vert^2} \frac{1}{(1+\left\vert \Theta(z)\right\vert)(1+\left\vert \Theta(\lambda_k)\right\vert)}\\
& \geq & \sup_{z\in\Lambda_N} \sum_{k\geq N} \frac{(1-\left\vert \lambda_k\right\vert^2)(1-\left\vert z\right\vert^2)}{\left\vert 1-z\overline{\lambda_k}\right\vert^2} \frac{1}{(1+\left\vert \Theta(z)\right\vert)(1+\left\vert \Theta(\lambda_k)\right\vert)}\\
& \geq & \frac{1}{(1+\kappa_N)^2}\sup_{z\in\Lambda_N} \sum_{k\geq N} \frac{(1-\left\vert \lambda_k\right\vert^2)(1-\left\vert z\right\vert^2)}{\left\vert 1-z\overline{\lambda_k}\right\vert^2}.\end{aligned}$$ By the Weierstrass Inequality, we obtain for $M \ge N$ that $$\begin{aligned}
\label{wi}
\prod_{k \geq N, k \neq M} \left| \frac{\lambda_k - \lambda_M}{1 - \bar \lambda_k \lambda_M}\right|^2
& = & \prod_{k \geq N, k \neq M} \left( 1- \frac{(1 - |\lambda_k|^2)(1 - |\lambda_M|^2)}{|1 - \bar \lambda_k \lambda_M|^2} \right)\nonumber\\
& \ge & 1 - \sum_{k \geq N, k \neq M} \frac{(1- |\lambda_M|^2)(1- |\lambda_k|^2)}{ | 1 - \bar \lambda_k \lambda_M|^2}.\end{aligned}$$ Thus, by we have for $M \ge N$, $$\begin{aligned}
\frac{1}{(1+\kappa_N)^2}\sup_{z\in\Lambda_N} \sum_{k\geq N} \frac{(1-\left\vert \lambda_k\right\vert^2)(1-\left\vert z\right\vert^2)}{\left\vert 1-z\overline{\lambda_k}\right\vert^2}
& \ge & \frac{1}{(1+\kappa_N)^2}\left(\sum_{k \geq N, k\neq M} \frac{(1-\left\vert \lambda_k\right\vert^2)(1-\left\vert \lambda_M\right\vert^2)}{\left\vert 1-\lambda_M\overline{\lambda_k}\right\vert^2} + 1\right)\\
& \ge & \frac{1}{(1+\kappa_N)^2}\left(1 - \prod_{k \geq N, k\neq M} \left| \frac{\lambda_k - \lambda_M}{1 - \bar \lambda_k \lambda_M}\right|^2 + 1\right).\end{aligned}$$ Now by assumption, recalling that $\kappa_N := \sup_{n \ge N}|\Theta(\lambda_n)|$, we have $$\lim_{N \to \infty}\sup_{z\in\mathbb{D}} \|k^{\Theta}_z\|_{L^2({\mathbb{D}}, \sigma_N)}^2 = 1~\mbox{ and }~\lim_{N \to \infty} \kappa_N = 0,$$ so $$1 = \lim_{N \to \infty}\sup_{z\in\mathbb{D}} \|k^{\Theta}_z\|_{L^2({\mathbb{D}}, \sigma_N)}^2
\ge \lim_{N \to \infty} \frac{1}{(1+\kappa_N)^2}\left(1 - \prod_{k \geq N, k\neq M} \left| \frac{\lambda_k - \lambda_M}{1 - \bar \lambda_k \lambda_M}\right|^2 + 1\right) \ge 1.$$ Therefore, given $\varepsilon > 0$, for $N$ sufficiently large and any $M \ge N$ we have $$\label{e:large}
\prod_{k \geq N, k\neq M} \left| \frac{\lambda_k - \lambda_M}{1 - \bar \lambda_k \lambda_M}\right| > 1 - \varepsilon.$$ In particular, for any $\varepsilon>0$ there is an integer $N_0$ such that for all $M> N_0$ we have: $$\label{e:bigk}
\prod_{k \geq N_0, k\neq M} \left| \frac{\lambda_k - \lambda_M}{1 - \bar \lambda_k \lambda_M}\right| >1-\varepsilon.$$ Fix this value of $N_0$, and consider $k<N_0$. Further, for $k \ne M$ and $k<N_0$, $$\begin{aligned}
1- \rho(\lambda_M, \lambda_k)^2 & = 1- \left|\frac{\lambda_k - \lambda_M}{1 - \bar\lambda_k \lambda_M}\right|^2 = \frac{(1- |\lambda_M|^2)(1- |\lambda_k|^2)}{ | 1 - \bar\lambda_k \lambda_M|^2}\\
&= (1 - |\lambda_k|^2)\frac{(1 - |\lambda_M|^2)}{(1 - |\Theta(\lambda_M)|^2)} \frac{1 - |\Theta(\lambda_M)|^2}{\left\vert 1 - \bar \Theta(\lambda_M) \Theta(\lambda_k)\right\vert^2} \left|\frac{1 - \bar\Theta(\lambda_M) \Theta(\lambda_k)}{1 - \lambda_k \bar\lambda_M}\right|^2\\
& = \frac{1 - |\Theta(\lambda_M)|^2}{\left\vert 1 - \bar \Theta(\lambda_M) \Theta(\lambda_k)\right\vert^2}(1 - |\lambda_k|^2) |k_{\lambda_M}^\Theta(\lambda_k)|^2\\
& = \frac{1 - |\Theta(\lambda_M)|^2}{\left\vert 1 - \bar \Theta(\lambda_M) \Theta(\lambda_k)\right\vert^2} (1 - |\Theta(\lambda_k)|^2) \frac{(1 - |\lambda_k|^2)}{1 - |\Theta(\lambda_k)|^2} |k_{\lambda_M}^\Theta(\lambda_k)|^2\\
& \le \frac{1 - |\Theta(\lambda_M)|^2}{\left\vert 1 - \bar \Theta(\lambda_M) \Theta(\lambda_k)\right\vert^2} \left( \|k_{\lambda_M}^\Theta\|_{L^2(\mathbb{D}, \sigma_M)}^2 - 1\right) \\
& \leq \frac{1}{(1-\kappa_M)^2}\left( \|k_{\lambda_M}^\Theta\|_{L^2(\mathbb{D}, \sigma_M)}^2 - 1\right)\to 0 ~\mbox{as}~ M \to \infty,\end{aligned}$$ since $1\leq \|k_{\lambda_N}^\Theta\|_{L^2(\mathbb{D}, \sigma_N)}^2\leq\sup_{z} \|k_{z}^\Theta\|_{L^2(\mathbb{D}, \sigma_N)}^2$ and, by hypothesis, we have that $\kappa_N \to 0$ and $\mathcal{R}_{\Theta}(\sigma_N)\to 1$. Hence, it is possible to choose an integer $M_0$ sufficiently large compared to $N_0$ so that for all $M>M_0$ $$\rho(\lambda_k,\lambda_{M})>\left(1-\varepsilon\right)^{\frac{1}{N_0}}\quad k<N_0$$ which implies that $$\label{e:smallk}
\prod_{k<N_0} \rho(\lambda_k,\lambda_{M})>1-\varepsilon.$$
Now given $\varepsilon>0$, first select $N_0$ as above in . Then select $M_0$ so that holds. Then for any $M>M_0$ we may write the product as $$\prod_{k\neq M} \rho(\lambda_k,\lambda_M)=\prod_{k<N_0} \rho(\lambda_k,\lambda_{M})\prod_{k\geq N_0, k\neq M} \rho(\lambda_k,\lambda_M)>(1-\varepsilon)^2.$$ For the first factor in the product we have used to conclude that it is greater than $1-\varepsilon$, and for $M$ sufficiently large, by , the second factor is greater than $1-\varepsilon$ as well. Hence, the sequence $\{\lambda_n\}$ is thin, as claimed.
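The Weierstrass inequality invoked in the proof, $\prod_k(1-x_k)\ge 1-\sum_k x_k$ for $x_k\in[0,1]$, is easy to test numerically; the sketch below is ours, not part of the original argument, and checks it on random inputs:

```python
import random

def weierstrass_holds(xs, tol=1e-12):
    """Check prod(1 - x_k) >= 1 - sum(x_k) for x_k in [0, 1]."""
    prod = 1.0
    for x in xs:
        prod *= (1.0 - x)
    return prod >= 1.0 - sum(xs) - tol  # tolerance for float rounding

random.seed(0)
trials = [[random.random() for _ in range(random.randint(1, 20))]
          for _ in range(200)]
all_ok = all(weierstrass_holds(xs) for xs in trials)
```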
Algebra Version {#asip_algebra}
===============
We now compare the model-space version of our results with an algebra version. Theorem \[main\] requires that our inner function satisfy $\Theta(\lambda_n) \to 0$ for a thin interpolating sequence $\{\lambda_n\}$ to be an $AIS_{K_\Theta}$ sequence. Letting $B$ denote the Blaschke product corresponding to the sequence $\{\lambda_n\}$, denoting the algebra of continuous functions on the unit circle by $C$, and letting $H^\infty + C = \{f + g: f \in H^\infty, g \in C\}$ (see [@Sarason1] for more on this algebra), we can express this condition in the following way: $\Theta(\lambda_n) \to 0$ if and only if $\overline{B} \Theta \in H^\infty + C$. In other words, if and only if $B$ divides $\Theta$ in $H^\infty + C$, [@AG; @GIS].
We let $\mathcal{B}$ be a Douglas algebra; that is, a uniformly closed subalgebra of $L^\infty$ containing $H^\infty$. It will be helpful to use the maximal ideal space of our algebra. Throughout $M(\mathcal{B})$ denotes the maximal ideal space of the algebra $\mathcal{B}$; that is, the set of nonzero continuous multiplicative linear functionals on $\mathcal{B}$.
We now consider thin sequences in uniform algebras. This work is closely connected to the study of such sequences in general uniform algebras (see [@GM]) and the special case $\mathcal{B} = H^\infty$ is considered in [@HIZ]. With the weak-$\star$ topology, $M(\mathcal{B})$ is a compact Hausdorff space. In interpreting our results below, it is important to recall that each $x \in M(H^\infty)$ has a unique extension to a linear functional of norm one and, therefore, we may identify $M(\mathcal{B})$ with a subset of $M(H^\infty)$. In this context, the condition we will require (see Theorem \[main\_algebra\]) for an $EIS_\mathcal{B}$ sequence to be the same as an $AIS_\mathcal{B}$ sequence is that the sequence be thin near $M(\mathcal{B})$. We take the following as the definition (see [@SW]):
An interpolating sequence $\{\lambda_n\}$ with corresponding Blaschke product $b$ is said to be [thin near $M(\mathcal{B})$]{.nodecor} if for any $0<\eta < 1$ there is a factorization $b = b_1 b_2$ with $b_1$ invertible in $\mathcal{B}$ and $$|b_2^\prime(\lambda_n)|(1 - |\lambda_n|^2) > \eta$$ for all $n$ such that $b_2(\lambda_n) = 0$.
We will be interested in two related concepts that a sequence can have. We first introduce a norm on a sequence $\{a_n\}\in \ell^\infty$ that is induced by a second sequence $\{\lambda_n\}$ and a set $\mathcal{O}\supset M(\mathcal{B})$ that is open in $M(H^\infty)$. Set $I_\mathcal{O}=\{n\in{\mathbb{Z}}: \lambda_n\in\mathcal{O}\}$. Then we define $${\ensuremath{\left\|a\right\|}}_{\mathcal{O},\ell^\infty}=\sup\{ {\ensuremath{\left\vert a_n\right\vert}}: n\in I_\mathcal{O}\}.$$
A Blaschke sequence $\{\lambda_n\}$ is an [eventual $1$-interpolating sequence in a Douglas algebra $\mathcal{B}$]{.nodecor}, denoted $EIS_{\mathcal{B}}$, if for every $\varepsilon > 0$ there exists an open set $\mathcal{O}\supset M(\mathcal{B})$ such that for each $\{a_n\} \in \ell^\infty$ there exists $f_{\mathcal{O}, a} \in H^\infty$ with $$f_{\mathcal{O}, a}(\lambda_n) = a_n ~\mbox{for}~ \lambda_n\in\mathcal{O} ~\mbox{and}~ \|f_{\mathcal{O}, a}\|_{\infty} \le (1 + \varepsilon) \|a\|_{\mathcal{O}, \ell^\infty}.$$
A Blaschke sequence $\{\lambda_n\}$ is a [strong asymptotic interpolating sequence in a Douglas algebra $\mathcal{B}$]{.nodecor}, denoted $AIS_{\mathcal{B}}$, if for all $\varepsilon > 0$ there exists an open set $\mathcal{O}\supset M(\mathcal{B})$ such that for all sequences $\{a_n\} \in \ell^\infty$ there exists a function $G_{\mathcal{O}, a} \in H^\infty$ such that $\|G_{\mathcal{O}, a}\|_{\infty} \le \|a\|_{\mathcal{O},\ell^\infty}$ and $$\|\{G_{\mathcal{O}, a}(\lambda_n) - a_n\}\|_{\mathcal{O}, \ell^\infty} < \varepsilon \|a\|_{\mathcal{O}, \ell^\infty}.$$
\[EISiffASI\_algebra\] Let $\mathcal{B}$ be a Douglas algebra. Let $\{\lambda_n\}$ be a Blaschke sequence of points in ${\mathbb{D}}$. Then $\{\lambda_n\}$ is an $EIS_{\mathcal{B}}$ sequence if and only if $\{\lambda_n\}$ is an $AIS_{\mathcal{B}}$.
If a sequence is an $EIS_{\mathcal{B}}$, then it is trivially $AIS_{\mathcal{B}}$, for given $\varepsilon > 0$ we may take $G_{\mathcal{O}, a} = \frac{f_{\mathcal{O}, a}}{1 + \varepsilon}$.
For the other direction, suppose $\{\lambda_n\}$ is an $AIS_{\mathcal{B}}$ sequence. Let $\varepsilon > 0$ be given and let $\varepsilon^\prime < \frac{\varepsilon}{1 + \varepsilon}$. Let $\mathcal{O} \supset M(\mathcal{B})$ denote the open set we obtain from the definition of $AIS_{\mathcal{B}}$ corresponding to $\varepsilon^\prime$. Reordering the points of the sequence in $\mathcal{O}$ so that they begin at $n = 1$ and occur in the same order, we let $T: H^\infty \to \ell^\infty$ be defined by $T(g) = \{g(\lambda_{n})\}$. Given $a \in \ell^\infty$, we let $y_\mathcal{O}$ denote the correspondingly reordered sequence. Then $T$ is a bounded linear operator between Banach spaces, so we may use Proposition \[Banachspace\] to choose $f \in H^\infty$ so that $Tf = y_\mathcal{O}$ and $\|f\|_\infty < \frac{1}{1 - \varepsilon^\prime} \|y_\mathcal{O}\|_{\ell^\infty} < (1 + \varepsilon) \|a\|_{\mathcal{O}, \ell^\infty}$ to complete the proof.
Letting $\overline{\mathcal{B}}$ denote the set of functions whose complex conjugates lie in $\mathcal{B}$, we mention one more set of equivalences. In [@SW Theorem 1] Sundberg and Wolff showed that an interpolating sequence $\{\lambda_n\}$ is thin near $M(\mathcal{B})$ if and only if for any bounded sequence of complex numbers $\{w_n\}$ there exists a function $f \in H^\infty \cap \overline{\mathcal{B}}$ such that $f(\lambda_n) = w_n$ for all $n$.
Finally, we note that Earl ([@E Theorem 2] or [@E2]) proved that given an interpolating sequence for the algebra $H^\infty$ satisfying $$\inf_n \prod_{j \ne n} \left|\frac{z_j - z_n}{1 - \overline{z_j} z_n}\right| \ge \delta > 0$$ then for any bounded sequence $\{\omega_n\}$ and $$\label{Earl}
M > \frac{2 - \delta^2 + 2(1 - \delta^2)^{1/2}}{\delta^2} \sup_n |\omega_n|$$ there exists a Blaschke product $B$ and a real number $\alpha$ so that $$M e^{i \alpha} B(z_j) = \omega_j~\mbox{for all}~j.$$ Using the results of Sundberg-Wolff and Earl, we obtain the following theorem.
\[main\_algebra\] Let $\{\lambda_n\}$ in $\mathbb{D}$ be an interpolating Blaschke sequence and let $\mathcal{B}$ be a Douglas algebra. The following are equivalent:
1. $\{\lambda_n\}$ is an $EIS_{\mathcal{B}}$ sequence; \[EIS\_Douglas\]
2. $\{\lambda_n\}$ is an $AIS_{\mathcal{B}}$ sequence; \[AIS\_Douglas\]
3. $\{\lambda_n\}$ is thin near $M(\mathcal{B})$;\[nearthin\]
4. for any bounded sequence of complex numbers $\{w_n\}$ there exists a function $f \in H^\infty \cap \overline{\mathcal{B}}$ such that $f(\lambda_n) = w_n$ for all $n$.\[SW\]
The equivalence between and is contained in Theorem \[EISiffASI\_algebra\]. The equivalence of and is the Sundberg-Wolff theorem.
We next prove that if a sequence is thin near $M(\mathcal{B})$, then it is an $EIS_{\mathcal{B}}$ sequence. We let $b$ denote the Blaschke product associated to the sequence $\{\lambda_n\}$.
Given $\varepsilon>0$, choose $\gamma$ so that $$\left(\frac{1 + \sqrt{1 - \gamma^2}}{\gamma}\right)^2 < 1 + \varepsilon.$$ Choose a factorization $b = b_1^\gamma b_2^\gamma$ so that $\overline{b_1^\gamma} \in \mathcal{B}$ and $\delta(b_2^\gamma) = \inf_{\lambda:\, b_2^\gamma(\lambda) = 0} (1 - |\lambda|^2)|(b_2^\gamma)^\prime(\lambda)| > \gamma$. Since $|b_1^\gamma| = 1$ on $M(\mathcal{B})$ and $\gamma < 1$, there exists an open set $\mathcal{O} \supset M(\mathcal{B})$ such that $|b_1^\gamma| > \gamma$ on $\mathcal{O}$. Note that if $b(\lambda) = 0$ and $\lambda \in \mathcal{O}$, then $b_2^\gamma(\lambda) = 0$.
The condition on $b_2^\gamma$ coupled with Earl’s Theorem (see ), gives rise to functions $\{f_k^\gamma\}$ in $H^\infty$ (!), and hence in $\mathcal{B}$ so that $$\label{estimate}
f_j^\gamma(\lambda_k) = \delta_{jk} \, ~\mbox{whenever}~ b_2^\gamma(\lambda_k) = 0 ~\mbox{and}~\sup_{z \in \mathbb{D}}\sum_{j}{\ensuremath{\left\vert f_j^\gamma(z)\right\vert}}\leq \left(\frac{1 + \sqrt{1 - \gamma^2}}{\gamma}\right)^2.$$
Now given $a\in\ell^\infty$, choose the corresponding P. Beurling functions (as in ) and let $$f^\gamma_{\mathcal{O}, a}=\sum_{j} a_j f_j^\gamma.$$ By construction we have that $f^\gamma_{\mathcal{O},a}(\lambda_n)=a_n$ for all $\lambda_n\in\mathcal{O}$. Also, by Earl’s estimate , we have that $${\ensuremath{\left\|f_{\mathcal{O},a}^{\gamma}\right\|}}_{\infty} \leq (1+\varepsilon)\|a\|_\infty.$$ Thus, implies .
Finally, we claim implies . Suppose $\{\lambda_n\}$ is an $EIS_{\mathcal{B}}$ sequence. Let $0 < \eta < 1$ be given and choose $\eta_1$ with $1/(1 + \eta_1) > \eta$, functions $f_{\mathcal{O}, n} \in H^\infty$ and $\mathcal{O} \supset M(\mathcal{B})$ open in $M(H^\infty)$ with $$f_{\mathcal{O}, n}(\lambda_m) = \delta_{nm}~\mbox{for}~\lambda_m \in \mathcal{O}~\mbox{and}~\|f_{\mathcal{O}, n}\|_{\infty} \le 1 + \eta_1.$$ Let $b_2$ denote the Blaschke product with zeros in $\mathcal{O}$, $b_1$ the Blaschke product with the remaining zeros and let $$f_{\mathcal{O}, n}(z) = \left(\prod_{j \ne n: b_2(\lambda_j) = 0} \frac{z - \lambda_j}{1 - \overline{\lambda_j}z}\right) h(z),$$ for some $h \in H^\infty.$ Then $\|h\|_{\infty} \le 1 + \eta_1$ and $$1 = |f_{\mathcal{O}, n}(\lambda_n)| = \left|\left(\prod_{j \ne n: b_2(\lambda_j) = 0} \frac{\lambda_n - \lambda_j}{1 - \overline{\lambda_j}\lambda_n}\right) h(\lambda_n)\right| \le (1 + \eta_1) \prod_{j \ne n: b_2(\lambda_j) = 0} \left|\frac{\lambda_n - \lambda_j}{1 - \overline{\lambda_j}\lambda_n}\right|.$$ Therefore $$(1 - |\lambda_n|^2)|b_2^\prime(\lambda_n)| = \prod_{j \ne n: b_2(\lambda_j) = 0} \left|\frac{\lambda_n - \lambda_j}{1 - \overline{\lambda_j}\lambda_n}\right| \ge 1/(1 + \eta_1) > \eta.$$
Now because we assume that $\{\lambda_n\}$ is interpolating, the Blaschke product $b = b_1 b_2$ with zeros at $\{\lambda_n\}$ will vanish at $x \in M(H^\infty)$ if and only if $x$ lies in the closure of the zeros of $\{\lambda_n\}$, [@Hoffman]\*[p. 206]{} or [@Garnett]\*[p. 379]{}. Now, if we choose $\mathcal{V}$ open in $M(H^\infty)$ with $M(\mathcal{B}) \subset \mathcal{V} \subset \overline{\mathcal{V}} \subset \mathcal{O}$, then $b_1$ has no zeros in ${\mathcal{V}} \cap \mathbb{D}$ and, therefore, no point of $M(\mathcal{B})$ can lie in the closure of the zeros of $b_1$. So $b_1$ has no zeros on $M(\mathcal{B})$. Thus we see that $b_1$ is bounded away from zero on $M(\mathcal{B})$ and, consequently, $b_1$ is invertible in $\mathcal{B}$.
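As an aside, the constant $\left(\frac{1 + \sqrt{1 - \gamma^2}}{\gamma}\right)^2$ appearing in the choice of $\gamma$ above is exactly Earl's constant $\frac{2 - \gamma^2 + 2(1 - \gamma^2)^{1/2}}{\gamma^2}$ from , and it decreases to $1$ as $\gamma \to 1$, which is what makes the $(1+\varepsilon)$ bound attainable. A quick numerical check of the identity and the limit (ours, purely illustrative):

```python
import math

def earl_constant(delta):
    """Earl's constant (2 - delta^2 + 2*sqrt(1 - delta^2)) / delta^2."""
    return (2.0 - delta**2 + 2.0 * math.sqrt(1.0 - delta**2)) / delta**2

def proof_constant(gamma):
    """The constant ((1 + sqrt(1 - gamma^2)) / gamma)^2 used in the proof."""
    return ((1.0 + math.sqrt(1.0 - gamma**2)) / gamma) ** 2

deltas = [0.5, 0.9, 0.99, 0.999]
# The two expressions agree, and the common value decreases toward 1.
identity_ok = all(abs(earl_constant(d) - proof_constant(d)) < 1e-10 for d in deltas)
values = [earl_constant(d) for d in deltas]
```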
We note that we do not need the full assumption that $b$ is interpolating; it is enough to assume that $b$ does not vanish identically on a Gleason part contained in $M(\mathcal{B})$. Our goal, however, is to illustrate the difference in the Hilbert space and uniform algebra setting and so we have stated the most important setting for our problem.
[^1]: $\dagger$ Research supported in part by Simons Foundation Grant 243653
[^2]: $\ddagger$ Research supported in part by a National Science Foundation DMS grant \# 0955432.
|
{
"pile_set_name": "arxiv"
}
|
/*
* Copyright 2010-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://aws.amazon.com/apache2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
#import "SimpleDBMissingParameterException.h"
@implementation SimpleDBMissingParameterException
@synthesize boxUsage;
-(id)initWithMessage:(NSString *)theMessage
{
if (self = [super initWithMessage:theMessage]) {
}
return self;
}
-(void)setPropertiesWithException:(AmazonServiceException *)theException
{
[super setPropertiesWithException:theException];
if ([theException.additionalFields valueForKey:@"BoxUsage"] != nil) {
self.boxUsage = [AmazonSDKUtil convertStringToNumber:[theException.additionalFields valueForKey:@"BoxUsage"]];
}
}
-(NSString *)description
{
NSMutableString *buffer = [[NSMutableString alloc] initWithCapacity:256];
[buffer appendString:@"{"];
[buffer appendString:[[[NSString alloc] initWithFormat:@"BoxUsage: %@,", boxUsage] autorelease]];
[buffer appendString:[super description]];
[buffer appendString:@"}"];
return [buffer autorelease];
}
-(void)dealloc
{
[boxUsage release];
[super dealloc];
}
@end
|
{
"pile_set_name": "github"
}
|
--- contrib/virt.te 2012-11-25 21:35:09.181247450 +0100
+++ contrib/virt.te 2012-11-25 21:34:09.223216815 +0100
@@ -281,7 +281,11 @@
userdom_search_user_home_dirs(virt_domain)
userdom_read_all_users_state(virt_domain)
-qemu_exec(virt_domain)
+ifdef(`distro_gentoo',`
+ optional_policy(`
+ qemu_exec(virt_domain)
+ ')
+')
tunable_policy(`virt_use_execmem',`
allow virt_domain self:process { execmem execstack };
|
{
"pile_set_name": "github"
}
|
Oh lawd oh lawd! I'm tired and weary of pain Please lawd! please lawd! forgive me if I complain Up in the mornin' out on the job work like the devil for my pay But that lucky old sun has nothin' to do but roll around heaven all day
Fuss with my woman toil for my kids Sweat 'til I'm wrinkled and gray While that lucky old sun has nothin' to do But roll around heaven all day
Good lawd above, can't you know I'm pinin', tears all in my eyes; Send down that cloud with a silver linin', lift me to paradise Show me that river take me across and wash all my troubles away. Like that lucky old sun, give me nothin' to do But roll around heaven all day
|
{
"pile_set_name": "pile-cc"
}
|
---
abstract: |
In this work, we apply the factorization technique to the Benjamin-Bona-Mahony like equations in order to get travelling wave solutions. We will focus on some special cases for which $m\neq n$, and we will obtain these solutions in terms of Weierstrass functions.
Email: kuru@science.ankara.edu.tr
title: 'Travelling wave solutions of BBM-like equations by means of factorization'
---
Ş. Kuru\
[*Department of Physics, Faculty of Science, Ankara University, 06100 Ankara, Turkey*]{}
Introduction
============
In this paper, we will consider the Benjamin-Bona-Mahony (BBM) [@benjamin] like equation ($B(m,n)$) with a fully nonlinear dispersive term of the form $$u_{t}+u_{x}+a\,(u^m)_x-(u^n)_{xxt}=0, \quad\quad m,\,n>1,\,\,m \neq
n\, .\label{1.3}$$ This equation is similar to the nonlinear dispersive equation $K(m,n)$, $$u_{t}+(u^m)_x+(u^n)_{xxx}=0, \quad\quad m>0,\,1<n\leq3 \label{1.2}$$ which has been studied in detail by P. Rosenau and J.M. Hyman [@rosenau]. In the literature there are many studies dealing with the travelling wave solutions of the $K(m,n)$ and $B(m,n)$ equations, but in general they are restricted to the case $m=n$ [@rosenau; @rosenau1; @wazwaz; @wazwaz1; @wazwaz2; @wazwaz3; @wazwaz5; @taha; @ludu; @wazwaz4; @yadong; @wang; @kuru]. When $m\neq n$, the solutions of $K(m,n)$ were investigated in [@rosenau; @rosenau1]. Our aim here is just to search for solutions of the equations $B(m,n)$, with $m\neq n$, by means of the factorization method.
We remark that this method [@pilar1; @pilar; @Perez; @pilar2], when it is applicable, allows one to obtain, directly and systematically, a wide set of solutions, compared with other methods used for the BBM equations. For example, the direct integral method used by C. Liu [@liu] can only be applied to the $B(2,1)$ equation. The factorization technique, however, applies to more equations than the direct integral method and, in some cases, also gives rise to more general solutions than the sine-cosine and tanh methods [@wazwaz4; @yadong; @wazwaz6]. This factorization approach to finding travelling wave solutions of nonlinear equations has been extended to third order nonlinear ordinary differential equations (ODE’s) by D-S. Wang and H. Li [@li].
To look for travelling wave solutions of Eq. (\[1.3\]), we first reduce the $B(m,n)$ equation to a second order nonlinear ODE and then immediately apply the factorization technique. Here, we will assume $m \neq n$, since the case $m = n$ has already been examined in a previous article following this method [@kuru].
This paper is organized as follows. In section 2 we introduce the factorization technique for a special type of second order nonlinear ODE. Then, in section 3, we apply the factorization straightforwardly to the related second order nonlinear ODE to get travelling wave solutions of the $B(m,n)$ equation. We obtain the solutions of these nonlinear ODE’s and of the $B(m,n)$ equation in terms of Weierstrass functions in section 4. In section 5 we construct the corresponding Lagrangian and Hamiltonian, and finally we add some concluding remarks.
Factorization of nonlinear second order ODE’s
=============================================
Let us consider the following nonlinear second order ODE $$\label{9}
\frac{d^2 W}{d \theta^2}-\beta \frac{d W}{d \theta}+F(W)=0$$ where $\beta$ is constant and $F(W)$ is an arbitrary function of $W$. The factorized form of this equation can be written as $$\label{10}
\left[\frac{d}{d \theta}-f_2(W,\theta)\right]\left[\frac{d}{d
\theta}-f_1(W,\theta)\right] W(\theta)=0\,.$$ Here, $f_1$ and $f_2$ are unknown functions that may depend explicitly on $W$ and $\theta$. Expanding (\[10\]) and comparing with (\[9\]), we obtain the following consistency conditions $$\label{12}
f_1f_2=\frac{F(W)}{W}+\frac{\partial f_1}{\partial \theta}, \qquad
f_2+\frac{\partial(W f_1)}{\partial W}=\beta.$$ Solving (\[12\]) for $f_{1}$ or $f_{2}$ allows us to write a compatible first order ODE $$\label{14}
\left[\frac{d}{d \theta}-f_1(W,\theta)\right] W(\theta)=0$$ that provides a solution for the nonlinear ODE (\[9\]) [@pilar1; @pilar; @Perez; @pilar2]. In the applications of this paper $f_{1}$ and $f_{2}$ will depend only on $W$.
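Since $f_1$ and $f_2$ here depend only on $W$, the expansion of the factorized form (\[10\]) can be checked symbolically. The following SymPy sketch (ours, not part of the original derivation; the power-law ansatz $f_i = c_i W^k$ is just an illustrative choice) verifies that $[\frac{d}{d\theta}-f_2][\frac{d}{d\theta}-f_1]W$ expands to $W''-(f_1+f_2+W\frac{df_1}{dW})W'+f_1 f_2 W$, which matches (\[9\]) exactly under the conditions (\[12\]) with $\partial f_1/\partial\theta=0$.

```python
# Expand [d/dθ - f2][d/dθ - f1] W for θ-independent f_i(W) = c_i W^k
# (illustrative ansatz, not from the paper) and compare with Eq. (9).
import sympy as sp

theta, c1, c2, k = sp.symbols('theta c1 c2 k')
W = sp.Function('W')(theta)

f1 = c1 * W**k
f2 = c2 * W**k

inner = W.diff(theta) - f1 * W                  # [d/dθ - f1] W
expanded = sp.expand(inner.diff(theta) - f2 * inner)

# Expected: W'' - (f1 + f2 + W df1/dW) W' + f1 f2 W, cf. conditions (12)
W_df1_dW = c1 * k * W**k                        # W * d(c1 W^k)/dW
expected = (W.diff(theta, 2)
            - (f1 + f2 + W_df1_dW) * W.diff(theta)
            + f1 * f2 * W)
assert sp.simplify(sp.expand(expanded - expected)) == 0
```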
Factorization of the BBM-like equations
========================================
To find travelling wave solutions of Eq. (\[1.3\]) in the form $$\label{15}
u(x,t)=\phi(\xi),\quad\quad \xi=hx+wt$$ where $h$ and $w$ are real constants, we substitute (\[15\]) into (\[1.3\]) and integrate once, which reduces Eq. (\[1.3\]) to the second order nonlinear ODE $$\label{16}
(\phi^n)_{\xi\xi}-A\,\phi-B\,\phi^m+D=0\,.$$ Notice that the constants in Eq. (\[16\]) are $$\label{17}
A=\frac{h+w}{h^2\,w},\quad\quad B=\frac{a}{h\,w},\quad\quad
D=\frac{R}{h^2\,w}$$ and $R$ is an integration constant. Now, if we introduce the following natural transformation of the dependent variable $$\label{18}
\phi^n(\xi)=W(\theta),\quad\quad\xi=\theta$$ Eq. (\[16\]) becomes $$\label{19}
\frac{d^2 W}{d \theta^2}-A\,W^{\frac{1}{n}}-B\,W^{\frac{m}{n}}+D=0.$$ Now, we can apply the factorization technique to Eq. (\[19\]). Comparing Eq. (\[9\]) and Eq. (\[19\]), we have $\beta=0$ and $$\label{20}
F(W)=-(A\,W^{\frac{1}{n}}+B\,W^{\frac{m}{n}}-D)\,.$$ Then, from (\[12\]) we get only one consistency condition $$\label{23}
f_1^2+f_1\,W\frac{df_1}{dW}-A\,W^{\frac{1-n}{n}}-B\,
W^{\frac{m-n}{n}}+D\,W^{-1}=0\,$$ whose solutions are $$\label{24}
f_1(W)=\pm\frac{1}{W}\sqrt{\frac{2\,n\,A}{n+1}\,W^{\frac{n+1}{n}}+
\frac{2\,n\,B}{m+n}\,W^{\frac{m+n}{n}}-2\, D\,W+C}\,$$ where $C$ is an integration constant. Thus, the first order ODE (\[14\]) takes the form$$\label{25}
\frac{dW}{d
\theta}\mp\sqrt{\frac{2\,n\,A}{n+1}\,W^{\frac{n+1}{n}}+\frac{2\,n\,B}{m+n}\,W^{\frac{m+n}{n}}-2\,
D\,W+C}=0\,.$$ In order to solve this equation for $W$ in a more general way, let us take $W$ in the form $W=\varphi^p,\,p\neq0,1$, then, the first order ODE (\[25\]) is rewritten in terms of $\varphi$ as $$\label{26}
(\frac{d\varphi}{d \theta})^2=
\frac{2\,n\,A}{p^2\,(n+1)}\,\varphi^{p(\frac{1-n}{n})+2}+
\frac{2\,n\,B}{p^2\,(m+n)}\,\varphi^{p(\frac{m-n}{n})+2}-\frac{2\,D}{p^2}\,\varphi^{2-p}
+\frac{C}{p^2}\,\varphi^{2-2\,p}\,.$$ To guarantee the integrability of (\[26\]), the powers of $\varphi$ have to be integers between $0$ and $4$ [@ince]. Bearing in mind the conditions on $n, m$ ($n\neq m >1$) and $p$ ($p\neq0$), we have the following possible cases:
- If $C=0,\,\,D=0$, we can choose $p$ and $m$ in the following way $$\label{27a}
p=\pm \frac{2n}{1-n} \quad {\rm{with}} \quad
m=\frac{n+1}{2},\frac{3\,n-1}{2}, 2\,n-1$$ and $$\label{27b}
p=\pm \frac{n}{1-n}\quad {\rm{with}} \quad m= 2\,n-1,3\,n-2\,.$$
It can be checked that the two choices of sign in (\[27a\]) and (\[27b\]) give rise to the same solutions for Eq. (\[1.3\]). Therefore, we will consider only one of them. Then, taking $p=-
\frac{2n}{1-n}$, Eq. (\[26\]) becomes $$\label{29}
(\frac{d\varphi}{d
\theta})^2=\frac{A\,(n-1)^2}{2\,n\,(n+1)}+\frac{B\,(n-1)^2}{n\,(3\,n+1)}\,\varphi,
\quad m=\frac{n+1}{2}
%, \quad (n\,\,\rm {is\,odd})$$ $$\label{30}
(\frac{d\varphi}{d
\theta})^2=\frac{A\,(n-1)^2}{2\,n\,(n+1)}+\frac{B\,(n-1)^2}{n\,(5\,n-1)}\,\varphi^3,
\quad m=\frac{3\,n-1}{2}
%, \quad (n\,\,\rm {is\,odd})$$ $$\label{31}
(\frac{d\varphi}{d
\theta})^2=\frac{A\,(n-1)^2}{2\,n\,(n+1)}+\frac{B\,(n-1)^2}{n\,(3\,n-1)}\,\varphi^4,
\quad m=2\,n-1$$ and for $p=- \frac{n}{1-n}\,$, $$\label{32}
(\frac{d\varphi}{d
\theta})^2=\frac{2\,A\,(n-1)^2}{n\,(n+1)}\,\varphi+\frac{2\,B\,(n-1)^2}{n\,(3\,n-1)}\,\varphi^3,
\quad m=2\,n-1$$ $$\label{33}
(\frac{d\varphi}{d
\theta})^2=\frac{2\,A\,(n-1)^2}{n\,(n+1)}\,\varphi+\frac{B\,(n-1)^2}{n\,(2\,n-1)}\,\varphi^4,
\quad m=3\,n-2\,.$$
- If $C=0$, we have the special cases $p=\pm 2$, $n=2$ with $m=3,4$.
For the same reason as in the above case, we will consider only $p=2$. Then, Eq. (\[26\]) takes the form: $$\label{e23}
(\frac{d\varphi}{d
\theta})^2=-\frac{D}{2}+\frac{A}{3}\,\varphi+\frac{B}{5}\,\varphi^3,
\quad m=3$$ $$\label{e24}
(\frac{d\varphi}{d
\theta})^2=-\frac{D}{2}+\frac{A}{3}\,\varphi+\frac{B}{6}\,\varphi^4,
\quad m=4\,.$$
- If $A=C=0$, we have $p=\pm 2$ with $m=\displaystyle \frac{n}{2},\frac{3\,n}{2},2\,n$.
In this case, for $p=2$, Eq. (\[26\]) has the following form: $$\label{e2n2}
(\frac{d\varphi}{d
\theta})^2=-\frac{D}{2}\,\varphi^4+\frac{B}{3}\,\varphi^3, \quad
m=\frac{n}{2}
%, \quad (n\,\,\rm {is\,even})$$ $$\label{e3n2}
(\frac{d\varphi}{d
\theta})^2=-\frac{D}{2}\,\varphi^4+\frac{B}{5}\,\varphi, \quad
m=\frac{3\,n}{2}
%, \quad (n\,\,\rm {is\,even})$$ $$\label{e22n}
(\frac{d\varphi}{d \theta})^2=-\frac{D}{2}\,\varphi^4+\frac{B}{6},
\quad m=2\,n\,.$$
- If $A=0$, we have $p=\pm 1$ with $m=2\,n,3\,n$.
Here, also we will take only the case $p=1$, then, we will have the equations: $$\label{e2n}
(\frac{d\varphi}{d
\theta})^2=-2\,D\,\varphi+\frac{2}{3}\,B\,\varphi^3+C\varphi^4,
\quad m=2n$$ $$\label{e3n}
(\frac{d\varphi}{d
\theta})^2=-2\,D\,\varphi+\frac{B}{2}\,\varphi^4+C, \quad m=3n\,.$$
- If $A=D=0$, we have $p=\displaystyle \pm \frac{1}{2}$ with $m=3\,n,5\,n$.
Thus, for $p=\displaystyle \frac{1}{2}$, Eq. (\[26\]) becomes: $$\label{e3n1}
(\frac{d\varphi}{d \theta})^2=2\,B\,\varphi^3+4\,C\varphi, \quad
m=3n$$ $$\label{e3nn}
(\frac{d\varphi}{d
\theta})^2=\frac{4}{3}\,B\,\varphi^4+4\,C\,\varphi, \quad m=5n\,.$$
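All of the cases above descend from the first order equation (\[25\]), $W_\theta^2=P(W)$ with $P(W)=\frac{2nA}{n+1}W^{\frac{n+1}{n}}+\frac{2nB}{m+n}W^{\frac{m+n}{n}}-2DW+C$. As a quick symbolic consistency check (ours, done with SymPy for the representative values $n=3$, $m=5$, i.e. $m=2n-1$), differentiating once gives $2W_\theta W_{\theta\theta}=P'(W)W_\theta$, so $W_{\theta\theta}=P'(W)/2$, which must equal $AW^{1/n}+BW^{m/n}-D$ as required by (\[19\]):

```python
# Spot check (n = 3, m = 5) that W'' = P'(W)/2 reproduces the right-hand
# side of Eq. (19); A, B, C, D remain symbolic.
import sympy as sp

A, B, C, D = sp.symbols('A B C D')
W = sp.Symbol('W', positive=True)
n, m = sp.Rational(3), sp.Rational(5)

P = 2*n*A/(n+1)*W**((n+1)/n) + 2*n*B/(m+n)*W**((m+n)/n) - 2*D*W + C
Wpp = sp.diff(P, W) / 2        # from differentiating W'^2 = P(W)
assert sp.simplify(Wpp - (A*W**(1/n) + B*W**(m/n) - D)) == 0
```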
Travelling wave solutions for BBM-like equations
================================================
In this section, we will obtain the solutions of the differential equations (\[29\])-(\[33\]) in terms of the Weierstrass function, $\wp(\theta;g_{2},g_{3})$, which allow us to get the travelling wave solutions of the $B(m,n)$ equations (\[1.3\]). The remaining equations (\[e23\])-(\[e3nn\]) can be dealt with in a similar way, but they will not be worked out here for the sake of brevity.
First, we will give some properties of the $\wp$ function which will be useful in the following [@Bateman; @watson].
Relevant properties of the $\wp$ function
-----------------------------------------
Let us consider a differential equation with a quartic polynomial $$\label{ef}
\big(\frac{d\varphi}{d\theta}\big)^2 =P(\varphi) =
a_{0}\,\varphi^4+4\,a_{1}\,\varphi^3+6\,a_{2}\,\varphi^2+4\,a_{3}\,\varphi+a_{4}\,.$$ The solution of this equation can be written in terms of the Weierstrass function where the invariants $g_2$ and $g_3$ of (\[ef\]) are $$\label{gg}
g_{2}= a_{0}\,a_{4}-4\,a_{1}\,a_{3}+3\,a_{2}^2,\ \ g_{3}=
a_{0}\,a_{2}\,a_{4}+2\,a_{1}\,a_{2}\,a_{3}-a_{2}^{3}-a_{0}\,a_{3}^2-a_{1}^{2}\,a_{4}$$ and the discriminant is given by $\Delta=g_2^3-27\,g_3^2$. Then, the solution $\varphi$ can be found as $$\label{x}
\varphi(\theta)=\varphi_0+\frac{1}{4}P_\varphi(\varphi_0)\left(\wp(\theta;g_{2},g_{3})-
\frac{1}{24}P_{\varphi\varphi}(\varphi_0)\right)^{-1}$$ where the subscript in $P_{\varphi}(\varphi_0)$ denotes the derivative with respect to $\varphi$, and $\varphi_0$ is one of the roots of the polynomial $P(\varphi)$ (\[ef\]). Depending on the selected root $\varphi_0$, we will have a solution with a different behavior [@kuru].
Here we also want to recall some other properties of the Weierstrass functions [@stegun]:
i\) The case $g_2=1$ and $g_3=0$ is called the lemniscatic case $$\label{lc}
\wp(\theta;g_{2},0)=g_2^{1/2}\,\wp(\theta\,g_{2}^{1/4};1,0),\qquad
g_2>0\,$$
ii\) The case $g_2=-1$ and $g_3=0$ is called the pseudo-lemniscatic case $$\label{plc}
\wp(\theta;g_{2},0)=|g_2|^{1/2}\,\wp(\theta\,|g_{2}|^{1/4};-1,0),\qquad
g_2<0\,$$
iii\) The case $g_2=0$ and $g_3=1$ is called the equianharmonic case $$\label{ec}
\wp(\theta;0,g_{3})=g_3^{1/3}\,\wp(\theta\,g_{3}^{1/6};0,1),\qquad
g_3>0\,.$$
Once the solution $W(\theta)$ is obtained, taking into account (\[15\]), (\[18\]) and $W=\varphi^{p}$, the solution of Eq. (\[1.3\]) is obtained as $$\label{uxt}
u(x,t)=\phi(\xi)=W^{\frac{1}{n}}(\theta)=\varphi^{\frac{p}{n}}(\theta),\quad\quad\theta=\xi=h\,x+w\,t.$$
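The recipe (\[gg\])–(\[x\]) can be exercised on the simplest situation, a linear polynomial $P(\varphi)=a_4+4a_3\varphi$ (the shape appearing in (\[29\])): both invariants vanish, so the degenerate case $\wp(\theta;0,0)=1/\theta^2$ applies, and formula (\[x\]) indeed solves $(\varphi_\theta)^2=P(\varphi)$. A SymPy sketch of this check (ours, not from the paper):

```python
# Compute the invariants (gg) for a general quartic, specialize to the
# linear case a0 = a1 = a2 = 0, and verify that formula (x) with the
# degenerate Weierstrass function ℘(θ;0,0) = 1/θ² solves (dφ/dθ)² = P(φ).
import sympy as sp

theta = sp.Symbol('theta')
a0, a1, a2, a3, a4 = sp.symbols('a0:5')
phi = sp.Symbol('phi')

P = a0*phi**4 + 4*a1*phi**3 + 6*a2*phi**2 + 4*a3*phi + a4
g2 = a0*a4 - 4*a1*a3 + 3*a2**2
g3 = a0*a2*a4 + 2*a1*a2*a3 - a2**3 - a0*a3**2 - a1**2*a4

lin = {a0: 0, a1: 0, a2: 0}                    # P = a4 + 4 a3 φ
assert g2.subs(lin) == 0 and g3.subs(lin) == 0

P_lin = P.subs(lin)
phi0 = sp.solve(P_lin, phi)[0]                 # root of P: -a4/(4 a3)
wp = 1 / theta**2                              # ℘(θ; 0, 0)
sol = phi0 + sp.Rational(1, 4)*P_lin.diff(phi).subs(phi, phi0) \
      / (wp - P_lin.diff(phi, 2).subs(phi, phi0)/24)

assert sp.simplify(sp.diff(sol, theta)**2 - P_lin.subs(phi, sol)) == 0
```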
The case $C=0,\,D=0$, $\displaystyle p=-\frac{2n}{1-n}$
-------------------------------------------------------
- $m=\displaystyle \frac{n+1}{2}$
Equation (\[29\]) can be expressed as $$(\frac{d\varphi}{d\theta})^2=P(\varphi)=\frac{A\,(n-1)^2}
{2\,n\,(n+1)}+\frac{B\,(n-1)^2}{n\,(3\,n+1)}\,\varphi$$ and from $P(\varphi)=0$, we get the root of this polynomial $$\label{f01}
\varphi_0=-\frac{A\,(3\,n+1)}{2\,B\,(n+1)}\,.$$ The invariants (\[gg\]) are: $g_{2}=g_{3}=0$, and $\Delta=0$. Therefore, having in mind $\wp(\theta;0,0)=\displaystyle
\frac{1}{\theta^2}$, we can find the solution of (\[29\]) from (\[x\]) for $\varphi_0$, given by (\[f01\]), $$\label{35}
\varphi(\theta)=
\frac{B^2\,(n-1)^2\,(n+1)\,\theta^2-2\,A\,n\,(3\,n+1)^2}{4\,B\,n\,(n+1)\,(3\,n+1)}\,.$$ Now, the solution of Eq. (\[1.3\]) reads from (\[uxt\]) $$\label{u1}
u(x,t)=\left[\frac{B^2\,(n-1)^2\,(n+1)\,(h\,x+w\,t)^2-2\,A\,n\,(3\,n+1)^2}
{4\,B\,n\,(n+1)\,(3\,n+1)}\right]^{\frac{2}{n-1}}\,.$$
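It is straightforward to confirm symbolically that the profile (\[35\]) satisfies (\[29\]) for arbitrary symbolic $A$, $B$, $n$; e.g. with SymPy (our check, not in the paper):

```python
# Verify that φ(θ) of Eq. (35) solves Eq. (29) with A, B, n symbolic.
import sympy as sp

theta, A, B, n = sp.symbols('theta A B n')

phi = (B**2*(n-1)**2*(n+1)*theta**2 - 2*A*n*(3*n+1)**2) \
      / (4*B*n*(n+1)*(3*n+1))
lhs = sp.diff(phi, theta)**2
rhs = A*(n-1)**2/(2*n*(n+1)) + B*(n-1)**2/(n*(3*n+1))*phi
assert sp.simplify(lhs - rhs) == 0
```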
- $m=\displaystyle \frac{3\,n-1}{2}$ In this case, our equation to solve is (\[30\]) and the polynomial has the form $$P(\varphi)=\frac{A\,(n-1)^2}{2\,n\,(n+1)}+\frac{B\,(n-1)^2}{n\,(5\,n-1)}\,\varphi^3$$ with one real root: $\varphi_0=\left(\frac{-A\,(5\,n-1)}{2\,B\,(n+1)}\right)^{1/3}$. Here, the discriminant is different from zero with the invariants $$g_2=0,\qquad
g_3=\frac{-A\,B^2\,(n-1)^6}{32\,n^3\,(n+1)\,(5\,n-1)^2}\,.$$ Then, the solution of (\[30\]) is obtained by (\[x\]) for $\varphi_0$, $$\label{phi2}
\varphi=\varphi_0\,\left[\frac{4\,n\,(5\,n-1)\,\wp(\theta;0,g_3)+2\,B\,(n-1)^2\,\varphi_0
}{4\,n\,(5\,n-1)\,\wp(\theta;0,g_3)-B\,(n-1)^2\,\varphi_0}\right]$$ and we get the solution of Eq. (\[1.3\]) from (\[uxt\]) as $$\label{u2}
u(x,t)=\left[\varphi_0^2\,\left(\frac{4\,n\,(5\,n-1)\,\wp(h\,x+w\,t;0,g_3)+2\,B\,(n-1)^2\,\varphi_0
}{4\,n\,(5\,n-1)\,\wp(h\,x+w\,t;0,g_3)-B\,(n-1)^2\,\varphi_0}\right)^2\right]^{\frac{1}{n-1}}$$ with the conditions: $A<0,g_3>0$, for $\varphi_0=\left(\frac{-A\,(5\,n-1)}{2\,B\,(n+1)}\right)^{1/3}$. Using the relation (\[ec\]), we can write the solution (\[u2\]) in terms of equianharmonic case of the Weierstrass function: $$\label{u21}
u(x,t)=\left[\left(\frac{-A\,(5\,n-1)}{2\,B\,(n+1)}\right)^{2/3}\,
\left(\frac{2^{2/3}\,\wp((h\,x+w\,t)\,g_3^{1/6};0,1)+2
}{2^{2/3}\,\wp((h\,x+w\,t)\,g_3^{1/6};0,1)-1}\right)^2\right]^{\frac{1}{n-1}}\,.$$
- $m=2\,n-1$
In Eq. (\[31\]), the quartic polynomial is $$P(\varphi)=\frac{A\,(n-1)^2}{2\,n\,(n+1)}+\frac{B\,(n-1)^2}{2\,n\,(3\,n-1)}\,\varphi^4$$ and has two real roots: $\varphi_0=\pm\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/4}$ for $A<0,\,B>0$ or $A>0,\,B<0$. In this case, the invariants are $$g_2=\frac{A\,B\,(n-1)^4}{4\,n^2\,(n+1)\,(3\,n-1)},\qquad g_3=0\,.$$ Here, also the discriminant is different from zero, $\Delta\neq0$. We obtain the solution of (\[31\]) from (\[x\]) for $\varphi_0$, $$\label{phi3}
\varphi=\varphi_0\,\left[\frac{4\,n\,(n+1)\,\varphi_0^2\,\wp(\theta;g_2,0)-A\,(n-1)^2
}{4\,n\,(n+1)\,\varphi_0^2\,\wp(\theta;g_2,0)+A\,(n-1)^2}\right]$$ and we get the solution of Eq. (\[1.3\]) from (\[uxt\]) as $$\label{u3}
u(x,t)=\left[\varphi_0^2\,\left(\frac{4\,n\,(n+1)\,\varphi_0^2\,\wp(h\,x+w\,t;g_2,0)-A\,(n-1)^2
}{4\,n\,(n+1)\,\varphi_0^2\,\wp(h\,x+w\,t;g_2,0)+A\,(n-1)^2}\right)^2\right]^{\frac{1}{n-1}}$$ with the conditions for real solutions: $A<0,\,B>0,\,g_2<0$ or $A>0,\,B<0,\,g_2<0$.
Having in mind the relation (\[plc\]), the solution (\[u3\]) can be expressed in terms of the pseudo-lemniscatic case of the Weierstrass function: $$\label{u32}
u(x,t)=\left[\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}\,
\left(\frac{2\,\wp((h\,x+w\,t)|g_2|^{1/4};-1,0)+1
}{2\,\wp((h\,x+w\,t)|g_2|^{1/4};-1,0)-1}\right)^2\right]^{\frac{1}{n-1}}$$ for $A<0,\,B>0,\,g_2<0$ and $$\label{u31}
u(x,t)=\left[\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}\,
\left(\frac{2\,\wp((h\,x+w\,t)|g_2|^{1/4};-1,0)-1
}{2\,\wp((h\,x+w\,t)|g_2|^{1/4};-1,0)+1}\right)^2\right]^{\frac{1}{n-1}}$$ for $A>0,\,B<0,\,g_2<0$.
The case $C=0,\,D=0$, $\displaystyle p=- \frac{n}{1-n}$
-------------------------------------------------------
- $m=2\,n-1$
Now, the polynomial is cubic $$P(\varphi)=\frac{2\,A\,(n-1)^2}{n\,(n+1)}\,\varphi+\frac{2\,B\,(n-1)^2}{n\,(3\,n-1)}\,\varphi^3$$ and has three distinct real roots: $\varphi_0=0$ and $\varphi_0=\pm\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}$ for $A<0,\,B>0$ or $A>0,\,B<0$. Now, the invariants are $$g_2=\frac{-A\,B\,(n-1)^4}{n^2\,(n+1)\,(3\,n-1)},\qquad g_3=0$$ and $\Delta\neq0$. The solution of (\[32\]) is obtained from (\[x\]) for $\varphi_0$, $$\label{phi4}
\varphi=\varphi_0\,\left[\frac{2\,n\,(n+1)\,\varphi_0\,\wp(\theta;g_2,0)-A\,(n-1)^2
}{2\,n\,(n+1)\,\varphi_0\,\wp(\theta;g_2,0)+A\,(n-1)^2}\right]$$ and substituting (\[phi4\]) in (\[uxt\]), we get the solution of Eq. (\[1.3\]) as $$\label{u4}
u(x,t)=\left[\varphi_0\,\left(\frac{2\,n\,(n+1)\,\varphi_0\,\wp(h\,x+w\,t;g_2,0)-A\,(n-1)^2
}{2\,n\,(n+1)\,\varphi_0\,\wp(h\,x+w\,t;g_2,0)+A\,(n-1)^2}\right)\right]^{\frac{1}{n-1}}$$ with the conditions: $A<0,\,B>0,\,g_2>0$ and $A>0,\,B<0,\,g_2>0$ for $\varphi_0=\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}$. While the root $\varphi_0=0$ leads to the trivial solution, $u(x,t)=0$, the other root $\varphi_0=-\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}$ gives rise to imaginary solutions.
Now, we can rewrite the solution (\[u4\]) in terms of the lemniscatic case of the Weierstrass function using the relation (\[lc\]) in (\[u4\]): $$\label{u41}
u(x,t)=\left[\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}\,
\left(\frac{2\,\wp((h\,x+w\,t)\,g_2^{1/4};1,0)+1
}{2\,\wp((h\,x+w\,t)\,g_2^{1/4};1,0)-1}\right)\right]^{\frac{1}{n-1}}$$ for $A<0,\,B>0,\,g_2>0$ and $$\label{u42}
u(x,t)=\left[\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}\,
\left(\frac{2\,\wp((h\,x+w\,t)\,g_2^{1/4};1,0)-1
}{2\,\wp((h\,x+w\,t)\,g_2^{1/4};1,0)+1}\right)\right]^{\frac{1}{n-1}}$$ for $A>0,\,B<0,\,g_2>0$.
- $m=3\,n-2$
In this case, we have also a quartic polynomial $$P(\varphi)=\frac{2\,A\,(n-1)^2}{n\,(n+1)}\,\varphi+\frac{B\,(n-1)^2}{n\,(2\,n-1)}\,\varphi^4
\, .$$ It has two real roots: $\varphi_0=0$ and $\varphi_0=\left(-\frac{2\,A\,(2\,n-1)}{B\,(n+1)}\right)^{1/3}$. For the equation (\[33\]), the invariants are $$g_2=0,\qquad g_3=\frac{-A^2\,B\,(n-1)^6}{4\,n^3\,(n+1)^2\,(2\,n-1)}$$ and $\Delta\neq0$. Now, the solution of (\[33\]) reads from (\[x\]) for $\varphi_0$, $$\label{phi5}
\varphi=\varphi_0\,\left[\frac{2\,n\,(n+1)\,\varphi_0\,\wp(\theta;0,g_3)-A\,(n-1)^2
}{2\,n\,(n+1)\,\varphi_0\,\wp(\theta;0,g_3)+2\,A\,(n-1)^2}\right]\,.$$ Then, the solution of Eq. (\[1.3\]) is from (\[uxt\]) as $$\label{u5}
u(x,t)=\left[\varphi_0\,\left(\frac{2\,n\,(n+1)\,\varphi_0\,\wp(h\,x+w\,t;0,g_3)-A\,(n-1)^2
}{2\,n\,(n+1)\,\varphi_0\,\wp(h\,x+w\,t;0,g_3)+2\,A\,(n-1)^2}\right)\right]^{\frac{1}{n-1}}$$ with the conditions: $B<0,\,g_3>0$. Taking into account the relation (\[ec\]), this solution also can be expressed in terms of the equianharmonic case of the Weierstrass function: $$\label{u51}
u(x,t)=\left[\left(-\frac{2\,A\,(2\,n-1)}{B\,(n+1)}\right)^{1/3}\,
\left(\frac{2^{2/3}\,\wp((h\,x+w\,t)\,g_3^{1/6};0,1)-1
}{2^{2/3}\,\wp((h\,x+w\,t)\,g_3^{1/6};0,1)+2}\right)\right]^{\frac{1}{n-1}}\,.$$
We have also plotted these solutions for some special values in Figs. (\[figuras1\])-(\[figuras333\]). We can see that, except for the parabolic case (\[35\]), the considered solutions consist of periodic waves, some singular and others regular. Their amplitude is governed by the non-vanishing constants $A,B$, and their formulas are given in terms of the special forms (\[lc\])-(\[ec\]) of the $\wp$ function.
![The left figure corresponds to the solution (\[u31\]) for $h=-2$, $w=1$, $a=-1$, $n=3$, $m=5$ and the right one corresponds to the solution (\[u32\]) for $h=1$, $w=1$, $a=-1$, $n=3$, $m=5$.[]{data-label="figuras1"}](235b.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u31\]) for $h=-2$, $w=1$, $a=-1$, $n=3$, $m=5$ and the right one corresponds to the solution (\[u32\]) for $h=1$, $w=1$, $a=-1$, $n=3$, $m=5$.[]{data-label="figuras1"}](235a.eps "fig:"){width="40.00000%"}
![The left figure corresponds to the solution (\[u31\]) for $h=-2$, $w=1$, $a=-1$, $n=2$, $m=3$ and the right one corresponds to the solution (\[u32\]) for $h=1$, $w=1$, $a=-1$, $n=2$, $m=3$.[]{data-label="figuras111"}](223b.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u31\]) for $h=-2$, $w=1$, $a=-1$, $n=2$, $m=3$ and the right one corresponds to the solution (\[u32\]) for $h=1$, $w=1$, $a=-1$, $n=2$, $m=3$.[]{data-label="figuras111"}](223a.eps "fig:"){width="40.00000%"}
![The left figure corresponds to the solution (\[u41\]) for $h=-2$, $w=1$, $a=-1$, $n=3$, $m=5$ and the right one corresponds to the solution (\[u42\]) for $h=1$, $w=1$, $a=-1$, $n=3$, $m=5$. []{data-label="figuras2"}](135b.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u41\]) for $h=-2$, $w=1$, $a=-1$, $n=3$, $m=5$ and the right one corresponds to the solution (\[u42\]) for $h=1$, $w=1$, $a=-1$, $n=3$, $m=5$. []{data-label="figuras2"}](135a.eps "fig:"){width="40.00000%"}
![The left figure corresponds to the solution (\[u41\]) for $h=-2$, $w=1$, $a=-1$, $n=2$, $m=3$ and the right one corresponds to the solution (\[u42\]) for $h=1$, $w=1$, $a=-1$, $n=2$, $m=3$. []{data-label="figuras222"}](123b.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u41\]) for $h=-2$, $w=1$, $a=-1$, $n=2$, $m=3$ and the right one corresponds to the solution (\[u42\]) for $h=1$, $w=1$, $a=-1$, $n=2$, $m=3$. []{data-label="figuras222"}](123a.eps "fig:"){width="40.00000%"}
![The left figure corresponds to the solution (\[u21\]) for $h=-2$, $w=1$, $a=-1$, $n=2$, $m=5/2$ and the right one corresponds to the solution (\[u51\]) for $h=1$, $w=1$, $a=-1$, $n=3/2$, $m=5/2$.[]{data-label="figuras333"}](2252b.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u21\]) for $h=-2$, $w=1$, $a=-1$, $n=2$, $m=5/2$ and the right one corresponds to the solution (\[u51\]) for $h=1$, $w=1$, $a=-1$, $n=3/2$, $m=5/2$.[]{data-label="figuras333"}](13252a.eps "fig:"){width="40.00000%"}
Lagrangian and Hamiltonian
==========================
Since Eq. (\[19\]) is a motion-type equation, we can write the corresponding Lagrangian $$\label{lag}
L_W=\frac{1}{2}\,W_{\theta}^2+\frac{A\,n}{n+1}\,W^\frac{n+1}{n}+\frac{B\,n}{m+n}\,W^\frac{m+n}{n}-D\,W\,$$ and, the Hamiltonian $H_W=W_{\theta} P_W -L_W$ reads $$H_W(W,P_W,\theta)=\frac{1}{2}\left[P_W^2-\left(\frac{2\,A\,n}{n+1}\,W^\frac{n+1}{n}+
\frac{2\,B\,n}{m+n}\,W^\frac{m+n}{n}-2\,D\,W\right)\right]
\label{hamil}$$ where the canonical momentum is $$P_W=\frac{\partial L_W}{\partial W_\theta}=W_\theta.\label{mo}$$ The independent variable $\theta$ does not appear explicitly in (\[hamil\]), so $H_W$ is a constant of motion, $H_W=E$, with $$E=\frac{1}{2}
\left[\left(\frac{dW}{d\theta}\right)^2-\left(\frac{2\,A\,n}{n+1}\,W^\frac{n+1}{n}+
\frac{2\,B\,n}{m+n}\,W^\frac{m+n}{n}-2\,D\,W\right)\right].\label{ee}$$ Note that this equation also leads to the first order ODE (\[25\]) with the identification $C=2\,E$. Now, the energy $E$ can be expressed as a product of two independent constants of motion $$E=\frac{1}{2}\, I_+\,I_- \label{5.1}$$ where $$I_{\pm}(\theta)=\left(W_\theta\mp
\sqrt{\frac{2\,A\,n}{n+1}\,W^\frac{n+1}{n}+
\frac{2\,B\,n}{m+n}\,W^\frac{m+n}{n}-2\,D\,W}\,\right) \,e^{\pm
S(\theta)} \label{const}$$ and the phase $S(\theta)$ is chosen in such a way that $I_\pm(\theta)$ are constants of motion ($dI_\pm(\theta)/d\theta=0
$) $$S(\theta)=\int
\frac{A\,W^{\frac{1}{n}}+B\,W^{\frac{m}{n}}-D}{\sqrt{\frac{2\,A\,n}{n+1}\,W^\frac{n+1}{n}+
\frac{2\,B\,n}{m+n}\,W^\frac{m+n}{n}-2\,D\,W}}\,d\theta.$$
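That (\[lag\]) is the right Lagrangian can be checked mechanically: its Euler–Lagrange equation must reduce to (\[19\]). A SymPy spot check with $n=3$, $m=5$ (our illustration, using SymPy's built-in `euler_equations`):

```python
# Euler–Lagrange equation of the Lagrangian (lag) for n = 3, m = 5;
# euler_equations returns ∂L/∂W - d/dθ(∂L/∂W') = 0, i.e. minus Eq. (19).
import sympy as sp
from sympy.calculus.euler import euler_equations

theta, A, B, D = sp.symbols('theta A B D')
n, m = sp.Rational(3), sp.Rational(5)
W = sp.Function('W')(theta)

L = (sp.Rational(1, 2)*W.diff(theta)**2
     + A*n/(n+1)*W**((n+1)/n) + B*n/(m+n)*W**((m+n)/n) - D*W)

el = euler_equations(L, [W], [theta])[0].lhs
eq19 = W.diff(theta, 2) - A*W**(1/n) - B*W**(m/n) + D   # lhs of Eq. (19)
assert sp.simplify(el + eq19) == 0
```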
Conclusions
===========
In this paper, we have applied the factorization technique to the $B(m,n)$ equations in order to get travelling wave solutions. We have considered some representative cases of the $B(m,n)$ equation for $m\neq n$. By using this method, we obtained the travelling wave solutions in a very compact form, where the constants appear as modulating the amplitude, in terms of some special forms of the Weierstrass elliptic function: lemniscatic, pseudo-lemniscatic and equianharmonic. Furthermore, these solutions are valid not only for integer $m$ and $n$ but also for non-integer $m$ and $n$. The case $m=n$ for the $B(m,n)$ equations was examined by means of the factorization technique in a previous paper [@kuru], where compactons and kink-like solutions recovering all the solutions previously reported were constructed. Here, for $m\neq n$, solutions with compact support can also be obtained following a similar procedure. We note that this method is systematic and gives rise to a variety of solutions for nonlinear equations. We have also built the Lagrangian and Hamiltonian for the second order nonlinear ODE corresponding to the travelling wave reduction of the $B(m,n)$ equation. Since the Hamiltonian is a constant of motion, we have expressed the energy as a product of two independent constants of motion. Then, we have seen that these factors are related to first order ODE’s that allow us to get the solutions of the nonlinear second order ODE. We remark that the Lagrangian underlying the nonlinear system also permits one to obtain solutions of the system. There are some interesting papers in the literature which, starting from the Lagrangian, show how to obtain compactons or kink-like travelling wave solutions of some nonlinear equations [@arodz; @adam; @gaeta1; @gaeta2; @gaeta3].
Acknowledgments {#acknowledgments .unnumbered}
===============
Partial financial support from Junta de Castilla y León (Spain), Project GR224, is acknowledged. The author thanks Dr. Javier Negro for useful discussions.
[99]{}
T.B. Benjamin, J.L. Bona and J.J. Mahony, [Philos. Trans. R. Soc., Ser. A]{} 272 (1972) 47.
P. Rosenau, J.M. Hyman, [Phys. Rev. Lett.]{} 70 (1993) 564.
P. Rosenau, [Phys. Lett. A]{} 275 (2000) 193.
A.-M. Wazwaz, T. Taha, [Math. Comput. Simul.]{} 62 (2003) 171.
A.-M. Wazwaz, [Appl. Math. Comput.]{} 133 (2002) 229.
A.-M. Wazwaz, [Math. Comput. Simul.]{} 63 (2003) 35.
A.-M. Wazwaz, [Appl. Math. Comput.]{} 139 (2003) 37.
A.-M. Wazwaz, [Chaos, Solitons and Fractals]{} 28 (2006) 454.
M.S. Ismail, T.R. Taha, [Math. Comput. Simul.]{} 47 (1998) 519.
A. Ludu, J.P. Draayer, [Physica D]{} 123 (1998) 82.
A.-M. Wazwaz, M.A. Helal, [Chaos, Solitons and Fractals]{} 26 (2005) 767.
S. Yadong, [Chaos, Solitons and Fractals]{} 25 (2005) 1083.
L. Wang, J. Zhou, L. Ren, [Int. J. Nonlinear Science]{} 1 (2006) 58.
Ş. Kuru, (2008) arXiv:0810.4166.
P.G. Estévez, Ş. Kuru, J. Negro and L.M. Nieto, to appear in [Chaos, Solitons and Fractals]{} (2007) arXiv:0707.0760.
P.G. Estévez, Ş. Kuru, J. Negro and L.M. Nieto, [J. Phys. A: Math. Gen.]{} 39 (2006) 11441.
O. Cornejo-Pérez, J. Negro, L.M. Nieto and H.C. Rosu, [Found. Phys.]{} 36 (2006) 1587.
P.G. Estévez, Ş. Kuru, J. Negro and L.M. Nieto, [J. Phys. A: Math. Theor.]{} 40 (2007) 9819.
C. Liu, (2006) arXiv.org:nlin/0609058.
A.-M. Wazwaz, [Phys. Lett. A]{} 355 (2006) 358.
D.-S. Wang and H. Li, [J. Math. Anal. Appl.]{} 243 (2008) 273.
M.A. Helal, [Chaos, Solitons and Fractals]{} 13 (2002) 1917.
Ji-H. He, Xu-H. Wu, [Chaos, Solitons and Fractals]{} 29 (2006) 108.
E.L. Ince, [Ordinary Differential Equations]{}, Dover, New York, 1956.
A. Erdelyi et al, [The Bateman Manuscript Project. Higher Transcendental Functions]{}, Krieger Publishing Co., Malabar, FL, 1981.
E.T. Whittaker and G.N. Watson, [A Course of Modern Analysis]{}, Cambridge University Press, Cambridge, 1988.
M. Abramowitz and I.A. Stegun, [Handbook of Mathematical Functions]{}, Dover, New York, 1972.
H. Arodź, [Acta Phys. Polon. B]{} 33 (2002) 1241.
C. Adam, J. Sánchez-Guillén and A. Wereszczyński, [J. Phys. A:Math. Theor.]{} 40 (2007) 13625.
M. Destrade, G. Gaeta, G. Saccomandi, [Phys. Rev. E]{} 75 (2007) 047601.
G. Gaeta, T. Gramchev and S. Walcher, [J. Phys. A: Math. Theor.]{} 40 (2007) 4493.
G. Gaeta, [Europhys. Lett.]{} 79 (2007) 20003.
|
{
"pile_set_name": "arxiv"
}
|
Tetsuya Nakashima
Tetsuya Nakashima (中島哲也) (born 1959) is a Japanese film director. He was born in Fukuoka, attending high school in Chikushino. Nakashima was given the Best Director award at the 2005 Yokohama Film Festival for his film Kamikaze Girls.
His 2010 film Confessions was selected as the Japanese entry for the Best Foreign Language Film at the 83rd Academy Awards and made the final shortlist in January 2011.
He was originally slated to direct an adaptation of the hit manga Attack on Titan, but in December 2012 he left the project due to differences with the rest of the production team.
Filmography
Bakayaro! I'm Plenty Mad (1988) (segment 2)
Happy-Go-Lucky (1997)
Beautiful Sunday (1998)
Kamikaze Girls (2004)
Rolling Bomber Special (2005)
Memories of Matsuko (2006)
Paco and the Magical Picture Book (2008)
Confessions (2010)
The World of Kanako (2014)
It Comes (2018)
References
External links
Category:1959 births
Category:Living people
Category:Japanese film directors
Category:People from Fukuoka Prefecture
|
{
"pile_set_name": "wikipedia_en"
}
|
Molly Henderson
Molly Henderson (born September 14, 1953) is a former Commissioner of Lancaster County, Pennsylvania.
The Commissioners are the chief executive and legislative officials of the County, which has 500,000 residents and an annual County budget of $300 million.
Henderson was elected in 2003 to a four-year term
and was the lone Democrat on the Board of Commissioners in a County where Republicans outnumber Democrats two to one.
Henderson was previously Head of Public Health for the City of Lancaster, Pennsylvania, the County seat.
Henderson was not re-elected as Lancaster County Commissioner on November 7, 2007. Henderson was succeeded by Craig Lehman as the minority Commissioner.
Other careers
She is a former high school and college teacher, holding a doctorate degree from Temple University, a master's degree from West Chester University and her B.S. from James Madison University. Henderson is also a Respiratory Therapist and worked at Lancaster General Hospital prior to her teaching and government careers.
Henderson’s book Pressed: Public Money, Private Profit - A Cautionary Tale tells the story of the development, building, and financing of the Lancaster County Convention Center and Marriott Hotel in downtown Lancaster. The highly controversial “convention center project,” as it was known to those in Lancaster County (pop. 510,000), was originally proposed in 1999 as a $75 million “public-private” partnership. The project included a publicly-owned convention center ($30 million) and a privately-owned hotel ($45 million). By the time the convention center and hotel opened in 2009, the project’s cost had ballooned to more than $170 million, with more than 90% of the total cost of both the convention center and hotel borne by Pennsylvania taxpayers.
Political views
Henderson is a notable opponent of the Lancaster County Convention Center Authority's controversial $170 million hotel/convention center in downtown Lancaster on the site of the former Watt & Shand building.
The project's supporters believe it would promote the revitalization of the city's center. Its opponents, however, feel it poses an unacceptable risk to taxpayers.
The hotel portion of the project is owned 50% by Lancaster Newspapers, Inc., which has been accused of using its monopoly print position in the County to promote the project and stifle opposition. Henderson has been referenced in more than 2,200 newspaper articles, over 700 of which concern the Lancaster County Convention Center project, many of them attacking her position.
Personal life
Henderson is married to Alex Henderson and has two children, Alexander "Ander" Henderson and Leslie Henderson.
See also
Lancaster County
Lancaster City
Lancaster Newspapers
References
External links
Official Lancaster County Site
Campaign Site
Category:1953 births
Category:Living people
Category:County commissioners in Pennsylvania
Category:Temple University alumni
Category:Politicians from Lancaster, Pennsylvania
Category:People from Cumberland, Maryland
Category:West Chester University alumni
Category:James Madison University alumni
Category:Women in Pennsylvania politics
Category:Pennsylvania Democrats
|
{
"pile_set_name": "wikipedia_en"
}
|
---
abstract: |
We express the averages of products of characteristic polynomials for random matrix ensembles associated with compact symmetric spaces in terms of Jack polynomials or Heckman and Opdam’s Jacobi polynomials depending on the root system of the space. We also give explicit expressions for the asymptotic behavior of these averages in the limit as the matrix size goes to infinity.
[**MSC-class**]{}: primary 15A52; secondary 33C52, 05E05.\
[**Keywords**]{}: characteristic polynomial, random matrix, Jacobi polynomial, Jack polynomial, Macdonald polynomial, compact symmetric space.
author:
- '<span style="font-variant:small-caps;">Sho MATSUMOTO</span> [^1]'
title: '**Moments of characteristic polynomials for compact symmetric spaces and Jack polynomials**'
---
Introduction
============
In recent years, there has been considerable interest in the averages of the characteristic polynomials of random matrices. This work is motivated by the connection with Riemann zeta functions and $L$-functions identified by Keating and Snaith [@KS_zetafunctions; @KS_Lfunctions]. The averages of the characteristic polynomials in the cases of compact classical groups and Hermitian matrix ensembles have already been calculated; see [@Mehta] and references in [@BG]. Among these studies, Bump and Gamburd [@BG] obtain simple proofs for the cases corresponding to compact classical groups by using symmetric polynomial theory. Our aim in this note is to use their technique to calculate averages of the characteristic polynomials for random matrix ensembles associated with compact symmetric spaces.
We deal with the compact symmetric spaces $G/K$ classified by Cartan, where $G$ is a compact subgroup in $GL(N,{\mathbb{C}})$ for some positive integer $N$, and $K$ is a closed subgroup of $G$. Assume $G/K$ is realized as a subspace $S$ in $G$, i.e., $S \simeq G/K$, and the probability measure $\dd M$ on $S$ is then induced from $G/K$. We call the probability space $(S, \dd M)$ the random matrix ensemble associated with $G/K$.
For example, $U(n)/O(n)$ is the symmetric space with a restricted root system of type A, and is realized by $S=\{M \in U(n) \ | \ M = \trans{M} \}$. Here $\trans{M}$ stands for the transposed matrix of $M$, while $U(n)$ and $O(n)$ denote the unitary and orthogonal groups of matrices of order $n$, respectively. The induced measure $\dd M$ on $S$ satisfies the invariance $\dd (H M \trans{H})= \dd M$ for any $H \in U(n)$. This random matrix ensemble $(S, \dd M)$ is well known as the circular orthogonal ensemble (COE for short), see e.g. [@Dyson; @Mehta].
We also consider the classical compact Lie groups $U(n)$, $SO(n)$, and $Sp(2n)$. Regarding these groups as symmetric spaces, the random matrix space $S$ is just the group itself with its Haar measure.
The compact symmetric spaces studied by Cartan are divided into A and BC type main branches according to their root systems. There are three symmetric spaces of type A, with their corresponding matrix ensembles called circular orthogonal, unitary, and symplectic ensembles. For these ensembles, the probability density functions (p.d.f.) for the eigenvalues are proportional to $$\Delta^{{\mathrm{Jack}}}(\bz;2/\beta)= \prod_{1 \le i<j \le n} |z_i -z_j|^{\beta},$$ with $\beta=1,2,4$, where $\bz =(z_1,\dots, z_n)$, with $|z_i|=1$, denotes the sequence of eigenvalues of the random matrix. We will express the average of the product of characteristic polynomials $\det(I+ xM)$ for a random matrix $M$ as a Jack polynomial ([@Mac Chapter VI-10]) of a rectangular-shaped Young diagram. Jack polynomials are orthogonal with respect to the weight function $\Delta^{{\mathrm{Jack}}}$. Our theorems are obtained in a simple algebraic way, and contain results given in [@KS_zetafunctions].
For compact symmetric spaces of type BC root systems, the corresponding p.d.f. is given by $$\Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3) =
\prod_{1 \le i <j \le n} |1-z_i z_j^{-1}|^{2k_3} |1-z_i z_j|^{2k_3}
\cdot \prod_{1 \le j \le n} |1-z_j|^{2k_1} |1-z_j^2|^{2k_2}.$$ Here the $k_i$’s denote multiplicities of roots in the root systems of the symmetric spaces. For example, the p.d.f. induced from the symmetric space $SO(4n+2)/(SO(4n+2) \cap Sp(4n+2))$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;2, \frac{1}{2},2)$. For this class of compact symmetric spaces, Opdam and Heckman’s Jacobi polynomials ([@Diejen; @Heckman]), which are orthogonal with respect to $\Delta^{{\mathrm{HO}}}$, will play the same role as Jack polynomials for type A cases. Namely, we will express the average of the product of characteristic polynomials $\det(I+ xM)$ as the Jacobi polynomial of a rectangular-shaped diagram.
This paper is organized as follows:
Our main results, which are expressions for the averages of products of characteristic polynomials, will be given in §6. As described above, the symmetric spaces corresponding to the two root systems, type A and BC, will be discussed separately. For type A spaces, we use Jack polynomial theory. These discussions can be generalized to Macdonald polynomials. Thus, after preparations in §2, we give some generalized identities involving Macdonald polynomials and a generalization of the weight function $\Delta^{{\mathrm{Jack}}}$ in §3 and §4. In particular, we obtain $q$-analogues of Keating and Snaith’s formulas [@KS_zetafunctions] for the moments of characteristic polynomials and a generalization of the strong Szegö limit theorem for Toeplitz determinants. These identities are reduced to characteristic polynomial expressions for symmetric spaces of the type A root system in §6.1 - §6.3. On the other hand, for type BC spaces, we employ Opdam and Heckman’s Jacobi polynomials. We review the definition and several properties of these polynomials in §5, while in §6.4 - §6.12 we apply them to obtain expressions for the products of characteristic polynomials of random matrix ensembles associated with symmetric spaces of type BC.
Basic Properties of Macdonald symmetric functions
=================================================
We recall the definition of Macdonald symmetric functions, see [@Mac Chapter VI] for details. Let $\lambda$ be a partition, i.e., $\lambda=(\lambda_1,\lambda_2,\dots)$ is a weakly decreasing ordered sequence of non-negative integers with finitely many non-zero entries. Denote by $\ell(\lambda)$ the number of non-zero $\lambda_j$ and by $|\lambda|$ the sum of all $\lambda_j$. These values $\ell(\lambda)$ and $|\lambda|$ are called the length and weight of $\lambda$ respectively. We identify $\lambda$ with the associated Young diagram $\{(i,j) \in {\mathbb{Z}}^2 \ | \ 1 \le j \le \lambda_i \}$. The conjugate partition $\lambda'=(\lambda'_1,\lambda'_2,\dots)$ is determined by the transpose of the Young diagram $\lambda$. It is sometimes convenient to write this partition in the form $\lambda=(1^{m_1} 2^{m_2} \cdots )$, where $m_i=m_i(\lambda)$ is the multiplicity of $i$ in $\lambda$ and is given by $m_i=\lambda'_i-\lambda'_{i+1}$. For two partitions $\lambda$ and $\mu$, we write $\lambda \subset \mu$ if $\lambda_i \le \mu_i$ for all $i$. In particular, the notation $\lambda \subset (m^n)$ means that $\lambda$ satisfies $\lambda_1 \le m$ and $\lambda_1' \le n$. The dominance ordering associated with the root system of type A is defined as follows: for two partitions $\lambda=(\lambda_1,\lambda_2,\dots)$ and $\mu=(\mu_1,\mu_2,\dots)$, $$\mu \le_{{\mathrm{A}}} \lambda \qquad \Leftrightarrow \qquad
|\lambda|=|\mu|
\quad \text{and} \quad
\mu_1 + \cdots+\mu_i \le \lambda_1+ \cdots +\lambda_i \quad \text{for all $i \ge 1$}.$$
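These combinatorial notions are easy to make concrete. The following Python sketch (the function names are our own, not from the paper) implements the conjugate partition and the type A dominance ordering:

```python
def conjugate(la):
    """Conjugate partition: la'_j = #{i : la_i >= j} (transpose of the Young diagram)."""
    la = [x for x in la if x > 0]
    if not la:
        return []
    return [sum(1 for x in la if x >= j) for j in range(1, la[0] + 1)]

def dominates_A(la, mu):
    """mu <=_A la: equal weights and mu_1+...+mu_i <= la_1+...+la_i for all i."""
    if sum(la) != sum(mu):
        return False
    pla = pmu = 0
    for i in range(max(len(la), len(mu))):
        pla += la[i] if i < len(la) else 0
        pmu += mu[i] if i < len(mu) else 0
        if pmu > pla:
            return False
    return True
```

Note that `conjugate` is an involution, matching $(\lambda')'=\lambda$, and the multiplicity $m_i(\lambda)=\lambda'_i-\lambda'_{i+1}$ can be read off its output.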
Let $q$ and $t$ be real numbers such that both $|q|<1$ and $|t|<1$. Put $F={\mathbb{Q}}(q,t)$ and ${\mathbb{T}}^n=\{\bz =(z_1,\dots,z_n) \ | \ |z_i|=1 \ (1 \le i \le n)\}$. Denote by $F[x_1,\dots,x_n]^{{\mathfrak{S}}_n}$ the algebra of symmetric polynomials in variables $x_1,\dots,x_n$. Define an inner product on $F[x_1,\dots,x_n]^{{\mathfrak{S}}_n}$ by $$\langle f, g \rangle_{\Delta^{{\mathrm{Mac}}}} = \frac{1}{n!}
\int_{{\mathbb{T}}^n} f(\bz) g(\bz^{-1}) \Delta^{{\mathrm{Mac}}}(\bz;q,t) \dd \bz$$ with $$\Delta^{{\mathrm{Mac}}}(\bz;q,t)= \prod_{1 \le i<j \le n} \Bigg|
\frac{(z_i z_j^{-1};q)_\infty}{(t z_i z_j^{-1};q)_\infty} \Bigg|^2,$$ where $\bz^{-1}=(z_1^{-1},\dots,z_n^{-1})$ and $(a;q)_\infty= \prod_{r=0}^\infty(1-aq^r)$. Here $\dd \bz$ is the normalized Haar measure on ${\mathbb{T}}^n$.
For a partition $\lambda$ of length $\ell(\lambda) \le n$, put $$\label{eq:monomialA}
m_{\lambda}^{{\mathrm{A}}} (x_1,\dots,x_n) =
\sum_{\nu=(\nu_1,\dots,\nu_n) \in {\mathfrak{S}}_n \lambda} x_1^{\nu_1} \cdots x_n^{\nu_n},$$ where the sum runs over the ${\mathfrak{S}}_n$-orbit ${\mathfrak{S}}_n \lambda = \{ (\lambda_{\sigma(1)},\dots, \lambda_{\sigma(n)}) \ | \ \sigma \in {\mathfrak{S}}_n\}$. Here we add the suffix “A” because ${\mathfrak{S}}_n$ is the Weyl group of type A. Then Macdonald polynomials (of type A) $P_\lambda^{{\mathrm{Mac}}}=P_{\lambda}^{{\mathrm{Mac}}}(x_1,\dots,x_n;q,t)
\in F[x_1,\dots,x_n]^{{\mathfrak{S}}_n}$ are characterized by the following conditions: $$P_{\lambda}^{{\mathrm{Mac}}} = m_{\lambda}^{{\mathrm{A}}} + \sum_{\mu <_{{\mathrm{A}}} \lambda} u_{\lambda \mu}
m_{\mu}^{{\mathrm{A}}}
\quad \text{with $u_{\lambda\mu} \in F$}, \qquad\qquad
\langle P_{\lambda}^{{\mathrm{Mac}}}, P_{\mu}^{{\mathrm{Mac}}}
\rangle_{\Delta^{{\mathrm{Mac}}}}=0 \quad \text{if $\lambda \not=\mu$}.$$
Denote by $\Lambda_F$ the $F$-algebra of symmetric functions in infinitely many variables $\bx=(x_1,x_2,\dots)$. That is, an element $f =f(\bx) \in \Lambda_F$ is determined by the sequence $(f_n)_{n \ge 0}$ of polynomials $f_n$ in $F[x_1,\dots,x_n]^{{\mathfrak{S}}_n}$, where these polynomials satisfy $\sup_{n \ge 0} \deg (f_n) < \infty$ and $f_m(x_1,\dots,x_n,0,\dots,0)=f_n(x_1,\dots,x_n)$ for any $m \ge n$, see [@Mac Chapter I-2]. Macdonald polynomials satisfy the stability property $$P^{{\mathrm{Mac}}}_\lambda(x_1,\dots,x_n,x_{n+1};q,t) \Big|_{x_{n+1}=0} =
P^{{\mathrm{Mac}}}_\lambda(x_1,\dots,x_n;q,t)$$ for any partition $\lambda$ of length $\ell(\lambda) \le n$, and therefore for all partitions $\lambda$, [*Macdonald functions*]{} $P_{\lambda}^{{\mathrm{Mac}}}(\bx ;q,t)$ can be defined.
For each square $s=(i,j)$ of the diagram $\lambda$, let $$a(s)=\lambda_i-j, \qquad a'(s)=j-1, \qquad l(s)= \lambda'_j-i, \qquad l'(s)= i-1.$$ These numbers are called the arm-length, arm-colength, leg-length, and leg-colength respectively. Put $$c_{\lambda}(q,t)= \prod_{s \in \lambda} (1-q^{a(s)}t^{l(s)+1}), \qquad
c_{\lambda}'(q,t)= \prod_{s \in \lambda} (1-q^{a(s)+1} t^{l(s)}).$$ Note that $c_{\lambda}(q,t)= c'_{\lambda'}(t,q)$. Defining the $Q$-function by $Q_{\lambda}(\bx;q,t)= c_{\lambda}(q,t) c'_{\lambda}(q,t)^{-1} P_\lambda(\bx;q,t)$, we have the dual Cauchy identity [@Mac Chapter VI (5.4)] $$\begin{aligned}
& \sum_{\lambda} P_\lambda(\bx;q,t) P_{\lambda'}(\by;t,q)=
\sum_{\lambda} Q_\lambda(\bx;q,t) Q_{\lambda'}(\by;t,q) \label{EqDualCauchy} \\
=& \prod_{i\ge 1} \prod_{j\ge 1} (1+x_i y_j)
=\exp \(\sum_{k=1}^\infty \frac{(-1)^{k-1}}{k}p_k(\bx)p_k(\by) \), \notag \end{aligned}$$ where $\by=(y_1,y_2,\dots)$. Here $p_k$ is the power-sum function $p_k(\bx)=x_1^k+x_2^k+ \cdots$.
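The identity $c_{\lambda}(q,t)=c'_{\lambda'}(t,q)$ can be checked mechanically from the arm- and leg-length definitions; a minimal Python sketch in exact rational arithmetic (helper names ours):

```python
from fractions import Fraction

def conjugate(la):
    la = [x for x in la if x > 0]
    return [sum(1 for x in la if x >= j) for j in range(1, la[0] + 1)] if la else []

def cells(la):
    """Squares s = (i, j) of the Young diagram, 1-indexed."""
    return [(i, j) for i, li in enumerate(la, 1) for j in range(1, li + 1)]

def arm(la, i, j):   # a(s) = la_i - j
    return la[i - 1] - j

def leg(la, i, j):   # l(s) = la'_j - i
    return conjugate(la)[j - 1] - i

def c_lower(la, q, t):
    """c_la(q,t) = prod_s (1 - q^{a(s)} t^{l(s)+1})."""
    p = Fraction(1)
    for i, j in cells(la):
        p *= 1 - q ** arm(la, i, j) * t ** (leg(la, i, j) + 1)
    return p

def c_upper(la, q, t):
    """c'_la(q,t) = prod_s (1 - q^{a(s)+1} t^{l(s)})."""
    p = Fraction(1)
    for i, j in cells(la):
        p *= 1 - q ** (arm(la, i, j) + 1) * t ** leg(la, i, j)
    return p
```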
We define the generalized factorial $(a)_{\lambda}^{(q,t)}$ by $$(a)_{\lambda}^{(q,t)} = \prod_{s \in \lambda} (t^{l'(s)} - q^{a'(s)} a).$$ Let $u$ be an indeterminate and define the homomorphism $\epsilon_{u,t}$ from $\Lambda_F$ to $F$ by $$\label{EqSpecialPowerSum}
\epsilon_{u,t}(p_r) = \frac{1-u^r}{1-t^r} \qquad \text{for all $r \ge 1$}.$$ In particular, we have $\epsilon_{t^n,t}(f)= f(1,t,t^2,\dots, t^{n-1})$ for any $f \in \Lambda_F$. Then we have ([@Mac Chapter VI (6.17)]) $$\label{EqSpecialMac}
\epsilon_{u,t}(P_{\lambda}^{{\mathrm{Mac}}})= \frac{(u)_{\lambda}^{(q,t)}}{c_{\lambda}(q,t)}.$$ Finally, the following orthogonality property is satisfied for any two partitions $\lambda$ and $\mu$ of length $\le n$: $$\label{EqOrthogonality}
\langle P_{\lambda}^{{\mathrm{Mac}}}, Q_{\mu}^{{\mathrm{Mac}}} \rangle_{\Delta^{{\mathrm{Mac}}}}=
\delta_{\lambda \mu} \langle 1,1 \rangle_{\Delta^{{\mathrm{Mac}}}}
\prod_{s \in \lambda}
\frac{1-q^{a'(s)}t^{n-l'(s)}}{1-q^{a'(s)+1}t^{n-l'(s)-1}}.$$
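As a sanity check of the specialization formula above, the case $\lambda=(2)$ can be verified directly, using the standard small-case expansion $P_{(2)}^{{\mathrm{Mac}}} = m_{(2)} + \frac{(1-t)(1+q)}{1-qt}\, m_{(1,1)}$ (an example from [@Mac Chapter VI]) together with $m_{(2)}=p_2$ and $m_{(1,1)}=(p_1^2-p_2)/2$. A short exact-arithmetic sketch (names ours):

```python
from fractions import Fraction

def eps(u, t, r):
    """epsilon_{u,t}(p_r) = (1 - u^r) / (1 - t^r)."""
    return (1 - u ** r) / (1 - t ** r)

q, t, u = Fraction(2, 5), Fraction(1, 3), Fraction(3, 7)

# Left-hand side: epsilon_{u,t}(P_(2)) via the p-basis expansion of P_(2).
p1, p2 = eps(u, t, 1), eps(u, t, 2)
coeff = (1 - t) * (1 + q) / (1 - q * t)
lhs = p2 + coeff * (p1 ** 2 - p2) / 2

# Right-hand side: (u)_(2)^{(q,t)} / c_(2)(q,t) = (1-u)(1-qu) / ((1-t)(1-qt)),
# read off from the two squares of the row (2).
rhs = (1 - u) * (1 - q * u) / ((1 - t) * (1 - q * t))
```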
Averages with respect to $\Delta^{{\mathrm{Mac}}}(\bz;q,t)$ {#sectionMacAverage}
===========================================================
As in the previous section, we assume $q$ and $t$ are real numbers in the interval $(-1,1)$. For a Laurent polynomial $f$ in variables $z_1,\dots,z_n$, we define $$\langle f \rangle_{n}^{(q,t)} =
\frac{\int_{{\mathbb{T}}^n} f(\bz) \Delta^{{\mathrm{Mac}}}(\bz;q,t) \dd \bz}
{\int_{{\mathbb{T}}^n} \Delta^{{\mathrm{Mac}}}(\bz;q,t) \dd \bz}.$$ In this section, we calculate averages of the products of the polynomial $$\Psi^{{\mathrm{A}}}(\bz;\eta)= \prod_{j=1}^n (1+ \eta z_j), \qquad \eta \in {\mathbb{C}}$$ with respect to $\langle \cdot \rangle_{n}^{(q,t)}$. Denoting the eigenvalues of a unitary matrix $M$ by $z_1,\dots,z_n$, the polynomial $\Psi^{{\mathrm{A}}}(\bz;\eta)$ is the characteristic polynomial $\det(I+\eta M)$.
The following theorems will induce averages of the products of characteristic polynomials for random matrix ensembles associated with type A root systems, see §\[sectionCBEq\] and §\[subsectionA\] - §\[subsectionAII\] below.
\[ThmAverageMac\] Let $K$ and $L$ be positive integers. Let $\eta_1,\dots, \eta_{L+K}$ be complex numbers such that $\eta_j \not=0 \ ( 1\le j \le L)$. Then we have $$\left\langle \prod_{l=1}^L \Psi^{{\mathrm{A}}}(\bz^{-1};\eta_l^{-1}) \cdot \prod_{k=1}^K
\Psi^{{\mathrm{A}}}(\bz;\eta_{L+k}) \right\rangle_{n}^{(q,t)} =
(\eta_1 \cdots \eta_L)^{-n}
\cdot P_{(n^L)}^{{\mathrm{Mac}}} (\eta_1, \dots, \eta_{L+K};t,q).$$
By the dual Cauchy identity \[EqDualCauchy\], we have $$\begin{aligned}
& \prod_{l=1}^L
\Psi^{{\mathrm{A}}}(\bz^{-1};\eta_l^{-1}) \cdot \prod_{k=1}^K
\Psi^{{\mathrm{A}}}(\bz;\eta_{L+k})
= \prod_{l=1}^L \eta_l^{-n} \cdot (z_1 \cdots z_n)^{-L} \cdot
\prod_{k=1}^{L+K} \prod_{j=1}^n (1+\eta_k z_j) \\
=& \prod_{l=1}^L \eta_l^{-n} \cdot (z_1 \cdots z_n)^{-L}
\sum_{\lambda} Q_{\lambda}^{{\mathrm{Mac}}}(\eta_1,\dots,\eta_{L+K};t,q)
Q_{\lambda'}^{{\mathrm{Mac}}}(\bz;q,t).\end{aligned}$$ Therefore, since $P_{(L^n)}^{{\mathrm{Mac}}}(\bz;q,t)=(z_1\cdots z_n)^L$ ([@Mac Chapter VI (4.17)]), we see that $$\begin{aligned}
\left\langle \prod_{l=1}^L \Psi^{{\mathrm{A}}}(\bz^{-1};\eta_l^{-1}) \cdot \prod_{k=1}^K
\Psi^{{\mathrm{A}}}(\bz;\eta_{L+k}) \right\rangle_{n}^{(q,t)}
=& \prod_{l=1}^L \eta_l^{-n}
\sum_{\lambda} Q_{\lambda}^{{\mathrm{Mac}}}(\eta_1,\dots,\eta_{L+K};t,q)
\frac{\langle Q_{\lambda'}^{{\mathrm{Mac}}},
P_{(L^n)}^{{\mathrm{Mac}}} \rangle_{ \Delta^{{\mathrm{Mac}}} } }
{\langle 1, 1 \rangle_{\Delta^{{\mathrm{Mac}}} } }
\\
=& \prod_{l=1}^L \eta_l^{-n} \cdot Q_{(n^L)}^{{\mathrm{Mac}}}(\eta_1,\dots,\eta_{L+K};t,q)
\prod_{s \in (L^n)}
\frac{1-q^{a'(s)}t^{n-l'(s)}}{1-q^{a'(s)+1}t^{n-l'(s)-1}}\end{aligned}$$ by the orthogonality property \[EqOrthogonality\]. It is easy to check that $$\prod_{s \in (L^n)}
\frac{1-q^{a'(s)}t^{n-l'(s)}}{1-q^{a'(s)+1}t^{n-l'(s)-1}}
=\frac{c_{(L^n)}(q,t)}{c'_{(L^n)}(q,t)}= \frac{c'_{(n^L)}(t,q)}{c_{(n^L)}(t,q)},$$ and so we obtain the claim.
It may be noted that the present proof of Theorem \[ThmAverageMac\] is similar to the corresponding one in [@BG].
\[CorMomentValue\] For each positive integer $k$ and $\xi \in {\mathbb{T}}$, we have $$\left\langle \prod_{i=0}^{k-1} | \Psi^{{\mathrm{A}}}(\bz; q^{i+1/2} \xi )|^2 \right\rangle_{n}^{(q,t)}
=
\prod_{i=0}^{k-1} \prod_{j=0}^{n-1} \frac{1-q^{k+i+1} t^j}{1-q^{i+1} t^j}.$$
Set $L=K=k$ and $\overline{\eta_i}^{-1} =\eta_{i+k} =q^{i-1/2} \xi \ (1 \le i \le k)$ in Theorem \[ThmAverageMac\]. Then we have $$\begin{aligned}
\left\langle \prod_{i=0}^{k-1} |
\Psi^{{\mathrm{A}}}(\bz;q^{i+1/2} \xi)|^2 \right\rangle_{n}^{(q,t)}
=& \prod_{i=0}^{k-1} q^{(i+1/2)n} \cdot P_{(n^k)} (q^{-k+1/2}, q^{-k+3/2}, \dots, q^{-1/2},
q^{1/2}, \cdots, q^{k-1/2};t,q) \\
=& q^{n k^2/2} \cdot q^{(-k+1/2)kn} P_{(n^k)}(1,q,\cdots, q^{2k-1};t,q) \\
=& q^{-n k(k-1)/2} \epsilon_{q^{2k},q} (P_{(n^k)}(\cdot;t,q)). \end{aligned}$$ From expression \[EqSpecialMac\], the right-hand side of the above expression equals $$q^{-n k(k-1)/2} \frac{(q^{2k})_{(n^k)}^{(t,q)}}{c_{(n^k)}(t,q)}
=q^{-n k(k-1)/2} \prod_{i=1}^k \prod_{j=1}^n \frac{q^{i-1}-t^{j-1} q^{2k}}{1-t^{n-j}q^{k-i+1}}
= \prod_{j=0}^{n-1} \prod_{i=1}^k \frac{1-t^{j} q^{2k-i+1}}{1-t^j q^{k-i+1}},$$ and the result follows.
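For $n=1$ the weight $\Delta^{{\mathrm{Mac}}}$ is trivial, so the corollary can be checked by direct averaging over the unit circle. The sketch below (our own names; the sample size $N$ exceeds the degree of the trigonometric polynomial, so the sampled mean is exact up to rounding, and the result is independent of the choice of $\xi\in{\mathbb{T}}$):

```python
import cmath
import math

def lhs_average(q, xi, k, N=64):
    """Mean over the unit circle of prod_{i=0}^{k-1} |1 + q^{i+1/2} xi z|^2 (case n = 1)."""
    total = 0.0
    for m in range(N):
        z = cmath.exp(2j * math.pi * m / N)
        f = 1.0
        for i in range(k):
            f *= abs(1 + q ** (i + 0.5) * xi * z) ** 2
        total += f
    return total / N

def rhs_product(q, k):
    """Corollary's right-hand side with n = 1 (only j = 0, so t drops out)."""
    p = 1.0
    for i in range(k):
        p *= (1 - q ** (k + i + 1)) / (1 - q ** (i + 1))
    return p
```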
Kaneko [@Kaneko2] defines the multivariable $q$-hypergeometric function associated with Macdonald polynomials by $${_2 \Phi_1}^{(q,t)}(a,b;c;x_1,\dots,x_n)= \sum_\lambda
\frac{ (a)_{\lambda}^{(q,t)} (b)_{\lambda}^{(q,t)}}{(c)_{\lambda}^{(q,t)}}
\frac{P^{{\mathrm{Mac}}}_{\lambda}(x_1,\dots,x_n;q,t)}{c'_{\lambda}(q,t)},$$ where $\lambda$ runs over all partitions of length $\ell(\lambda) \le n$. The $q$-shifted moment $\left\langle \prod_{i=0}^{k-1} | \Psi^{{\mathrm{A}}}(\bz;q^{i+1/2} \xi )|^2 \right\rangle_{n}^{(q,t)}$ given in Corollary \[CorMomentValue\] can also be expressed as a special value of the generalized $q$-hypergeometric function ${_2 \Phi_1}^{(q,t)}$ as follows:
\[PropMomentHypergeometric\] For any complex number $\eta$ with $|\eta|<1$ and real number $u$, $$\left\langle \prod_{j=1}^n \left|
\frac{(\eta z_j;q)_\infty}{(\eta z_j u; q)_{\infty}} \right|^2 \right\rangle_{n}^{(q,t)}
= {_2 \Phi_1}^{(q,t)}(u^{-1},u^{-1} ;q t^{n-1};
(u|\eta|)^2, (u|\eta|)^2 t,\dots, (u|\eta|)^2t^{n-1}).$$ In particular, letting $u=q^k$ and $\eta=q^{1/2}\xi$ with $\xi \in {\mathbb{T}}$, we have $$\left\langle \prod_{i=0}^{k-1} | \Psi^{{\mathrm{A}}}(\bz;q^{i+1/2} \xi)|^2 \right\rangle_{n}^{(q,t)}
= {_2 \Phi_1}^{(q,t)}(q^{-k},q^{-k} ;q t^{n-1};
q^{2k+1}, q^{2k+1}t , \dots, q^{2k+1} t^{n-1}).$$
A simple calculation gives $$\prod_{j=1}^n
\frac{(\eta z_j;q)_\infty}{(\eta z_j u; q)_{\infty}}
= \exp \(\sum_{k=1}^\infty \frac{(-1)^{k-1}}{k} \frac{1-u^k}{1-q^k}
p_{k}(-\eta z_1, \dots, -\eta z_n) \).$$ From expressions \[EqSpecialPowerSum\] and \[EqDualCauchy\], we have $$\prod_{j=1}^n
\frac{(\eta z_j;q)_\infty}{(\eta z_j u; q)_{\infty}}
= \sum_{\lambda} (-\eta)^{|\lambda|}
\epsilon_{u,q}(Q^{{\mathrm{Mac}}}_{\lambda'}(\cdot;t,q)) Q^{{\mathrm{Mac}}}_{\lambda}(\bz;q,t)
= \sum_{\lambda} (-\eta)^{|\lambda|}
\epsilon_{u,q}(P^{{\mathrm{Mac}}}_{\lambda'}(\cdot;t,q)) P^{{\mathrm{Mac}}}_{\lambda}(\bz;q,t).$$ Thus we have $$\prod_{j=1}^n \left|
\frac{(\eta z_j;q)_\infty}{(\eta z_j u; q)_{\infty}} \right|^2
= \sum_{\lambda, \mu} (-\eta)^{|\lambda|} (-\overline{\eta})^{|\mu|}
\epsilon_{u,q}(P^{{\mathrm{Mac}}}_{\lambda'}(\cdot;t,q)) \epsilon_{u,q}(Q^{{\mathrm{Mac}}}_{\mu'}(\cdot;t,q))
P^{{\mathrm{Mac}}}_{\lambda}(\bz;q,t) Q^{{\mathrm{Mac}}}_{\mu}(\bz^{-1};q,t).$$ The average is given by $$\begin{aligned}
\left\langle \prod_{j=1}^n \left|
\frac{(\eta z_j;q)_\infty}{(\eta z_j u; q)_{\infty}} \right|^2 \right\rangle_{n}^{(q,t)}
=& \sum_{\lambda} |\eta|^{2|\lambda|}
\epsilon_{u,q}(P^{{\mathrm{Mac}}}_{\lambda'}(\cdot;t,q))
\epsilon_{u,q}(Q^{{\mathrm{Mac}}}_{\lambda'}(\cdot;t,q))
\frac{ \langle P^{{\mathrm{Mac}}}_{\lambda}, Q^{{\mathrm{Mac}}}_{\lambda}
\rangle_{\Delta^{{\mathrm{Mac}}}} }
{\langle 1, 1 \rangle_{\Delta^{{\mathrm{Mac}}}} } \\
=& \sum_{\lambda} |\eta|^{2|\lambda|}
\frac{\{(u)_{\lambda'}^{(t,q)}\}^2}{c_{\lambda'}(t,q) c'_{\lambda'}(t,q)}
\prod_{s \in \lambda} \frac{1-q^{a'(s)}t^{n-l'(s)}}{1-q^{a'(s)+1}t^{n-l'(s)-1}}\end{aligned}$$ by expression \[EqSpecialMac\] and the orthogonality property \[EqOrthogonality\]. It is easy to check that $$\begin{aligned}
&(u)_{\lambda'}^{(t,q)} =(-u)^{|\lambda|} (u^{-1})_{\lambda}^{(q,t)}, \qquad
c_{\lambda'}(t,q) c'_{\lambda'}(t,q)= c_{\lambda}(q,t) c'_{\lambda}(q,t), \\
&\prod_{s \in \lambda} \frac{1-q^{a'(s)}t^{n-l'(s)}}{1-q^{a'(s)+1}t^{n-l'(s)-1}}
= \prod_{s \in \lambda} \frac{t^{l'(s)}-q^{a'(s)}t^{n}}{t^{l'(s)} -q^{a'(s)+1}t^{n-1}}
= \frac{(t^n)_{\lambda}^{(q,t)}}{(q t^{n-1})_{\lambda}^{(q,t)}}.\end{aligned}$$ Finally, we obtain $$\left\langle \prod_{j=1}^n \left|
\frac{(\eta z_j;q)_\infty}{(\eta z_j u; q)_{\infty}} \right|^2 \right\rangle_{n}^{(q,t)}
= \sum_{\lambda} (u|\eta|)^{2|\lambda|} \frac{\{(u^{-1})_{\lambda}^{(q,t)}\}^2}{(q t^{n-1})_{\lambda}^{(q,t)}}
\frac{P^{{\mathrm{Mac}}}_{\lambda}(1,t,\dots,t^{n-1};q,t)}{c'_{\lambda}(q,t)},$$ which equals ${_2 \Phi_1}^{(q,t)}(u^{-1},u^{-1} ;q t^{n-1};
(u|\eta|)^2,\dots, (u|\eta|)^2t^{n-1})$.
Now we derive the asymptotic behavior of the moment of $|\Psi^{{\mathrm{A}}}(\bz;\eta)|$ when $|\eta| < 1$ in the limit as $n \to \infty$. The following theorem is a generalization of the well-known strong Szegö limit theorem as stated in §\[subsectionCBEJack\] below.
\[Thm:SzegoMacdonald\] Let $\phi(z)=\exp(\sum_{k \in {\mathbb{Z}}} c(k) z^k)$ be a function on ${\mathbb{T}}$ and assume $$\label{Eq:AssumptionSzego}
\sum_{k \in {\mathbb{Z}}} |c(k)|< \infty \qquad \text{and} \qquad
\sum_{k \in {\mathbb{Z}}} |k| |c(k)|^2 < \infty.$$ Then we have $$\lim_{n \to \infty} e^{-n c(0)}
\left\langle \prod_{j=1}^n \phi(z_j) \right\rangle_n^{(q,t)}
= \exp \( \sum_{k=1}^\infty kc(k)c(-k) \frac{1-q^k}{1-t^k} \).$$
First we see that $$\begin{aligned}
& \prod_{j=1}^n \phi(z_j)= e^{n c(0)} \prod_{k=1}^\infty \exp(c(k) p_k(\bz))
\exp(c(-k) \overline{p_k(\bz)}) \\
=& e^{n c(0)} \prod_{k=1}^\infty \( \sum_{a=0}^\infty \frac{c(k)^{a}}{a!} p_{(k^{a})}(\bz) \)
\( \sum_{b=0}^\infty \frac{c(-k)^{b}}{b!} \overline{p_{(k^{b})}(\bz)} \) \\
=& e^{n c(0)} \sum_{(1^{a_1} 2^{a_2} \cdots )} \sum_{(1^{b_1} 2^{b_2} \cdots )}
\( \prod_{k=1}^\infty \frac{c(k)^{a_k}c(-k)^{b_k}}{a_k! \, b_k!} \)
p_{(1^{a_1}2^{a_2} \cdots )}(\bz)\overline{p_{(1^{b_1}2^{b_2} \cdots )}(\bz)},\end{aligned}$$ where both $(1^{a_1}2^{a_2} \cdots )$ and $(1^{b_1}2^{b_2} \cdots )$ run over all partitions. Therefore we have $$e^{-n c(0)}
\left\langle \prod_{j=1}^n \phi(z_j) \right\rangle_n^{(q,t)}
= \sum_{(1^{a_1} 2^{a_2} \cdots )} \sum_{(1^{b_1} 2^{b_2} \cdots )}
\( \prod_{k=1}^\infty \frac{c(k)^{a_k}}{a_k!} \frac{c(-k)^{b_k}}{b_k!} \)
\frac{ \langle p_{(1^{a_1} 2^{a_2} \cdots )}, p_{(1^{b_1} 2^{b_2} \cdots )}
\rangle_{\Delta^{{\mathrm{Mac}}}} }
{ \langle 1, 1 \rangle_{\Delta^{{\mathrm{Mac}}}} }.$$ We recall the asymptotic behavior $$\frac{ \langle p_{(1^{a_1} 2^{a_2} \cdots )}, p_{(1^{b_1} 2^{b_2} \cdots )}
\rangle_{\Delta^{{\mathrm{Mac}}}} }
{ \langle 1, 1 \rangle_{\Delta^{{\mathrm{Mac}}}} } \qquad \longrightarrow \qquad
\prod_{k=1}^\infty \delta_{a_k b_k} k^{a_k} a_k! \( \frac{1-q^k}{1-t^k}\)^{a_k}$$ in the limit as $n \to \infty$, see [@Mac Chapter VI (9.9) and (1.5)]. It follows from this that $$\begin{aligned}
&\lim_{n \to \infty} e^{-n c(0)}
\left\langle \prod_{j=1}^n \phi(z_j) \right\rangle_n^{(q,t)}
= \sum_{(1^{a_1} 2^{a_2} \cdots )}
\prod_{k=1}^\infty \frac{(k c(k) c(-k))^{a_k}}{a_k!} \(\frac{1-q^k}{1-t^k}\)^{a_k} \\
=& \prod_{k=1}^\infty \( \sum_{a=0}^\infty \frac{(k c(k) c(-k)\frac{1-q^k}{1-t^k})^{a}}{a!}\)
= \exp \( \sum_{k=1}^\infty k c(k) c(-k) \frac{1-q^k}{1-t^k}\).\end{aligned}$$ Here $\sum_{k=1}^\infty k c(k) c(-k) \frac{1-q^k}{1-t^k}$ converges absolutely by the second assumption in \[Eq:AssumptionSzego\] and the Cauchy-Schwarz inequality, because $| \frac{1-q^k}{1-t^k}| \le \frac{1+|q|^k}{1-|t|^{k}} \le \frac{1+|q|}{1-|t|}$.
Note that the present proof is similar to the corresponding one in [@BD]. The result in [@BD] is the special case of Theorem \[Thm:SzegoMacdonald\] with $q=t$. As an example of this theorem, the asymptotic behavior of the moment of $|\Psi^{{\mathrm{A}}}(\bz;\eta)|$ is given as follows. A further asymptotic result is given by Corollary \[AsymMomentQ\] below.
\[ExampleMomentLimit\] Let $\gamma \in {\mathbb{R}}$ and let $\eta$ be a complex number such that $|\eta| < 1$. Then we have $$\lim_{n \to \infty} \left\langle |\Psi^{{\mathrm{A}}}(\bz;\eta)|^{2\gamma} \right\rangle_n^{(q,t)}
= \( \frac{(q |\eta|^2;t)_{\infty}}{(|\eta|^2;t)_{\infty}} \)^{\gamma^2}.$$ This result is obtained by applying Theorem \[Thm:SzegoMacdonald\] to $\phi(z)= |1+\eta z|^{2\gamma}$. Then the Fourier coefficients of $\log \phi$ are $c(k)=(-1)^{k-1} \eta^k \gamma/k$ and $c(-k)=(-1)^{k-1} \overline{\eta}^k \gamma/k$ for $k >0$, and $c(0)=0$.
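This example can be cross-checked numerically: with these Fourier coefficients the Szegö sum becomes $\gamma^2 \sum_{k\ge1} |\eta|^{2k}(1-q^k)/(k(1-t^k))$, and expanding $(1-t^k)^{-1}$ as a geometric series rearranges it into $\gamma^2 \log\big((q|\eta|^2;t)_\infty/(|\eta|^2;t)_\infty\big)$. A truncated-sum sketch (function names and truncation depths are our choices):

```python
import math

def szego_sum(q, t, r, gamma, K=2000):
    """gamma^2 * sum_{k>=1} r^k (1 - q^k) / (k (1 - t^k)), with r = |eta|^2."""
    s = 0.0
    for k in range(1, K + 1):
        s += r ** k * (1 - q ** k) / (k * (1 - t ** k))
    return gamma ** 2 * s

def log_rhs(q, t, r, gamma, M=2000):
    """gamma^2 * log( (q r; t)_infinity / (r; t)_infinity ), products truncated."""
    s = 0.0
    for m in range(M):
        s += math.log(1 - q * r * t ** m) - math.log(1 - r * t ** m)
    return gamma ** 2 * s
```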
Circular ensembles and their $q$-analogues {#sectionCBEq}
=======================================
Special case: $t=q^{\beta/2}$
-----------------------------
In this subsection, we examine the results of the last section for the special case $t=q^{\beta/2}$ with $\beta>0$, i.e., we consider the weight function $\Delta^{{\mathrm{Mac}}}(\bz;q,q^{\beta/2})$. Denote by $\langle \cdot \rangle_{n,\beta}^q$ the corresponding average. Define the $q$-gamma function (see e.g. [@AAR (10.3.3)]) by $$\Gamma_q(x)= (1-q)^{1-x} \frac{(q;q)_\infty}{(q^x;q)_{\infty}}.$$
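The $q$-gamma function is straightforward to evaluate by truncating the infinite products. The sketch below (names and truncation depth ours) confirms the functional equation $\Gamma_q(1+x)=\frac{1-q^x}{1-q}\Gamma_q(x)$ used in the proof below, along with $\Gamma_q(n+1)=[n]_q!$:

```python
def qpochhammer(a, q, nterms=4000):
    """(a; q)_infinity, truncated; accurate for |q| well below 1."""
    p = 1.0
    for r in range(nterms):
        p *= 1 - a * q ** r
    return p

def q_gamma(x, q, nterms=4000):
    """Gamma_q(x) = (1-q)^{1-x} (q;q)_inf / (q^x;q)_inf."""
    return (1 - q) ** (1 - x) * qpochhammer(q, q, nterms) / qpochhammer(q ** x, q, nterms)

def qint(n, q):
    """[n]_q = (1 - q^n) / (1 - q)."""
    return (1 - q ** n) / (1 - q)
```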
\[ThmMomentGamma\] Let $\beta$ be a positive real number. For a positive integer $k$ and $\xi \in {\mathbb{T}}$, we have $$\begin{aligned}
\left\langle \prod_{i=1}^k | \Psi(\bz;q^{i-1/2}\xi )|^2 \right\rangle_{n,\beta}^q
=& \prod_{i=0}^{k-1} \frac{\Gamma_t(\frac{2}{\beta}(i+1)) \Gamma_t(n+\frac{2}{\beta}(k+i+1))}
{\Gamma_t(\frac{2}{\beta}(k+i+1)) \Gamma_t(n+\frac{2}{\beta}(i+1))} \qquad
\text{(with $t=q^{\beta/2}$)}
\label{eqCBEmomentQ} \\
=& \prod_{j=0}^{n-1} \frac{\Gamma_q (\frac{\beta}{2}j +2k+1) \Gamma_q(\frac{\beta}{2} j+1)}
{\Gamma_q(\frac{\beta}{2}j+k+1)^2}. \notag\end{aligned}$$
The claim follows immediately from Corollary \[CorMomentValue\] and the functional equation $\Gamma_q(1+x) = \frac{1-q^x}{1-q} \Gamma_q(x)$.
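Both gamma-product forms in the theorem can be checked numerically against the raw double product from Corollary \[CorMomentValue\]; the sketch below truncates the infinite $q$-Pochhammer products (truncation depth and names are our choices):

```python
def qpoch(a, q, nterms=4000):
    p = 1.0
    for r in range(nterms):
        p *= 1 - a * q ** r
    return p

def q_gamma(x, q):
    return (1 - q) ** (1 - x) * qpoch(q, q) / qpoch(q ** x, q)

def moment_product(q, t, k, n):
    """prod_{i<k} prod_{j<n} (1 - q^{k+i+1} t^j) / (1 - q^{i+1} t^j)."""
    p = 1.0
    for i in range(k):
        for j in range(n):
            p *= (1 - q ** (k + i + 1) * t ** j) / (1 - q ** (i + 1) * t ** j)
    return p

def gamma_t_form(q, beta, k, n):
    """First product form of the theorem, with t = q^{beta/2}."""
    t = q ** (beta / 2)
    p = 1.0
    for i in range(k):
        p *= (q_gamma(2 * (i + 1) / beta, t) * q_gamma(n + 2 * (k + i + 1) / beta, t)
              / (q_gamma(2 * (k + i + 1) / beta, t) * q_gamma(n + 2 * (i + 1) / beta, t)))
    return p

def gamma_q_form(q, beta, k, n):
    """Second product form of the theorem."""
    p = 1.0
    for j in range(n):
        p *= (q_gamma(beta / 2 * j + 2 * k + 1, q) * q_gamma(beta / 2 * j + 1, q)
              / q_gamma(beta / 2 * j + k + 1, q) ** 2)
    return p
```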
Consider now the asymptotic behavior of this average in the limit as $n \to \infty$. Put $[n]_q = (1-q^n)/(1-q)$.
\[AsymMomentQ\] For a positive integer $k$ and $\xi \in {\mathbb{T}}$, it holds that $$\label{eqCBEmomentLimit}
\lim_{n \to \infty} ([n]_t)^{-2k^2/\beta}
\left\langle \prod_{i=1}^k | \Psi(\bz;q^{i-1/2}\xi )|^2 \right\rangle_{n,\beta}^q
= \prod_{i=0}^{k-1} \frac{\Gamma_t(\frac{2}{\beta}(i+1))}
{\Gamma_t(\frac{2}{\beta}(k+i+1))} \qquad \text{with $t=q^{\beta/2}$}.$$
Verify that $$\label{eq:GammaQasym}
\lim_{n \to \infty} \frac{\Gamma_t(n+a)}{\Gamma_t(n) ([n]_t)^a} =1$$ for any constant $a$. Then the claim is clear from expression \[eqCBEmomentQ\].
\[ExFq\] Denote by ${\mathcal{F}}_\beta^q(k)$ the right-hand side of equation \[eqCBEmomentLimit\]. Then we obtain $$\begin{aligned}
{\mathcal{F}}_{1}^q(k) =& \prod_{j=0}^{k-1} \frac{[2j+1]_{q^{\frac{1}{2}}} !}{[2k+2j+1]_{q^{\frac{1}{2}}}!},
\label{eqf1q} \\
{\mathcal{F}}_{2}^q(k) =& \prod_{j=0}^{k-1} \frac{[j]_{q} !}{[j+k]_q!}, \label{eqf2q} \\
{\mathcal{F}}_{4}^q(2k) =&
\frac{([2]_q)^{2k^2}}{[2k-1]_q !!} \prod_{j=1}^{2k-1} \frac{[j]_q!}{[2j]_q!}.
\label{eqf4q}\end{aligned}$$ Here $[n]_q!=[n]_q [n-1]_q \cdots [1]_q$ and $[2k-1]_q!! = [2k-1]_q [2k-3]_q \cdots [3]_q [1]_q$. Equalities \[eqf1q\] and \[eqf2q\] are trivial because $\Gamma_q(n+1)=[n]_q!$. We check relation \[eqf4q\]. By definition, we have $${\mathcal{F}}_4^q(2k)=
\prod_{i=0}^{2k-1} \frac{\Gamma_{q^2}(\frac{1}{2}(i+1))}
{\Gamma_{q^2}(k+\frac{1}{2}(i+1))}
= \prod_{p=0}^{k-1}
\frac{\Gamma_{q^2} (p+\frac{1}{2}) \Gamma_{q^2}(p+1)}
{\Gamma_{q^2} (k+p+\frac{1}{2}) \Gamma_{q^2}(k+p+1)}.$$ Using the $q$-analogue of the Legendre duplication formula (see e.g. [@AAR Theorem 10.3.5(a)]) $$\Gamma_q(2x) \Gamma_{q^2}(1/2) = (1+q)^{2x-1} \Gamma_{q^2}(x) \Gamma_{q^2}(x+1/2),$$ we have $${\mathcal{F}}_4^q(2k)= \prod_{p=0}^{k-1} \frac{(1+q)^{2k} \Gamma_q(2p+1)}{\Gamma_q(2k+2p+1)}=
([2]_q)^{2k^2} \prod_{p=0}^{k-1} \frac{[2p]_q! }{[2k+2p]_q !}.$$ Expression \[eqf4q\] can then be proven by induction on $k$.
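That final step, the agreement of the two product forms of ${\mathcal{F}}_4^q(2k)$, can be confirmed exactly for small $k$ in rational arithmetic (helper names ours):

```python
from fractions import Fraction

def qint(n, q):
    return (1 - q ** n) / (1 - q)

def qfact(n, q):
    """[n]_q! = [n]_q [n-1]_q ... [1]_q."""
    p = Fraction(1)
    for m in range(1, n + 1):
        p *= qint(m, q)
    return p

def q_dfact_odd(n, q):
    """[n]_q!! = [n]_q [n-2]_q ... [1]_q for odd n."""
    p = Fraction(1)
    for m in range(n, 0, -2):
        p *= qint(m, q)
    return p

def f4_penultimate(k, q):
    """([2]_q)^{2k^2} prod_{p=0}^{k-1} [2p]_q!/[2k+2p]_q! (form derived in the text)."""
    val = qint(2, q) ** (2 * k * k)
    for p in range(k):
        val *= qfact(2 * p, q) / qfact(2 * k + 2 * p, q)
    return val

def f4_stated(k, q):
    """([2]_q)^{2k^2} / [2k-1]_q!! * prod_{j=1}^{2k-1} [j]_q!/[2j]_q! (stated form)."""
    val = qint(2, q) ** (2 * k * k) / q_dfact_odd(2 * k - 1, q)
    for j in range(1, 2 * k):
        val *= qfact(j, q) / qfact(2 * j, q)
    return val
```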
Circular $\beta$-ensembles and Jack polynomials {#subsectionCBEJack}
-----------------------------------------------
We take the limit as $q \to 1$ of the results of the previous subsection. Recall the formula $$\lim_{q \to 1} \frac{(q^a x;q)_{\infty}}{(x;q)_{\infty}} =(1-x)^{-a}$$ for $|x|<1$ and $a \in {\mathbb{R}}$, see [@AAR Theorem 10.2.4] for example. Then we have $$\lim_{q \to 1} \Delta^{{\mathrm{Mac}}}(\bz;q,q^{\beta/2})
= \prod_{1 \le i<j \le n} |z_i-z_j|^\beta =: \Delta^{{\mathrm{Jack}}}(\bz;2/\beta),$$ which is a constant times the p.d.f. for Dyson’s circular $\beta$-ensembles (see §6). Denote by $\langle \cdot \rangle_{n,\beta}$ the corresponding average, i.e., for a function $f$ on ${\mathbb{T}}^n$ define $$\langle f \rangle_{n,\beta} = \lim_{q \to 1} \langle f \rangle_{n,\beta}^q
= \frac{\int_{{\mathbb{T}}^n} f(\bz) \prod_{1 \le i<j \le n} |z_i-z_j|^\beta \dd \bz}
{\int_{{\mathbb{T}}^n} \prod_{1 \le i<j \le n} |z_i-z_j|^\beta \dd \bz}.$$
Let $\alpha >0$. The Jack polynomial $P^{{\mathrm{Jack}}}_\lambda(x_1,\dots,x_n;\alpha)$ for each partition $\lambda$ is defined as the limit of the corresponding Macdonald polynomial, $$P^{{\mathrm{Jack}}}_\lambda(x_1,\dots,x_n;\alpha)
= \lim_{q \to 1} P^{{\mathrm{Mac}}}_\lambda(x_1,\dots,x_n;q,q^{1/\alpha}),$$ see [@Mac Chapter VI-10] for details. Jack polynomials are orthogonal polynomials with respect to the weight function $\Delta^{{\mathrm{Jack}}}(\bz;\alpha)$. In particular, $s_{\lambda}(x_1,\dots,x_n)=P^{{\mathrm{Jack}}}_\lambda(x_1,\dots,x_n;1)$ are called Schur polynomials, and are the irreducible characters of $U(n)$ associated with $\lambda$.
From the theorems in the last section, we have the following: from Theorem \[ThmAverageMac\], we see that $$\label{AverageProductA}
\left\langle \prod_{l=1}^L \Psi^{{\mathrm{A}}}(\bz^{-1};\eta_l^{-1}) \cdot \prod_{k=1}^K
\Psi^{{\mathrm{A}}}(\bz;\eta_{L+k}) \right\rangle_{n,\beta} =
(\eta_1 \cdots \eta_L)^{-n}
\cdot P_{(n^L)}^{{\mathrm{Jack}}} (\eta_1, \dots, \eta_{L+K};\beta/2).$$ For a positive real number $\gamma$ and complex number $\eta$ with $|\eta|<1$, we have from Proposition \[PropMomentHypergeometric\] that $$\label{MomentHypergeometricJack}
\left\langle |\Psi^{{\mathrm{A}}}(\bz;\eta)|^{2\gamma} \right\rangle_{n, \beta}
= {_2 F_1}^{(2/\beta)}(-\gamma, -\gamma; \frac{\beta}{2}(n-1)+1;
|\eta|^2, \dots, |\eta|^2),$$ where ${_2 F_1}^{(\alpha)}(a,b; c; x_1,\dots,x_n)$ is the hypergeometric function associated with Jack polynomials [@Kaneko1] defined by $${_2 F_1}^{(\alpha)}(a,b; c; x_1,\dots,x_n)= \sum_{\lambda}
\frac{[a]^{(\alpha)}_\lambda [b]^{(\alpha)}_\lambda}{[c]^{(\alpha)}_\lambda}
\frac{\alpha^{|\lambda|} P_\lambda^{{\mathrm{Jack}}}(x_1,\dots,x_n;\alpha)}{c'_\lambda(\alpha)}$$ with $$[u]_\lambda^{(\alpha)}=\prod_{s \in \lambda} (u-l'(s)/\alpha +a'(s)), \qquad
\text{and} \qquad
c'_\lambda(\alpha)= \prod_{s \in \lambda}(\alpha(a(s)+1)+l(s)).$$ For a positive integer $k$ and $\xi \in {\mathbb{T}}$, by Theorem \[ThmMomentGamma\] and Corollary \[AsymMomentQ\] it holds that $$\label{MomentAsymptoticA}
\left\langle | \Psi^{{\mathrm{A}}}(\bz;\xi )|^{2k} \right\rangle_{n,\beta}
= \prod_{i=0}^{k-1} \frac{\Gamma(\frac{2}{\beta}(i+1)) \Gamma(n+\frac{2}{\beta}(k+i+1))}
{\Gamma(\frac{2}{\beta}(k+i+1)) \Gamma(n+\frac{2}{\beta}(i+1))}
\sim
\prod_{i=0}^{k-1} \frac{\Gamma(\frac{2}{\beta}(i+1)) }
{\Gamma(\frac{2}{\beta}(k+i+1))} \cdot n^{2k^2/\beta}$$ in the limit as $n \to \infty$. For a function $\phi(z)=\exp(\sum_{k \in {\mathbb{Z}}} c(k) z^k)$ on ${\mathbb{T}}$ satisfying inequalities \[Eq:AssumptionSzego\], by Theorem \[Thm:SzegoMacdonald\] it holds that $$\label{eq:SzegoJack}
\lim_{n \to \infty} e^{-n c(0)}
\left\langle \prod_{j=1}^n \phi(z_j) \right\rangle_{n, \beta}
= \exp \( \frac{2}{\beta}\sum_{k=1}^\infty kc(k)c(-k) \).$$ In particular, for $\gamma \in {\mathbb{R}}$ and a complex number $\eta$ such that $|\eta|< 1$, we have $$\lim_{n \to \infty} \left\langle |\Psi^{{\mathrm{A}}}(\bz;\eta)|^{2\gamma} \right\rangle_{n,\beta}
= (1-|\eta|^2)^{-2 \gamma^2/\beta}.$$
Several observations may be made concerning the above identities: equation \[MomentHypergeometricJack\] is obtained by verifying the limits $$\lim_{t \to 1} \frac{(q^a)_\lambda^{(q,t)}}{(1-t)^{|\lambda|}}
=\alpha^{|\lambda|} [a]_\lambda^{(\alpha)}, \qquad
\lim_{t \to 1} \frac{c'_\lambda(q,t)}{(1-t)^{|\lambda|}} =c'_\lambda(\alpha),$$ with $q=t^\alpha$. The expression \[MomentHypergeometricJack\] for the moment is obtained in [@FK] using a different proof, which employs a Selberg type integral evaluation. Equation \[MomentAsymptoticA\] is also obtained in [@KS_zetafunctions] essentially by the Selberg integral evaluation. When $\beta=2$, equation \[eq:SzegoJack\] presents the strong Szegö limit theorem for a Toeplitz determinant. Indeed, the average of the left-hand side of \[eq:SzegoJack\] is then equal to the Toeplitz determinant $\det(d_{i-j})_{1 \le i,j \le n}$ of $\phi$, where the $d_i$ are the Fourier coefficients of $\phi$. Equation \[eq:SzegoJack\] with general $\beta>0$ is seen in [@Johansson1; @Johansson2], but it may be noted that the present proof, employing symmetric function theory, is straightforward. This expression is applied in [@Hyper] in order to observe an asymptotic behavior for Toeplitz ‘hyperdeterminants’.
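In the $\beta=2$, $\gamma=1$ case the Toeplitz interpretation can be checked by hand: $\phi(z)=|1+\eta z|^2 = (1+|\eta|^2)+\eta z+\bar\eta z^{-1}$ has only three nonzero Fourier coefficients, so $D_n=\det(d_{i-j})$ is a tridiagonal determinant satisfying $D_n=(1+|\eta|^2)D_{n-1}-|\eta|^2 D_{n-2}$, whence $D_n=(1-|\eta|^{2(n+1)})/(1-|\eta|^2)\to(1-|\eta|^2)^{-1}$, in agreement with the $\gamma=1$, $\beta=2$ case of the limit above. A numerical sketch (names ours):

```python
def toeplitz_det(eta, n):
    """D_n = det(d_{i-j}) for phi(z) = |1 + eta z|^2, via the tridiagonal recurrence."""
    r2 = abs(eta) ** 2
    d_prev, d_cur = 1.0, 1.0 + r2  # D_0, D_1
    for _ in range(n - 1):
        d_prev, d_cur = d_cur, (1.0 + r2) * d_cur - r2 * d_prev
    return d_cur

def toeplitz_det_closed(eta, n):
    """Closed form D_n = (1 - r^{2(n+1)}) / (1 - r^2) with r = |eta|."""
    r2 = abs(eta) ** 2
    return (1 - r2 ** (n + 1)) / (1 - r2)
```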
Jacobi polynomials due to Heckman and Opdam
===========================================
The results obtained in §\[sectionMacAverage\] and §\[sectionCBEq\] will be applied to characteristic polynomials of random matrices from symmetric spaces with a type A root system in the next section. In order to evaluate the corresponding polynomials for the type BC root system, we here recall Heckman and Opdam’s Jacobi polynomials and give some identities corresponding to those obtained above.
The dominance ordering associated with the root system of type BC is defined as follows: for two partitions $\lambda=(\lambda_1,\lambda_2,\dots)$ and $\mu=(\mu_1,\mu_2,\dots)$, $$\mu \le \lambda \qquad \Leftrightarrow \qquad
\mu_1 + \cdots+\mu_i \le \lambda_1+ \cdots +\lambda_i \quad \text{for all $i \ge 1$}.$$ Let ${\mathbb{C}}[\bx^{\pm 1}] = {\mathbb{C}}[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ be the ring of all Laurent polynomials in $n$ variables $\bx=(x_1,\dots,x_n)$. The Weyl group $W={\mathbb{Z}}_2 \wr {\mathfrak{S}}_n = {\mathbb{Z}}_2^n \rtimes {\mathfrak{S}}_n$ of type $BC_n$ acts naturally on ${\mathbb{Z}}^n$ and ${\mathbb{C}}[\bx^{\pm 1}]$, respectively. Denote by ${\mathbb{C}}[\bx^{\pm 1}]^W$ the subring of all $W$-invariants in ${\mathbb{C}}[\bx^{\pm 1}]$. Let $\Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3)$ be a function on ${\mathbb{T}}^n$ defined by $$\Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3)
= \prod_{1 \le i < j \le n}
|1-z_i z_j^{-1}|^{2 k_3} |1-z_i z_j|^{2 k_3}
\cdot
\prod_{1 \le j \le n}
|1-z_j|^{2k_1} |1-z_j^2|^{2k_2}.$$ Here we assume $k_1$, $k_2$, and $k_3$ are real numbers such that $$k_1+k_2>-1/2, \quad k_2 > -1/2, \quad k_3 \ge 0.$$ Define an inner product on ${\mathbb{C}}[\bx^{\pm 1}]^W$ by $$\langle f,g \rangle_{\Delta^{{\mathrm{HO}}}} =
\frac{1}{2^n n!} \int_{{\mathbb{T}}^n} f(\bz) g(\bz^{-1}) \Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3)
\dd \bz.$$
For each partition $\mu$, we let $$m^{{\mathrm{BC}}}_{\mu}(\bx)=\sum_{\nu \in W \mu} x_1^{\nu_1} \cdots x_n^{\nu_n},$$ where $W\mu$ is the $W$-orbit of $\mu$ (cf. ). These polynomials form a ${\mathbb{C}}$-basis of ${\mathbb{C}}[\bx^{\pm 1}]^W$. Then, there exists a unique family of polynomials $P^{{\mathrm{HO}}}_{\lambda}= P^{{\mathrm{HO}}}_{\lambda}(\bx;k_1,k_2,k_3) \in {\mathbb{C}}[\bx^{\pm 1}]^W$ ($\lambda$ are partitions such that $\ell(\lambda) \le n$) satisfying two conditions: $$P^{{\mathrm{HO}}}_{\lambda}(\bx)= m_{\lambda}^{{\mathrm{BC}}}(\bx)+ \sum_{\mu: \mu < \lambda}
u_{\lambda \mu} m^{{\mathrm{BC}}}_{\mu}(\bx),
\quad \text{with $u_{\lambda \mu} \in {\mathbb{C}}$},
\qquad\qquad \langle P^{{\mathrm{HO}}}_{\lambda}, P_{\mu}^{{\mathrm{HO}}} \rangle_{\Delta^{{\mathrm{HO}}}} = 0
\quad
\text{if $\lambda \not= \mu$}.$$ The Laurent polynomials $P^{{\mathrm{HO}}}_{\lambda}$ are known as the Jacobi polynomials associated with the root system of type $BC_n$, due to Heckman and Opdam; see e.g. [@Diejen; @Heckman; @Mimachi]. They can be seen as BC-analogues of Jack polynomials.
For a function $f$ on ${\mathbb{T}}^n$, we denote by $\langle f \rangle_{n}^{k_1,k_2,k_3}$ the mean value of $f$ with respect to the weight function $\Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3)$: $$\langle f \rangle_{n}^{k_1,k_2,k_3}
= \frac{\int_{{\mathbb{T}}^n} f(\bz) \Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3)\dd \bz}{
\int_{{\mathbb{T}}^n} \Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3)\dd \bz}.$$ From the three parameters $k_1,k_2,k_3$, we define new parameters $$\tilde{k}_1 = k_1/k_3, \qquad \tilde{k}_2=(k_2+1)/k_3-1, \qquad \tilde{k}_3=1/k_3.$$ Put $$\Psi^{{\mathrm{BC}}}(\bz;x)= \prod_{j=1}^n(1+x z_j)(1+x z_j^{-1}).$$
\[Thm:MainTheorem\] The following relation holds $$\label{eq:MainEq}
\left\langle
\Psi^{{\mathrm{BC}}}(\bz;x_1)\Psi^{{\mathrm{BC}}}(\bz;x_2) \cdots \Psi^{{\mathrm{BC}}}(\bz;x_m)
\right\rangle_{n}^{k_1,k_2,k_3}
= (x_1 \cdots x_m)^n
P^{{\mathrm{HO}}}_{(n^m)}(x_1,\dots,x_m;\tilde{k}_1,\tilde{k}_2,\tilde{k}_3).$$
In order to prove this, we need the following dual Cauchy identity obtained by Mimachi [@Mimachi].
\[Thm:Mimachi\] Let $\bx=(x_1,\dots, x_n)$ and $\by=(y_1,\dots, y_m)$ be sequences of indeterminates. Jacobi polynomials $P^{{\mathrm{HO}}}_{\lambda}$ satisfy the equality $$\prod_{i=1}^n \prod_{j=1}^m (x_i+x_i^{-1} - y_j - y_j^{-1})
= \sum_{\lambda \subset (m^n)} (-1)^{|\tilde{\lambda}|}
P^{{\mathrm{HO}}}_{\lambda}(\bx;k_1,k_2,k_3)
P^{{\mathrm{HO}}}_{\tilde{\lambda}}(\by;\tilde{k}_1,\tilde{k}_2,\tilde{k}_3),$$ where $\tilde{\lambda}=(n-\lambda_m', n-\lambda_{m-1}', \dots, n-\lambda_1')$.
We see that $$\Psi^{{\mathrm{BC}}}(\bz;x_1) \Psi^{{\mathrm{BC}}}(\bz;x_2) \cdots \Psi^{{\mathrm{BC}}}(\bz;x_m)
= (x_1 \cdots x_m)^n \prod_{i=1}^m
\prod_{j=1}^n (x_i + x_i^{-1} + z_j + z_j^{-1}).$$ Using Proposition \[Thm:Mimachi\] we have $$\begin{aligned}
& \left\langle
\Psi^{{\mathrm{BC}}}(\bz;x_1)\Psi^{{\mathrm{BC}}}(\bz;x_2) \cdots \Psi^{{\mathrm{BC}}}(\bz;x_m)
\right\rangle_{n}^{k_1,k_2,k_3} \\
= & (x_1 \cdots x_m)^n \sum_{\lambda \subset (m^n)}
P^{{\mathrm{HO}}}_{\tilde{\lambda}}(x_1,\dots, x_m;\tilde{k}_1,\tilde{k}_2,\tilde{k}_3)
\langle P^{{\mathrm{HO}}}_{\lambda}(\bz;k_1,k_2,k_3) \rangle_{n}^{k_1,k_2,k_3}.\end{aligned}$$ By the orthogonality relation for Jacobi polynomials, we have $$\langle P^{{\mathrm{HO}}}_{\lambda}(\bz;k_1,k_2,k_3) \rangle_{n}^{k_1,k_2,k_3}
= \begin{cases}
1, & \text{if $\lambda = (0)$}, \\ 0, & \text{otherwise},
\end{cases}$$ and we thus obtain the theorem.
Using Theorem 2.1 in [@Mimachi], we derive a more general form of equation including a Macdonald-Koornwinder polynomial.
\[Thm:Main2\] Let $${\mathcal{F}}(m;k_1,k_2,k_3)= \prod_{j=0}^{m-1} \frac{\sqrt{\pi}}{2^{k_1 +2 k_2+j k_3-1}
\Gamma(k_1+k_2+\frac{1}{2}+j k_3)}.$$ The $m$-th moment of $\Psi^{{\mathrm{BC}}}(\bz;1)$ is given by $$\left\langle
\Psi^{{\mathrm{BC}}}(\bz;1)^m
\right\rangle_{n}^{k_1,k_2,k_3}
= {\mathcal{F}}(m;\tilde{k}_1,\tilde{k}_2,\tilde{k}_3) \cdot \prod_{j=0}^{m-1}
\frac{\Gamma(n+ \tilde{k}_1+2\tilde{k}_2+j \tilde{k}_3 )
\Gamma(n+ \tilde{k}_1+\tilde{k}_2+\frac{1}{2}+j \tilde{k}_3 )}
{\Gamma(n+ \frac{\tilde{k}_1}{2}+\tilde{k}_2+\frac{j \tilde{k}_3}{2} )
\Gamma(n+ \frac{\tilde{k}_1}{2}+\tilde{k}_2+\frac{1+j \tilde{k}_3}{2} )}.$$
By Theorem \[Thm:MainTheorem\] we have $$\label{eq:MTspecial}
\left\langle
\Psi^{{\mathrm{BC}}}(\bz;1)^m
\right\rangle_{n}^{k_1,k_2,k_3}
= P^{{\mathrm{HO}}}_{(n^m)}(1^m;\tilde{k}_1,\tilde{k}_2,\tilde{k}_3).$$ The special case $P_{\lambda}^{{\mathrm{HO}}}(1,1,\dots,1;k_1,k_2,k_3)$ is known and is given as follows (see e.g. [@Diejen] [^2]): for a partition $\lambda$ of length $\le m$, $$\begin{aligned}
P^{{\mathrm{HO}}}_{\lambda}(\underbrace{1, \dots, 1}_m;k_1,k_2,k_3)
=& 2^{2|\lambda|} \prod_{1 \le i <j \le m}
\frac{(\rho_i+ \rho_j+k_3)_{\lambda_i+\lambda_j}
(\rho_i- \rho_j+k_3)_{\lambda_i-\lambda_j}}
{(\rho_i+ \rho_j)_{\lambda_i+\lambda_j}
(\rho_i- \rho_j)_{\lambda_i-\lambda_j}} \\
& \quad \times \prod_{j=1}^m
\frac{(\frac{k_1}{2} +k_2 + \rho_j)_{\lambda_j} (\frac{k_1+1}{2} + \rho_j)_{\lambda_j}}
{(2 \rho_j)_{2 \lambda_j}} \end{aligned}$$ with $\rho_j= (m-j)k_3 + \frac{k_1}{2}+k_2$. Here $(a)_n = \Gamma(a+n) / \Gamma(a)$ is the Pochhammer symbol. Substituting $(n^m)$ for $\lambda$, we have $$\begin{aligned}
& P^{{\mathrm{HO}}}_{(n^m)} (1^m; k_1,k_2,k_3) \notag \\
=& \prod_{1 \le i <j \le m} \frac{(k_1+2 k_2+(2m-i-j+1)k_3)_{2n}}
{(k_1+2 k_2+(2m-i-j)k_3)_{2n}}
\cdot \prod_{j=0}^{m-1}
\frac{2^{2n} (k_1+2 k_2+j k_3)_n (k_1+k_2+\frac{1}{2}+j k_3)_n}
{(k_1 +2 k_2+2 j k_3)_{2n}}. \label{eq:moment_product1}\end{aligned}$$
A simple algebraic manipulation of the first product on the right-hand side of yields $$\prod_{1 \le i <j \le m} \frac{(k_1+2 k_2+(2m-i-j+1)k_3)_{2n}}
{(k_1+2 k_2+(2m-i-j)k_3)_{2n}}
= \prod_{j=0}^{m-1} \frac{(k_1+2k_2+ 2jk_3)_{2n}}{(k_1+2k_2+j k_3)_{2n}}$$ and therefore we obtain $$P_{(n^m)}^{{\mathrm{HO}}} (1^m; k_1,k_2,k_3) =
\prod_{j=0}^{m-1} \frac{2^{2n} (k_1+k_2+\frac{1}{2}+j k_3)_n}{(n+k_1+2k_2+jk_3)_{n}}.$$ Combining the above result with equation , we have $$\label{eq:Main2}
\left\langle
\Psi^{{\mathrm{BC}}}(\bz;1)^m
\right\rangle_{n}^{k_1,k_2,k_3}
= \prod_{j=0}^{m-1}
\frac{2^{2n} \Gamma(n+ \tilde{k}_1+2\tilde{k}_2+j \tilde{k}_3 )
\Gamma(n+ \tilde{k}_1+\tilde{k}_2+\frac{1}{2}+j \tilde{k}_3 )}
{\Gamma( \tilde{k}_1+\tilde{k}_2+\frac{1}{2} +j \tilde{k}_3)
\Gamma(2n+ \tilde{k}_1+2\tilde{k}_2+j \tilde{k}_3 )}.$$
Finally, we apply the formula $$\Gamma(2a) = \frac{2^{2a-1}}{\sqrt{\pi}} \Gamma(a) \Gamma(a+\frac{1}{2})$$ to $\Gamma(2n+\tilde{k}_1+2\tilde{k}_2+j \tilde{k}_3)$ in equation and we then have the theorem.
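The telescoping step used above for the first product can be confirmed numerically. The following informal sketch (not part of the original argument; the helper names are ours) computes the Pochhammer symbol $(a)_n = \Gamma(a+n)/\Gamma(a)$ with the standard library and compares both sides of the identity, with $c$ playing the role of $k_1+2k_2$ and $k$ the role of $k_3$.

```python
from math import gamma

def poch(a, n):
    # Pochhammer symbol (a)_n = Gamma(a+n) / Gamma(a), for a > 0
    return gamma(a + n) / gamma(a)

def lhs(m, n, c, k):
    # product over pairs 1 <= i < j <= m of (c+(2m-i-j+1)k)_{2n} / (c+(2m-i-j)k)_{2n}
    val = 1.0
    for i in range(1, m + 1):
        for j in range(i + 1, m + 1):
            s = 2 * m - i - j
            val *= poch(c + (s + 1) * k, 2 * n) / poch(c + s * k, 2 * n)
    return val

def rhs(m, n, c, k):
    # telescoped form: product over j = 0..m-1 of (c+2jk)_{2n} / (c+jk)_{2n}
    val = 1.0
    for j in range(m):
        val *= poch(c + 2 * j * k, 2 * n) / poch(c + j * k, 2 * n)
    return val

print(lhs(4, 3, 0.7, 1.3), rhs(4, 3, 0.7, 1.3))  # equal up to rounding
```

The pair products telescope because the exponent $s = 2m-i-j$ visits each intermediate level with matching multiplicities, leaving only the ratios in the second product.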
\[cor:Main\] It holds that $$\left\langle
\Psi^{{\mathrm{BC}}}(\bz;1)^m
\right\rangle_{n}^{k_1,k_2,k_3}
\sim
{\mathcal{F}}(m;\tilde{k}_1,\tilde{k}_2,\tilde{k}_3) \cdot
n^{m(\tilde{k}_1+\tilde{k}_2)+\frac{1}{2}m(m-1)\tilde{k}_3},$$ as $n \to \infty$.
The claim follows from the previous theorem and the asymptotics of the gamma function (cf. ): $\Gamma(n+a) \sim \Gamma(n) n^a$ for any constant $a$.
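This standard gamma function estimate is easy to confirm numerically. The sketch below (an informal check; the function name is ours) evaluates $\Gamma(n+a)/(\Gamma(n)\,n^a)$ in log-space to avoid overflow and watches it approach $1$.

```python
from math import lgamma, log, exp

def gamma_ratio(n, a):
    # Gamma(n+a) / (Gamma(n) * n^a), computed via log-gamma to avoid overflow
    return exp(lgamma(n + a) - lgamma(n) - a * log(n))

for n in [10, 100, 10000]:
    print(n, gamma_ratio(n, 2.5))  # tends to 1 as n grows
```

The leading correction is of order $a(a-1)/(2n)$, which is visible in the printed values.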
Random matrix ensembles associated with compact symmetric spaces
================================================================
Finally, we apply the theorems obtained above to compact symmetric spaces as classified by Cartan. These symmetric spaces are labeled A I, BD I, C II, and so on; see e.g. Table 1 in [@CM]. Let $G/K$ be such a compact symmetric space. Here $G$ is a compact subgroup of $GL(N,{\mathbb{C}})$ for some positive integer $N$, and $K$ is a closed subgroup of $G$. The space $G/K$ is realized as a subset $S$ of $G$: $S \simeq G/K$, and the probability measure $\dd M$ on $S$ is induced from that of the quotient space $G/K$. We regard $S$ as a probability space with the measure $\dd M$ and call it the random matrix ensemble associated with $G/K$. See [@Duenez] for details.
The random matrix ensembles considered in §\[subsectionA\], §\[subsectionAI\], and §\[subsectionAII\] are called Dyson’s circular $\beta$-ensembles, see [@Dyson; @Mehta]. The identities in these subsections follow immediately from expressions and (see also Example \[ExFq\]). Similarly, the identities from §\[subsectionB\] onward follow from Theorem \[Thm:MainTheorem\], Theorem \[Thm:Main2\], and Corollary \[cor:Main\].
Note that the results in §\[subsectionA\], §\[subsectionB\], §\[subsectionC\], and §\[subsectionD\] are results for compact Lie groups (which are not proper symmetric spaces) previously presented in [@BG].
$U(n)$ – type A {#subsectionA}
---------------
Consider the unitary group $U(n)$ with the normalized Haar measure. This space has a simple root system of type A. The corresponding p.d.f. for eigenvalues $z_1,\dots,z_n$ of $M \in U(n)$ is proportional to $\Delta^{{\mathrm{Jack}}}(\bz;1)$. This random matrix ensemble is called the circular unitary ensemble (CUE).
For complex numbers $\eta_1,\dots,\eta_L, \eta_{L+1},\dots, \eta_{L+K}$, it follows from equation that $$\begin{aligned}
& \left\langle \prod_{i=1}^L \det(I+\eta_i^{-1} M^{-1}) \cdot \prod_{i=1}^K
\det(I+\eta_{L+i} M) \right\rangle_{U(n)} \\
=& \left\langle \prod_{i=1}^L \Psi^{{\mathrm{A}}}(\bz^{-1};\eta_i^{-1}) \cdot \prod_{i=1}^K
\Psi^{{\mathrm{A}}}(\bz;\eta_{L+i}) \right\rangle_{n,2} =
\prod_{i=1}^L \eta_i^{-n} \cdot s_{(n^L)} (\eta_1,\dots,\eta_{L+K}).\end{aligned}$$ In addition, from equation we obtain $$\left\langle |\det(I+ \xi M)|^{2m} \right\rangle_{U(n)}
= \prod_{j=0}^{m-1}\frac{j! (n+j+m)!}{(j+m)! (n+j)!}
\sim
\prod_{j=0}^{m-1}\frac{j!}{(j+m)!} \cdot n^{m^2}$$ for any $\xi \in {\mathbb{T}}$.
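As an informal sanity check (not part of the original argument; the helper names are ours), the last display can be verified for small $n$ directly from the Weyl integration formula: for $U(2)$ the eigenvalue density is proportional to $|z_1-z_2|^2$, and a Riemann sum on a periodic grid is exact for the trigonometric polynomials involved.

```python
import numpy as np
from math import factorial

def cue_moment_formula(n, m):
    # product_{j=0}^{m-1} j! (n+j+m)! / ((j+m)! (n+j)!)
    val = 1.0
    for j in range(m):
        val *= factorial(j) * factorial(n + j + m) / (factorial(j + m) * factorial(n + j))
    return val

def cue_moment_quadrature(m, N=64):
    # <|det(I+M)|^{2m}> over U(2): average of prod_i |1+z_i|^{2m}
    # against the eigenvalue density |z_1 - z_2|^2 on the torus
    t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    t1, t2 = np.meshgrid(t, t)
    weight = 2.0 - 2.0 * np.cos(t1 - t2)                              # |z_1 - z_2|^2
    f = ((2.0 + 2.0 * np.cos(t1)) * (2.0 + 2.0 * np.cos(t2))) ** m   # prod_i |1+z_i|^{2m}
    return (f * weight).sum() / weight.sum()

print(cue_moment_formula(2, 1), cue_moment_quadrature(1))  # both 3
print(cue_moment_formula(2, 2), cue_moment_quadrature(2))  # both 20
```

Since the integrand is a trigonometric polynomial of low degree, the equispaced Riemann sum agrees with the exact integral to machine precision.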
$U(n)/O(n)$ – type A I {#subsectionAI}
----------------------
Consider the ensemble $S(n)$ associated with the symmetric space $U(n)/O(n)$. The space $S(n)$ is the set of all symmetric matrices in $U(n)$. The corresponding p.d.f. for eigenvalues $z_1,\dots,z_n$ is proportional to $\Delta^{{\mathrm{Jack}}}(\bz;2) = \prod_{1 \le i<j \le n} |z_i-z_j|$. This random matrix ensemble is called the circular orthogonal ensemble (COE). We have $$\begin{aligned}
&\left\langle \prod_{i=1}^L \det(I+\eta_i^{-1} M^{-1}) \cdot \prod_{i=1}^K
\det(I+\eta_{L+i} M) \right\rangle_{S(n)} \\
=& \left\langle \prod_{i=1}^L \Psi^{{\mathrm{A}}}(\bz^{-1};\eta_i^{-1}) \cdot \prod_{i=1}^K
\Psi^{{\mathrm{A}}}(\bz;\eta_{L+i}) \right\rangle_{n,1}
= \prod_{i=1}^L \eta_i^{-n} \cdot P_{(n^L)}^{{\mathrm{Jack}}} (\eta_1,\dots,\eta_{L+K};1/2).\end{aligned}$$ For $\xi \in {\mathbb{T}}$, we obtain $$\left\langle |\det(I+ \xi M)|^{2m} \right\rangle_{S(n)}
= \prod_{j=0}^{m-1} \frac{(2j+1)! (n+2m+2j+1)!}{(2m+2j+1)! (n+2j+1)!}
\sim
\prod_{j=0}^{m-1}\frac{(2j+1)!}{(2m+2j+1)!} \cdot n^{2m^2}.$$
$U(2n)/Sp(2n)$ – type A II {#subsectionAII}
--------------------------
Consider the ensemble $S(n)$ associated with the symmetric space $U(2n)/Sp(2n)$. The space $S(n)$ is the set of all self-dual matrices in $U(2n)$, i.e., $M \in S(n)$ is a unitary matrix satisfying $M=J \trans{M} \trans{J}$ with $J=\(\begin{smallmatrix} 0 & I_n \\ -I_n & 0 \end{smallmatrix} \)$. This random matrix ensemble is called the circular symplectic ensemble (CSE). The eigenvalues of $M \in S(n)$ are of the form $z_1,z_1,z_2,z_2,\dots,z_n,z_n$ and so the characteristic polynomial is given as $\det(I+xM)= \prod_{j=1}^n (1+x z_j)^2$. The corresponding p.d.f. for $z_1,\dots,z_n$ is proportional to $\Delta^{{\mathrm{Jack}}}(\bz;1/2)=\prod_{1 \le i<j \le n} |z_i-z_j|^4$. We have $$\begin{aligned}
&\left\langle \prod_{i=1}^L \det(I+\eta_i^{-1} M^{-1})^{1/2} \cdot \prod_{i=1}^K
\det(I+\eta_{L+i} M)^{1/2} \right\rangle_{S(n)} \\
=& \left\langle \prod_{i=1}^L \Psi^{{\mathrm{A}}}(\bz^{-1};\eta_i^{-1}) \cdot \prod_{i=1}^K
\Psi^{{\mathrm{A}}}(\bz;\eta_{L+i}) \right\rangle_{n,4}
= \prod_{i=1}^L \eta_i^{-n} \cdot P_{(n^L)}^{{\mathrm{Jack}}} (\eta_1,\dots,\eta_{L+K};2).\end{aligned}$$ For $\xi \in {\mathbb{T}}$, we obtain $$\left\langle |\det(I+ \xi M)|^{2m} \right\rangle_{S(n)}
= \prod_{j=0}^{2m-1} \frac{\Gamma(\frac{j+1}{2}) \Gamma(n+m+\frac{j+1}{2})}
{\Gamma(m+\frac{j+1}{2}) \Gamma(n+\frac{j+1}{2})}
\sim
\frac{2^m}{(2m-1)!! \ \prod_{j=1}^{2m-1}(2j-1)!!} \cdot n^{2m^2}.$$
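The CSE moment formula above admits the same kind of informal numerical check as in the CUE case (the helper names below are ours): for $n=2$ the eigenvalue density is proportional to $|z_1-z_2|^4$, and $|\det(I+M)|^{2m}=\prod_i |1+z_i|^{4m}$ because each eigenvalue is doubled.

```python
import numpy as np
from math import gamma

def cse_moment_formula(n, m):
    # product_{j=0}^{2m-1} Gamma((j+1)/2) Gamma(n+m+(j+1)/2)
    #                      / (Gamma(m+(j+1)/2) Gamma(n+(j+1)/2))
    val = 1.0
    for j in range(2 * m):
        a = (j + 1) / 2.0
        val *= gamma(a) * gamma(n + m + a) / (gamma(m + a) * gamma(n + a))
    return val

def cse_moment_quadrature(m, N=64):
    # n = 2: density prop. to |z1 - z2|^4; |det(I+M)|^{2m} = prod_i |1+z_i|^{4m}
    t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    t1, t2 = np.meshgrid(t, t)
    weight = (2.0 - 2.0 * np.cos(t1 - t2)) ** 2
    f = ((2.0 + 2.0 * np.cos(t1)) * (2.0 + 2.0 * np.cos(t2))) ** (2 * m)
    return (f * weight).sum() / weight.sum()

print(cse_moment_formula(2, 1), cse_moment_quadrature(1))  # both 15
```

Again the integrand is a trigonometric polynomial, so the equispaced sum is exact up to rounding.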
$SO(2n+1)$ – type B {#subsectionB}
-------------------
Consider the special orthogonal group $SO(2n+1)$. An element $M$ in $SO(2n+1)$ is an orthogonal matrix in $SL(2n+1,{\mathbb{R}})$, with eigenvalues given by $z_1,z_1^{-1},\cdots, z_n,z_n^{-1},1$. From Weyl’s integral formula, the corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;1,0,1)$, and therefore it follows from Theorem \[Thm:MainTheorem\] that $$\left\langle \prod_{i=1}^m \det(I+x_i M) \right\rangle_{SO(2n+1)}
= \prod_{i=1}^m (1+x_i) \cdot
\left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{1,0,1}
= \prod_{i=1}^m x_i^n (1+x_i) \cdot
P_{(n^m)}^{{\mathrm{HO}}}(x_1,\dots,x_m;1,0,1).$$ Here $P_\lambda^{{\mathrm{HO}}}(x_1,\dots,x_m;1,0,1)$ is just the irreducible character of $SO(2m+1)$ associated with the partition $\lambda$. Theorem \[Thm:Main2\], Corollary \[cor:Main\], and a simple calculation lead to $$\left\langle \det(I+ M)^m \right\rangle_{SO(2n+1)}
= 2^{m} \prod_{j=0}^{m-1}
\frac{ \Gamma(2n+ 2j+2 ) }
{2^{j} (2j+1)!! \ \Gamma(2n+ j+1)}
\sim \frac{2^{2m}}{\prod_{j=1}^{m} (2j-1)!!} n^{m^2/2+m/2}$$ in the limit as $n \to \infty$.
$Sp(2n)$ – type C {#subsectionC}
-----------------
Consider the symplectic group $Sp(2n)$, i.e., a matrix $M \in Sp(2n)$ belongs to $U(2n)$ and satisfies $M J \trans{M}=J$, where $J=\(\begin{smallmatrix} O_n & I_n \\ -I_n & O_n \end{smallmatrix} \)$. The eigenvalues are given by $z_1,z_1^{-1},\cdots, z_n,z_n^{-1}$. The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;0,1,1)$ and therefore we have $$\left\langle \prod_{i=1}^m \det(I+x_i M) \right\rangle_{Sp(2n)}
=
\left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{0,1,1}
= \prod_{i=1}^m x_i^n \cdot
P_{(n^m)}^{{\mathrm{HO}}}(x_1,\dots,x_m;0,1,1).$$ Here $P_\lambda^{{\mathrm{HO}}}(x_1,\dots,x_m;0,1,1)$ is just the irreducible character of $Sp(2m)$ associated with the partition $\lambda$. We obtain $$\left\langle \det(I+ M)^m \right\rangle_{Sp(2n)}
= \prod_{j=0}^{m-1}
\frac{\Gamma(2n+2j+3) }{2^{j+1} \cdot (2j+1)!!
\ \Gamma(2n+j+2)}
\sim \frac{1}{ \prod_{j=1}^{m} (2j-1)!!} \cdot n^{m^2/2+m/2}.$$
$SO(2n)$ – type D {#subsectionD}
-----------------
Consider the special orthogonal group $SO(2n)$. The eigenvalues of a matrix $M \in SO(2n)$ are of the form $z_1,z_1^{-1},\cdots, z_n,z_n^{-1}$. The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;0,0,1)$, and therefore we have $$\left\langle \prod_{i=1}^m \det(I+x_i M) \right\rangle_{SO(2n)}
=
\left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{0,0,1}
= \prod_{i=1}^m x_i^n \cdot
P_{(n^m)}^{{\mathrm{HO}}}(x_1,\dots,x_m;0,0,1).$$ Here $P_\lambda^{{\mathrm{HO}}}(x_1,\dots,x_m;0,0,1)$ is just the irreducible character of $O(2m)$ (not $SO(2m)$) associated with the partition $\lambda$. We have $$\left\langle \det(I+ M)^m \right\rangle_{SO(2n)}
= \prod_{j=0}^{m-1} \frac{\Gamma(2n+2j)}{2^{j-1} \, (2j-1)!! \ \Gamma(2n+j)}
\sim \frac{2^m}{\prod_{j=1}^{m-1} (2j-1)!!} \cdot n^{m^2/2-m/2}.$$
$U(2n+r)/(U(n+r)\times U(n))$ – type A III
------------------------------------------
Let $r$ be a non-negative integer. Consider the random matrix ensemble $G(n,r)$ associated with $U(2n+r)/(U(n+r)\times U(n))$. The explicit expression of a matrix in $G(n,r)$ is omitted here, but may be found in [@Duenez]. The eigenvalues of a matrix $M \in G(n,r) \subset U(2n+r)$ are of the form $$\label{eq:Eigenvalues}
z_1,z_1^{-1},\cdots, z_n,z_n^{-1},\underbrace{1,1,\dots, 1}_r.$$ The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;r,\frac{1}{2},1)$, and therefore we have $$\left\langle \prod_{i=1}^m \det(I+x_i M) \right\rangle_{G(n,r)}
= \prod_{i=1}^m (1+x_i)^r \cdot
\left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{r,\frac{1}{2},1}
= \prod_{i=1}^m (1+x_i)^r x_i^n \cdot
P^{{\mathrm{HO}}}_{(n^m)}(x_1,\dots,x_m;r,\frac{1}{2},1).$$ We obtain $$\begin{aligned}
& \left\langle \det(I+ M)^m \right\rangle_{G(n,r)}
= 2^{mr} \left\langle \Psi^{{\mathrm{BC}}}(\bz;1) \right\rangle_{n}^{r,\frac{1}{2},1} \\
=& \frac{\pi^{m/2}}{\prod_{j=0}^{m-1} 2^{j} (r+j)! }
\prod_{j=0}^{m-1} \frac{\Gamma(n+r+j+1)^2}
{\Gamma(n+\frac{r+j+1}{2}) \Gamma(n+\frac{r+j}{2}+1)}
\sim \frac{\pi^{m/2}}{2^{m(m-1)/2} \prod_{j=0}^{m-1} (r+j)! }
\cdot n^{m^2/2 + rm}.\end{aligned}$$
$O(2n+r)/(O(n+r) \times O(n))$ – type BD I
------------------------------------------
Let $r$ be a non-negative integer. Consider the random matrix ensemble $G(n,r)$ associated with the compact symmetric space $O(2n+r)/(O(n+r) \times O(n))$. The eigenvalues of a matrix $M \in G(n,r) \subset O(2n+r)$ are of the form . The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;\frac{r}{2},0,\frac{1}{2})$, and therefore we have $$\left\langle \prod_{i=1}^m \det(I+x_i M) \right\rangle_{G(n,r)}
= \prod_{i=1}^m (1+x_i)^r \cdot
\left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{\frac{r}{2},0,\frac{1}{2}}
= \prod_{i=1}^m (1+x_i)^r x_i^n \cdot
P_{(n^m)}^{{\mathrm{HO}}}(x_1,\dots,x_m;r,1,2).$$ We obtain $$\left\langle \det(I+ M)^m \right\rangle_{G(n,r)}
= 2^{mr}
\prod_{j=0}^{m-1} \frac{\Gamma(2n+4j+2r+3)}{2^{2j+r+1}(4j+2r+1)!! \
\Gamma(2n+2j+r+2)}
\sim \frac{2^{mr}}{\prod_{j=0}^{m-1}(4j+2r+1)!!} \cdot n^{m^2+rm}.$$
$Sp(2n)/U(n)$ – type C I
------------------------
Consider the random matrix ensemble $S(n)$ associated with the compact symmetric space $Sp(2n) /(Sp(2n)\cap SO(2n)) \simeq Sp(2n)/U(n)$. The eigenvalues of a matrix $M \in S(n) \subset Sp(2n)$ are of the form $z_1,z_1^{-1},\cdots, z_n,z_n^{-1}$. The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;0,\frac{1}{2},\frac{1}{2})$, and therefore we have $$\left\langle \prod_{i=1}^m \det(I+x_i M) \right\rangle_{S(n)}
=
\left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{0,\frac{1}{2},\frac{1}{2}}
= \prod_{i=1}^m x_i^n \cdot
P_{(n^m)}^{{\mathrm{HO}}}(x_1,\dots,x_m;0,2,2).$$ We obtain $$\left\langle \det(I+ M)^m \right\rangle_{S(n)}
=
\prod_{j=0}^{m-1} \frac{(n+2j+3) \Gamma(2n+4j+5)}{2^{2j+2}(4j+3)!! \
\Gamma(2n+2j+4)}
\sim \frac{1}{2^m \prod_{j=1}^m (4j-1)!!} \cdot n^{m^2+m}.$$
$Sp(4n+2r)/(Sp(2n+2r) \times Sp(2n))$ – type C II
-------------------------------------------------
Let $r$ be a non-negative integer. Consider the random matrix ensemble $G(n,r)$ associated with the compact symmetric space $Sp(4n+2r)/(Sp(2n+2r) \times Sp(2n))$. The eigenvalues of a matrix $M \in G(n,r) \subset Sp(4n+2r)$ are of the form $$z_1,z_1,z_1^{-1},z_1^{-1},\cdots, z_n,z_n, z_n^{-1},z_n^{-1}, \underbrace{1,\dots,1}_{2r}.$$ The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;2r,\frac{3}{2},2)$, and therefore we have $$\begin{aligned}
\left\langle \prod_{i=1}^m \det(I+x_i M)^{1/2} \right\rangle_{G(n,r)}
=&\prod_{i=1}^m (1+x_i)^r
\left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{2r,\frac{3}{2},2} \\
=& \prod_{i=1}^m (1+x_i)^rx_i^n \cdot
P^{{\mathrm{HO}}}_{(n^m)}(x_1,\dots,x_m;r,\frac{1}{4},\frac{1}{2}).\end{aligned}$$ We obtain $$\begin{aligned}
\left\langle \det(I+ M)^m \right\rangle_{G(n,r)}
=&
\frac{2^{4mr+m^2+m}}{\prod_{j=0}^{m-1} (4j+4r+1)!!} \cdot
\frac{\prod_{p=1}^{4m} \Gamma(n+r+\frac{p+1}{4})}{\prod_{j=1}^{2m}
\Gamma(n+\frac{r}{2}+\frac{j}{4}) \Gamma(n+\frac{r+1}{2}+\frac{j}{4})} \\
\sim &
\frac{2^{4mr+m^2+m}}{\prod_{j=0}^{m-1} (4j+4r+1)!!} n^{m^2+2mr}.\end{aligned}$$
$SO(4n+2)/U(2n+1)$ – type D III-odd
-----------------------------------
Consider the random matrix ensemble $S(n)$ associated with the compact symmetric space $SO(4n+2)/(SO(4n+2) \cap Sp(4n+2)) \simeq SO(4n+2)/U(2n+1)$. The eigenvalues of a matrix $M \in S(n) \subset SO(4n+2)$ are of the form $z_1,z_1,z_1^{-1},z_1^{-1},\cdots, z_n,z_n, z_n^{-1},z_n^{-1}, 1,1$. The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;2,\frac{1}{2},2)$ and therefore we have $$\begin{aligned}
\left\langle \prod_{i=1}^m \det(I+x_i M)^{1/2} \right\rangle_{S(n)}
=&\prod_{i=1}^m (1+x_i)
\left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{2,\frac{1}{2},2} \\
=& \prod_{i=1}^m (1+x_i) x_i^n \cdot
P^{{\mathrm{HO}}}_{(n^m)}(x_1,\dots,x_m;1,-\frac{1}{4},\frac{1}{2}).\end{aligned}$$ We obtain $$\left\langle \det(I+ M)^m \right\rangle_{S(n)}
=
\frac{2^{m^2+5m}}{\prod_{j=1}^{m} (4j-1)!!} \cdot
\prod_{j=1}^{2m} \frac{\Gamma(n+\frac{j}{2}+\frac{3}{4}) \Gamma(n+\frac{j}{2})}
{\Gamma(n+\frac{j}{4}) \Gamma(n+\frac{j}{4} +\frac{1}{2})}
\sim
\frac{2^{m^2+5m}}{\prod_{j=1}^{m} (4j-1)!!} \cdot n^{m^2+m}.$$
$SO(4n)/U(2n)$ – type D III-even
--------------------------------
Consider the random matrix ensembles $S(n)$ associated with the compact symmetric space $SO(4n)/(SO(4n) \cap Sp(4n)) \simeq SO(4n)/U(2n)$. The eigenvalues of the matrix $M \in S(n) \subset SO(4n)$ are of the form $$z_1,z_1,z_1^{-1},z_1^{-1},\cdots, z_n,z_n, z_n^{-1},z_n^{-1}.$$ The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;0,\frac{1}{2},2)$ and therefore we have $$\left\langle \prod_{i=1}^m \det(I+x_i M)^{1/2} \right\rangle_{S(n)}
=
\left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{0,\frac{1}{2},2}
= \prod_{i=1}^m x_i^n \cdot P_{(n^m)}^{{\mathrm{HO}}}(x_1,\dots,x_m;0,-\frac{1}{4},\frac{1}{2}).$$ Hence we obtain $$\left\langle \det(I+ M)^m \right\rangle_{S(n)}
=
\frac{2^{m^2+m}}{\prod_{j=1}^{m-1} (4j-1)!!} \cdot
\prod_{j=0}^{2m-1} \frac{\Gamma(n+\frac{j}{2}+\frac{1}{4}) \Gamma(n+\frac{j-1}{2})}
{\Gamma(n+\frac{j-1}{4}) \Gamma(n+\frac{j+1}{4})}
\sim
\frac{2^{m^2+m}}{\prod_{j=1}^{m-1} (4j-1)!!} \cdot n^{m^2-m}.$$
Final comments
==============
We have calculated averages of products of characteristic polynomials $\langle \prod_{j=1}^m \det(I+x_j M) \rangle$. It would also be desirable to calculate the average of the quotient $$\left\langle \frac{\prod_{j=1}^m \det(I+x_j M)}{\prod_{i=1}^l \det(I+y_i M)}
\right\rangle_n^{k_1,k_2,k_3}.$$ Expressions for these quotients have been obtained for the classical groups (i.e., $(k_1,k_2,k_3)=(1,0,1), (0,1,1), (0,0,1)$ in our notation) in [@BG], but the derivation of expressions for other cases remains an open problem.
<span style="font-variant:small-caps;">Acknowledgements.</span> The author would like to thank Professor Masato Wakayama for bringing to the author’s attention the paper [@Mimachi].
G. E. Andrews, R. Askey, and R. Roy, “Special Functions”, Encyclopedia Math. Appl. 71, Cambridge Univ. Press, Cambridge, 1999.
D. Bump and P. Diaconis, Toeplitz minors, J. Combin. Theory Ser. A [**97**]{} (2002), 252–271.
D. Bump and A. Gamburd, On the averages of characteristic polynomials from classical groups, Comm. Math. Phys. [**265**]{} (2006), 227–274.
M. Caselle and U. Magnea, Random matrix theory and symmetric spaces, Physics Reports [**394**]{} (2004), 41–156.
J. F. van Diejen, Properties of some families of hypergeometric orthogonal polynomials in several variables, Trans. Amer. Math. Soc. [**351**]{} (1999), 233–270.
E. Dueñez, Random matrix ensembles associated to compact symmetric spaces, Comm. Math. Phys. [**244**]{} (2004), 29–61.
F. J. Dyson, Statistical theory of the energy levels of complex systems. I, J. Mathematical Phys. [**3**]{} (1962), 140–156.
P. J. Forrester and J. P. Keating, Singularity dominated strong fluctuations for some random matrix averages, Comm. Math. Phys. [**250**]{} (2004), 119–131.
G. Heckman and H. Schlichtkrull, “Harmonic Analysis and Special Functions on Symmetric spaces”, Perspect. Math. [**16**]{}, Academic Press, San Diego, 1994.
K. Johansson, On Szegö’s asymptotic formula for Toeplitz determinants and generalizations, Bull. Sci. Math. (2) [**112**]{} (1988), 257–304.
—–, On fluctuations of eigenvalues of random Hermitian matrices, Duke Math. J. [**91**]{} (1998), 151–204.
J. Kaneko, Selberg integrals and hypergeometric functions associated with Jack polynomials, SIAM J. Math. Anal. [**24**]{} (1993), 1086–1110.
—–, $q$-Selberg integrals and Macdonald polynomials, Ann. scient. Éc. Norm. Sup. $4^{\text{e}}$ série [**29**]{} (1996), 583–637.
J. P. Keating and N. C. Snaith, Random matrix theory and $\zeta(1/2+it)$, Comm. Math. Phys. [**214**]{} (2000), 57–89.
—–, Random matrix theory and $L$-functions at $s=1/2$, Comm. Math. Phys. [**214**]{} (2000), 91–110.
I. G. Macdonald, “Symmetric Functions and Hall Polynomials”, 2nd ed., Oxford University Press, Oxford, 1995.
S. Matsumoto, Hyperdeterminant expressions for Jack functions of rectangular shapes, arXiv:math/0603033.
M. L. Mehta, “Random Matrices”, 3rd ed., Pure and Applied Mathematics (Amsterdam) 142, Elsevier/Academic Press, Amsterdam, 2004.
K. Mimachi, A duality of Macdonald-Koornwinder polynomials and its application to integral representations, Duke Math. J. [**107**]{} (2001), 265–281.
<span style="font-variant:small-caps;">Sho MATSUMOTO</span>\
Faculty of Mathematics, Kyushu University.\
Hakozaki Higashi-ku, Fukuoka, 812-8581 JAPAN.\
`shom@math.kyushu-u.ac.jp`\
[^1]: Research Fellow of the Japan Society for the Promotion of Science, partially supported by Grant-in-Aid for Scientific Research (C) No. 17006193.
[^2]: The connection between our notation and van Diejen’s [@Diejen] is given by $\nu_0 = k_1+k_2, \ \nu_1=k_2, \ \nu=k_3$.
---
abstract: 'This paper introduces a novel feature detector based only on information embedded inside a CNN trained on standard tasks (e.g. classification). While previous works already show that the features of a trained CNN are suitable descriptors, we show here how to extract the feature locations from the network to build a detector. This information is computed from the gradient of the feature map with respect to the input image. This provides a saliency map with local maxima on relevant keypoint locations. Contrary to recent CNN-based detectors, this method requires neither supervised training nor finetuning. We evaluate how repeatable and how ‘matchable’ the detected keypoints are with the repeatability and matching scores. Matchability is measured with a simple descriptor introduced for the sake of the evaluation. This novel detector reaches similar performances on the standard evaluation HPatches dataset, as well as comparable robustness against illumination and viewpoint changes on Webcam and photo-tourism images. These results show that a CNN trained on a standard task embeds feature location information that is as relevant as when the CNN is specifically trained for feature detection.'
author:
- Assia Benbihi
- Matthieu Geist
- 'C[é]{}dric Pradalier'
bibliography:
- '../egbib.bib'
title: 'ELF: Embedded Localisation of Features in pre-trained CNN'
---
Introduction
============
![(1-6) Embedded Detector: Given a CNN trained on a standard vision task (classification), we backpropagate the feature map back to the image space to compute a saliency map. It is thresholded to keep only the most informative signal and keypoints are the local maxima. (7-8): simple-descriptor.[]{data-label="fig:pipeline"}](img.png){width="\linewidth"}
Feature extraction, description and matching form a recurrent pipeline in vision tasks such as Structure from Motion (SfM), visual SLAM, scene recognition and image retrieval. The extraction consists of detecting image keypoints, then the matching pairs the nearest keypoints based on their descriptor distance. Even though hand-crafted solutions, such as SIFT [@lowe2004distinctive], prove to be successful, recent breakthroughs on local feature detection and description rely on supervised deep-learning methods [@detone18superpoint; @ono2018lf; @yi2016lift]. They detect keypoints on saliency maps learned by a Convolutional Neural Network (CNN), then compute descriptors using another CNN or a separate branch of it. They all require strong supervision and complex training procedures: [@yi2016lift] requires ground-truth matching keypoints to initiate the training, [@ono2018lf] needs the ground-truth camera pose and depth maps of the images, and [@detone18superpoint] circumvents the need for ground-truth data by using synthetic data but requires a heavy domain adaptation to transfer the training to realistic images. All these methods require a significant learning effort. In this paper, we show that a trained network already embeds enough information to build a State-of-the-Art (SoA) detector and descriptor.
The proposed method for local feature detection needs only a CNN trained on a standard task, such as ImageNet [@deng2009imagenet] classification, and no further training. The detector, dubbed ELF, relies on the features learned by such a CNN and extracts their locations from the feature map gradients. Previous work already highlights that trained CNN features are relevant descriptors [@fischer2014descriptor], and recent works [@balntas2016learning; @han2015matchnet; @simo2015discriminative] specifically train CNNs to produce features suitable for keypoint description. However, no existing approach uses a pre-trained CNN for feature detection.
ELF computes the gradient of a trained CNN feature map with respect to (*w.r.t.*) the image: this outputs a saliency map with local maxima on keypoint positions. Trained detectors learn this saliency map with a CNN, whereas we extract it with gradient computations. This approach is inspired by [@simonyan2013deep], which observes that the gradient of classification scores *w.r.t.* the image is similar to the image saliency map. ELF differs in that it takes the gradient of feature maps and not of the classification score, contrary to existing work exploiting CNN gradients [@selvaraju2017grad; @smilkov2017smoothgrad; @springenberg2015striving; @sundararajan2017axiomatic]. These previous works aim at visualising the learning signal for classification specifically, whereas ELF extracts the feature locations. The extracted saliency map is then thresholded to keep only the most relevant locations, and standard Non-Maxima Suppression (NMS) extracts the final keypoints (Figure \[fig:heatmap\_coco\]).
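The gradient computation at the heart of this idea can be illustrated on a toy one-layer "network". The sketch below is only a minimal stand-in (ELF itself uses a deep CNN and automatic differentiation; all names and values here are ours): it takes $f(I)=\sum \mathrm{ReLU}(I \star K)$ for a convolution kernel $K$, backpropagates the ReLU mask through the convolution to get $\partial f/\partial I$ in closed form, and checks it against a finite difference. The magnitude of this gradient plays the role of the saliency map.

```python
import numpy as np
from scipy.signal import correlate2d, convolve2d

rng = np.random.default_rng(0)
img = rng.standard_normal((12, 12))
ker = rng.standard_normal((3, 3))

def feature_sum(image):
    # toy "feature map": one correlation followed by ReLU, summed to a scalar
    return np.maximum(correlate2d(image, ker, mode="valid"), 0.0).sum()

# closed-form gradient of feature_sum w.r.t. the image:
# the ReLU mask, convolved (full mode) with the kernel, lands back in image space
fmap = correlate2d(img, ker, mode="valid")
mask = (fmap > 0).astype(float)
grad = convolve2d(mask, ker, mode="full")   # same shape as img
saliency = np.abs(grad)                     # saliency-map analogue

# finite-difference check at one pixel
eps = 1e-6
probe = img.copy()
probe[5, 7] += eps
fd = (feature_sum(probe) - feature_sum(img)) / eps
print(abs(fd - grad[5, 7]))  # close to zero
```

The closed-form backward pass works here because the toy map is piecewise linear; a deep network simply chains such steps.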
![ Saliency maps thresholding to keep only the most informative location. Top: original image. (Left-Right: Webcam [@verdie2015tilde], HPatches [@balntas2017hpatches], COCO[@lin2014microsoft]) Middle: blurred saliency maps. Bottom: saliency map after threshold. (Better seen on a computer.) []{data-label="fig:heatmap_coco"}](fig3_heatmap.png){width="\linewidth"}
ELF relies on only six parameters: $2\times2$ Gaussian blur parameters, for the automatic threshold level estimation and for the saliency map denoising; and two parameters for the NMS window and the border to ignore. Detection requires only one forward pass and one backward pass and takes $\sim$0.2s per image on a simple Quadro M2200, which makes it suitable for real-time applications.
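The post-processing just described (denoising blur, thresholding, NMS with a window and an ignored border) can be sketched as follows. This is an illustrative reimplementation, not ELF's actual code: in particular, the quantile threshold below is a simple stand-in for the paper's automatic threshold estimation, and all parameter values are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def extract_keypoints(saliency, blur_sigma=1.5, keep_ratio=0.1,
                      nms_window=5, border=4):
    """Blur, threshold and NMS a saliency map; returns (row, col) keypoints.
    Parameter values are illustrative only."""
    s = gaussian_filter(saliency, blur_sigma)
    # crude automatic threshold: keep only the top `keep_ratio` of responses
    thresh = np.quantile(s, 1.0 - keep_ratio)
    s = np.where(s >= thresh, s, 0.0)
    # non-maxima suppression: a pixel survives if it is the max of its window
    local_max = (s == maximum_filter(s, size=nms_window)) & (s > 0)
    local_max[:border, :] = False
    local_max[-border:, :] = False
    local_max[:, :border] = False
    local_max[:, -border:] = False
    return np.argwhere(local_max)

# two synthetic Gaussian blobs -> two keypoints at the blob centres
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cy, cx: np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)
kps = extract_keypoints(blob(20, 20) + blob(45, 40))
print(kps)  # rows (20, 20) and (45, 40)
```

On a real saliency map the same three steps apply unchanged; only the blur and threshold settings need tuning.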
ELF is compared to individual detectors with the standard *repeatability* [@mikolajczyk2005comparison], but results show that this metric is not discriminative enough. Most of the existing detectors can extract keypoints repeated across images with similar repeatability scores. Also, this metric does not express how ‘useful’ the detected keypoints are: if we sample all pixels as keypoints, we reach 100% repeatability, but the matching may not be perfect if many areas look alike. Therefore, the detected keypoints are also evaluated on how ‘matchable’ they are with the *matching score* [@mikolajczyk2005comparison]. This metric requires describing the keypoints, so we define a simple descriptor: it is based on the interpolation of a CNN feature map on the detected keypoints, as in [@detone18superpoint]. This avoids biasing the performance by choosing an existing competitive descriptor. Experiments show that even this simple descriptor reaches competitive results, which supports the observation of [@fischer2014descriptor] on the relevance of CNN features as descriptors. More details are provided in Section 4.1.
ELF is tested on five architectures: three classification networks trained on ImageNet classification (AlexNet, VGG and Xception [@krizhevsky2012imagenet; @simonyan2014very; @chollet17xception]), as well as the SuperPoint [@detone18superpoint] and LF-Net [@ono2018lf] descriptor networks. Although outside the scope of this paper, this comparison provides preliminary results on the influence of the network architecture, task and training data on ELF’s performance. Metrics are computed on HPatches [@balntas2017hpatches] for generic performance. We derive two auxiliary datasets from HPatches to study scale and rotation robustness. Light and 3D viewpoint robustness analyses are run on the Strecha and Webcam datasets [@strecha2008benchmarking; @verdie2015tilde]. These extensive experiments show that ELF is on par with other sparse detectors, which suggests that the feature representation and location information learnt by a CNN to complete a vision task is as relevant as when the CNN is specifically trained for feature detection. We additionally test ELF’s robustness on 3D reconstruction from images in the context of the CVPR 2019 Image Matching challenge [@cvpr19challenge]. Once again, ELF is on par with other sparse methods, even though denser methods, e.g. [@detone18superpoint], are more appropriate for such a task. Our contributions are the following:
- We show that a CNN trained on a standard vision task embeds feature location in the feature gradients. This information is as relevant for feature detection as when a CNN is specifically trained for it.
- We define a systematic method for local feature detection. Extensive experiments show that ELF is on par with other SoA deep trained detectors. They also update the previous result from [@fischer2014descriptor]: self-taught CNN features provide SoA descriptors in spite of recent improvements in CNN descriptors [@choy2016universal].
- We release the python-based evaluation code to ease future comparison together with ELF code[^1]. The introduced robustness datasets are also made public [^2].
Related work
============
Early methods rely on hand-crafted detection and description: SIFT [@lowe2004distinctive] detects 3D spatial-scale keypoints on differences of Gaussians and describes them with a 3D Histogram Of Gradients (HOG). SURF [@bay2006surf] uses image integrals to speed up the previous detection and a sum of Haar wavelet responses for description. KAZE [@alcantarilla2012kaze] extends the previous multi-scale approach by detecting features in non-linear scale spaces instead of the classic Gaussian ones. ORB [@rublee2011orb] combines FAST [@rosten2006machine] detection with BRIEF [@calonder2010brief] description, and improves them to make the pipeline scale and rotation invariant. The MSER-based detector hand-crafts desired invariance properties for keypoints and designs a fast algorithm to detect them [@matas2004robust]. Even though these hand-crafted methods have proven successful and reach state-of-the-art performance in some applications, recent research focuses on learning-based methods.
One of the first learned detectors is TILDE [@verdie2015tilde], trained under drastic changes of light and weather on the Webcam dataset. It uses supervision to learn saliency maps whose maxima are keypoint locations. Ground-truth saliency maps are generated with ‘good keypoints’: SIFT keypoints, filtered to keep only those repeated in more than 100 images. One drawback of this method is the need for supervision that relies on another detector. However, there is no universal explicit definition of what a good keypoint is. This lack of specification inspired Quad-Networks [@savinov2017quad] to adopt an unsupervised approach: they train a neural network to rank keypoints according to their robustness to random hand-crafted transformations, and keep the top/bottom quantile of the ranking as keypoints. ELF is similar in that it does not require supervision, but differs in that it does not need to further train the CNN.
Other learned detectors are trained within full detection/description pipelines such as LIFT [@yi2016lift], SuperPoint [@detone18superpoint] and LF-Net [@ono2018lf]. LIFT’s contribution lies in its original training method of three CNNs. The detector CNN learns a saliency map where the most salient points are keypoints. Patches are then cropped around these keypoints, and their orientations and descriptors are computed with two other CNNs. They first train the descriptor on patches around ground-truth matching points with a contrastive loss, then the orientation CNN together with the descriptor, and finally the detector. One drawback of this method is the need for ground-truth matching keypoints to initiate the training. In [@detone18superpoint], the problem is avoided by pre-training the detector on a synthetic geometric dataset made of polygons on which they detect mostly corners. The detector is then finetuned during the descriptor training on image pairs from COCO [@lin2014microsoft] with synthetic homographies and the correspondence contrastive loss introduced in [@choy2016universal]. LF-Net relies on another type of supervision: it uses ground-truth camera poses and image depth maps, which are easier to compute with laser or standard SfM than ground-truth matching keypoints. Its training pipeline builds over LIFT and employs the projective camera model to project detected keypoints from one image to the other. These keypoint pairs form the ground-truth matching points to train the network. ELF differs in that the CNN model is already trained on a standard task. It then extracts the relevant information embedded inside the network for local feature detection, which requires neither training nor supervision.
The detection method of this paper is mainly inspired by the initial observation in [@simonyan2013deep]: given a CNN trained for classification, the gradient of a class score *w.r.t* the image is the saliency map of the class object in the input image. A line of work aims at visualizing the CNN representation by inverting it into the image space through optimization [@mahendran2015understanding; @gatys2016image]. Our work differs in that we backpropagate the feature map itself and not a feature loss. Subsequent works use these saliency maps to better understand the CNN training process and justify the CNN outputs. Efforts mostly focus on the gradient definitions [@smilkov2017smoothgrad; @springenberg2015striving; @sundararajan2017axiomatic; @zeiler2014visualizing]; they differ in the way they backpropagate through non-linear units such as ReLU. Grad-CAM [@selvaraju2017grad] introduces a variant that fuses several gradients of the classification score *w.r.t* feature maps and not the image space. Instead, ELF computes the gradient of the feature map, and not of a classification score, *w.r.t* the image. Also, we run simple backpropagation, which differs in the non-linearity handling: all the signal is backpropagated regardless of whether the feature maps or the gradients are positive. Finally, to the best of our knowledge, this is the first work to exploit the localisation information present in these gradients for feature detection.
The simple descriptor introduced for the sake of the matchability evaluation is taken from UCN [@choy2016universal]. Given a feature map and the keypoints to describe, it interpolates the feature map at the keypoint locations. Using a trained CNN for feature description is one of the early applications of CNNs [@fischer2014descriptor]. Later research has focused on specifically training the CNN to generate features suitable for keypoint matching, either with patch-based approaches, among which [@simo2015discriminative; @melekhov2016siamese; @han2015matchnet; @zagoruyko2015learning], or image-based approaches [@taira2018inloc; @choy2016universal]. We choose the description method from UCN [@choy2016universal], also used by SuperPoint, for its complexity is only $O(1)$ compared to patch-based approaches that are $O(N)$, with $N$ the number of keypoints. We favor UCN over InLoc [@taira2018inloc] as it is simpler to compute. The motivation here is only to get a simple descriptor that is easy to integrate with all detectors for a fair comparison of the *detector* matching performance. So we overlook the description performance.
Method
======
This section defines ELF, a detection method valid for any trained CNN. Keypoints are local maxima of a saliency map computed as the feature gradient *w.r.t* the image. We use the data-adaptive Kapur method [@kapur1985new] to automatically threshold the saliency map and keep only the most salient locations, then run NMS for local maxima detection.
![(Bigger version Figure \[fig:big\_saliency\_coco\].) Saliency maps computed from the feature map gradient $\left| {}^tF^l(\mathbf{I}) \cdot \frac{\partial F^l}{\partial \mathbf{I}} \right|$. Image contrast is enhanced for better visualisation. Top row: gradients of VGG $pool_2$ and $pool_3$ show a loss of resolution from $pool_2$ to $pool_3$. Bottom: $(pool_i)_{i \in [1,2,5]}$ of VGG on Webcam, HPatches and COCO images. Low-level saliency maps activate accurately whereas higher-level ones are blurred.[]{data-label="fig:saliency_coco"}](fig2_saliency_bis.png){width="\linewidth"}
Feature Specific Saliency
-------------------------
We generate a saliency map that activates on the most informative image regions for a specific CNN feature level $l$. Let $\mathbf{I}$ be a vectorized image of dimension $D_I = H_I \cdot W_I \cdot C_I$. Let $F^l$ be a vectorized feature map of dimension $D_F= H_l \cdot W_l \cdot C_l$. The saliency map $S^l$, of dimension $D_I$, is $S^l(\mathbf{I})=\left| {}^tF^l(\mathbf{I}) \cdot \nabla_I F^l \right|$, with $\nabla_I F^l$ a $D_F \times D_I$ matrix.
The saliency activates on the image regions that contribute the most to the feature representation $F^l(\mathbf{I})$. The term $\nabla_I F^l$ makes explicit the correlation between the feature space of $F^l$ and the image space in general. The multiplication by $F^l(\mathbf{I})$ applies the correlation to the features $F^l(\mathbf{I})$ specifically and generates a visualisation in image space, $S^l(\mathbf{I})$. From a geometrical point of view, this operation can be seen as the projection of a feature signal $F^l(\mathbf{I})$ into the image space through $\nabla_I F^l$. From a signal-processing point of view, $F^l(\mathbf{I})$ is an input signal filtered through $\nabla_I F^l$ into the image space. If $C_I>1$, $S^l$ is converted into a grayscale image by averaging it across channels.
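Concretely, $S^l$ is a vector-Jacobian product, which autograd frameworks provide directly (e.g. backpropagating $F^l$ weighted by itself). Below is a minimal NumPy sketch on a toy one-layer ‘network’ (linear + ReLU standing in for $F^l$); the layer sizes and weights are illustrative, not those used in the paper:

```python
import numpy as np

def feature_map(I, W):
    """Toy stand-in for a CNN feature map F^l: one linear layer + ReLU."""
    return np.maximum(W @ I, 0.0)

def saliency(I, W):
    """S(I) = | F(I)^t . dF/dI |, a vector-Jacobian product.
    For F = relu(W I), the Jacobian is diag(1[W I > 0]) W, so the VJP of
    F with itself is (F * 1[F > 0]) @ W, which equals F @ W since F >= 0."""
    F = feature_map(I, W)
    return np.abs(F @ W)          # shape (D_I,): one value per image pixel

rng = np.random.default_rng(0)
D_I, D_F = 16, 8                  # flattened image / feature dimensions
W = rng.standard_normal((D_F, D_I))
I = rng.standard_normal(D_I)
S = saliency(I, W)
```

A convenient numerical check: this VJP equals the gradient of $\frac{1}{2}\|F^l(\mathbf{I})\|^2$ *w.r.t* $\mathbf{I}$, so it can be verified against finite differences.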
Feature Map Selection
---------------------
We provide visual guidelines to choose the feature level $l$ so that $F^l$ still holds high resolution localisation information while providing a useful high-level representation.
CNN operations such as convolution and pooling increase the receptive field of feature maps while reducing their spatial dimensions. This means that $F^{l}$ has less spatial resolution than $F^{l-1}$ and the backpropagated signal $S^l$ ends up more spread than $S^{l-1}$. This is similar to an over-enlarged image and can be observed in Figure \[fig:saliency\_coco\], which shows the gradients of the VGG feature maps. On the top row, $pool_2$’s gradient (left) better captures the location details of the dome, whereas $pool_3$’s gradient (right) is more spread. On the bottom rows, the images lose their resolution as we go higher in the network. Another consequence of this resolution loss is that small features are not embedded in $F^l$ if $l$ is too high. This would reduce the space of potential keypoints to only large features, which would hinder the method. This observation motivates us to favor low-level feature maps for feature detection. We choose the final $F^l$ by taking the highest $l$ that still provides accurate localisation. This is visually observable as sparse high-intensity signal, contrary to the blurry aspect of higher layers.
Automatic Data-Adaptive Thresholding
------------------------------------
The threshold is automatic and adapts to the saliency map distribution to keep only the most informative regions. Figure \[fig:heatmap\_coco\] shows saliency maps before and after thresholding with Kapur’s method [@kapur1985new], which we briefly recall below. It chooses the threshold that maximizes the information between the image background and foreground, *i.e.* the pixel distributions below and above the threshold. This method is especially relevant here as it aims at preserving as much information as possible on the distribution above the threshold, which describes the set of local maxima among which we choose our keypoints. More formally, for an image $\mathbf{I}$ of $N$ pixels with $n$ sorted gray levels and $(f_i)_{i \in n}$ the corresponding histogram, $p_i=\frac{f_i}{N}$ is the empirical probability of a pixel to hold the value $f_i$. Let $s \in n$ be a threshold level and $A, B$ the empirical background and foreground distributions: $A = \left(\frac{p_i}{\sum_{i<s} p_i}\right)_{i<s}$ and $B = \left(\frac{p_i}{\sum_{i \geq s} p_i}\right)_{i \geq s}$. The level $s$ is chosen to maximize the information between $A$ and $B$, and the threshold value is set to $f_s$. For better results, we blur the image with a Gaussian of parameters $(\mu_{thr}, \sigma_{thr})$ before computing the threshold level.
Once the threshold is set, we denoise the image with a second Gaussian blur of parameters $(\mu_{noise}, \sigma_{noise})$ and run standard NMS (the same as for SuperPoint): we iteratively select decreasing global maxima while ensuring that their nearest-neighbor distance is larger than the window $w_{\textrm{NMS}} \in \mathbb{N}$. We also ignore the $b_{\textrm{NMS}} \in \mathbb{N}$ pixels around the image border.
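The NMS step above can be sketched as follows; the window and border defaults are illustrative (the paper tunes $w_{\textrm{NMS}}, b_{\textrm{NMS}}$ per variant):

```python
import numpy as np

def nms(smap, w_nms=5, b_nms=3, max_kp=500):
    """Greedy NMS: repeatedly take the global maximum and zero out a
    (2*w_nms+1)^2 window around it; b_nms border pixels are ignored."""
    s = smap.astype(float).copy()
    s[:b_nms, :] = s[-b_nms:, :] = 0       # assumes b_nms >= 1
    s[:, :b_nms] = s[:, -b_nms:] = 0
    kps = []
    while len(kps) < max_kp:
        y, x = np.unravel_index(np.argmax(s), s.shape)
        if s[y, x] <= 0:                   # no positive response left
            break
        kps.append((y, x))
        s[max(0, y - w_nms):y + w_nms + 1,
          max(0, x - w_nms):x + w_nms + 1] = 0
    return kps
```

Zeroing the window guarantees that kept keypoints are more than $w_{\textrm{NMS}}$ pixels apart (in Chebyshev distance).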
Simple descriptor
-----------------
As mentioned in the introduction, the repeatability score does not discriminate among detectors anymore, so detectors are also evaluated on how ‘matchable’ their keypoints are, with the matching score. To do so, the ELF detector is completed with a simple descriptor inspired by SuperPoint’s: we interpolate a CNN feature map on the detected keypoints. The use of this simple descriptor over existing competitive ones avoids unfairly boosting ELF’s performance. Although simple, experiments show that it completes ELF into a competitive feature detection/description method.
The feature map used for description may be different from the one used for detection. High-level feature maps have wider receptive fields, hence take more context into account for the description of a pixel location. This leads to more informative descriptors, which motivates us to favor higher-level maps. However, we are also constrained by the loss of resolution previously described: if the feature map level is too high, the interpolated descriptors become too similar to each other. For example, the VGG $pool_4$ layer produces more discriminative descriptors than $pool_5$, even though $pool_5$ embeds information more relevant for classification. Empirically, we observe that there exists a layer level $l'$ above which the description performance stops increasing before decreasing. This is measured through the matching score metric introduced in [@mikolajczyk2005comparison]. The final choice of the feature map is made by testing some layers $l'>l$ and selecting the lowest feature map before the descriptor performance stagnates.
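A minimal sketch of this interpolation-based descriptor, assuming keypoints in $(y, x)$ image coordinates and a toy feature map (the real pipeline interpolates an actual CNN feature map such as VGG $pool_4$):

```python
import numpy as np

def interpolate_descriptors(fmap, kps, img_shape):
    """UCN-style simple descriptor: bilinearly interpolate an (Hl, Wl, C)
    feature map at keypoint (y, x) image coordinates, then L2-normalise."""
    Hl, Wl, C = fmap.shape
    Hi, Wi = img_shape
    out = []
    for y, x in kps:
        fy = y * (Hl - 1) / (Hi - 1)   # image coords -> feature-map coords
        fx = x * (Wl - 1) / (Wi - 1)
        y0, x0 = int(fy), int(fx)
        y1, x1 = min(y0 + 1, Hl - 1), min(x0 + 1, Wl - 1)
        dy, dx = fy - y0, fx - x0
        d = ((1 - dy) * (1 - dx) * fmap[y0, x0] + dy * (1 - dx) * fmap[y1, x0]
             + (1 - dy) * dx * fmap[y0, x1] + dy * dx * fmap[y1, x1])
        out.append(d / (np.linalg.norm(d) + 1e-8))
    return np.stack(out)
```

The coordinate scaling accounts for the spatial downsampling between image and feature map; the corner-aligned mapping used here is one reasonable convention among several.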
The compared detectors are evaluated with both their original descriptor and this simple one. We detail the motivation behind this choice: detectors may be biased to sample keypoints that their respective descriptor can describe ‘well’ [@yi2016lift], so it is fair to compute the matching score with the original detector/descriptor pairs. However, a detector can sample ‘useless’ points (e.g. sky pixels for 3D reconstruction) that its descriptor characterises ‘well’. In this case, the descriptor ‘hides’ the detector’s flaws. This motivates the integration of a common, independent descriptor with all detectors to evaluate them. Both approaches are run since each is as fair as the other.
Experiments
===========
This section describes the evaluation metrics and datasets as well as the method’s tuning. Our method is compared to detectors with available public code: the fully hand-crafted SIFT [@lowe2004distinctive], SURF [@bay2006surf], ORB [@rublee2011orb], KAZE [@alcantarilla2012kaze]; the learning-based LIFT [@yi2016lift], SuperPoint [@detone18superpoint], LF-Net [@ono2018lf]; and the individual detectors TILDE [@verdie2015tilde] and MSER [@matas2004robust].
Metrics
-------
We follow the standard validation guidelines [@mikolajczyk2005comparison] that evaluate detection performance with *repeatability (rep)*: the percentage of keypoints common to both images. We also compute the *matching score (ms)* as an additional *detector* metric. It captures the percentage of keypoint pairs that are nearest neighbours in both image space and descriptor space, i.e. the ratio of keypoints correctly matched. For completeness, the mathematical definitions of the metrics are provided in the Appendix and their implementation in the soon-to-be released code.
A way to reach perfect *rep* is to sample all pixels, or to sample them with a frequency higher than the distance threshold $\epsilon_{kp}$ of the metric. Limiting the number of keypoints prevents the first flaw but not the second. Since detectors are always used together with descriptors, another way to think of detector evaluation is: *‘a good keypoint is one that can be discriminatively described and matched’*. One could object that such a metric can be corrupted by the descriptor. But we ensure that a detector flaw cannot be hidden by a high-performing descriptor with two guidelines. First, one experiment evaluates all detectors with one fixed descriptor (the simple one defined in Section 3.4). Second, *ms* can never be higher than *rep*, so a detector with a poor *rep* leads to a poor *ms*.
Here the number of detected keypoints is limited to 500 for all methods. As done in [@detone18superpoint; @ono2018lf], we replace the overlap score of [@mikolajczyk2005comparison] with a 5-pixel distance threshold to compute correspondences. Following [@yi2016lift], we also modify the matching score definition of [@mikolajczyk2005comparison] to run a greedy bipartite-graph matching on all descriptors, and not just the descriptor pairs whose distance is below an arbitrary threshold. We do so to be able to compare all state-of-the-art methods even when their descriptor dimension and range vary significantly. (More details in the Appendix.)
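The modified matching score can be sketched as below; the greedy bipartite matching and the $\epsilon_{kp}$ test follow the description above, while the array shapes and the warp being precomputed are simplifying assumptions of this sketch:

```python
import numpy as np

def greedy_match(d1, d2):
    """Greedy bipartite matching: repeatedly pair the two closest
    unmatched descriptors, with no distance threshold."""
    dist = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=-1)
    matches = []
    while np.isfinite(dist).any():
        i, j = np.unravel_index(np.argmin(dist), dist.shape)
        matches.append((i, j))
        dist[i, :] = np.inf    # each keypoint is matched at most once
        dist[:, j] = np.inf
    return matches

def matching_score(kp1, kp2_warped, d1, d2, eps_kp=5.0):
    """Share of greedy matches whose keypoints, once warped into the same
    image, lie within eps_kp pixels of each other."""
    good = sum(np.linalg.norm(kp1[i] - kp2_warped[j]) <= eps_kp
               for i, j in greedy_match(d1, d2))
    return good / max(len(kp1), len(kp2_warped))
```

Because every descriptor participates in the matching, descriptors with very different dimensions or value ranges can be compared on an equal footing.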
Datasets
--------
All images are resized to 480$\times$640 pixels and the image-pair transformations are rectified accordingly.
![Left-Right: HPatches: planar viewpoint. Webcam: light. HPatches: rotation. HPatches: scale. Strecha: 3D viewpoint.[]{data-label="fig:datasets"}](fig13.png){width="\linewidth"}
**General performances.** The HPatches dataset [@balntas2017hpatches] gathers a subset of standard evaluation images such as DTU and OxfordAffine [@aanaes2012interesting; @mikolajczyk2005performance]: it provides a total of 696 images, 6 images for each of 116 scenes, and the corresponding homographies between the images of a same scene. For 57 of these scenes, the main changes are photometric; the remaining 59 show significant geometric deformations due to viewpoint changes on planar scenes.
**Illumination Robustness.** The Webcam dataset [@verdie2015tilde] gathers static outdoor scenes with drastic natural light changes contrary to HPatches which mostly holds artificial light changes in indoor scenes.
**Rotation and Scale Robustness.** We derive two datasets from HPatches. For each of the 116 scenes, we keep the first image and rotate it with angles from $0^{\circ}$ to $210^{\circ}$ with an interval of $40^{\circ}$. Four zoomed-in versions of the image are generated with scales $[1.25, 1.5, 1.75, 2]$. We release these two datasets together with their ground-truth homographies for future comparisons.
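For illustration, the ground-truth homographies of such derived datasets can be built by composing translations to the image centre with a rotation or zoom. The centre convention $((w-1)/2, (h-1)/2)$ and the rotation direction are assumptions of this sketch; the released files remain the reference:

```python
import numpy as np

def rotation_homography(h, w, angle_deg):
    """3x3 homography rotating an image by angle_deg about its centre,
    acting on homogeneous (x, y, 1) pixel coordinates."""
    a = np.deg2rad(angle_deg)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    T = np.array([[1, 0, cx], [0, 1, cy], [0, 0, 1.0]])
    R = np.array([[np.cos(a), -np.sin(a), 0],
                  [np.sin(a),  np.cos(a), 0],
                  [0, 0, 1.0]])
    T_inv = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1.0]])
    return T @ R @ T_inv          # translate to origin, rotate, translate back

def scale_homography(h, w, s):
    """Homography for a zoom-in of factor s about the image centre."""
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    return np.array([[s, 0, (1 - s) * cx],
                     [0, s, (1 - s) * cy],
                     [0, 0, 1.0]])
```

Both transforms leave the image centre fixed, which is what makes the derived pairs directly usable with the homography-based *rep* and *ms* metrics.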
**3D Viewpoint Robustness.** We use three Strecha scenes [@strecha2008benchmarking] with increasing viewpoint changes: *Fountain, Castle entry, Herzjesu-P8*. The viewpoint changes proposed by HPatches are limited to planar scenes which does not reflect the complexity of 3D structures. Since the ground-truth depths are not available anymore, we use COLMAP [@schonberger2016structure] 3D reconstruction to obtain ground-truth scaleless depth. We release the obtained depth maps and camera poses together with the evaluation code. ELF robustness is additionally tested in the CVPR19 Image Matching Challenge [@cvpr19challenge] (see results sections).
Baselines
---------
We describe the rationale behind the evaluation. The tests run on a Quadro M2200 with Tensorflow 1.4, Cuda8, Cudnn6 and Opencv3.4. We use the OpenCV implementations of SIFT, SURF, ORB, KAZE, MSER with their default parameters, and the authors’ code for TILDE, LIFT, SuperPoint, LF-Net with the provided models and parameters. When comparing detectors in the feature matching pipeline, we measure their matching score with both their original descriptor and the ELF simple descriptor. For MSER and TILDE, we use the VGG simple descriptor.
**Architecture influence.** ELF is tested on five networks: three classification ones trained on ImageNet (AlexNet, VGG, Xception [@krizhevsky2012imagenet; @simonyan2014very; @chollet17xception]) as well as the trained SuperPoint and LF-Net descriptor networks. We call each variant by the network’s name prefixed with ELF (e.g. ELF-VGG). The paper compares the influence of i) the architecture for a fixed task (ELF-AlexNet [@krizhevsky2012imagenet] *vs.* ELF-VGG [@simonyan2014very] *vs.* ELF-Xception [@chollet17xception]), ii) the task (ELF-VGG *vs.* ELF-SuperPoint (SP) descriptor), iii) the training dataset (ELF-LFNet on phototourism *vs.* ELF-SP on MS-COCO). This study is being refined with more independent comparisons of tasks, datasets and architectures, soon available in a journal extension.
We use the authors’ code and pre-trained models, which we convert to Tensorflow [@abadi2016tensorflow], except for LF-Net. We search the blurring parameters $(\mu_{thr}, \sigma_{thr})$, $(\mu_{noise}, \sigma_{noise})$ in the range $[\![3,21]\!]^2$ and the NMS parameters $(w_{NMS}, b_{NMS})$ in $[\![4,13]\!]^2$.
**Individual components comparison.** Individual detectors are compared on the matchability of their detections with the fixed, simple VGG-$pool_3$ descriptor. This way, the *ms* only depends on the detection performance since the description is fixed for all detectors. The comparison between ELF and recent deep methods raises the question of whether triplet-like losses are relevant to train CNN descriptors. Indeed, these losses constrain the CNN features directly so that matching keypoints are near each other in descriptor space. Simpler losses, such as cross-entropy for classification, only constrain the CNN output on the task while leaving the representation up to the CNN.
The ELF-VGG detector is also integrated with existing descriptors. This evaluates how the CNN’s self-learned feature localisation compares with hand-crafted and learned ones.
**Gradient Baseline.** Visually, the feature gradient map is reminiscent of the image gradients computed with the Sobel or Laplacian operators. We run two variants of our pipeline where the feature gradient is replaced with these operators. This aims at showing whether CNN feature gradients embed more information than image intensity gradients.
Results
=======
Experiments show that ELF compares with the state-of-the-art on HPatches and demonstrates similar robustness properties to recent learned methods. It generates saliency maps visually akin to a Laplacian on very structured images (HPatches) but proves more robust on outdoor scenes with natural conditions (Webcam). When integrated with existing feature descriptors, ELF boosts the matching score. Even integrating the ELF simple descriptor improves it, with the exception of SuperPoint, for which results are equivalent. This sheds new light on the representations learnt by CNNs and suggests that deep description methods may underexploit the information embedded in their trained networks. Another possibility is that the current metrics are no longer relevant for deep learning methods: all can detect repeatable keypoints with more or less the same performance. Even though the matchability of the points (*ms*) is a bit more discriminative, neither metric expresses how ‘useful’ the keypoints are for the end-goal task. One way to do so is to evaluate an end-goal task (*e.g.* Structure-from-Motion). However, for the evaluation to be rigorous, all the other steps should be fixed across papers. Recently, the Image Matching CVPR19 workshop proposed such an evaluation, but it is not fully automatic yet. These results also challenge whether current descriptor-training losses are a strong enough signal to constrain CNN features better than a simple cross-entropy.
The tabular version of the following results is provided in the Appendix. The graph results are better seen in color on a computer screen. Unless mentioned otherwise, we compute repeatability for each detector, and the matching score of detectors with their respective descriptors, when they have one. We use the ELF-VGG-$pool_4$ descriptor for TILDE, MSER, ELF-VGG, ELF-SuperPoint, and ELF-LFNet. We use AlexNet and Xception feature maps to build their respective simple descriptors. The meta-parameters for each variant are provided in the Appendix.
**General performances.** Figure \[fig:hpatch\_gle\_perf\] (top) shows that the *rep* variance is low across detectors whereas *ms* is more discriminative, hence the validation method (Section 4.1). On HPatches, SuperPoint (SP) reaches the best *rep*-*ms* \[68.6, 57.1\], closely followed by ELF (e.g. ELF-VGG: \[63.8, 51.8\]) and TILDE \[66.0, 46.7\]. In general, we observe that learning-based methods all outperform hand-crafted ones. Still, LF-Net and LIFT curiously underperform on HPatches: one reason may be that the data they are trained on differs too much from this dataset. LIFT is trained on outdoor images only and LF-Net on either indoor or outdoor datasets, whereas HPatches is made of a mix of them. We compute metrics for both LF-Net models and report the highest one (indoor). Even though LF-Net and LIFT fall behind the top learned methods, they still outperform hand-crafted ones, which suggests that their frameworks learn feature-specific information that hand-crafted methods cannot capture. This supports the recent direction towards trained detectors and descriptors.
**Light Robustness.** Again, *ms* discriminates better than *rep* on Webcam (Figure \[fig:hpatch\_gle\_perf\] bottom). ELF-VGG reaches the top *rep*-*ms* \[53.2, 43.7\], closely followed by TILDE \[52.5, 34.7\], which was the state-of-the-art detector.
Overall, there is a performance degradation ($\sim$20%) from HPatches to Webcam. HPatches holds images with standard features such as corners that state-of-the-art methods are made to recognise, either by definition or by supervision. There are fewer such features in the Webcam dataset because the natural lighting blurs them. There are also strong intensity variations that these models do not handle well. One reason may be that the learning-based methods never saw such lighting variations in their training sets. But this assumption is rejected, as we observe that even SuperPoint, which is trained on COCO images, outperforms LIFT and LF-Net, which are trained on outdoor images. Another justification can be that what matters most is the pixel distribution the network is trained on, rather than the image content. The top methods are the classifier-based ELF variants and SuperPoint: the first are trained on the huge ImageNet dataset and benefit from heavy data augmentation. SuperPoint also employs a considerable data augmentation strategy to train its network. Thus these networks may cover a much wider pixel distribution, which would explain their robustness to pixel-distribution changes such as light modifications.
**Architecture influence** ELF is tested on three classification networks as well as the descriptor networks of SuperPoint and LF-Net (Figure \[fig:hpatch\_gle\_perf\], bars under ‘ELF’).
For a fixed training task (classification) on a fixed dataset (ImageNet), VGG, AlexNet and Xception are compared. As could be expected, the network architecture has a critical impact on the detection, and ELF-VGG outperforms the other variants. The *rep* gap can be explained by the fact that AlexNet is made of wider convolutions than VGG, which induces a higher loss of resolution when computing the gradient. As for *ms*, the larger representation space of VGG may help build more informative features, which provide a stronger signal to backpropagate. This could also justify why ELF-VGG outperforms ELF-Xception, which has fewer parameters. Another explanation is that ELF-Xception’s gradient maps seem smoother; salient locations are then less emphasized, which makes keypoint detection harder. One could point to the depth-wise convolutions to explain this visual aspect, but we could not find an experimental way to verify it. Surprisingly, ELF-LFNet outperforms the original LF-Net on both HPatches and Webcam, and the ELF-SuperPoint variant reaches results similar to the original.
![HPatches scale. Left-Right: rep, ms.[]{data-label="fig:robust_scale"}](fig7_scale.png){width="\linewidth"}
**Scale Robustness.** ELF-VGG is compared with state-of-the-art detectors and their respective descriptors (Figure \[fig:robust\_scale\]). Repeatability is mostly stable for all methods: SIFT and SuperPoint are the most invariant, whereas ELF follows the same variations as LIFT and LF-Net. Once again, *ms* better assesses the detectors’ performance: SuperPoint is the most robust to scale changes, followed by LIFT and SIFT. ELF and LF-Net lose 50% of their matching score as the scale increases. It is surprising to observe that LIFT is more scale-robust than LF-Net when the latter’s global performance is higher. A reasonable explanation is that LIFT detects keypoints at 21 scales of the same image whereas LF-Net only runs its detector CNN on 5 scales. Nonetheless, ELF outperforms LF-Net without manual multi-scale processing.
![HPatches rotation. Left-Right: rep, ms.[]{data-label="fig:robust_rotation"}](fig7_angle.png){width="\linewidth"}
**Rotation Robustness.** Even though *rep* shows little variation (Figure \[fig:robust\_rotation\]), all learned methods’ *ms* crashes, while only SIFT survives the rotation changes. This can be explained by SIFT’s explicit rotation estimation step. However, LIFT and LF-Net also run such a computation. This suggests that either SIFT’s hand-crafted orientation estimation is more accurate or that HOG are more rotation invariant than learned features. LF-Net still performs better than LIFT: this may be because it learns the keypoint orientation from the keypoint’s feature representation rather than from the keypoint pixels, as done in LIFT. Unsurprisingly, the ELF simple descriptor is not rotation invariant, as the convolutions that make up the CNN are not. This also explains why SuperPoint crashes in a similar manner. These results suggest that the orientation learning step in LIFT and LF-Net is needed, but its robustness could be improved.
![Robustness analysis: 3D viewpoint.[]{data-label="fig:robust_strecha"}](fig7_strecha.png){width="\linewidth"}
**3D Viewpoint Robustness.** While SIFT shows a clear advantage under pure rotation, it displays a degradation similar to the other methods’ under realistic rotation-and-translation on 3D structures. Figure \[fig:robust\_strecha\] shows that all methods degrade uniformly. One could argue that this small data sample is not representative enough for such a robustness analysis. However, we think these results rather suggest that all methods share the same robustness to 3D viewpoint changes. Even though the previous analyses allow ranking the different feature matching pipelines, each has advantages over the others in certain situations: ELF or SuperPoint on general homography matches, or SIFT on rotation robustness. This is why this paper only aims at showing that ELF reaches the same performance and shares similar properties with existing methods, as there is no generic ranking criterion. The recent evaluation run by the CVPR19 Image Matching Challenge [@cvpr19challenge] supports the previous conclusions.
![Left-Middle-Right bars: original method, integration of ELF detection, integration of ELF description.[]{data-label="fig:ind_component"}](fig11.png){width="\linewidth"}
**Individual components performance.** First, all methods' descriptors are replaced with the simple ELF-VGG-$pool_3$ one. We then compute their new *ms* and compare it to ELF-VGG on HPatches and Webcam (Figure \[fig:ind\_component\], stripes). The description is based on $pool_3$ instead of $pool_4$ here because it produces better results for the other methods while preserving ours. ELF reaches a higher *ms* \[51.3\] than all methods except SuperPoint \[53.7\], to which it is comparable. This shows that ELF is as relevant as, if not more than, previous hand-crafted or learned detectors. This naturally leads to the question: *'What kind of keypoints does ELF detect?'* There is currently no answer to this question, as it is complex to explicitly characterize the properties of the pixel areas around keypoints; hence the open question *'What makes a good keypoint?'* mentioned at the beginning of the paper. Still, we observe that ELF activates mostly on high-intensity-gradient areas, although not on all of them. One explanation is that, as the CNN is trained on a vision task, it learns to ignore image regions useless for that task. This kills the gradient signal in areas that may be unsuited for matching.
Another surprising observation regards CNN descriptors: SuperPoint (SP) keypoints are described with the SP descriptor on the one hand and with the simple ELF-VGG one on the other hand. Comparing the two resulting matching scores is one way to compare the SP and ELF descriptors. Results show that both approaches lead to a similar *ms*. This is surprising because SP specifically trains a description CNN so that its feature map is suitable for keypoint description [@choy2016universal], whereas VGG training imposes no explicit constraint on the features beyond the cross-entropy loss. Still, both feature maps reach similar numerical description performance. This raises the question of whether contrastive-like losses, whose inputs are CNN features, constrain the CNN representation better than simpler losses, such as cross-entropy, whose inputs are classification logits. It also shows that there is more to CNNs than the task they are trained on: they embed information that can prove useful for unrelated tasks. Although the simple descriptor was defined for evaluation purposes, these results demonstrate that it can serve as a description baseline for feature extraction.
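For illustration, the simple descriptor can be sketched as reading a CNN feature map at the keypoint locations and L2-normalizing the result. The snippet below is our own minimal sketch with nearest-neighbour sampling; the exact interpolation used by the method, and the function and variable names, are assumptions rather than the paper's implementation.

```python
import numpy as np

def simple_descriptor(feature_map, keypoints, image_size):
    """Describe keypoints by reading a CNN feature map at their locations.

    feature_map: (H, W, C) activations, e.g. from VGG pool3 or pool4;
    keypoints: (N, 2) array of (x, y) pixel coordinates in the image;
    image_size: (height, width) of the original image.
    Nearest-neighbour sampling is used here for simplicity.
    """
    H, W, C = feature_map.shape
    img_h, img_w = image_size
    # map image coordinates to feature-map coordinates
    xs = np.clip((keypoints[:, 0] * W / img_w).astype(int), 0, W - 1)
    ys = np.clip((keypoints[:, 1] * H / img_h).astype(int), 0, H - 1)
    desc = feature_map[ys, xs]                  # (N, C)
    norms = np.linalg.norm(desc, axis=1, keepdims=True)
    return desc / np.maximum(norms, 1e-12)      # L2-normalized descriptors
```

Descriptors built this way can be matched with plain nearest-neighbour search in Euclidean space, as done in the evaluation.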
Integrating the ELF detection with the other methods' descriptors (Figure \[fig:ind\_component\], circle) boosts their *ms*. [@yi2016lift] previously suggested that there may be a correlation between the detector and the descriptor within the same method, i.e. the LIFT descriptor is trained to describe only the keypoints output by its detector. However, these results show that ELF can easily be integrated into existing pipelines and even boost their performance.
**Gradient Baseline.** The saliency map used in ELF is replaced with a simple Sobel or Laplacian gradient map; the rest of the detection pipeline stays the same. We compute the performance of these baselines (Figure \[fig:gradient\_perf\], left), complete them with simple ELF descriptors from the VGG, AlexNet and Xception networks, and compare the resulting hybrids to their respective ELF variants (right). Results show that these simpler gradients can detect systematic keypoints with comparable *rep* on very structured images such as HPatches. However, the ELF detector copes better with light changes (Webcam). On HPatches, the Laplacian variant reaches a similar *ms* to ELF-VGG (55 *vs* 56) and outperforms ELF-AlexNet and ELF-Xception. These scores can be explained by the image structure: on heavily textured images, locations of high intensity gradient are relevant enough keypoints. On Webcam, however, all ELF detectors outperform Laplacian and Sobel by roughly a factor of two, which shows that ELF is more robust than these operators. Also, the feature gradient is a sparse signal, better suited for local-maxima detection than the much smoother Laplacian response (Figure \[fig:sobel\_visu\]).
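As a concrete sketch of this baseline, the following snippet (ours, using SciPy, not the authors' code) detects keypoints as windowed local maxima of a Sobel or Laplacian saliency map; parameter values are illustrative.

```python
import numpy as np
from scipy import ndimage

def gradient_keypoints(img, mode="laplacian", n_kp=500, nms_window=10):
    """Detect keypoints as local maxima of a simple image-gradient map.

    Sketch of the Sobel/Laplacian baseline: the saliency map is a plain
    image gradient instead of a CNN feature gradient.
    """
    img = img.astype(np.float64)
    if mode == "sobel":
        gx = ndimage.sobel(img, axis=1)
        gy = ndimage.sobel(img, axis=0)
        saliency = np.hypot(gx, gy)
    else:  # laplacian of a lightly smoothed image
        saliency = np.abs(ndimage.laplace(ndimage.gaussian_filter(img, 1.0)))
    # grid-based non-maximum suppression: keep one maximum per window
    maxima = ndimage.maximum_filter(saliency, size=nms_window)
    peaks = (saliency == maxima) & (saliency > 0)
    ys, xs = np.nonzero(peaks)
    order = np.argsort(saliency[ys, xs])[::-1][:n_kp]
    return np.stack([xs[order], ys[order]], axis=1)  # (x, y) coordinates
```

Swapping `saliency` for the CNN feature gradient while keeping the rest of the pipeline fixed is exactly the comparison run in this section.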
![Feature gradient (right) provides a sparser signal than Laplacian (middle) which is more selective of salient areas.[]{data-label="fig:sobel_visu"}](fig5_sobel_similar_ter.png){height="3cm"}
**Qualitative results.** Green lines show putative matches based only on nearest-neighbour matching of descriptors. More qualitative results are available in the video [^3].
![Green lines show putative matches of the simple descriptor before RANSAC-based homography estimation.[]{data-label="fig:matching_pic"}](fig6_matching_ter.png){width="\linewidth"}
**CVPR19 Image Matching Challenge [@cvpr19challenge]** This challenge evaluates detection/description methods on two standard tasks: 1) wide stereo matching and 2) structure from motion from small image sets. The *matching score* evaluates the first task, and the camera pose estimation is used for both tasks. Both applications are evaluated on the photo-tourism image collections of popular landmarks [@thomee59yfcc100m; @heinly2015reconstructing]. More details on the metrics definition are available on the challenge website [@cvpr19challenge].
*Wide stereo matching:* Task 1 matches image pairs across wide baselines. It is evaluated with the keypoint *ms* and the relative camera pose estimation between two images. The evaluators run COLMAP to reconstruct dense ‘ground-truth’ depth, which they use to translate keypoints from one image to the other and compute the matching score. They use the RANSAC inliers to estimate the camera pose and measure performance with the “angular difference between the estimated and ground-truth vectors for both rotation and translation. To reduce this to one value, they use a variable threshold to determine each pose as correct or not, then compute the area under the curve up to the angular threshold. This value is thus the mean average precision up to x, or mAPx. They consider 5, 10, 15, 20, and 25 degrees” [@cvpr19challenge]. Submissions can contain up to 8000 keypoints; we submitted entries to the sparse category, i.e. methods with up to 512 keypoints.
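The pose metric can be sketched as follows, assuming one angular error per image pair (the larger of the rotation and translation errors); this is our illustrative reading of the metric, not the official challenge code.

```python
import numpy as np

def pose_map(angular_errors_deg, max_threshold_deg):
    """Mean average precision (mAP-x) of camera poses.

    A pose counts as correct when its angular error is below a varying
    threshold; mAP-x is the normalized area under the accuracy curve up
    to x degrees (rectangle rule on a uniform threshold grid).
    """
    errs = np.asarray(angular_errors_deg, dtype=float)
    grid = np.linspace(0.0, max_threshold_deg, 101)
    accuracy = np.array([(errs <= t).mean() for t in grid])
    return float(accuracy.mean())  # area under the curve / max_threshold_deg

# e.g. pose_map(errs, 5), pose_map(errs, 10), ..., pose_map(errs, 25)
```

Reporting the value at several thresholds (5 to 25 degrees), as the challenge does, shows how tolerant each method is to pose error.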
![*Wide stereo matching.* Left: matching score (%) of sparse methods (up to 512 keypoints) on photo-tourism. Right: Evolution of mAP of camera pose for increasing tolerance threshold (degrees).[]{data-label="fig:cvpr19_task1"}](fig14.png){width="\linewidth"}
Figure \[fig:cvpr19\_task1\] (left) shows the *ms* (%) of the submitted sparse methods. It compares ELF-VGG detection with DELF [@noh2017largescale] and SuperPoint, where ELF is completed either with the simple descriptor from $pool_3$ or $pool_4$, or with SIFT; the variants are dubbed ELF-256, ELF-512 and ELF-SIFT respectively. This allows us to sketch a simple comparison between the simple descriptor and standard SIFT.
As previously observed on HPatches and Webcam, ELF and SuperPoint reach similar scores on Photo-Tourism. ELF's performance slightly increases from 25% to 26.4% when switching descriptors from VGG-$pool_3$ to VGG-$pool_4$. One explanation is that the feature space dimension doubles from the former to the latter, which would allow the $pool_4$ descriptors to be more discriminative. However, the 1.4% gain may not be worth the additional memory use. Overall, the results show that ELF compares with the SoA on this additional dataset, which exhibits more illumination and viewpoint changes than HPatches and Webcam.
This observation is reinforced by the camera pose evaluation (Figure \[fig:cvpr19\_task1\], right). SuperPoint shows a slight advantage over the others that increases from 1% to 5% across the error tolerance thresholds, whereas ELF-256 exhibits a minor under-performance. Still, these results show that ELF compares with SoA performance even though it is not explicitly trained for detection/description.
![*SfM from small subsets*. Evolution of mAP of camera pose for increasing tolerance threshold.[]{data-label="fig:cvpr19_task2"}](fig15.png){width="0.7\linewidth"}
*Structure-from-Motion from small subsets.* Task 2 “proposes to build SfM reconstructions from small (3, 5, 10, 25) subsets of images and use the poses obtained from the entire (much larger) set as ground truth” [@cvpr19challenge].
Figure \[fig:cvpr19\_task2\] shows that SuperPoint reaches performance twice as high as the next best method, ELF-SIFT. This suggests that when few images are available, SuperPoint performs better than the other approaches. One explanation is that even in ‘sparse mode’, *i.e.* when the number of keypoints is restricted to 512, SuperPoint samples points more densely than the others ($\sim$383 *v.s.* $\sim$210 for the others). Thus, SuperPoint provides more keypoints to triangulate, i.e. more 2D-3D correspondences to use when estimating the camera pose. This suggests that a high keypoint density is a crucial characteristic of the detection method for structure-from-motion. In this regard, ELF still has room for improvement compared to SuperPoint.
Conclusion
==========
We have introduced ELF, a novel method to extract feature locations from pre-trained CNNs, with no further training. Extensive experiments show that it performs as well as state-of-the-art detectors. It can easily be integrated into existing matching pipelines and proves to boost their matching performance. Even when completed with a simple feature-map-based descriptor, it turns into a competitive feature matching pipeline. These results shed new light on the information embedded inside trained CNNs. This work also raises questions on the descriptor training of deep-learning approaches: whether their losses actually constrain the CNN to learn better features than the ones it would learn on its own to complete a vision task. Preliminary results show that the CNN architecture, the training task and the dataset have a substantial impact on the detector performance. A further analysis of these correlations is the object of future work.
{width="\linewidth"}
Metrics definition
==================
We detail the repeatability and matching score definitions introduced in [@mikolajczyk2005comparison] and our adaptations, using the following notations: let $(\mathbf{I}^1, \mathbf{I}^2)$ be a pair of images and $\mathcal{KP}^i = (kp_j^i)_{j<N_i}$ the set of $N_i$ keypoints in image $\mathbf{I}^i$. Both metrics lie in the range $[0,1]$ but we express them as percentages for readability.
#### Repeatability
Repeatability measures the percentage of keypoints common to both images. We first warp $\mathcal{KP}^1$ to $\mathbf{I}^2$ and denote the result by $\mathcal{KP}^{1,w}$. A naive definition of repeatability counts the pairs $(kp^{1,w}, kp^2) \in \mathcal{KP}^{1,w} \times \mathcal{KP}^2$ such that $\|kp^{1,w}-kp^2\|_2 < \epsilon_{kp}$, with $\epsilon_{kp}$ a distance threshold. As pointed out by [@verdie2015tilde], this definition overestimates detection performance for two reasons: a keypoint close to several projections can be counted several times, and, with a large enough number of keypoints, even simple random sampling can achieve high repeatability, as the keypoint density becomes high.
We instead use the definition implemented in VLBench [@lenc12vlbenchmarks]: we define a weighted graph $(V,E)$ where the edges are all the possible keypoint pairs between $\mathcal{KP}^{1,w}$ and $\mathcal{KP}^2$ and the weights are the Euclidean distances between keypoints. $$\label{eq: graph_dfn}
\begin{split}
V &= (kp^{1,w} \in \mathcal{KP}^{1,w}) \cup (kp^2 \in \mathcal{KP}^2) \\
E &= (kp^{1,w}, kp^2, \|kp^{1,w} - kp^2\|_2) \in \mathcal{KP}^{1,w} \times \mathcal{KP}^2
\end{split}$$
We run a greedy bipartite matching on the graph and count the matches with a distance less than $\epsilon_{kp}$. Let $\mathcal{M}$ be the resulting set of matches:
$$\label{rep_dfn}
repeatability = \frac{|\mathcal{M}|}{\min(|\mathcal{KP}^1|, |\mathcal{KP}^2|)}$$
We set the distance threshold $\epsilon_{kp}=5$ as is done in LIFT [@yi2016lift] and LF-Net [@ono2018lf].
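A minimal NumPy sketch of this computation (greedy bipartite matching by increasing distance, then thresholding; names are ours):

```python
import numpy as np

def greedy_match(kp1, kp2):
    """Greedy one-to-one matching by increasing Euclidean distance.

    All keypoint pairs (the graph edges) are sorted by distance and
    accepted greedily, each keypoint being used at most once, mirroring
    the VLBench repeatability computation.
    """
    d = np.linalg.norm(kp1[:, None, :] - kp2[None, :, :], axis=2)
    order = np.dstack(np.unravel_index(np.argsort(d, axis=None), d.shape))[0]
    used1, used2, matches = set(), set(), []
    for i, j in order:
        if i not in used1 and j not in used2:
            used1.add(i); used2.add(j)
            matches.append((int(i), int(j), d[i, j]))
    return matches

def repeatability(kp1_warped, kp2, eps=5.0):
    """Fraction of greedily matched pairs closer than eps pixels."""
    matches = greedy_match(kp1_warped, kp2)
    n_ok = sum(1 for _, _, dist in matches if dist < eps)
    return n_ok / min(len(kp1_warped), len(kp2))
```

The greedy pass guarantees that each keypoint is matched at most once, avoiding the double-counting issue of the naive definition.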
#### Matching score
The matching score definition introduced in [@mikolajczyk2005comparison] captures the percentage of keypoint pairs that are nearest neighbours both in image space and in descriptor space, and for which these two distances are below their respective thresholds $\epsilon_{kp}$ and $\epsilon_{d}$. Let $\mathcal{M}$ be defined as in the previous paragraph and $\mathcal{M}_d$ be the analog of $\mathcal{M}$ when the graph weights are descriptor distances instead of keypoint Euclidean distances. We delete all the pairs with a distance above the threshold $\epsilon_{kp}$ in $\mathcal{M}$ and above $\epsilon_d$ in $\mathcal{M}_d$. We then count the number of pairs that are nearest neighbours both in image space and in descriptor space, i.e. the intersection of $\mathcal{M}$ and $\mathcal{M}_d$:
$$\label{MS}
matching \; score = \frac{|\mathcal{M} \cap \mathcal{M}_d|}{\min(|\mathcal{KP}^1|, |\mathcal{KP}^2|)}$$
One drawback of this definition is that there is no unique descriptor distance threshold $\epsilon_d$ valid for all methods. For example, the SIFT descriptor as computed by OpenCV is a $[0,255]^{128}$ vector for better computational precision, the SuperPoint descriptor is a $[0,1]^{256}$ vector, and the ORB descriptor is a 32-byte binary vector. Not only are the vectors not defined over the same normed space, but their ranges vary significantly. To avoid introducing human bias by setting a descriptor distance threshold $\epsilon_d$ for each method, we set $\epsilon_d = \infty$ and compute the matching score as in [@mikolajczyk2005comparison]. This means that we consider any descriptor match valid, as long as it matches corresponding keypoints, even when the descriptor distance is high.
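The matching score with $\epsilon_d = \infty$ can be sketched as follows (our illustration, reusing the same greedy matching for both the image-space and descriptor-space graphs):

```python
import numpy as np

def _greedy_pairs(dist):
    """Greedy one-to-one matching on a distance matrix; returns a set of (i, j)."""
    order = np.dstack(np.unravel_index(np.argsort(dist, axis=None), dist.shape))[0]
    used1, used2, pairs = set(), set(), set()
    for i, j in order:
        if i not in used1 and j not in used2:
            used1.add(i); used2.add(j); pairs.add((int(i), int(j)))
    return pairs

def matching_score(kp1_warped, kp2, desc1, desc2, eps_kp=5.0):
    """Fraction of pairs matched both in image space and descriptor space.

    With eps_d = infinity: any descriptor match counts as long as it
    links keypoints that also match geometrically within eps_kp pixels.
    """
    d_kp = np.linalg.norm(kp1_warped[:, None] - kp2[None, :], axis=2)
    d_desc = np.linalg.norm(desc1[:, None] - desc2[None, :], axis=2)
    m_kp = {(i, j) for (i, j) in _greedy_pairs(d_kp) if d_kp[i, j] < eps_kp}
    m_desc = _greedy_pairs(d_desc)
    return len(m_kp & m_desc) / min(len(kp1_warped), len(kp2))
```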
Tabular results
===============
---------------- ------------------------ -------------------- ------------------------ -------------------- -- --
[@balntas2017hpatches] [@verdie2015tilde] [@balntas2017hpatches] [@verdie2015tilde]
ELF-VGG 63.81
ELF-AlexNet 51.30 38.54 35.21 31.92
ELF-Xception 48.06 29.81
ELF-SuperPoint 59.7 46.29 44.32 18.11
ELF-LFNet 60.1 41.90 44.56 33.43
LF-Net 61.16 48.27 34.19 18.10
SuperPoint 46.35 32.44
LIFT 54.66 42.21 34.02 17.83
SURF 54.51 33.93 26.10 10.13
SIFT 51.19 28.25 24.58 8.30
ORB 53.44 31.56 14.76 1.28
KAZE 56.88 41.04 29.81 13.88
TILDE 52.53 46.71 34.67
MSER 47.82 52.23 21.08 6.14
---------------- ------------------------ -------------------- ------------------------ -------------------- -- --
: Generic performances on HPatches [@balntas2017hpatches]. Robustness to light (Webcam [@verdie2015tilde]). (Fig. 5).[]{data-label="tab:whole_pipeline"}
-- ----------- ----------- ----------- ----------- ----------- -----------
34.19 **57.11** 34.02 24.58 26.10 14.76
**44.19** 53.71 **39.48** **27.03** **34.97** **20.04**
18.10 32.44 17.83 10.13 8.30 1.28
**30.71** **34.60** **26.84** **13.21** **21.43** **13.91**
-- ----------- ----------- ----------- ----------- ----------- -----------
: Individual component performance (Fig. \[fig:ind\_component\]-stripes). Matching score for the integration of the VGG $pool_3$ simple-descriptor with other’s detection. Top: Original description. Bottom: Integration of simple-descriptor. HPatches: [@balntas2017hpatches]. Webcam: [@verdie2015tilde][]{data-label="tab:cross_res_des"}
-- ----------- ----------- ----------- ----------- ----------- -----------
34.19 **57.11** 34.02 24.58 26.10 14.76
**39.16** 54.44 **42.48** **50.63** **30.91** **36.96**
18.10 32.44 17.83 10.13 8.30 1.28
**26.70** **39.55** **30.82** **36.83** **19.14** **6.60**
-- ----------- ----------- ----------- ----------- ----------- -----------
: Individual component performance (Fig. \[fig:ind\_component\]-circle). Matching score for the integration of ELF-VGG (on $pool_2$) with other’s descriptor. Top: Original detection. Bottom: Integration of ELF. HPatches: [@balntas2017hpatches]. Webcam: [@verdie2015tilde][]{data-label="tab:cross_res_det"}
---------------- ------------------------ -------------------- ------------------------ -------------------- -- --
[@balntas2017hpatches] [@verdie2015tilde] [@balntas2017hpatches] [@verdie2015tilde]
Sobel-VGG 56.99 33.74 42.11 20.99
Lapl.-VGG **65.45** 33.74 **55.25** 22.79
VGG 63.81 **53.23** 51.84 **43.73**
Sobel-AlexNet 56.44 33.74 30.57 15.42
Lapl.-AlexNet **65.93** 33.74 **40.92** 15.42
AlexNet 51.30 **38.54** 35.21 **31.92**
Sobel-Xception 56.44 33.74 34.14 16.86
Lapl.-Xception **65.93** 33.74 **42.52** 16.86
Xception 48.06 **49.84** 29.81 **35.48**
---------------- ------------------------ -------------------- ------------------------ -------------------- -- --
: Gradient baseline on HPatches [@balntas2017hpatches] and Webcam [@verdie2015tilde] (Fig. \[fig:gradient\_perf\] ).[]{data-label="tab:cmp_sobel"}
ELF Meta Parameters
===================
This section specifies the meta-parameter values for the ELF variants. For all methods, $(w_{NMS}, b_{NMS})=(10,10)$.
- Denoise: $(\mu_{noise}, \sigma_{noise})$.
- Threshold: $(\mu_{thr}, \sigma_{thr})$.
- $F^l$: the feature map whose gradient is used for detection.
- simple-des: the feature map used for simple-description. Unless mentioned otherwise, the feature map is taken from the same network as the detection feature map $F^l$.
Nets Denoise Threshold $F^l$ simple-desc
------------ --------- ----------- -------------- ------------- --
VGG (5,5) (5,4) pool2 pool4
Alexnet (5,5) (5,4) pool1 pool2
Xception (9,3) (5,4) block2-conv1 block4-pool
SuperPoint (7,2) (17,6) conv1a VGG-pool3
LF-Net (5,5) (5,4) block2-BN VGG-pool3
: Generic performances on HPatches (Fig. \[fig:hpatch\_gle\_perf\]). (BN: Batch Norm)[]{data-label="tab:meta_params"}
Nets Denoise Threshold $F^l$ simple-desc
------------ --------- ----------- -------------- ------------- --
VGG (5,5) (5,4) pool2 pool4
Alexnet (5,5) (5,4) pool1 pool2
Xception (9,9) (5,4) block2-conv1 block4-pool
SuperPoint (7,2) (17,6) conv1a VGG-pool3
LF-Net (5,5) (5,4) block2-conv VGG-pool3
: Robustness to light on Webcam (Fig. \[fig:hpatch\_gle\_perf\]).[]{data-label="tab:meta_params"}
Nets Denoise Threshold $F^l$ simple-desc
------ --------- ----------- ------- ------------- --
VGG (5,2) (17,6) pool2 pool4
: Robustness to scale on HPatches (Fig. \[fig:robust\_scale\]).[]{data-label="tab:meta_params"}
Nets Denoise Threshold $F^l$ simple-desc
------ --------- ----------- ------- ------------- --
VGG (5,2) (17,6) pool2 pool4
: Robustness to rotation on HPatches (Fig. \[fig:robust\_rotation\]).[]{data-label="tab:meta_params"}
Nets Denoise Threshold $F^l$ simple-desc
------ --------- ----------- ------- ------------- --
VGG (5,2) (17,6) pool2 pool4
: Robustness to 3D viewpoint on Strecha (Fig. \[fig:robust\_strecha\]).[]{data-label="tab:meta_params"}
Nets Denoise Threshold $F^l$ simple-desc
------ --------- ----------- ------- ------------- --
VGG (5,5) (5,5) pool2 pool3
: Individual component analysis (Fig. \[fig:ind\_component\])[]{data-label="tab:meta_params"}
Nets Denoise Threshold $F^l$ simple-desc
----------- --------- ----------- ------- ------------- --
VGG (5,5) (5,4) pool2 pool4
Sobel (9,9) (5,4) - pool4
Laplacian (9,9) (5,4) - pool4
: Gradient baseline on HPatches and Webcam (Fig. \[fig:gradient\_perf\]).[]{data-label="tab:meta_params"}
[^1]: ELF code:<https://github.com/ELF-det/elf>
[^2]: Rotation and scale dataset: <https://bit.ly/31RAh1S>
[^3]: <https://youtu.be/oxbG5162yDs>
type=driver
plumed_needs=boost_serialization
plumed_modules=drr
arg="--plumed plumed.dat --trajectory-stride 1 --timestep 0.005 --ixyz ala12_trajectory.xyz --dump-forces forces --dump-forces-fmt=%10.6f"
---
abstract: 'In state space models, smoothing refers to the task of estimating a latent stochastic process given noisy measurements related to the process. We propose an unbiased estimator of smoothing expectations. The lack-of-bias property has methodological benefits: independent estimators can be generated in parallel, and confidence intervals can be constructed from the central limit theorem to quantify the approximation error. To design unbiased estimators, we combine a generic debiasing technique for Markov chains with a Markov chain Monte Carlo algorithm for smoothing. The resulting procedure is widely applicable and we show in numerical experiments that the removal of the bias comes at a manageable increase in variance. We establish the validity of the proposed estimators under mild assumptions. Numerical experiments are provided on toy models, including a setting of highly-informative observations, and a realistic Lotka-Volterra model with an intractable transition density.'
author:
- |
Pierre E. Jacob[^1]\
Department of Statistics, Harvard University\
Fredrik Lindsten and Thomas B. Schön\
Department of Information Technology, Uppsala University
bibliography:
- 'Biblio.bib'
title: '**Smoothing with Couplings of Conditional Particle Filters**'
---
[*Keywords:*]{} couplings, particle filtering, particle smoothing, debiasing techniques, parallel computation.
Introduction\[sec:introduction\]
================================
Goal and content
----------------
In state space models, the observations are treated as noisy measurements related to an underlying latent stochastic process. The problem of smoothing refers to the estimation of trajectories of the underlying process given the observations [@cappe:ryden:2004]. For finite state spaces and linear Gaussian models, smoothing can be performed exactly. In general models, numerical approximations are required, and many state-of-the-art methods are based on particle methods [@douc:moulines:2014; @kantas2015particle]. Following this line of work, we propose a new method for smoothing in general state space models. Unlike existing methods, the proposed estimators are unbiased, which has direct benefits for parallelization and for the construction of confidence intervals.
The proposed method combines recently proposed conditional particle filters [@andrieu:doucet:holenstein:2010] with debiasing techniques for Markov chains [@glynn2014exact]. Specifically, we show in Section \[sec:unbiasedsmoothing\] how to remove the bias of estimators constructed with conditional particle filters, in exchange for an increase of variance; this variance can then be controlled with tuning parameters, and arbitrarily reduced by averaging over independent replicates. The validity of the proposed approach relies on the finiteness of the computational cost and of the variance of the proposed estimators, which we establish under mild conditions in Section \[sec:newsmoother:theory\]. Methodological improvements are presented in Section \[sec:newsmoother:practical\], and comparisons with other smoothers in Section \[sec:comparison\]. Numerical experiments are provided in Section \[sec:numerics\], and Section \[sec:discussion\] concludes.
Smoothing in state space models \[sec:intro:smoothing\]
-------------------------------------------------------
The latent stochastic process $(x_{t})_{t\geq 0}$ takes values in $\mathbb{X}\subset
\mathbb{R}^{d_x}$, and the observations $(y_t)_{t\geq 1}$ are in $\mathbb{Y}\subset
\mathbb{R}^{d_y}$ for some $d_x,d_y \in\mathbb{N}$. A model specifies an initial distribution $m_0(dx_{0}|\theta)$ and a transition kernel $f(dx_{t}| x_{t-1},\theta)$ for the latent process. We will assume that we have access to deterministic functions $M$ and $F$, and random variables $U_t$ for $t\geq 0$, such that $M(U_0,\theta)$ follows $m_0(dx_0|\theta)$ and $F(x_{t-1},U_t,\theta)$ follows $f(dx_t|x_{t-1},\theta)$; we refer to these as random function representations of the process [see @diaconis1999iterated]. Conditionally upon the latent process, the observations are independent and their distribution is given by a measurement kernel $g(dy_{t}| x_{t},\theta)$. The model is parameterized by $\theta\in\Theta\subset \mathbb{R}^{d_\theta}$, for $d_\theta\in\mathbb{N}$. Filtering consists in approximating the distribution $p(dx_{t}|
y_{1:t},\theta)$ for all times $t\geq 1$, whereas smoothing refers to the approximation of $p(dx_{0:T}|y_{1:T},\theta)$ for a fixed time horizon $T$, where for $s,t\in\mathbb{N}$, we write $s:t$ for the set $\{s,\ldots,t\}$, and $v_{s:t}$ for the vector $(v_s,\ldots,v_t)$. The parameter $\theta$ is hereafter fixed and removed from the notation, as is usually done in the smoothing literature [see Section 4 in @kantas2015particle]; we discuss unknown parameters in Section \[sec:discussion\]. Denote by $h$ a test function from $\mathbb{X}^{T+1}$ to $\mathbb{R}$, of which we want to compute the expectation with respect to the smoothing distribution $\pi(dx_{0:T})=p(dx_{0:T}|y_{1:T})$; we write $\pi(h)$ for $\int_{\mathbb{X}^{T+1}} h(x_{0:T}) \pi(dx_{0:T})$. For instance, with $h:x_{0:T}\mapsto x_t$ where $t\in 0:T$, $\pi(h)$ is the smoothing expectation $\mathbb{E}[x_t|y_{1:T}]$.
Postponing a discussion on existing smoothing methods to Section \[sec:comparison\], we first describe the conditional particle filter [CPF, @andrieu:doucet:holenstein:2010], which is a variant of the particle filter [@doucet:defreitas:gordon:2001]. Given a “reference” trajectory $X = x_{0:T}$, a CPF generates a new trajectory $X^\prime = x_{0:T}^\prime$ as described in Algorithm \[alg:conditional-particle-filter\], which defines a Markov kernel on the space of trajectories; we will write $x^\prime_{0:T} \sim \text{CPF}(x_{0:T},\cdot)$. This Markov kernel leaves $\pi$ invariant and ergodic averages of the resulting chains consistently estimate integrals with respect to $\pi$, under mild conditions [@andrieu:doucet:holenstein:2010; @ChopinS:2015; @LindstenDM:2015; @andrieuvihola2013uniform; @kuhlenschmidt2018stability; @Lee2018ccbpf]. We denote by $(X^{(n)})_{n\geq 0}$ a chain starting from a path $X^{(0)}$, and iterating through $X^{(n)}\sim\text{CPF}(X^{(n-1)},\cdot)$ for $n\geq 1$.
In step 2.1. of Algorithm \[alg:conditional-particle-filter\], the resampling distribution $r(da^{1:N-1}|w^{1:N})$ refers to a distribution on $\{1,\ldots,N\}^{N-1}$ from which “ancestors” are drawn according to particle weights. The resampling distribution is an algorithmic choice; specific schemes for the conditional particle filter are described in @ChopinS:2015. Here we will use multinomial resampling throughout. In step 2.3., “normalize the weights” means dividing them by their sum. Instead of bootstrap particle filters [@gordon:salmon:smith:1993], where particles are propagated from the model transition, more sophisticated filters can readily be used in the CPF procedure. For instance, performance gains can be obtained with auxiliary particle filters [@pitt1999filtering; @johansen2008note], as illustrated in Section \[sec:numerics:hiddenar\]. In presenting algorithms we focus on bootstrap particle filters for simplicity. When the transition density is tractable, extensions of the CPF include backward sampling [@whiteleycommentonpmcmc; @LindstenS:2013] and ancestor sampling [@LindstenJS:2014], which is beneficial in the proposed approach as illustrated in Section \[sec:numerics:hiddenar\]. The complexity of a standard CPF update is of order $NT$, and the memory requirements are of order $T + N\log N$ [@jacob2015path].
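A minimal univariate sketch of one bootstrap CPF update, as described above, is given below (our illustration: multinomial resampling, no backward or ancestor sampling, reference trajectory pinned to the last particle).

```python
import numpy as np

rng = np.random.default_rng(4)

def cpf_update(x_ref, y, N, M, F, logg):
    """One conditional particle filter (bootstrap) update.

    x_ref: reference trajectory of length T+1; y: observations y[1..T];
    M, F: random function representations of the initial distribution
    and transition; logg(y_t, x_t): log observation density.
    """
    T = len(x_ref) - 1
    x = np.empty((T + 1, N))
    anc = np.empty((T, N), dtype=int)
    x[0, :-1] = M(rng.normal(size=N - 1))
    x[0, -1] = x_ref[0]                        # pin the reference particle
    logw = np.zeros(N)                         # uniform initial weights
    for t in range(1, T + 1):
        w = np.exp(logw - logw.max()); w /= w.sum()
        a = rng.choice(N, size=N - 1, p=w)     # multinomial resampling
        anc[t - 1, :-1] = a
        anc[t - 1, -1] = N - 1                 # reference keeps its ancestor
        x[t, :-1] = F(x[t - 1, a], rng.normal(size=N - 1))
        x[t, -1] = x_ref[t]
        logw = logg(y[t], x[t])
    # draw the output trajectory and trace back its ancestry
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = rng.choice(N, p=w)
    traj = np.empty(T + 1)
    traj[T] = x[T, k]
    for t in range(T - 1, -1, -1):
        k = anc[t, k]
        traj[t] = x[t, k]
    return traj
```

Iterating `x_ref = cpf_update(x_ref, ...)` produces the Markov chain $(X^{(n)})_{n\geq 0}$ on trajectories that leaves $\pi$ invariant.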
The proposed method relies on CPF kernels but is different from Markov chain Monte Carlo (MCMC) estimators: it involves independent copies of unbiased estimators of $\pi(h)$. Thus it will be amenable to parallel computation and confidence intervals will be constructed in a different way than with standard MCMC output [e.g. Chapter 7 in @gelman2010handbook]; see Section \[sec:comparison\] for a comparison with existing smoothers.
Debiasing Markov chains \[sec:debiasing\]
-----------------------------------------
We briefly recall the debiasing technique of @glynn2014exact; see also @McLeish:2011 [@Rhee:Glynn:2012; @vihola2015unbiased] and references therein. Denote by $(X^{(n)})_{n\geq 0}$ and $({\tilde{X}}^{(n)})_{n\geq 0}$ two Markov chains with invariant distribution $\pi$, initialized from a distribution $\pi_0$. Assume that, for all $n\geq 0$, $X^{(n)}$ and ${\tilde{X}}^{(n)}$ have the same marginal distribution, and that $\lim_{n\to\infty} \mathbb{E}[h(X^{(n)})] = \pi(h)$. Writing the limit as a telescoping sum, and swapping the infinite sum and the expectation, which will be justified later on, we obtain $$\begin{aligned}
\pi(h)
&= \mathbb{E}[h(X^{(0)})] + \sum_{n=1}^\infty \mathbb{E}[h(X^{(n)}) - h(\tilde{X}^{(n-1)})]
= \mathbb{E}[h(X^{(0)}) + \sum_{n=1}^\infty (h(X^{(n)}) - h(\tilde{X}^{(n-1)}))].\end{aligned}$$ Then, if it exists, the random variable $H_0 = h(X^{(0)}) + \sum_{n=1}^\infty (h(X^{(n)}) - h(\tilde{X}^{(n-1)}))$ is an unbiased estimator of $\pi(h)$. Furthermore, if the chains are coupled in such a way that there exists a time $\tau$, termed the *meeting time*, such that $X^{(n)}={\tilde{X}}^{(n-1)}$ almost surely for all $n\geq \tau$, then $H_0$ can be computed as $$H_0 = h(X^{(0)}) + \sum_{n=1}^{\tau - 1} (h(X^{(n)}) - h(\tilde{X}^{(n-1)})). \label{eq:RGestimator}$$ We refer to $H_0$ as a Rhee–Glynn estimator. Given that the cost of producing $H_0$ increases with $\tau$, we would prefer $\tau$ to take small values with large probability. The main contribution of the present article is to couple CPF chains and to use them in a Rhee–Glynn estimation procedure. Section \[sec:newsmoother:theory\] provides guarantees on the cost and the variance of $H_0$ under mild conditions, and Section \[sec:newsmoother:practical\] contains alternative estimators with reduced variance and practical considerations.
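As an illustration of the Rhee–Glynn construction (not the CPF setting of this paper), consider the toy $\pi$-invariant kernel $K(x,\cdot) = \tfrac12\pi(\cdot) + \tfrac12\delta_x(\cdot)$, coupled through a common uniform and a common fresh draw so that both chains meet whenever they refresh. The sketch below computes $H_0$ exactly as in the display above; the target and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pi():
    return rng.normal(3.0, 1.0)      # toy target pi: Normal(3, 1)

def step(x):
    # pi-invariant kernel K(x, .) = 0.5 * pi(.) + 0.5 * delta_x(.)
    return sample_pi() if rng.uniform() < 0.5 else x

def coupled_step(x, x_tilde):
    # coupling of K with itself: when the chains refresh, they refresh
    # to the SAME value and thus meet
    if rng.uniform() < 0.5:
        z = sample_pi()
        return z, z
    return x, x_tilde

def rhee_glynn(h):
    """H_0 = h(X^0) + sum_{n=1}^{tau-1} (h(X^n) - h(Xtilde^{n-1}))."""
    x0 = sample_pi()                 # X^(0) ~ pi_0 (here pi itself)
    x = step(x0)                     # X^(1): the first chain is one step ahead
    x_tilde = sample_pi()            # Xtilde^(0), same marginal law
    estimate = h(x0)
    while x != x_tilde:              # tau = first n with X^(n) = Xtilde^(n-1)
        estimate += h(x) - h(x_tilde)
        x, x_tilde = coupled_step(x, x_tilde)
    return estimate

# averaging independent replicates recovers pi(h) without bias
```

Independent replicates of `rhee_glynn` can be generated in parallel, and a confidence interval for $\pi(h)$ follows from the central limit theorem applied to their average.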
Unbiased smoothing \[sec:unbiasedsmoothing\]
============================================
Coupled conditional particle filters \[sec:ccpf\]
-------------------------------------------------
Our goal is to couple CPF chains $(X^{(n)})_{n\geq 0}$ and $({\tilde{X}}^{(n)})_{n\geq 0}$ such that the meeting time has finite expectation, in order to enable a Rhee–Glynn estimator for smoothing. A coupled conditional particle filter (CCPF) is a Markov kernel on the space of pairs of trajectories, such that $(X^\prime,{\tilde{X}}^\prime)\sim \text{CCPF}((X,{\tilde{X}}), \cdot)$ implies that $X^\prime\sim \text{CPF}(X, \cdot)$ and ${\tilde{X}}^\prime \sim \text{CPF}({\tilde{X}}, \cdot)$.
Algorithm \[alg:coupled-conditional-particle-filter\] describes CCPF in pseudo-code, conditional upon $X = x_{0:T}$ and ${\tilde{X}}= {\tilde{x}}_{0:T}$. Two particle systems are initialized and propagated using common random numbers. The resampling steps and the selection of trajectories at the final step are performed jointly using couplings of discrete distributions. To complete the description of the CCPF procedure, we thus need to specify these couplings (for steps 2.1. and 3.1. in Algorithm \[alg:coupled-conditional-particle-filter\]). With the Rhee–Glynn estimation procedure in mind, we aim at achieving large meeting probabilities $\mathbb{P}(X^\prime = {\tilde{X}}^\prime | X,{\tilde{X}})$, so as to incur short meeting times on average.
Coupled resampling \[sec:couplingparticlesystems\]
--------------------------------------------------
The temporal index $t$ is momentarily removed from the notation: the task is that of sampling pairs $(a,{\tilde{a}})$ such that $\mathbb{P}(a=j)=w^{j}$ and $\mathbb{P}({\tilde{a}}=j)={\tilde{w}}^{j}$ for all $j\in 1:N$; this is a sufficient condition for CPF kernels to leave $\pi$ invariant [@andrieu:doucet:holenstein:2010].
A joint distribution on $\{1,\ldots,N\}^{2}$ is characterized by a matrix $P$ with non-negative entries $P^{ij}$, for $i,j\in\{ 1,\ldots,N\}$, that sum to one. The value $P^{ij}$ represents the probability of the event $(a,{\tilde{a}}) = (i,j)$. We consider the set $\mathcal{J}(w,{\tilde{w}})$ of matrices $P$ such that $P\mathds{1}=w$ and $P^{\mathsf{T}}\mathds{1}={\tilde{w}}$, where $\mathds{1}$ denotes a column vector of $N$ ones, $w = w^{1:N}$ and ${\tilde{w}}= {\tilde{w}}^{1:N}$. Matrices $P\in \mathcal{J}(w,{\tilde{w}})$ are such that $\mathbb{P}(a=j)=w^{j}$ and $\mathbb{P}({\tilde{a}}=j)={\tilde{w}}^{j}$ for $j\in 1:N$.
Any choice of probability matrix $P\in\mathcal{J}(w,{\tilde{w}})$, and of a way of sampling $(a,{\tilde{a}})\sim P$, leads to a *coupled* resampling scheme. In order to keep the complexity of sampling $N$ pairs from $P$ linear in $N$, we focus on a particular choice. Other choices of coupled resampling schemes are given in @deligiannidis2015correlated [@jacob2016coupling; @sen2018coupling], following earlier works such as @pitt2002smooth [@lee2008towards].
We consider the *index-coupled* resampling scheme, used by @ChopinS:2015 in their theoretical analysis of the CPF, and by @jasra2015multilevel in a multilevel Monte Carlo context; see also Section 2.4 in @jacob2016coupling. The scheme amounts to a maximal coupling of discrete distributions on $\{1,\ldots,N\}$ with probabilities $w^{1:N}$ and ${\tilde{w}}^{1:N}$, respectively. This coupling maximizes the probability of the event $\{a = \tilde{a}\}$ under the marginal constraints. How to sample from a maximal coupling of discrete distributions is described e.g. in @lindvall2002lectures. The scheme is intuitive at the initial step of the CCPF, when $x_0^j = {\tilde{x}}_0^j$ for all $j=1,\ldots,N-1$: one would want pairs of ancestors $(a_0,{\tilde{a}}_0)$ to be such that $a_0 = {\tilde{a}}_0$, so that pairs of resampled particles remain identical. At later steps, the number of identical pairs across both particle systems might be small, or even zero. In any case, at step 2.2. of Algorithm \[alg:coupled-conditional-particle-filter\], the same random number $U_{t}^j$ is used to compute $x^j_{t}$ and ${\tilde{x}}^j_{t}$ from their ancestors. If $a_{t-1}^j = {\tilde{a}}_{t-1}^j$, we select ancestor particles that were, themselves, computed with common random numbers at the previous step, and we give them common random numbers again. Thus this scheme maximizes the number of consecutive steps at which common random numbers are used to propagate each pair of particles.
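Sampling from a maximal coupling of two discrete distributions is straightforward; the following Python sketch (the function name and interface are ours) draws a common index from the overlap with probability $\sum_{i} \min(w^i, {\tilde{w}}^i)$, and otherwise draws independently from the residual distributions, which have disjoint supports.

```python
import numpy as np

def maximal_coupling(w, wt, rng):
    """Sample (a, at) with marginals w and wt, maximizing P(a == at)."""
    nu = np.minimum(w, wt)              # overlap between the two distributions
    alpha = nu.sum()                    # probability of drawing a common index
    if rng.uniform() < alpha:
        a = rng.choice(len(w), p=nu / alpha)
        return a, a
    # residual distributions have disjoint supports, so a != at below
    a = rng.choice(len(w), p=(w - nu) / (1.0 - alpha))
    at = rng.choice(len(wt), p=(wt - nu) / (1.0 - alpha))
    return a, at
```

One can check that this scheme satisfies Assumption \[assumption:couplingmatrix\] below: $P^{ii} = \min(w^i, {\tilde{w}}^i) \geq w^i {\tilde{w}}^i$, and when $w^{1:N} = {\tilde{w}}^{1:N}$ the two indices are equal almost surely.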
We now discuss why propagating pairs of particles with common random numbers might be desirable. Under assumptions on the random function representation of the latent process, using common random numbers to propagate pairs of particles results in the particles contracting. For instance, in an auto-regressive model where $F(x,U,\theta) = \theta x + U$, where $\theta \in (-1,1)$ and $U$ is the innovation term, we have $|F(x,U,\theta) - F({\tilde{x}},U,\theta)| = |\theta| |x-{\tilde{x}}|$, thus a pair of particles propagated with common variables $U$ contracts at a geometric rate. We can formulate assumptions directly on the function $x\mapsto \mathbb{E}_U[F(x,U,\theta)]$, such as Lipschitz conditions with respect to $x$, after having integrated $U$ out, for fixed $\theta$. Discussions on these assumptions can be found in @diaconis1999iterated, and an alternative method that would not require them is mentioned in Section \[sec:discussion\].
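For the auto-regressive example, the geometric contraction can be verified numerically; in this Python sketch (values of $\theta$ and the initial states are arbitrary choices of ours), the common innovation cancels in the difference, so the gap between the two particles shrinks deterministically by a factor $|\theta|$ at every step.

```python
import numpy as np

# Two particles of the auto-regressive model propagated with common random
# numbers: the innovation cancels in the difference, so the gap contracts
# by a factor |theta| at every step.
rng = np.random.default_rng(1)
theta = 0.9
x, xt = 5.0, -5.0                      # initial gap of 10
for _ in range(50):
    u = rng.normal()                   # common random number U
    x, xt = theta * x + u, theta * xt + u
gap = abs(x - xt)                      # equals |theta|^50 * 10 up to rounding
```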
Rhee–Glynn smoothing estimator \[sec:rgsmoothing\]
--------------------------------------------------
We now put together the Rhee–Glynn estimator of Section \[sec:debiasing\] with the CCPF algorithm of Section \[sec:ccpf\]. In passing we generalize the Rhee–Glynn estimator slightly by starting the telescopic sum at index $k\geq 0$ instead of zero, and denote it by $H_k$; $k$ becomes a tuning parameter, discussed in Section \[sec:newsmoother:practical\]. The procedure is fully described in Algorithm \[alg:rheeglynnsmoother\]; CPF and CCPF refer to Algorithms \[alg:conditional-particle-filter\] and \[alg:coupled-conditional-particle-filter\] respectively.
By convention the sum from $k+1$ to $\tau-1$ in the definition of $H_k$ is set to zero whenever $k+1>\tau-1$. Thus the estimator $H_k$ is equal to $h(X^{(k)})$ on the event $\{k+1>\tau-1\}$. Recall that $h(X^{(k)})$ is in general a biased estimator of $\pi(h)$, since there is no guarantee that a CPF chain reaches stationarity within $k$ iterations. Thus the term $\sum_{n=k+1}^{\tau - 1}(h(X^{(n)}) - h({\tilde{X}}^{(n-1)}))$ acts as a bias correction.
At step 1. of Algorithm \[alg:rheeglynnsmoother\], the paths $X^{(0)}$ and ${\tilde{X}}^{(0)}$ can be sampled from $\pi_0$ either independently or jointly. In the experiments we will initialize the chains independently, and $\pi_0$ will refer to the distribution of a path randomly chosen among the trajectories of a particle filter.
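The control flow of Algorithm \[alg:rheeglynnsmoother\] can be sketched in Python as follows. To keep the example self-contained and checkable, the CPF and CCPF kernels are replaced by a two-state Markov kernel and its maximal coupling; the kernel matrix, the function names, and the restriction to $k\geq 1$ are our choices, not part of the algorithm.

```python
import numpy as np

K = np.array([[0.7, 0.3],
              [0.4, 0.6]])            # toy Markov kernel standing in for CPF
# its stationary distribution is pi = (4/7, 3/7)

def kernel(x, rng):
    return rng.choice(2, p=K[x])

def coupled_kernel(x, xt, rng):
    """Maximal coupling of K[x] and K[xt]; faithful: equal inputs stay equal."""
    nu = np.minimum(K[x], K[xt])
    alpha = nu.sum()
    if rng.uniform() < alpha:
        j = rng.choice(2, p=nu / alpha)
        return j, j
    j = rng.choice(2, p=(K[x] - nu) / (1.0 - alpha))
    jt = rng.choice(2, p=(K[xt] - nu) / (1.0 - alpha))
    return j, jt

def rhee_glynn(h, x0, xt0, k, rng):
    """H_k = h(X^(k)) + sum_{n=k+1}^{tau-1} (h(X^(n)) - h(Xt^(n-1))), k >= 1."""
    X, Xt = x0, xt0
    X = kernel(X, rng)                 # X is X^(1); Xt is still Xt^(0)
    n, correction = 1, 0.0
    hk = h(X) if k == 1 else None
    met = (X == Xt)
    while not met or n < k:
        n += 1
        X, Xt = coupled_kernel(X, Xt, rng)   # X^(n) and Xt^(n-1)
        if not met:
            met = (X == Xt)            # tau = n at the first occurrence
            if n > k and not met:
                correction += h(X) - h(Xt)
        if n == k:
            hk = h(X)                  # record h(X^(k))
    return hk + correction
```

The toy chain has stationary distribution $(4/7, 3/7)$, so with $h(x)=x$ averages of independent copies of `rhee_glynn` concentrate around $3/7$ even though the chain is started far from stationarity.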
Theoretical properties\[sec:newsmoother:theory\]
================================================
We give three sufficient conditions for the validity of Rhee–Glynn smoothing estimators.
\[assumption:upperbound\] The measurement density of the model is bounded from above: there exists $\bar{g} < \infty$ such that, for all $y\in \mathbb{Y}$ and $x\in\mathbb{X}$, $g(y | x) \leq \bar{g}$.
\[assumption:couplingmatrix\] The resampling probability matrix $P$, with rows summing to $w^{1:N}$ and columns summing to ${\tilde{w}}^{1:N}$, is such that, for all $i\in \{1,\ldots,N\}$, $P^{ii} \geq w^i {\tilde{w}}^i$. Furthermore, if $w^{1:N} = {\tilde{w}}^{1:N}$, then $P$ is a diagonal matrix with entries given by $w^{1:N}$.
\[assumption:mixing\] Let $(X^{(n)})_{n \geq 0}$ be a Markov chain generated by the conditional particle filter and started from $\pi_0$, and $h$ a test function of interest. Then $\mathbb{E}\left[h(X^{(n)})\right] \xrightarrow[n\to \infty]{} \pi(h)$. Furthermore, there exists $\delta > 0$, $n_0 < \infty$ and $C<\infty$ such that, for all $n\geq n_0$, $\mathbb{E}\left[h(X^{(n)})^{2+\delta}\right]\leq C$.
The first assumption is satisfied for wide classes of models where the measurements are assumed to be some transformation of the latent process with added noise. However, it would not be satisfied for instance in stochastic volatility models where it is often assumed that $Y|X=x\sim \mathcal{N}(0,
\exp(x)^2)$ or variants thereof [e.g. @fulop2013efficient]. There, the measurement density would diverge when $y$ is exactly zero and $x\to -\infty$. A similar assumption is discussed in Section 3 of @whiteley2013stability. One can readily check that the second assumption always holds for the index-coupled resampling scheme. The third assumption relates to the validity of MCMC estimators generated by the CPF algorithm, addressed under general assumptions in @ChopinS:2015 [@LindstenDM:2015; @andrieuvihola2013uniform].
Our main result states that the proposed estimator is unbiased, has a finite variance, and that the meeting time $\tau$ has tail probabilities bounded by those of a geometric variable, which implies in particular that the estimator has a finite expected cost.
Under Assumptions \[assumption:upperbound\] and \[assumption:couplingmatrix\], for any initial distribution $\pi_0$, any number of particles $N\geq 2$ and time horizon $T\geq 1$, there exists $\varepsilon>0$, which might depend on $N$ and $T$, such that for all $n\geq 2$, $$\mathbb{P}(\tau > n) \leq (1-\varepsilon)^{n-1},$$ and therefore $\mathbb{E}[\tau]<\infty$. Under the additional Assumption \[assumption:mixing\], the Rhee–Glynn smoothing estimator $H_k$ of Algorithm \[alg:rheeglynnsmoother\] is such that, for any $k\geq 0$, $\mathbb{E}[H_k] = \pi(h)$ and $\mathbb{V}[H_k] < \infty$. \[thm:finitevariance\]
The proof is in Appendices \[sec:proof:intermed\] and \[sec:proof:unbiased\]. Some aspects of the proof, not specific to the smoothing setting, are similar to the proofs of Theorem 1 in @rhee:phd, Theorem 2.1 in @McLeish:2011, Theorem 7 in @vihola2015unbiased, and results in @glynn2014exact. It is provided in univariate notation but the Rhee–Glynn smoother can estimate multivariate smoothing functionals, in which case the theorem applies component-wise.
Improvements and tuning \[sec:newsmoother:practical\]
=====================================================
Since $H_\ell$ is unbiased for all $\ell\geq 0$, we can compute $H_\ell$ for various values of $\ell$ between two integers $k\leq m$, and average these estimators to obtain $H_{k:m}$ defined as $$\begin{aligned}
\label{eq:timeaverage}
H_{k:m} & = \frac{1}{m-k+1}\sum_{n = k}^m \{h(X^{(n)}) + \sum_{\ell = n + 1}^{\tau - 1} (h(X^{(\ell)}) - h({\tilde{X}}^{(\ell-1)}))\} \nonumber \\
&= \frac{1}{m-k+1}\sum_{n = k}^m h(X^{(n)}) + \sum_{n =k + 1}^{\tau - 1} \frac{\min(m-k+1, n-k)}{m-k+1} (h(X^{(n)}) - h({\tilde{X}}^{(n-1)})).\end{aligned}$$ The term $(m-k+1)^{-1} \sum_{n = k}^m h(X^{(n)})$ is a standard ergodic average of a CPF chain, after $m$ iterations and discarding the first $k-1$ steps as burn-in. It is a biased estimator of $\pi(h)$ in general since $\pi_0$ is different from $\pi$. The other term acts as a bias correction. On the event $\tau - 1< k+1$ the correction term is equal to zero.
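The second expression for $H_{k:m}$ can be implemented and checked against the definition as an average of the $H_\ell$; in the Python sketch below, `hX[n]` and `hXt[n]` stand for $h(X^{(n)})$ and $h({\tilde{X}}^{(n)})$, `tau` for the meeting time, and the function name is ours.

```python
import numpy as np

def H_km(hX, hXt, tau, k, m):
    """H_{k:m} from stored evaluations hX[n] = h(X^(n)), hXt[n] = h(Xt^(n))."""
    avg = np.mean(hX[k:m + 1])                       # ergodic average term
    corr = sum(min(m - k + 1, n - k) / (m - k + 1) * (hX[n] - hXt[n - 1])
               for n in range(k + 1, tau))           # bias correction term
    return avg + corr
```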
As $k$ increases the bias of the term $(m-k+1)^{-1} \sum_{n = k}^m h(X^{(n)})$ decreases. The variance inflation of the Rhee–Glynn estimator decreases too, since the correction term is equal to zero with increasing probability. On the other hand, it can be wasteful to set $k$ to an overly large value, in the same way that it is wasteful to discard too many iterations as burn-in when computing MCMC estimators. In practice we propose to choose $k$ according to the distribution of $\tau$, which can be sampled from exactly by running Algorithm \[alg:rheeglynnsmoother\], as illustrated in the numerical experiments of Section \[sec:numerics\]. Conditional upon a choice of $k$, by analogy with MCMC estimators we can set $m$ to a multiple of $k$, such as $2k$ or $5k$. Indeed the proportion of discarded iterations is approximately $k/m$, and it appears desirable to keep this proportion low. We stress that the proposed estimators are unbiased and with a finite variance for any choice of $k$ and $m$; tuning $k$ and $m$ only impacts variance and cost.
For a given choice of $k$ and $m$, the estimator $H_{k:m}$ can be sampled $R$ times independently in parallel. We denote the independent copies by $H_{k:m}^{(r)}$ for $r\in 1:R$. The smoothing expectation of interest $\pi(h)$ can then be approximated by $\bar{H}_{k:m}^R = R^{-1}\sum_{r=1}^R H_{k:m}^{(r)}$, with a variance that decreases linearly with $R$. From the central limit theorem the confidence interval $[\bar{H}_{k:m}^R + z_{\alpha/2} \hat{\sigma}^R/\sqrt{R}, \bar{H}_{k:m}^R + z_{1-\alpha/2} \hat{\sigma}^R/\sqrt{R}]$, where $\hat{\sigma}^R$ is the empirical standard deviation of $(H_{k:m}^{(r)})_{r=1}^R$ and $z_a$ is the $a$-th quantile of a standard Normal distribution, has $1-\alpha$ asymptotic coverage as $R\to \infty$. The central limit theorem is applicable as a consequence of Theorem \[thm:finitevariance\].
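The interval construction can be sketched as follows (Python; the function name is ours and the $97.5\%$ standard Normal quantile is hard-coded for $\alpha = 0.05$).

```python
import numpy as np

def confidence_interval(H, z=1.959963984540054):
    """95% interval for pi(h) from R independent copies H^(r) of H_{k:m};
    z is the 97.5% quantile of the standard Normal distribution."""
    H = np.asarray(H)
    R = len(H)
    center = H.mean()
    half = z * H.std(ddof=1) / np.sqrt(R)
    return center - half, center + half
```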
The variance of the proposed estimator can be further reduced by Rao–Blackwellization. In Eq. \[eq:timeaverage\], the random variable $h(X^{(n)})$ is obtained by applying the test function $h$ of interest to a trajectory drawn among $N$ trajectories, denoted say by $x_{0:T}^k$ for $k=1,\ldots,N$, with probabilities $w_T^{1:N}$; see step 3 in Algorithms \[alg:conditional-particle-filter\] and \[alg:coupled-conditional-particle-filter\]. Thus the random variable $\sum_{k=1}^N w_T^{k}h(x_{0:T}^{k})$ is the conditional expectation of $h(X^{(n)})$ given the trajectories and $w_T^{1:N}$, which has the same expectation as $h(X^{(n)})$. Hence any term $h(X^{(n)})$ or $h({\tilde{X}}^{(n)})$ in $H_{k:m}$ can be replaced by similar conditional expectations. This enables the use of all the paths generated by the CPF and CCPF kernels, and not only the selected ones.
As in other particle methods the choice of the number of particles $N$ is important. Here, the estimator $\bar{H}_{k:m}^R$ is consistent as $R\to \infty$ for any $N\geq 2$, but $N$ affects both the cost and the variance of each $H^{(r)}_{k:m}$. We can generate unbiased estimators for different values of $N$ and compare their costs and variances in preliminary runs. The scaling of $N$ with the time horizon $T$ is explored numerically in Section \[sec:numerics:hiddenar\]. If possible, one can also employ algorithms other than the bootstrap particle filter, as illustrated in Section \[sec:numerics:hiddenar\] with the auxiliary particle filter.
Comparison with existing smoothers \[sec:comparison\]
=====================================================
The proposed method combines elements from both particle smoothers and MCMC methods, but does not belong to either category. We summarize advantages and drawbacks below, after having discussed the cost of the proposed estimators.
Each estimator $H_{k:m}$ requires two draws from $\pi_0$, here taken as the distribution of a trajectory selected from a particle filter with $N$ particles. Then, the estimator as described in Algorithm \[alg:rheeglynnsmoother\] requires a draw from the CPF kernel, $\tau-1$ draws from the CCPF kernel, and finally $m-\tau$ draws from the CPF kernel on the event $\{m>\tau\}$. The cost of a particle filter and of an iteration of CPF is usually dominated by the propagation of $N$ particles and the evaluation of their weights. The cost of an iteration of CCPF is approximately twice as large. Overall the cost of $H_{k:m}$ is thus of order $C(\tau,m,N) = N\times (3+2(\tau-1)+\max(0,m-\tau))$, for fixed $T$. The finiteness of the expected cost $\mathbb{E}[C(\tau,m,N)]$ is a consequence of Theorem \[thm:finitevariance\]. The average $\bar{H}_{k:m}^R$ satisfies a central limit theorem parametrized by the number of estimators $R$, as discussed in Section \[sec:newsmoother:practical\]; however, since the cost of $H_{k:m}$ is random, it might be more relevant to consider central limit theorems parametrized by computational cost, as in @glynn1992asymptotic. The asymptotic inefficiency of the proposed estimators can be defined as $\mathbb{E}[C(\tau,m,N)]\times\mathbb{V}[H_{k:m}]$, which can be approximated with independent copies of $H_{k:m}$ and $\tau$, obtained by running Algorithm \[alg:rheeglynnsmoother\].
State-of-the-art particle smoothers include fixed-lag approximations [@kitagawa2001monte; @cappe:ryden:2004; @olsson2008sequential], forward filtering backward smoothers [@GodsillDW:2004; @del2010forward; @douc2011sequential; @taghavi2013adaptive], and smoothers based on the two-filter formula [@briers2010smoothing; @kantas2015particle]. These particle methods provide consistent approximations as $N\to\infty$, with associated mean squared error decreasing as $1/N$ [Section 4.4 of @kantas2015particle]; except for fixed-lag approximations for which some bias remains. The cost is typically of order $N$ with efficient implementations described in @fearnheadwyncolltawn2010 [@kantas2015particle; @olsson2017efficient], and is linear in $T$ for fixed $N$. Parallelization over the $N$ particles is mostly feasible, the main limitation coming from the resampling step [@murray2015parallel; @lee2015forest; @whiteley2016role; @paige2014asynchronous; @murray2016anytime]. The memory cost of particle filters is of order $N$, or $N\log N$ if trajectories are kept [@jacob2015path], see also @Koskela2018. Assessing the accuracy of particle approximations from a single run of these methods remains a major challenge; see @lee2015variance [@olsson2017numerically] for recent breakthroughs. Furthermore, we will see in Section \[sec:numerics:unlikely\] that the bias of particle smoothers cannot always be safely ignored. On the other hand, we will see in Section \[sec:numerics:pz\] that the variance of particle smoothers can be smaller than that of the proposed estimators, for a given computational cost. Thus, in terms of mean squared error per unit of computational cost, the proposed method is not expected to provide benefits.
The main advantage of the proposed method over particle smoothers lies in the construction of confidence intervals, and the possibility of parallelizing over independent runs as opposed to interacting particles. Additionally, a user of particle smoothers who would want more precise results would increase the number of particles $N$, if enough memory is available, discarding previous runs. On the other hand, the proposed estimator $\bar{H}_{k:m}^R$ can be refined to arbitrary precision by drawing more independent copies of $H_{k:m}$, for a constant memory requirement.
Other popular smoothers belong to the family of MCMC methods. Early examples include Gibbs samplers, updating components of the latent process conditionally on other components and on the observations [e.g. @carter1994gibbs]. The CPF kernel described in Section \[sec:intro:smoothing\] can be used in the standard MCMC way, averaging over as many iterations as possible [@andrieu:doucet:holenstein:2010]. The bias of MCMC estimators after a finite number of iterations is hard to assess, which makes the choice of burn-in period difficult. Asymptotically valid confidence intervals can be produced in various ways, for instance using the CODA package [@plummer2006coda]; see also @vats2018strong. On the other hand, parallelization over the iterations is intrinsically challenging with MCMC methods [@rosenthal2000parallel].
Therefore the proposed estimators have some advantages over existing methods, the main drawback being a potential increase in mean squared error for a given (serial) computational budget, as illustrated in the numerical experiments.
Numerical experiments\[sec:numerics\]
=====================================
We illustrate the tuning of the proposed estimators, their advantages and their drawbacks through numerical experiments. All estimators of this section employ the Rao–Blackwellization technique described in Section \[sec:newsmoother:practical\], and multinomial resampling is used within all filters.
Hidden auto-regressive model\[sec:numerics:hiddenar\]
-----------------------------------------------------
Our first example illustrates the proposed method, the impact of the number of particles $N$ and that of the time horizon $T$, and the benefits of auxiliary particle filters. We consider a linear Gaussian model, with $x_{0}\sim\mathcal{N}\left(0,1\right)$ and $x_{t}=\eta
x_{t-1}+\mathcal{N}\left(0,1\right)$ for all $t \geq 1$, with $\eta=0.9$. We assume that $y_{t}\sim\mathcal{N}\left(x_{t},1\right)$ for all $t \geq 1$.
We first generate $T = 100$ observations from the model, and consider the task of estimating all smoothing means, which corresponds to the test function $h:
x_{0:T}\mapsto x_{0:T}$. With CPF kernels using bootstrap particle filters, with $N = 256$ particles and ancestor sampling [@LindstenJS:2014], we draw meeting times $\tau$ independently, and represent a histogram of them in Figure \[fig:ar1:meetings\]. Based on these meeting times, we can choose $k$ as a large quantile of the meeting times, for instance $k = 10$, and $m$ as a multiple of $k$, for instance $m = 2k = 20$. For this choice, we find the average compute cost of each estimator to approximately equal that of a particle filter with $28\times 256$ particles, with a memory usage equivalent to $2\times 256$ particles. How many of these estimators can be produced in a given wall-clock time depends on available hardware. With $R=100$ independent estimators, we obtain $95\%$ confidence intervals indicated by black error bars in Figure \[fig:ar1:smoothingmeans\]. The true smoothing means, obtained by Kalman smoothing, are indicated by a line.
The method is valid for all $N$, which prompts the question of the optimal choice of $N$. Intuitively, larger values of $N$ lead to smaller meeting times. However, the meeting time cannot be less than $2$ by definition, which leads to a trade-off. We verify this intuition by numerical simulations with $1,000$ independent runs. For $N=16$, $N=128$, $N=256$, $N=512$ and $N=1,024$, we find average meeting times of $97$, $15$, $7$, $4$ and $3$ respectively. After adjusting for the different numbers of particles, the expected cost of obtaining a meeting is approximately equivalent with $N=16$ and $N=512$, but more expensive for $N=1,024$. In practice, for specific integrals of interest, one can approximate the cost and the variance of the proposed estimators for various values of $N$, $k$ and $m$ using independent runs, and use the most favorable configuration in subsequent, larger experiments.
Next we investigate the effect of the time horizon $T$. We expect the performance of the CPF kernel to decay as $T$ increases for a fixed $N$. We compensate by increasing $N$ linearly with $T$. Table \[table:effecthorizon\] reports the average meeting times obtained from $R=500$ independent runs. We see that the average meeting times are approximately constant or slightly decreasing over $T$, implying that the linear scaling of $N$ with $T$ is appropriate or even conservative, in agreement with the literature [e.g. @huggins2015sequential]. The table contains the average meeting times obtained with and without ancestor sampling [@LindstenJS:2014]; we observe significant reductions of average meeting times with ancestor sampling, but it requires tractable transition densities. Finally, for the present model we can employ an auxiliary particle filter, in which particles are propagated conditionally on the next observation. Table \[table:effecthorizon\] shows a significant reduction in expected meeting time. The combination of auxiliary particle filter and ancestor sampling naturally leads to the smallest expected meeting times.
A hidden auto-regressive model with an unlikely observation {#sec:numerics:unlikely}
-----------------------------------------------------------
We now illustrate the benefits of the proposed estimators in an example taken from @ruiz2016particle where particle filters exhibit a significant bias. The latent process is defined as $x_{0}\sim\mathcal{N}\left(0,0.1^{2}\right)$ and $x_{t}=\eta
x_{t-1}+\mathcal{N}\left(0,0.1^{2}\right)$; we take $\eta=0.9$ and consider $T=10$ time steps. The process is observed only at time $T=10$, where $y_{T}=1$ and we assume $y_{T}\sim\mathcal{N}\left(x_{T},0.1^{2}\right)$. The observation $y_{T}$ is unlikely under the model. Therefore the filtering distributions and the smoothing distributions have little overlap, particularly for times $t$ close to $T$. This toy model is a stylized example of settings with highly-informative observations [@ruiz2016particle; @del2015sequential].
We consider the task of estimating the smoothing mean $\mathbb{E}[x_9|y_{10}]$. We run particle filters for different values of $N$, $10,000$ times independently, and plot kernel density estimators of the distributions of the estimators of $\mathbb{E}[x_9|y_{10}]$ in Figure \[fig:unlikely:pf\]. The dashed vertical line represents the estimand $\mathbb{E}[x_9|y_{10}]$, obtained analytically. We see that the bias diminishes when $N$ increases, but that it is still significant with $N=16,384$ particles. For any fixed $N$, if we were to ignore the bias and produce confidence intervals using the central limit theorem based on independent particle filter estimators, the associated coverage would go to zero as the number of independent runs would increase.
In contrast, confidence intervals obtained with the proposed unbiased estimators are shown in Figure \[fig:unlikely:rg\]. For each value of $N$, the average meeting time was estimated from $100$ independent runs (without ancestor sampling), and then $k$ was set to that estimate, and $m$ equal to $k$. Then, $R=10,000$ independent estimators were produced, and confidence intervals were computed as described in Section \[sec:newsmoother:practical\]. This leads to precise intervals for each choice of $N$. The average costs associated with $N=128$, $N=256$, $N=512$ and $N=1024$ matched the costs of particle filters with $3814$, $4952$, $9152$ and $13,762$ particles, respectively. To conclude, if we match computational costs and compare mean squared errors, the proposed method is not necessarily advantageous. However, if the interest lies in confidence intervals with adequate coverage, the proposed approach comes with guarantees thanks to the lack of bias and the central limit theorem for i.i.d. variables.
Prey-predator model \[sec:numerics:pz\]
---------------------------------------
Our last example involves a model of plankton–zooplankton dynamics taken from @jones2010bayesian, in which the transition density is intractable [@breto2009time; @jacob2015sequential]. The bootstrap particle filter is still implementable, and one can either keep the entire trajectories of the particle filter, or perform fixed-lag approximations to perform smoothing. On the other hand, backward and ancestor sampling are not implementable.
The hidden state $x_t = (p_t, z_t)$ represents the population size of phytoplankton and zooplankton, and the transition from time $t$ to $t+1$ is given by a Lotka–Volterra equation, $$\frac{dp_t}{dt} = \alpha p_t - c p_t z_t , \quad \text{and}\quad \frac{dz_t}{dt} = e c p_t z_t -m_l z_t -m_q z_t^2,$$ where the stochastic daily growth rate $\alpha$ is drawn from $\mathcal{N}(\mu_\alpha,\sigma_\alpha^2)$ at every integer time $t$. The propagation of each particle involves solving the above equation numerically using a Runge-Kutta method in the `odeint` library [@ahnert2011odeint]. The initial distribution is given by $\log p_0 \sim \mathcal{N}(\log 2 , 1)$ and $\log z_0 \sim \mathcal{N}(\log 2, 1)$. The parameters $c$ and $e$ represent the clearance rate of the prey and the growth efficiency of the predator. Both $m_l$ and $m_q$ parameterize the mortality rate of the predator. The observations $y_t$ are noisy measurements of the phytoplankton $p_t$, $\log y_t \sim \mathcal{N}(\log
p_t, 0.2^2)$; $z_t$ is not observed. We generate $T = 365$ observations using $\mu_\alpha = 0.7, \sigma_\alpha = 0.5$, $c = 0.25$, $e = 0.3$, $m_l = 0.1$, $m_q = 0.1$. We consider the problem of estimating the mean population of zooplankton at each time $t\in0:T$, denoted by $\mathbb{E}[z_t|y_{1:T}]$, given the data-generating parameter.
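One transition of the latent process can be sketched as follows; a hand-rolled fixed-step fourth-order Runge–Kutta integrator stands in for the `odeint` call, and the step size `dt` and function names are our choices.

```python
import numpy as np

def pz_step(p, z, alpha, theta, dt=0.1):
    """Propagate (p_t, z_t) over one unit of time with a fixed-step RK4
    integrator (standing in for odeint). theta = (c, e, ml, mq); alpha is
    the growth rate sampled at this integer time."""
    c, e, ml, mq = theta
    def f(u):
        p_, z_ = u
        return np.array([alpha * p_ - c * p_ * z_,
                         e * c * p_ * z_ - ml * z_ - mq * z_ ** 2])
    u = np.array([p, z], dtype=float)
    for _ in range(int(round(1.0 / dt))):
        k1 = f(u)
        k2 = f(u + dt / 2 * k1)
        k3 = f(u + dt / 2 * k2)
        k4 = f(u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u[0], u[1]
```

A sanity check: with $z_t = 0$ the prey equation reduces to $dp_t/dt = \alpha p_t$, whose exact solution over one unit of time is $p_t e^{\alpha}$.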
The distribution of meeting times obtained with $N=4,096$ particles over $R=1,000$ experiments is shown in Figure \[fig:pz:meetings\]. Based on this graph, we choose $k=7$, $m=2k=14$, and produce $R=1,000$ independent estimators of the smoothing means $\mathbb{E}[z_t|y_{1:T}]$. We compute the smoothing means with a long CPF chain, taken as ground truth. We then compute the relative variance of our estimators, defined as their variance divided by the square of the smoothing means. We find the average cost of the proposed estimator to be equivalent to that of a particle filter with $78,377$ particles. To approximately match the cost, we thus run particle filters with $2^{16}=65,536$ particles, with and without fixed-lag smoothing with a lag of $10$. The resulting relative variances are shown in Figure \[fig:pz:relvar\]. We see that the proposed estimators yield a larger variance than particle filters, but that the difference is manageable. Fixed-lag smoothing provides significant variance reduction, particularly for earlier time indices. We can also verify that the bias of fixed-lag smoothing is negligible in the present example; this would however be hard to assess with fixed-lag smoothers alone.
Discussion\[sec:discussion\]
============================
The performance of the proposed estimator is tied to the meeting time. As in @ChopinS:2015, the coupling inequality [@lindvall2002lectures] can be used to relate the meeting time with the mixing of the underlying conditional particle filter kernel. The proposed approach can be seen as a framework to parallelize CPF chains and to obtain reliable confidence intervals over independent replicates. Any improvement in the CPF directly translates into more efficient Rhee–Glynn estimators, as we have illustrated in Section \[sec:numerics:hiddenar\] with auxiliary particle filters and ancestor sampling. The methods proposed e.g. in @SinghLM:2017 [@del2015sequential; @guarniero2015iterated; @gerber2015sequential; @heng2017controlled] could also be used in Rhee–Glynn estimators, with the hope of obtaining shorter meeting times and smaller variance.
We have considered the estimation of latent processes given known parameters. In the case of unknown parameters, joint inference of parameters and latent processes can be done with MCMC methods, and particle MCMC methods in particular [@andrieu:doucet:holenstein:2010]. Couplings of generic particle MCMC methods could be achieved by combining couplings proposed in the present article with those described in @jacob2017unbiased for Metropolis–Hastings chains. Furthermore, for fixed parameters, coupling the particle independent Metropolis–Hastings algorithm of @andrieu:doucet:holenstein:2010 would lead to unbiased estimators of smoothing expectations that would not require coupled resampling schemes (see Section \[sec:couplingparticlesystems\]).
The appeal of the proposed smoother, namely parallelization over independent replicates and confidence intervals, would be shared by perfect samplers. These algorithms aim at the more ambitious task of sampling exactly from the smoothing distribution [@leedoucetperfectsimulation]. It remains unknown whether the proposed approach could play a role in the design of perfect samplers. We have established the validity of the Rhee–Glynn estimator under mild conditions, but its theoretical study as a function of the time horizon and the number of particles deserves further analysis [see @Lee2018ccbpf for a path forward]. Finally, together with Fisher’s identity [@douc:moulines:2014], the proposed smoother provides unbiased estimators of the score for models where the transition density is tractable. This could help maximize the likelihood via stochastic gradient ascent.
**Acknowledgements.** The authors thank Marco Cuturi, Mathieu Gerber, Jeremy Heng and Anthony Lee for helpful discussions. This work was initiated during the workshop on *Advanced Monte Carlo methods for complex inference problems* at the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK held in April 2014. We would like to thank the organizers for a great event which led to this work.
Intermediate result on the meeting probability \[sec:proof:intermed\]
=====================================================================
Before proving Theorem \[thm:finitevariance\], we introduce an intermediate result on the probability of the chains meeting at the next step, irrespective of their current states. The result provides a lower-bound on the probability of meeting in one step, for coupled chains generated by the coupled conditional particle filter (CCPF) kernel.
Let $N\geq 2$ and $T\geq 1$ be fixed. Under Assumptions \[assumption:upperbound\] and \[assumption:couplingmatrix\], there exists $\varepsilon>0$, depending on $N$ and $T$, such that $$\forall X \in \mathbb{X}^{T+1}, \quad \forall {\tilde{X}}\in \mathbb{X}^{T+1}, \quad \mathbb{P}(X' = {\tilde{X}}' | X, {\tilde{X}}) \geq \varepsilon,$$ where $(X',{\tilde{X}}') \sim \text{CCPF}((X,{\tilde{X}}), \cdot)$. Furthermore, if $X = {\tilde{X}}$, then $X' = {\tilde{X}}'$ almost surely. \[lemma:meetingprobability\]
The constant $\varepsilon$ depends on $N$ and $T$, and on the coupled resampling scheme being used. Lemma \[lemma:meetingprobability\] can be used, together with the coupling inequality [@lindvall2002lectures], to prove the ergodicity of the conditional particle filter kernel, which is akin to the approach of @ChopinS:2015. The coupling inequality states that the total variation distance between $X^{(n)}$ and ${\tilde{X}}^{(n-1)}$ is less than $2\mathbb{P}(\tau > n)$, where $\tau$ is the meeting time. By assuming ${\tilde{X}}^{(0)}\sim\pi$, ${\tilde{X}}^{(n)}$ follows $\pi$ at each step $n$, and we obtain a bound for the total variation distance between $X^{(n)}$ and $\pi$. Using Lemma \[lemma:meetingprobability\], we can bound the probability $\mathbb{P}(\tau > n)$ from above by $(1-\varepsilon)^n$, as in the proof of Theorem \[thm:finitevariance\] below. This implies that the computational cost of the proposed estimator has a finite expectation for all $N\geq 2$ and $T\geq 1$.
*Proof of Lemma \[lemma:meetingprobability\]*. We write ${{\mathbb{P}}_{x_{0:t},\tilde x_{0:t}}}$ and ${{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}$ for the conditional probability and expectation, respectively, with respect to the law of the particles generated by the CCPF procedure conditionally on the reference trajectories up to time $t$, $(x_{0:t}, \tilde x_{0:t})$. Furthermore, let $\mathcal{F}_t$ denote the filtration generated by the CCPF at time $t$. We denote by $x_{0:t}^k$, for $k\in1:N$, the surviving trajectories at time $t$. Let $I_t \subseteq 1:N-1$ be the set of common particles at time $t$ defined by $I_t = \{j \in 1:N-1 : x_{0:t}^j = \tilde x_{0:t}^j \}$. The meeting probability can then be bounded by: $$\begin{gathered}
{{\mathbb{P}}_{x_{0:T},\tilde x_{0:T}}}(x_{0:T}^\prime = \tilde x_{0:T}^\prime) = {{\mathbb{E}}_{x_{0:T},\tilde x_{0:T}}}\left[{\mathds{1}}\!\left(x_{0:T}^{b_T} = \tilde x_{0:T}^{\tilde{b}_T} \right)\right]
\geq \sum_{k=1}^{N-1} {{\mathbb{E}}_{x_{0:T},\tilde x_{0:T}}}[{\mathds{1}}\!\left(k \in I_T\right) P_T^{kk}] \\
= (N-1){{\mathbb{E}}_{x_{0:T},\tilde x_{0:T}}}[{\mathds{1}}\!\left(1\in I_T \right) P_T^{11}]
\geq \frac{N-1}{ (N\bar{g})^2} {{\mathbb{E}}_{x_{0:T},\tilde x_{0:T}}}[{\mathds{1}}\!\left(1\in I_T \right) g_T(x_T^1) g_T(\tilde x_T^1)],\end{gathered}$$ where we have used Assumptions \[assumption:upperbound\] and \[assumption:couplingmatrix\].
Now, let $\psi_t : {\mathbb{X}}^t \mapsto {\mathbb{R}}_+$ and consider $$\begin{aligned}
\label{eq:crude:h}
{{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[{\mathds{1}}\!\left( 1\in I_t \right) \psi_t(x_{0:t}^1) \psi_t(\tilde x_{0:t}^1)] =
{{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[{\mathds{1}}\!\left( 1\in I_t \right) \psi_t(x_{0:t}^1)^2],\end{aligned}$$ since the two trajectories agree on $\{1\in I_t\}$. We have $$\begin{aligned}
{\mathds{1}}\!\left( 1\in I_t \right) \geq \sum_{k=1}^{N-1} {\mathds{1}}\!\left(k\in I_{t-1} \right) {\mathds{1}}\!\left(a_{t-1}^1 = \tilde a_{t-1}^1 = k \right),\end{aligned}$$ and thus $$\begin{gathered}
\label{eq:crude:h2}
{{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[{\mathds{1}}\!\left( 1\in I_t \right) \psi_t(x_{0:t}^1)^2] \\
\geq {{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[\sum_{k=1}^{N-1} {\mathds{1}}\!\left(k\in I_{t-1} \right) {{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[ {\mathds{1}}\!\left(a_{t-1}^1 = \tilde a_{t-1}^1 = k \right) \psi_t(x_{0:t}^1)^2 \mid \mathcal{F}_{t-1} ]] \\
= (N-1){{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[{\mathds{1}}\!\left(1\in I_{t-1} \right) {{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[ {\mathds{1}}\!\left(a_{t-1}^1 = \tilde a_{t-1}^1 = 1 \right) \psi_t(x_{0:t}^1)^2 \mid \mathcal{F}_{t-1} ]].\end{gathered}$$ The inner conditional expectation can be computed as $$\begin{gathered}
\label{eq:cruce:h2-inner}
{{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[ {\mathds{1}}\!\left(a_{t-1}^1 = \tilde a_{t-1}^1 = 1 \right) \psi_t(x_{0:t}^1)^2 \mid \mathcal{F}_{t-1} ] \\
=\sum_{k,\ell=1}^N P_{t-1}^{k\ell} {\mathds{1}}\!\left(k=\ell=1\right) \int \psi_t((x_{0:t-1}^k, x_t ))^2 f(dx_t|x_{t-1}^k) \\
= P_{t-1}^{11} \int \psi_t((x_{0:t-1}^1, x_t))^2 f(dx_t|x_{t-1}^1) \\
\geq \frac{g_{t-1}(x_{t-1}^1) g_{t-1}(\tilde x_{t-1}^1) }{(N\bar{g})^2} \left( \int \psi_t((x_{0:t-1}^1, x_t )) f(dx_t|x_{t-1}^1) \right)^2,\end{gathered}$$ where we have again used Assumptions \[assumption:upperbound\] and \[assumption:couplingmatrix\]. Note that this expression is independent of the final states of the reference trajectories, $(x_t, \tilde x_t)$, which can thus be dropped from the conditioning. Furthermore, on $\{1\in I_{t-1}\}$ it holds that $x_{0:t-1}^1 = \tilde x_{0:t-1}^1$ and therefore, combining the three preceding displays, we get $$\begin{gathered}
{{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[{\mathds{1}}\!\left( 1\in I_t \right) \psi_t(x_{0:t}^1) \psi_t(\tilde x_{0:t}^1)] \\
\geq \frac{(N-1)}{(N\bar{g})^2}{{\mathbb{E}}_{x_{0:t-1},\tilde x_{0:t-1}}}\Big[{\mathds{1}}\!\left(1\in I_{t-1} \right) g_{t-1}(x_{t-1}^1) \int \psi_t((x_{0:t-1}^1, x_t )) f(dx_t|x_{t-1}^1) \\ \times
g_{t-1}(\tilde x_{t-1}^1) \int \psi_t((\tilde x_{0:t-1}^1, x_t )) f(dx_t|\tilde x_{t-1}^1)
\Big].\end{gathered}$$ Thus, if we define for $t=1,\ldots,T-1$, $\psi_t(x_{0:t}) = g_t(x_t) \int \psi_{t+1}(x_{0:t+1}) f(dx_{t+1}|x_t)$, and $\psi_T(x_{0:T}) = g_T(x_T)$, it follows that $$\begin{aligned}
{{\mathbb{P}}_{x_{0:T},\tilde x_{0:T}}}(x_{0:T}^\prime= \tilde x_{0:T}^\prime) &\geq \frac{(N-1)^{T}}{(N\bar{g})^{2T}} {{\mathbb{E}}_{x_{0},\tilde x_{0}}}[{\mathds{1}}\!\left(1\in I_1 \right) \psi_1(x_1^1)\psi_1(\tilde x_1^1)] \\
&= \frac{(N-1)^{T}}{(N\bar{g})^{2T}} {{\mathbb{E}}_{x_{0},\tilde x_{0}}}[\psi_1(x_1^1)^2] \geq \frac{(N-1)^{T}}{(N\bar{g})^{2T}} Z^2 > 0,\end{aligned}$$ where $Z > 0$ is the normalizing constant of the model, $Z=\int m_0(dx_0) \prod_{t=1}^{T}g_t(x_t) f(dx_t|x_{t-1})$. This concludes the proof of Lemma \[lemma:meetingprobability\].
For any fixed $T$, the bound goes to zero as $N\to \infty$. The proof therefore fails to capture the actual behaviour of $\varepsilon$ in Lemma \[lemma:meetingprobability\] as a function of $N$ and $T$: in the numerical experiments of Section \[sec:numerics\], we observe that meeting times in fact decrease when $N$ increases.
Proof of Theorem \[thm:finitevariance\] \[sec:proof:unbiased\]
==============================================================
The proof is similar to those presented in @rhee:phd, @McLeish:2011, @vihola2015unbiased, and @glynn2014exact. We first upper-bound $\mathbb{P}\left(\tau>n\right)$, for all $n\geq2$, using Lemma \[lemma:meetingprobability\] [e.g. @williams1991probability exercise E.10.5]. We obtain for all $n\geq2$, $$\mathbb{P}\left(\tau>n\right)\leq\left(1-\varepsilon\right)^{n-1}.\label{eq:meetingtime:survival2}$$ This ensures that $\mathbb{E}[\tau]$ is finite and that $\tau$ is almost surely finite. We then introduce the random variables $Z_{m}=\sum_{n=0}^{m} \Delta^{(n)}$ for all $m\geq 1$. Since $\tau$ is almost surely finite, and since $\Delta^{(n)} = 0$ for all $n \geq \tau$, we have $Z_m\to Z_\tau = H_0$ almost surely as $m\to\infty$. We prove that $(Z_m)_{m\geq 1}$ is a Cauchy sequence in $L_2$, i.e. that $\sup_{m'\geq m} \mathbb{E}\left[ (Z_{m'} - Z_m)^2 \right]$ goes to $0$ as $m\to\infty$. We write $$\begin{aligned}
\label{eq:zcauchy}
\mathbb{E}[(Z_{m'} - Z_m)^2] &= \sum_{n = m + 1}^{m'}\sum_{\ell = m + 1}^{m'} \mathbb{E}[\Delta^{(n)}\Delta^{(\ell)}].\end{aligned}$$ We use the Cauchy-Schwarz inequality to write $(\mathbb{E}[\Delta^{(n)}\Delta^{(\ell)}])^2 \leq \mathbb{E}[(\Delta^{(n)})^2]\mathbb{E}[(\Delta^{(\ell)})^2]$, and we note that $(\Delta^{(n)})^2= (\Delta^{(n)})^2\,\mathds{1}(\tau>n)$. Together with Hölder’s inequality with $p=1+\delta/2$, and $q=(2+\delta)/\delta$, where $\delta$ is as in Assumption \[assumption:mixing\], we can write $$\begin{aligned}
\mathbb{E}\left[(\Delta^{(n)})^{2}\right] & \leq\mathbb{E}\left[(\Delta^{(n)})^{2+\delta}\right]^{1/(1+\delta/2)}\left(\left(1-\varepsilon\right)^{\delta/(2+\delta)}\right)^{n-1}.\end{aligned}$$ Furthermore, using Assumption \[assumption:mixing\] and Minkowski’s inequality, we obtain the bound $$\begin{aligned}
\forall n\geq n_0, \qquad & \mathbb{E}\left[(\Delta^{(n)})^{2+\delta}\right]^{1/(1+\delta/2)}\leq C_{1},\end{aligned}$$ where $C_1$ is independent of $n$. The above inequalities imply that each term $\mathbb{E}[\Delta^{(n)}\Delta^{(\ell)}]$ is upper bounded by an expression of the form $C_1 \eta^n \eta^\ell$, with $\eta \in (0,1)$. Thus we can bound the double sum above by summing geometric series, and finally conclude that $(Z_m)_{m \geq 1}$ is a Cauchy sequence in $L_2$.
By uniqueness of the limit, since $(Z_m)_{m \geq 1}$ goes almost surely to $H_0$, $(Z_m)_{m \geq 1}$ goes to $H_0$ in $L_2$. This shows that $H_0$ has finite first two moments. We can retrieve the expectation of $H_0$ by $$\mathbb{E}Z_{m}=\sum_{n=0}^{m}\mathbb{E}[\Delta^{(n)}]=\mathbb{E}\left[h(X^{(m)})\right] \xrightarrow[m\to \infty]{} \pi(h),$$ according to Assumption \[assumption:mixing\]. This concludes the proof of Theorem \[thm:finitevariance\] for $H_k$ with $k=0$, and a similar reasoning applies for any $k\geq 0$.
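The telescoping construction $Z_\tau = \sum_{n=0}^{\tau-1} \Delta^{(n)}$ can be illustrated on a toy example. The sketch below is ours, not the authors' code: the two-state kernel, the common-random-number coupling, and all variable names are assumptions chosen so that the lagged chains meet geometrically fast and the stationary distribution is known ($\pi(h)=0.5$ for $h(x)=x$); averaging many replicates recovers $\pi(h)$ without any burn-in bias.

```python
import random

def step(x, u):
    # Two-state kernel: from 0 move to 1 w.p. 0.3; from 1 stay at 1 w.p. 0.7.
    # Its stationary distribution is pi = (0.5, 0.5).
    p = 0.3 if x == 0 else 0.7
    return 1 if u < p else 0

def unbiased_estimate(rng, h=lambda x: float(x)):
    x, xt = 0, 0                 # X^{(0)} and X_tilde^{(0)}, both started at 0
    total = h(x)                 # Delta^{(0)} = h(X^{(0)})
    x = step(x, rng.random())    # advance X so it leads X_tilde by one step
    while x != xt:               # stop at tau: X^{(n)} = X_tilde^{(n-1)}
        total += h(x) - h(xt)    # Delta^{(n)} = h(X^{(n)}) - h(X_tilde^{(n-1)})
        u = rng.random()         # common random number: once met, stay met
        x, xt = step(x, u), step(xt, u)
    return total                 # Z_tau = H_0

rng = random.Random(0)
estimates = [unbiased_estimate(rng) for _ in range(100_000)]
mean = sum(estimates) / len(estimates)   # close to pi(h) = 0.5
```

For this kernel one can check by hand that the estimator equals a geometric number of $+1$ corrections with probability $0.3$ and zero otherwise, giving expectation exactly $0.5$.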
[^1]: The authors gratefully acknowledge the Swedish Foundation for Strategic Research (SSF) via the projects *Probabilistic Modeling and Inference for Machine Learning* (contract number: ICA16-0015) and ASSEMBLE (contract number: RIT15-0012), the Swedish Research Council (VR) via the projects *Learning of Large-Scale Probabilistic Dynamical Models* (contract number: 2016-04278) and *NewLEADS - New Directions in Learning Dynamical Systems* (contract number: 621-2016-06079), and the National Science Foundation through grant DMS-1712872.
Mai-Mai
The term Mayi-Mayi or Mai-Mai refers to any kind of community-based militia group active in the Democratic Republic of the Congo (DRC), formed to defend its local territory against other armed groups. Most were formed to resist the invasion of Rwandan forces and Rwanda-affiliated Congolese rebel groups, but some may have formed to exploit the war for their own advantage by looting, cattle rustling or banditry.
Groups that fall under the umbrella term "Mai-Mai" include armed forces led by warlords, traditional tribal elders, village heads, and politically motivated resistance fighters. Because Mai-Mai groups have had only the most tenuous internal cohesion, different Mai-Mai groups allied themselves with a variety of domestic and foreign government and guerrilla groups at different times. The term Mai-Mai does not refer to any particular movement, affiliation or political objective but to a broad variety of groups.
Mai-Mai were particularly active in the eastern Congolese provinces bordering Rwanda, North Kivu and South Kivu (the "Kivus"), which were under the control of the Rwanda-allied Banyamulenge-dominated rebel faction, the Rally for Congolese Democracy–Goma (RCD-Goma) during the Second Congo War. While militias have long been common in the Kivus, particularly among the minority Batembo and Babembe ethnic groups, the recent wars and conflicts caused large numbers of town dwellers to form Mai-Mai. Although the Mai-Mai, either as a group or as individual groups, were not party to the 1999 Lusaka Accord meant to end the Second Congo War, they remained one of the most powerful forces in the conflict and the lack of cooperation from some groups has been problematic for the peace process.
Mai-Mai in North and South Kivu
According to a 2001 UN report, 20,000 to 30,000 Mai-Mai were active in the two Kivu provinces. The two most powerful and well-organized Mai-Mai groups in the Kivus were led by Generals Padiri and Dunia. Currently the most active is Mai-Mai Yakutumba, organized in 2007 by General Yakutumba. They were reported to have received aid from the government of the Democratic Republic of Congo and are widely viewed by other Mai-Mai groups as the leaders, though not the commanders, of the Kivu Mai-Mai. A number of smaller Mai-Mai groups, such as the Mudundu 40/Front de Résistance et de Défense du Kivu (FRDKI) and Mouvement de Lutte contre l'Agression au Zaïre/Forces Unies de Résistance Nationale contre l'Agression de la République Démocratique du Congo (MLAZ/FURNAC), were reported to cooperate with the Rwandan military and Rally for Congolese Democracy–Goma (RCD-Goma).
Walikale and Masisi north of Goma were the centres of Mai-Mai activity in North Kivu. In South Kivu, there have historically been concentrations around Walungu and Bunyakiri south of Lake Kivu, around Uvira and Mwenaga at the northern end of Lake Tanganyika, further south around Fizi, and around Shabunda, between the Rwandan border and Kindu.
A Mai-Mai leader, Colonel Mayele, was arrested by UN forces in October 2010, allegedly being the leader behind mass rapes in the Walikale region of North Kivu province.
Mai-Mai in Katanga
A former leader of the Mai-Mai, Gédéon Kyungu Mutanga, turned himself over to MONUC troops in May 2006. He was found guilty of numerous war crimes between October 2003 and May 2006 and was sentenced to death by the Kipushi Military Tribunal in Katanga Province on 6 March 2009. He escaped from prison in September 2011 and formed the Mai-Mai Kata Katanga ("Secede Katanga").
Other Mai-Mai groups
There was a large Mai-Mai presence in Maniema, in particular around Kindu and Kalemie. Province Orientale also hosts a number of Mai-Mai, but these groups were apparently involved in long-standing ethnic disputes.
Mai-Mai Gedeon is also commanded by Gedeon Kyungu Mutanga and loosely tied to his Mai-Mai Kata Katanga. The Corak Kata Katanga, also known as the Co-ordination for a Referendum on Self-determination for Katanga, is composed mainly of former Katanga Tigers, a separatist group active in the 1960s. It claims to be behind the attack on the Katanga airport in February 2011. It is unclear to what extent all these groups are co-ordinated.
The Nduma Defense of Congo (or Mai-Mai Sheka) was formed in 2009 by former minerals trader Ntabo Ntaberi Sheka, an ethnic Nyanga. Sheka claims the group was formed to liberate the mines of Walikale Territory in North Kivu. The NDC is accused of a mass rape of at least 387 women, men, and children over a three-day span in Walikale in 2010.
Mai-Mai and the mountain gorillas
In May 2007, Mai-Mai killed two wildlife officers in Virunga National Park and threatened to kill mountain gorillas if the government retaliated. The Mai-Mai are also suspected of the killings of nine mountain gorillas with machetes and automatic weapons. In an October 2012 incident, Mai-Mai killed two park staff and a soldier, while three soldiers were injured. From 1990 to 2018 some 170 Virunga rangers died in such attacks, according to the World Wildlife Fund.
Six Virunga National Park staff were reported to have been killed in the park: five rangers and a driver died in an ambush, and another ranger was injured, in the central section of the vast reserve on April 9, 2018. Officials suspected the attacks were by the Mai-Mai.
See also
Resistance Patriots Maï-Maï
Mai-Mai Kata Katanga
Gédéon Kyungu Mutanga
References
External links
Global Security description
UN Assessment of armed groups in Congo, 1 April 2002
National Geographic
Mai-Mai atrocities included cannibalism
Category:Factions of the Second Congo War
Category:History of the Democratic Republic of the Congo
Category:History of Rwanda
Category:Rebel groups in the Democratic Republic of the Congo
Category:Rebel groups that actively control territory
Category:Vigilantism
# Project-wide Gradle settings.
# IDE (e.g. Android Studio) users:
# Gradle settings configured through the IDE *will override*
# any settings specified in this file.
# For more details on how to configure your build environment visit
# http://www.gradle.org/docs/current/userguide/build_environment.html
# Specifies the JVM arguments used for the daemon process.
# The setting is particularly useful for tweaking memory settings.
org.gradle.jvmargs=-Xmx1024m
# When configured, Gradle will run in incubating parallel mode.
# This option should only be used with decoupled projects. More details, visit
# http://www.gradle.org/docs/current/userguide/multi_project_builds.html#sec:decoupled_projects
# org.gradle.parallel=true
package com.tencent.mm.ui.chatting;
import android.view.View;
import android.view.ViewStub;
import android.view.animation.AnimationUtils;
import android.widget.ListView;
import com.tencent.mm.e.a.nq;
import com.tencent.mm.plugin.sight.encode.ui.ChattingSightContainerView.a;
import com.tencent.mm.sdk.c.a;
import com.tencent.mm.sdk.platformtools.ac;
import com.tencent.mm.ui.j;
import com.tencent.mm.ui.o;
final class ChattingUI$a$84$2
implements ChattingSightContainerView.a
{
View lBB = null;
ChattingUI$a$84$2(ChattingUI.a.84 param84) { lBA = param84; } // decompiler dropped the captured outer-instance assignment
public final void azd()
{
nq localnq = new nq();
localnq.avS.type = 6;
a.kug.y(localnq);
lBA.lAY.setRequestedOrientation(1);
lBA.lAY.Xk();
lBA.lAY.bkT();
lBA.lAY.blj();
if (lBB == null) {
lBB = ((ViewStub)lBA.lAY.findViewById(2131755932)).inflate();
}
lBB.setVisibility(0);
lBB.startAnimation(AnimationUtils.loadAnimation(lBA.lAY.kNN.kOg, 2130968612));
}
public final void onHide()
{
lBA.lAY.setRequestedOrientation(-1);
lBA.lAY.bkT();
if ((lBB != null) && (lBB.getVisibility() == 0))
{
lBB.setVisibility(8);
lBB.startAnimation(AnimationUtils.loadAnimation(lBA.lAY.kNN.kOg, 2130968613));
}
new ac().post(new Runnable()
{
public final void run()
{
nq localnq = new nq();
localnq.avS.type = 7;
localnq.avS.avT = ChattingUI.a.e(lBA.lAY).getFirstVisiblePosition();
localnq.avS.avU = ChattingUI.a.e(lBA.lAY).getLastVisiblePosition();
localnq.avS.avV = ChattingUI.a.e(lBA.lAY).getHeaderViewsCount();
a.kug.y(localnq);
}
});
}
}
/* Location:
* Qualified Name: com.tencent.mm.ui.chatting.ChattingUI.a.84.2
* Java Class Version: 6 (50.0)
* JD-Core Version: 0.7.1
*/
---
author:
- 'Dimitrios A. Gouliermis'
- Stefan Schmeja
- Volker Ossenkopf
- 'Ralf S. Klessen'
- 'Andrew E. Dolphin'
title: Hierarchically Clustered Star Formation in the Magellanic Clouds
---
Method: The identification of stellar clusters {#sec:1}
==============================================
For the investigation of the clustering behavior of stars it is necessary to thoroughly characterize distinct concentrations of stars, which can only be achieved by the accurate identification of individual stellar clusters. Considering the importance of this process, different identification methods were developed, which can be classified into two families. The first, represented by [*friends-of-friends*]{} algorithms and [*cluster analysis*]{} techniques, e.g., [@battinelli96], is designed for limited samples of observed stars, and is thus based on linking individual stars into coherent stellar groups. These methods have recently been superseded by [*minimum spanning trees*]{}, e.g., [@bastian09]. The second family of identification codes, represented by [*nearest-neighbors*]{} and [*star-counts*]{}, makes use of surface stellar density maps constructed from rich observed stellar samples. Distinct stellar systems are identified as statistically significant over-densities with respect to the average stellar density in the observed regions, e.g., [@gouliermis10]. Tests on artificial clusters of various density gradients and shapes showed that the latter (density) techniques are more robust in detecting real stellar concentrations, provided that rich stellar samples are available [@schmeja11]. A schematic representation of stellar density maps constructed with star-counts is shown in Fig. \[fig:1\].
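The star-count procedure can be sketched in a few lines. The following illustrative reconstruction is ours, not the authors' code: the synthetic catalogue, grid size, smoothing width and significance threshold are assumptions, loosely following the values quoted in Fig. \[fig:1\].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
# Synthetic "observed" sample: a uniform field plus one injected cluster
field_stars = rng.uniform(0, 100, size=(2000, 2))
cluster = rng.normal(loc=50, scale=3, size=(500, 2))
stars = np.vstack([field_stars, cluster])

# Star counts on a quadrilateral grid of pixels, then Gaussian smoothing
counts, _, _ = np.histogram2d(stars[:, 0], stars[:, 1], bins=55,
                              range=[[0, 100], [0, 100]])
density = gaussian_filter(counts, sigma=2.8 / 2.355)  # FWHM ~ 2.8 px

# Statistically significant over-densities w.r.t. the average density
signif = (density - density.mean()) / density.std()
detected = signif >= 3.0                     # the ">= 3 sigma" isopleths
peak = np.unravel_index(np.argmax(density), density.shape)
```

Thresholding `signif` at different levels reproduces the nested isopleths of the contour maps discussed below.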
![Schematic of the star-count process. (a) The chart of an observed stellar sample. (b) The corresponding stellar density map, after counting stars in a quadrilateral grid of elements (pixels) of size $1.8\arcsec$ each, and after filtering the map with a Gaussian of FWHM$\simeq2.8$px ($\sim$5$\arcsec$). (c) The corresponding isodensity contour map. Isopleths at levels $\geq 3\sigma$ are indicated with white lines. \[fig:1\]](fig1.ps)
![Isodensity contour map from star-counts of the young bright main-sequence and faint PMS populations identified with HST/ACS in the region of NGC 346 in the SMC. Lines represent isopleths of significance $\geq 1\sigma$. Apart from the dominating central large stellar aggregate, there are peripheral young sub-clusters, revealed as statistically important stellar concentrations. The central aggregate, denoted by the 1$\sigma$ isopleth, encompasses various distinct sub-groups, which appear at higher density thresholds. NGC 346 itself appears at $\geq 3\sigma$ significance. \[fig:2\]](fig2.ps)
Data: Stellar clustering in the region NGC 346/N66 {#sec:2}
==================================================
One of the most prominent bright stellar systems in the Small Magellanic Cloud (SMC) is the stellar association NGC 346, related to the region LHA 115-N66 [@henize56], the brightest in this galaxy. This system appears in partially-resolved observations from the ground as a single stellar concentration, but recent imaging with the [*Advanced Camera for Surveys*]{} onboard the Hubble Space Telescope (HST) allowed the detection of smaller sub-clusters within the boundaries of the nebula. The images were collected within the HST GO Program 10248 and were retrieved from the HST Data Archive. Their photometry demonstrated that the faint young stellar populations in the region are still in their pre–main-sequence (PMS) phase, and revealed a plethora of sub-solar PMS stars [@gouliermis06]. Our [*nearest-neighbor*]{} cluster analysis of the observed young stellar populations, i.e., the bright main-sequence (down to $m_{555} \lesssim 21$) and the faint PMS stars, revealed a significant number of smaller, previously unresolved, young stellar sub-clusters [@schmeja09]. This clustering behavior of young stars in NGC 346 is further demonstrated here by the stellar density contour map of Fig. \[fig:2\], constructed with star-counts.
Results: Hierarchical clustering of young stars {#sec:3}
===============================================
The map of Fig. \[fig:2\] shows significant sub-structure, in particular within the 1$\sigma$ boundaries of the central dominant stellar aggregate. This structuring behavior indicates hierarchy. The minimum spanning tree (MST) of the young stars in the whole region allows us to determine the statistical $\mathcal{Q}$ parameter introduced by [@cw04]. This parameter is a measure of the fractal dimension $D$ of a stellar group, permitting us to distinguish between centrally concentrated clusters and hierarchical clusters with fractal substructure. The application of the MST to our data shows that the region NGC 346/N66 is highly hierarchical, with a $\mathcal{Q}$ value that corresponds to a fractal dimension $D \simeq 2.5$.
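A minimal sketch of such an MST-based $\mathcal{Q}$ measurement is given below. This is our code, not the authors': the normalizations of the mean MST edge length $\bar{m}$ and the mean separation $\bar{s}$ follow one common convention and may differ in detail from [@cw04].

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def q_parameter(points):
    """Q = mbar / sbar; low Q suggests fractal substructure,
    high Q a centrally concentrated cluster (threshold ~0.8)."""
    n = len(points)
    sep = pdist(points)
    mst = minimum_spanning_tree(squareform(sep)).toarray()
    edges = mst[mst > 0]                     # the n-1 MST edge lengths
    centre = points.mean(axis=0)
    radius = np.linalg.norm(points - centre, axis=1).max()
    area = np.pi * radius**2
    mbar = edges.mean() / (np.sqrt(n * area) / (n - 1))  # normalized MST edge
    sbar = sep.mean() / radius                           # normalized separation
    return mbar / sbar

rng = np.random.default_rng(2)
q_uniform = q_parameter(rng.uniform(-1, 1, size=(300, 2)))
```

A uniform random distribution yields $\mathcal{Q}$ values around 0.7, below the $\sim$0.8 boundary, consistent with its lack of central concentration.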
Constructing surface stellar density maps allows us to further characterize the clustering behavior of stars with the application of tools, which are originally designed for the study of the structuring of the interstellar medium (ISM), as observed at far-infrared or longer wavelengths. The so-called [*dendrograms*]{} are used for the visualization of hierarchy through structural trees [@rosolowsky08]. The dendrogram of the stellar density map of NGC 346 demonstrates that the observed hierarchy is mostly due to the substructure in the dominant stellar aggregate. The $\Delta$-variance analysis [@stutzki98; @ossenkopf08] is a robust structure analysis method that measures the amount of structure on a given scale $l$. In principle the $\Delta$-variance is directly related to the power spectrum of the map, and thus for a power law spectrum of index $-\beta$, $\Delta$-variance also follows a power law, $\displaystyle
\sigma_\Delta^2 \propto l^\alpha$, with $\alpha={\beta-2}$. The application of the $\Delta$-variance analysis to the surface stellar density map of NGC 346 verifies that the clustering of the young stars in the region is indeed self-similar (Fig. \[fig:3\]), with a spectral index $\beta \simeq 2.8$, corresponding to a fractal dimension $D=2.6$ of the corresponding fractional Brownian motion structure [@stutzki98], similar to that previously derived for Galactic molecular clouds. Self-similarity appears to break, i.e., we find different hierarchical properties for the short-range scaling and for the behavior at the overall scale of the region, at length-scales $l \geq 25$px, corresponding to physical scales of $\sim$40$\arcsec$ ($\sim 11$ pc at the distance of the SMC).
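Because the $\Delta$-variance of a power-law map is tied to its power spectrum via $\alpha=\beta-2$, such a measurement can be sanity-checked on a synthetic fractional-Brownian-motion-like map with prescribed $\beta$. The sketch below is ours (grid size, seed and fit range are arbitrary choices): it generates a field with $P(k)\propto k^{-2.8}$ and recovers $\beta$ from the radially averaged power spectrum.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 256, 2.8

# Build a random field with power spectrum P(k) ~ k^(-beta)
ky, kx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
k = np.hypot(kx, ky)
k[0, 0] = np.inf                              # no power in the mean
spectrum = k ** (-beta / 2.0) * np.exp(2j * np.pi * rng.random((n, n)))
field = np.fft.ifft2(spectrum).real

# Radially averaged power spectrum and a log-log fit for beta
power = np.abs(np.fft.fft2(field)) ** 2
k[0, 0] = 0.0
kint = np.rint(k * n).astype(int)             # integer radial wavenumber
ps = np.bincount(kint.ravel(), weights=power.ravel()) / np.bincount(kint.ravel())
kk = np.arange(len(ps))
sel = (kk >= 4) & (kk <= 40)                  # avoid box-size and pixel scales
slope, _ = np.polyfit(np.log(kk[sel]), np.log(ps[sel]), 1)
beta_est = -slope                             # alpha = beta_est - 2 follows
```

Restricting the fit range away from the smallest and largest lags mirrors the choice of fit interval shown in Fig. \[fig:3\].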
![The $\Delta$-variance spectrum of the surface stellar density map of the entire region of NGC 346/N66. This analysis shows that the young stellar populations in this region are hierarchically structured up to length-scales of $\sim$40$\arcsec$. The spectral index $\beta$ is determined from the fit of the spectrum for data between lags 4$\arcsec$ and 13$\arcsec$ (indicated by the gray shaded area). The dashed line indicates the virtual beamsize used (5$\arcsec$). \[fig:3\]](fig3.ps)
D.A.G., S.S. and V.O. kindly acknowledge support by the German Research Foundation (DFG) through grants GO 1659/3-1, SFB 881 and OS 177/2-1 respectively. Based on observations made with the NASA/ESA [*Hubble Space Telescope*]{}, obtained from the data archive at the Space Telescope Science Institute (STScI). STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555.
[99.]{} Bastian, N., et al. 2009. Mon. Not. R. Astron. Soc. 392, 868 Battinelli, P., Efremov, Y., & Magnier, E. A. 1996. Astron. Astrophys. 314, 51 Cartwright, A., & Whitworth, A. P. 2004. Mon. Not. R. Astron. Soc. 348, 589 Gouliermis, D. A., et al. 2006. Astroph. J. Suppl. Ser. 166, 549 Gouliermis, D. A., et al. 2010. Astroph. J. 725, 1717 Henize, K. G. 1956. Astroph. J. Suppl. Ser. 2, 315 Ossenkopf, V., Krips, M., & Stutzki, J. 2008. Astron. Astrophys. 485, 917 Rosolowsky, E. W., et al. 2008. Astroph. J. 679, 1338 Schmeja, S., Gouliermis, D. A., & Klessen, R. S. 2009. Astroph. J. 694, 367 Schmeja, S. 2011, Astronomische Nachrichten, 332, 172 Stutzki J., et al. 1998. Astron. Astrophys. 336, 697
Cummings Machine Works
Cummings Machine Works was a Boston, Massachusetts based business. It was founded by Henry Havelock Cummings in 1881, when Cummings was 23 years old. The company was awarded a United States Defense Department contract to manufacture fixtures in March 1941. The contract amounted to $17,893. The company was among the firms which contributed to the building of the Boston Opera House, completed in 1909, supplying steelworks used in the construction of the stage.
Cummings Machine Works has been credited with the development of the sally saw. A patent filed in 1945, and assigned to the company, describes a saw with a circular blade. The blade could be rotated between horizontal and vertical, thus allowing a tree to be felled, limbed, and bucked with one saw. Other inventions included a hydraulic hospital bed, automatic doughnut machine, teardrop vehicle and Hookups.
The last owners were Robert M. Mustard, Sr., president, and Lewis W. Mustard, treasurer. The company's last known address was 10 Melcher Street in Boston, Massachusetts; it went out of business in 1958.
References
Category:Manufacturing companies based in Boston
Category:History of Boston
Category:Defunct manufacturing companies of the United States
Category:Defunct companies based in Massachusetts
Category:Manufacturing companies established in 1881
|
{
"pile_set_name": "wikipedia_en"
}
|
// DO NOT EDIT.
//
// Generated by the Swift generator plugin for the protocol buffer compiler.
// Source: google/protobuf/unittest_proto3_arena.proto
//
// For information on using the generated types, please see the documentation:
// https://github.com/apple/swift-protobuf/
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc. All rights reserved.
// https://developers.google.com/protocol-buffers/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import Foundation
import SwiftProtobuf
// If the compiler emits an error on this type, it is because this file
// was generated by a version of the `protoc` Swift plug-in that is
// incompatible with the version of SwiftProtobuf to which you are linking.
// Please ensure that you are building against the same version of the API
// that was used to generate this file.
fileprivate struct _GeneratedWithProtocGenSwiftVersion: SwiftProtobuf.ProtobufAPIVersionCheck {
struct _2: SwiftProtobuf.ProtobufAPIVersion_2 {}
typealias Version = _2
}
enum Proto3ArenaUnittest_ForeignEnum: SwiftProtobuf.Enum {
typealias RawValue = Int
case foreignZero // = 0
case foreignFoo // = 4
case foreignBar // = 5
case foreignBaz // = 6
case UNRECOGNIZED(Int)
init() {
self = .foreignZero
}
init?(rawValue: Int) {
switch rawValue {
case 0: self = .foreignZero
case 4: self = .foreignFoo
case 5: self = .foreignBar
case 6: self = .foreignBaz
default: self = .UNRECOGNIZED(rawValue)
}
}
var rawValue: Int {
switch self {
case .foreignZero: return 0
case .foreignFoo: return 4
case .foreignBar: return 5
case .foreignBaz: return 6
case .UNRECOGNIZED(let i): return i
}
}
}
#if swift(>=4.2)
extension Proto3ArenaUnittest_ForeignEnum: CaseIterable {
// The compiler won't synthesize support with the UNRECOGNIZED case.
static var allCases: [Proto3ArenaUnittest_ForeignEnum] = [
.foreignZero,
.foreignFoo,
.foreignBar,
.foreignBaz,
]
}
#endif // swift(>=4.2)
/// This proto includes every type of field in both singular and repeated
/// forms.
struct Proto3ArenaUnittest_TestAllTypes {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
/// Singular
var optionalInt32: Int32 {
get {return _storage._optionalInt32}
set {_uniqueStorage()._optionalInt32 = newValue}
}
var optionalInt64: Int64 {
get {return _storage._optionalInt64}
set {_uniqueStorage()._optionalInt64 = newValue}
}
var optionalUint32: UInt32 {
get {return _storage._optionalUint32}
set {_uniqueStorage()._optionalUint32 = newValue}
}
var optionalUint64: UInt64 {
get {return _storage._optionalUint64}
set {_uniqueStorage()._optionalUint64 = newValue}
}
var optionalSint32: Int32 {
get {return _storage._optionalSint32}
set {_uniqueStorage()._optionalSint32 = newValue}
}
var optionalSint64: Int64 {
get {return _storage._optionalSint64}
set {_uniqueStorage()._optionalSint64 = newValue}
}
var optionalFixed32: UInt32 {
get {return _storage._optionalFixed32}
set {_uniqueStorage()._optionalFixed32 = newValue}
}
var optionalFixed64: UInt64 {
get {return _storage._optionalFixed64}
set {_uniqueStorage()._optionalFixed64 = newValue}
}
var optionalSfixed32: Int32 {
get {return _storage._optionalSfixed32}
set {_uniqueStorage()._optionalSfixed32 = newValue}
}
var optionalSfixed64: Int64 {
get {return _storage._optionalSfixed64}
set {_uniqueStorage()._optionalSfixed64 = newValue}
}
var optionalFloat: Float {
get {return _storage._optionalFloat}
set {_uniqueStorage()._optionalFloat = newValue}
}
var optionalDouble: Double {
get {return _storage._optionalDouble}
set {_uniqueStorage()._optionalDouble = newValue}
}
var optionalBool: Bool {
get {return _storage._optionalBool}
set {_uniqueStorage()._optionalBool = newValue}
}
var optionalString: String {
get {return _storage._optionalString}
set {_uniqueStorage()._optionalString = newValue}
}
var optionalBytes: Data {
get {return _storage._optionalBytes}
set {_uniqueStorage()._optionalBytes = newValue}
}
var optionalNestedMessage: Proto3ArenaUnittest_TestAllTypes.NestedMessage {
get {return _storage._optionalNestedMessage ?? Proto3ArenaUnittest_TestAllTypes.NestedMessage()}
set {_uniqueStorage()._optionalNestedMessage = newValue}
}
/// Returns true if `optionalNestedMessage` has been explicitly set.
var hasOptionalNestedMessage: Bool {return _storage._optionalNestedMessage != nil}
/// Clears the value of `optionalNestedMessage`. Subsequent reads from it will return its default value.
mutating func clearOptionalNestedMessage() {_uniqueStorage()._optionalNestedMessage = nil}
var optionalForeignMessage: Proto3ArenaUnittest_ForeignMessage {
get {return _storage._optionalForeignMessage ?? Proto3ArenaUnittest_ForeignMessage()}
set {_uniqueStorage()._optionalForeignMessage = newValue}
}
/// Returns true if `optionalForeignMessage` has been explicitly set.
var hasOptionalForeignMessage: Bool {return _storage._optionalForeignMessage != nil}
/// Clears the value of `optionalForeignMessage`. Subsequent reads from it will return its default value.
mutating func clearOptionalForeignMessage() {_uniqueStorage()._optionalForeignMessage = nil}
var optionalImportMessage: ProtobufUnittestImport_ImportMessage {
get {return _storage._optionalImportMessage ?? ProtobufUnittestImport_ImportMessage()}
set {_uniqueStorage()._optionalImportMessage = newValue}
}
/// Returns true if `optionalImportMessage` has been explicitly set.
var hasOptionalImportMessage: Bool {return _storage._optionalImportMessage != nil}
/// Clears the value of `optionalImportMessage`. Subsequent reads from it will return its default value.
mutating func clearOptionalImportMessage() {_uniqueStorage()._optionalImportMessage = nil}
var optionalNestedEnum: Proto3ArenaUnittest_TestAllTypes.NestedEnum {
get {return _storage._optionalNestedEnum}
set {_uniqueStorage()._optionalNestedEnum = newValue}
}
var optionalForeignEnum: Proto3ArenaUnittest_ForeignEnum {
get {return _storage._optionalForeignEnum}
set {_uniqueStorage()._optionalForeignEnum = newValue}
}
var optionalStringPiece: String {
get {return _storage._optionalStringPiece}
set {_uniqueStorage()._optionalStringPiece = newValue}
}
var optionalCord: String {
get {return _storage._optionalCord}
set {_uniqueStorage()._optionalCord = newValue}
}
/// Defined in unittest_import_public.proto
var optionalPublicImportMessage: ProtobufUnittestImport_PublicImportMessage {
get {return _storage._optionalPublicImportMessage ?? ProtobufUnittestImport_PublicImportMessage()}
set {_uniqueStorage()._optionalPublicImportMessage = newValue}
}
/// Returns true if `optionalPublicImportMessage` has been explicitly set.
var hasOptionalPublicImportMessage: Bool {return _storage._optionalPublicImportMessage != nil}
/// Clears the value of `optionalPublicImportMessage`. Subsequent reads from it will return its default value.
mutating func clearOptionalPublicImportMessage() {_uniqueStorage()._optionalPublicImportMessage = nil}
var optionalLazyMessage: Proto3ArenaUnittest_TestAllTypes.NestedMessage {
get {return _storage._optionalLazyMessage ?? Proto3ArenaUnittest_TestAllTypes.NestedMessage()}
set {_uniqueStorage()._optionalLazyMessage = newValue}
}
/// Returns true if `optionalLazyMessage` has been explicitly set.
var hasOptionalLazyMessage: Bool {return _storage._optionalLazyMessage != nil}
/// Clears the value of `optionalLazyMessage`. Subsequent reads from it will return its default value.
mutating func clearOptionalLazyMessage() {_uniqueStorage()._optionalLazyMessage = nil}
var optionalLazyImportMessage: ProtobufUnittestImport_ImportMessage {
get {return _storage._optionalLazyImportMessage ?? ProtobufUnittestImport_ImportMessage()}
set {_uniqueStorage()._optionalLazyImportMessage = newValue}
}
/// Returns true if `optionalLazyImportMessage` has been explicitly set.
var hasOptionalLazyImportMessage: Bool {return _storage._optionalLazyImportMessage != nil}
/// Clears the value of `optionalLazyImportMessage`. Subsequent reads from it will return its default value.
mutating func clearOptionalLazyImportMessage() {_uniqueStorage()._optionalLazyImportMessage = nil}
/// Repeated
var repeatedInt32: [Int32] {
get {return _storage._repeatedInt32}
set {_uniqueStorage()._repeatedInt32 = newValue}
}
var repeatedInt64: [Int64] {
get {return _storage._repeatedInt64}
set {_uniqueStorage()._repeatedInt64 = newValue}
}
var repeatedUint32: [UInt32] {
get {return _storage._repeatedUint32}
set {_uniqueStorage()._repeatedUint32 = newValue}
}
var repeatedUint64: [UInt64] {
get {return _storage._repeatedUint64}
set {_uniqueStorage()._repeatedUint64 = newValue}
}
var repeatedSint32: [Int32] {
get {return _storage._repeatedSint32}
set {_uniqueStorage()._repeatedSint32 = newValue}
}
var repeatedSint64: [Int64] {
get {return _storage._repeatedSint64}
set {_uniqueStorage()._repeatedSint64 = newValue}
}
var repeatedFixed32: [UInt32] {
get {return _storage._repeatedFixed32}
set {_uniqueStorage()._repeatedFixed32 = newValue}
}
var repeatedFixed64: [UInt64] {
get {return _storage._repeatedFixed64}
set {_uniqueStorage()._repeatedFixed64 = newValue}
}
var repeatedSfixed32: [Int32] {
get {return _storage._repeatedSfixed32}
set {_uniqueStorage()._repeatedSfixed32 = newValue}
}
var repeatedSfixed64: [Int64] {
get {return _storage._repeatedSfixed64}
set {_uniqueStorage()._repeatedSfixed64 = newValue}
}
var repeatedFloat: [Float] {
get {return _storage._repeatedFloat}
set {_uniqueStorage()._repeatedFloat = newValue}
}
var repeatedDouble: [Double] {
get {return _storage._repeatedDouble}
set {_uniqueStorage()._repeatedDouble = newValue}
}
var repeatedBool: [Bool] {
get {return _storage._repeatedBool}
set {_uniqueStorage()._repeatedBool = newValue}
}
var repeatedString: [String] {
get {return _storage._repeatedString}
set {_uniqueStorage()._repeatedString = newValue}
}
var repeatedBytes: [Data] {
get {return _storage._repeatedBytes}
set {_uniqueStorage()._repeatedBytes = newValue}
}
var repeatedNestedMessage: [Proto3ArenaUnittest_TestAllTypes.NestedMessage] {
get {return _storage._repeatedNestedMessage}
set {_uniqueStorage()._repeatedNestedMessage = newValue}
}
var repeatedForeignMessage: [Proto3ArenaUnittest_ForeignMessage] {
get {return _storage._repeatedForeignMessage}
set {_uniqueStorage()._repeatedForeignMessage = newValue}
}
var repeatedImportMessage: [ProtobufUnittestImport_ImportMessage] {
get {return _storage._repeatedImportMessage}
set {_uniqueStorage()._repeatedImportMessage = newValue}
}
var repeatedNestedEnum: [Proto3ArenaUnittest_TestAllTypes.NestedEnum] {
get {return _storage._repeatedNestedEnum}
set {_uniqueStorage()._repeatedNestedEnum = newValue}
}
var repeatedForeignEnum: [Proto3ArenaUnittest_ForeignEnum] {
get {return _storage._repeatedForeignEnum}
set {_uniqueStorage()._repeatedForeignEnum = newValue}
}
var repeatedStringPiece: [String] {
get {return _storage._repeatedStringPiece}
set {_uniqueStorage()._repeatedStringPiece = newValue}
}
var repeatedCord: [String] {
get {return _storage._repeatedCord}
set {_uniqueStorage()._repeatedCord = newValue}
}
var repeatedLazyMessage: [Proto3ArenaUnittest_TestAllTypes.NestedMessage] {
get {return _storage._repeatedLazyMessage}
set {_uniqueStorage()._repeatedLazyMessage = newValue}
}
var oneofField: OneOf_OneofField? {
get {return _storage._oneofField}
set {_uniqueStorage()._oneofField = newValue}
}
var oneofUint32: UInt32 {
get {
if case .oneofUint32(let v)? = _storage._oneofField {return v}
return 0
}
set {_uniqueStorage()._oneofField = .oneofUint32(newValue)}
}
var oneofNestedMessage: Proto3ArenaUnittest_TestAllTypes.NestedMessage {
get {
if case .oneofNestedMessage(let v)? = _storage._oneofField {return v}
return Proto3ArenaUnittest_TestAllTypes.NestedMessage()
}
set {_uniqueStorage()._oneofField = .oneofNestedMessage(newValue)}
}
var oneofString: String {
get {
if case .oneofString(let v)? = _storage._oneofField {return v}
return String()
}
set {_uniqueStorage()._oneofField = .oneofString(newValue)}
}
var oneofBytes: Data {
get {
if case .oneofBytes(let v)? = _storage._oneofField {return v}
return SwiftProtobuf.Internal.emptyData
}
set {_uniqueStorage()._oneofField = .oneofBytes(newValue)}
}
var unknownFields = SwiftProtobuf.UnknownStorage()
enum OneOf_OneofField: Equatable {
case oneofUint32(UInt32)
case oneofNestedMessage(Proto3ArenaUnittest_TestAllTypes.NestedMessage)
case oneofString(String)
case oneofBytes(Data)
#if !swift(>=4.1)
static func ==(lhs: Proto3ArenaUnittest_TestAllTypes.OneOf_OneofField, rhs: Proto3ArenaUnittest_TestAllTypes.OneOf_OneofField) -> Bool {
switch (lhs, rhs) {
case (.oneofUint32(let l), .oneofUint32(let r)): return l == r
case (.oneofNestedMessage(let l), .oneofNestedMessage(let r)): return l == r
case (.oneofString(let l), .oneofString(let r)): return l == r
case (.oneofBytes(let l), .oneofBytes(let r)): return l == r
default: return false
}
}
#endif
}
enum NestedEnum: SwiftProtobuf.Enum {
typealias RawValue = Int
case zero // = 0
case foo // = 1
case bar // = 2
case baz // = 3
/// Intentionally negative.
case neg // = -1
case UNRECOGNIZED(Int)
init() {
self = .zero
}
init?(rawValue: Int) {
switch rawValue {
case -1: self = .neg
case 0: self = .zero
case 1: self = .foo
case 2: self = .bar
case 3: self = .baz
default: self = .UNRECOGNIZED(rawValue)
}
}
var rawValue: Int {
switch self {
case .neg: return -1
case .zero: return 0
case .foo: return 1
case .bar: return 2
case .baz: return 3
case .UNRECOGNIZED(let i): return i
}
}
}
struct NestedMessage {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
/// The field name "b" fails to compile in proto1 because it conflicts with
/// a local variable named "b" in one of the generated methods. Doh.
/// This file needs to compile in proto1 to test backwards-compatibility.
var bb: Int32 = 0
var unknownFields = SwiftProtobuf.UnknownStorage()
init() {}
}
init() {}
fileprivate var _storage = _StorageClass.defaultInstance
}
#if swift(>=4.2)
extension Proto3ArenaUnittest_TestAllTypes.NestedEnum: CaseIterable {
// The compiler won't synthesize support with the UNRECOGNIZED case.
static var allCases: [Proto3ArenaUnittest_TestAllTypes.NestedEnum] = [
.zero,
.foo,
.bar,
.baz,
.neg,
]
}
#endif // swift(>=4.2)
struct Proto3ArenaUnittest_TestPackedTypes {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
var packedInt32: [Int32] = []
var packedInt64: [Int64] = []
var packedUint32: [UInt32] = []
var packedUint64: [UInt64] = []
var packedSint32: [Int32] = []
var packedSint64: [Int64] = []
var packedFixed32: [UInt32] = []
var packedFixed64: [UInt64] = []
var packedSfixed32: [Int32] = []
var packedSfixed64: [Int64] = []
var packedFloat: [Float] = []
var packedDouble: [Double] = []
var packedBool: [Bool] = []
var packedEnum: [Proto3ArenaUnittest_ForeignEnum] = []
var unknownFields = SwiftProtobuf.UnknownStorage()
init() {}
}
/// Explicitly set packed to false
struct Proto3ArenaUnittest_TestUnpackedTypes {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
var repeatedInt32: [Int32] = []
var repeatedInt64: [Int64] = []
var repeatedUint32: [UInt32] = []
var repeatedUint64: [UInt64] = []
var repeatedSint32: [Int32] = []
var repeatedSint64: [Int64] = []
var repeatedFixed32: [UInt32] = []
var repeatedFixed64: [UInt64] = []
var repeatedSfixed32: [Int32] = []
var repeatedSfixed64: [Int64] = []
var repeatedFloat: [Float] = []
var repeatedDouble: [Double] = []
var repeatedBool: [Bool] = []
var repeatedNestedEnum: [Proto3ArenaUnittest_TestAllTypes.NestedEnum] = []
var unknownFields = SwiftProtobuf.UnknownStorage()
init() {}
}
/// This proto includes a recursively nested message.
struct Proto3ArenaUnittest_NestedTestAllTypes {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
var child: Proto3ArenaUnittest_NestedTestAllTypes {
get {return _storage._child ?? Proto3ArenaUnittest_NestedTestAllTypes()}
set {_uniqueStorage()._child = newValue}
}
/// Returns true if `child` has been explicitly set.
var hasChild: Bool {return _storage._child != nil}
/// Clears the value of `child`. Subsequent reads from it will return its default value.
mutating func clearChild() {_uniqueStorage()._child = nil}
var payload: Proto3ArenaUnittest_TestAllTypes {
get {return _storage._payload ?? Proto3ArenaUnittest_TestAllTypes()}
set {_uniqueStorage()._payload = newValue}
}
/// Returns true if `payload` has been explicitly set.
var hasPayload: Bool {return _storage._payload != nil}
/// Clears the value of `payload`. Subsequent reads from it will return its default value.
mutating func clearPayload() {_uniqueStorage()._payload = nil}
var repeatedChild: [Proto3ArenaUnittest_NestedTestAllTypes] {
get {return _storage._repeatedChild}
set {_uniqueStorage()._repeatedChild = newValue}
}
var unknownFields = SwiftProtobuf.UnknownStorage()
init() {}
fileprivate var _storage = _StorageClass.defaultInstance
}
/// Define these after TestAllTypes to make sure the compiler can handle
/// that.
struct Proto3ArenaUnittest_ForeignMessage {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
var c: Int32 = 0
var unknownFields = SwiftProtobuf.UnknownStorage()
init() {}
}
/// TestEmptyMessage is used to test behavior of unknown fields.
struct Proto3ArenaUnittest_TestEmptyMessage {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
var unknownFields = SwiftProtobuf.UnknownStorage()
init() {}
}
// MARK: - Code below here is support for the SwiftProtobuf runtime.
fileprivate let _protobuf_package = "proto3_arena_unittest"
extension Proto3ArenaUnittest_ForeignEnum: SwiftProtobuf._ProtoNameProviding {
static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
0: .same(proto: "FOREIGN_ZERO"),
4: .same(proto: "FOREIGN_FOO"),
5: .same(proto: "FOREIGN_BAR"),
6: .same(proto: "FOREIGN_BAZ"),
]
}
extension Proto3ArenaUnittest_TestAllTypes: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = _protobuf_package + ".TestAllTypes"
static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .standard(proto: "optional_int32"),
2: .standard(proto: "optional_int64"),
3: .standard(proto: "optional_uint32"),
4: .standard(proto: "optional_uint64"),
5: .standard(proto: "optional_sint32"),
6: .standard(proto: "optional_sint64"),
7: .standard(proto: "optional_fixed32"),
8: .standard(proto: "optional_fixed64"),
9: .standard(proto: "optional_sfixed32"),
10: .standard(proto: "optional_sfixed64"),
11: .standard(proto: "optional_float"),
12: .standard(proto: "optional_double"),
13: .standard(proto: "optional_bool"),
14: .standard(proto: "optional_string"),
15: .standard(proto: "optional_bytes"),
18: .standard(proto: "optional_nested_message"),
19: .standard(proto: "optional_foreign_message"),
20: .standard(proto: "optional_import_message"),
21: .standard(proto: "optional_nested_enum"),
22: .standard(proto: "optional_foreign_enum"),
24: .standard(proto: "optional_string_piece"),
25: .standard(proto: "optional_cord"),
26: .standard(proto: "optional_public_import_message"),
27: .standard(proto: "optional_lazy_message"),
115: .standard(proto: "optional_lazy_import_message"),
31: .standard(proto: "repeated_int32"),
32: .standard(proto: "repeated_int64"),
33: .standard(proto: "repeated_uint32"),
34: .standard(proto: "repeated_uint64"),
35: .standard(proto: "repeated_sint32"),
36: .standard(proto: "repeated_sint64"),
37: .standard(proto: "repeated_fixed32"),
38: .standard(proto: "repeated_fixed64"),
39: .standard(proto: "repeated_sfixed32"),
40: .standard(proto: "repeated_sfixed64"),
41: .standard(proto: "repeated_float"),
42: .standard(proto: "repeated_double"),
43: .standard(proto: "repeated_bool"),
44: .standard(proto: "repeated_string"),
45: .standard(proto: "repeated_bytes"),
48: .standard(proto: "repeated_nested_message"),
49: .standard(proto: "repeated_foreign_message"),
50: .standard(proto: "repeated_import_message"),
51: .standard(proto: "repeated_nested_enum"),
52: .standard(proto: "repeated_foreign_enum"),
54: .standard(proto: "repeated_string_piece"),
55: .standard(proto: "repeated_cord"),
57: .standard(proto: "repeated_lazy_message"),
111: .standard(proto: "oneof_uint32"),
112: .standard(proto: "oneof_nested_message"),
113: .standard(proto: "oneof_string"),
114: .standard(proto: "oneof_bytes"),
]
fileprivate class _StorageClass {
var _optionalInt32: Int32 = 0
var _optionalInt64: Int64 = 0
var _optionalUint32: UInt32 = 0
var _optionalUint64: UInt64 = 0
var _optionalSint32: Int32 = 0
var _optionalSint64: Int64 = 0
var _optionalFixed32: UInt32 = 0
var _optionalFixed64: UInt64 = 0
var _optionalSfixed32: Int32 = 0
var _optionalSfixed64: Int64 = 0
var _optionalFloat: Float = 0
var _optionalDouble: Double = 0
var _optionalBool: Bool = false
var _optionalString: String = String()
var _optionalBytes: Data = SwiftProtobuf.Internal.emptyData
var _optionalNestedMessage: Proto3ArenaUnittest_TestAllTypes.NestedMessage? = nil
var _optionalForeignMessage: Proto3ArenaUnittest_ForeignMessage? = nil
var _optionalImportMessage: ProtobufUnittestImport_ImportMessage? = nil
var _optionalNestedEnum: Proto3ArenaUnittest_TestAllTypes.NestedEnum = .zero
var _optionalForeignEnum: Proto3ArenaUnittest_ForeignEnum = .foreignZero
var _optionalStringPiece: String = String()
var _optionalCord: String = String()
var _optionalPublicImportMessage: ProtobufUnittestImport_PublicImportMessage? = nil
var _optionalLazyMessage: Proto3ArenaUnittest_TestAllTypes.NestedMessage? = nil
var _optionalLazyImportMessage: ProtobufUnittestImport_ImportMessage? = nil
var _repeatedInt32: [Int32] = []
var _repeatedInt64: [Int64] = []
var _repeatedUint32: [UInt32] = []
var _repeatedUint64: [UInt64] = []
var _repeatedSint32: [Int32] = []
var _repeatedSint64: [Int64] = []
var _repeatedFixed32: [UInt32] = []
var _repeatedFixed64: [UInt64] = []
var _repeatedSfixed32: [Int32] = []
var _repeatedSfixed64: [Int64] = []
var _repeatedFloat: [Float] = []
var _repeatedDouble: [Double] = []
var _repeatedBool: [Bool] = []
var _repeatedString: [String] = []
var _repeatedBytes: [Data] = []
var _repeatedNestedMessage: [Proto3ArenaUnittest_TestAllTypes.NestedMessage] = []
var _repeatedForeignMessage: [Proto3ArenaUnittest_ForeignMessage] = []
var _repeatedImportMessage: [ProtobufUnittestImport_ImportMessage] = []
var _repeatedNestedEnum: [Proto3ArenaUnittest_TestAllTypes.NestedEnum] = []
var _repeatedForeignEnum: [Proto3ArenaUnittest_ForeignEnum] = []
var _repeatedStringPiece: [String] = []
var _repeatedCord: [String] = []
var _repeatedLazyMessage: [Proto3ArenaUnittest_TestAllTypes.NestedMessage] = []
var _oneofField: Proto3ArenaUnittest_TestAllTypes.OneOf_OneofField?
static let defaultInstance = _StorageClass()
private init() {}
init(copying source: _StorageClass) {
_optionalInt32 = source._optionalInt32
_optionalInt64 = source._optionalInt64
_optionalUint32 = source._optionalUint32
_optionalUint64 = source._optionalUint64
_optionalSint32 = source._optionalSint32
_optionalSint64 = source._optionalSint64
_optionalFixed32 = source._optionalFixed32
_optionalFixed64 = source._optionalFixed64
_optionalSfixed32 = source._optionalSfixed32
_optionalSfixed64 = source._optionalSfixed64
_optionalFloat = source._optionalFloat
_optionalDouble = source._optionalDouble
_optionalBool = source._optionalBool
_optionalString = source._optionalString
_optionalBytes = source._optionalBytes
_optionalNestedMessage = source._optionalNestedMessage
_optionalForeignMessage = source._optionalForeignMessage
_optionalImportMessage = source._optionalImportMessage
_optionalNestedEnum = source._optionalNestedEnum
_optionalForeignEnum = source._optionalForeignEnum
_optionalStringPiece = source._optionalStringPiece
_optionalCord = source._optionalCord
_optionalPublicImportMessage = source._optionalPublicImportMessage
_optionalLazyMessage = source._optionalLazyMessage
_optionalLazyImportMessage = source._optionalLazyImportMessage
_repeatedInt32 = source._repeatedInt32
_repeatedInt64 = source._repeatedInt64
_repeatedUint32 = source._repeatedUint32
_repeatedUint64 = source._repeatedUint64
_repeatedSint32 = source._repeatedSint32
_repeatedSint64 = source._repeatedSint64
_repeatedFixed32 = source._repeatedFixed32
_repeatedFixed64 = source._repeatedFixed64
_repeatedSfixed32 = source._repeatedSfixed32
_repeatedSfixed64 = source._repeatedSfixed64
_repeatedFloat = source._repeatedFloat
_repeatedDouble = source._repeatedDouble
_repeatedBool = source._repeatedBool
_repeatedString = source._repeatedString
_repeatedBytes = source._repeatedBytes
_repeatedNestedMessage = source._repeatedNestedMessage
_repeatedForeignMessage = source._repeatedForeignMessage
_repeatedImportMessage = source._repeatedImportMessage
_repeatedNestedEnum = source._repeatedNestedEnum
_repeatedForeignEnum = source._repeatedForeignEnum
_repeatedStringPiece = source._repeatedStringPiece
_repeatedCord = source._repeatedCord
_repeatedLazyMessage = source._repeatedLazyMessage
_oneofField = source._oneofField
}
}
fileprivate mutating func _uniqueStorage() -> _StorageClass {
if !isKnownUniquelyReferenced(&_storage) {
_storage = _StorageClass(copying: _storage)
}
return _storage
}
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
_ = _uniqueStorage()
try withExtendedLifetime(_storage) { (_storage: _StorageClass) in
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 1: try decoder.decodeSingularInt32Field(value: &_storage._optionalInt32)
case 2: try decoder.decodeSingularInt64Field(value: &_storage._optionalInt64)
case 3: try decoder.decodeSingularUInt32Field(value: &_storage._optionalUint32)
case 4: try decoder.decodeSingularUInt64Field(value: &_storage._optionalUint64)
case 5: try decoder.decodeSingularSInt32Field(value: &_storage._optionalSint32)
case 6: try decoder.decodeSingularSInt64Field(value: &_storage._optionalSint64)
case 7: try decoder.decodeSingularFixed32Field(value: &_storage._optionalFixed32)
case 8: try decoder.decodeSingularFixed64Field(value: &_storage._optionalFixed64)
case 9: try decoder.decodeSingularSFixed32Field(value: &_storage._optionalSfixed32)
case 10: try decoder.decodeSingularSFixed64Field(value: &_storage._optionalSfixed64)
case 11: try decoder.decodeSingularFloatField(value: &_storage._optionalFloat)
case 12: try decoder.decodeSingularDoubleField(value: &_storage._optionalDouble)
case 13: try decoder.decodeSingularBoolField(value: &_storage._optionalBool)
case 14: try decoder.decodeSingularStringField(value: &_storage._optionalString)
case 15: try decoder.decodeSingularBytesField(value: &_storage._optionalBytes)
case 18: try decoder.decodeSingularMessageField(value: &_storage._optionalNestedMessage)
case 19: try decoder.decodeSingularMessageField(value: &_storage._optionalForeignMessage)
case 20: try decoder.decodeSingularMessageField(value: &_storage._optionalImportMessage)
case 21: try decoder.decodeSingularEnumField(value: &_storage._optionalNestedEnum)
case 22: try decoder.decodeSingularEnumField(value: &_storage._optionalForeignEnum)
case 24: try decoder.decodeSingularStringField(value: &_storage._optionalStringPiece)
case 25: try decoder.decodeSingularStringField(value: &_storage._optionalCord)
case 26: try decoder.decodeSingularMessageField(value: &_storage._optionalPublicImportMessage)
case 27: try decoder.decodeSingularMessageField(value: &_storage._optionalLazyMessage)
case 31: try decoder.decodeRepeatedInt32Field(value: &_storage._repeatedInt32)
case 32: try decoder.decodeRepeatedInt64Field(value: &_storage._repeatedInt64)
case 33: try decoder.decodeRepeatedUInt32Field(value: &_storage._repeatedUint32)
case 34: try decoder.decodeRepeatedUInt64Field(value: &_storage._repeatedUint64)
case 35: try decoder.decodeRepeatedSInt32Field(value: &_storage._repeatedSint32)
case 36: try decoder.decodeRepeatedSInt64Field(value: &_storage._repeatedSint64)
case 37: try decoder.decodeRepeatedFixed32Field(value: &_storage._repeatedFixed32)
case 38: try decoder.decodeRepeatedFixed64Field(value: &_storage._repeatedFixed64)
case 39: try decoder.decodeRepeatedSFixed32Field(value: &_storage._repeatedSfixed32)
case 40: try decoder.decodeRepeatedSFixed64Field(value: &_storage._repeatedSfixed64)
case 41: try decoder.decodeRepeatedFloatField(value: &_storage._repeatedFloat)
case 42: try decoder.decodeRepeatedDoubleField(value: &_storage._repeatedDouble)
case 43: try decoder.decodeRepeatedBoolField(value: &_storage._repeatedBool)
case 44: try decoder.decodeRepeatedStringField(value: &_storage._repeatedString)
case 45: try decoder.decodeRepeatedBytesField(value: &_storage._repeatedBytes)
case 48: try decoder.decodeRepeatedMessageField(value: &_storage._repeatedNestedMessage)
case 49: try decoder.decodeRepeatedMessageField(value: &_storage._repeatedForeignMessage)
case 50: try decoder.decodeRepeatedMessageField(value: &_storage._repeatedImportMessage)
case 51: try decoder.decodeRepeatedEnumField(value: &_storage._repeatedNestedEnum)
case 52: try decoder.decodeRepeatedEnumField(value: &_storage._repeatedForeignEnum)
case 54: try decoder.decodeRepeatedStringField(value: &_storage._repeatedStringPiece)
case 55: try decoder.decodeRepeatedStringField(value: &_storage._repeatedCord)
case 57: try decoder.decodeRepeatedMessageField(value: &_storage._repeatedLazyMessage)
case 111:
if _storage._oneofField != nil {try decoder.handleConflictingOneOf()}
var v: UInt32?
try decoder.decodeSingularUInt32Field(value: &v)
if let v = v {_storage._oneofField = .oneofUint32(v)}
case 112:
var v: Proto3ArenaUnittest_TestAllTypes.NestedMessage?
if let current = _storage._oneofField {
try decoder.handleConflictingOneOf()
if case .oneofNestedMessage(let m) = current {v = m}
}
try decoder.decodeSingularMessageField(value: &v)
if let v = v {_storage._oneofField = .oneofNestedMessage(v)}
case 113:
if _storage._oneofField != nil {try decoder.handleConflictingOneOf()}
var v: String?
try decoder.decodeSingularStringField(value: &v)
if let v = v {_storage._oneofField = .oneofString(v)}
case 114:
if _storage._oneofField != nil {try decoder.handleConflictingOneOf()}
var v: Data?
try decoder.decodeSingularBytesField(value: &v)
if let v = v {_storage._oneofField = .oneofBytes(v)}
case 115: try decoder.decodeSingularMessageField(value: &_storage._optionalLazyImportMessage)
default: break
}
}
}
}
func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
try withExtendedLifetime(_storage) { (_storage: _StorageClass) in
if _storage._optionalInt32 != 0 {
try visitor.visitSingularInt32Field(value: _storage._optionalInt32, fieldNumber: 1)
}
if _storage._optionalInt64 != 0 {
try visitor.visitSingularInt64Field(value: _storage._optionalInt64, fieldNumber: 2)
}
if _storage._optionalUint32 != 0 {
try visitor.visitSingularUInt32Field(value: _storage._optionalUint32, fieldNumber: 3)
}
if _storage._optionalUint64 != 0 {
try visitor.visitSingularUInt64Field(value: _storage._optionalUint64, fieldNumber: 4)
}
if _storage._optionalSint32 != 0 {
try visitor.visitSingularSInt32Field(value: _storage._optionalSint32, fieldNumber: 5)
}
if _storage._optionalSint64 != 0 {
try visitor.visitSingularSInt64Field(value: _storage._optionalSint64, fieldNumber: 6)
}
if _storage._optionalFixed32 != 0 {
try visitor.visitSingularFixed32Field(value: _storage._optionalFixed32, fieldNumber: 7)
}
if _storage._optionalFixed64 != 0 {
try visitor.visitSingularFixed64Field(value: _storage._optionalFixed64, fieldNumber: 8)
}
if _storage._optionalSfixed32 != 0 {
try visitor.visitSingularSFixed32Field(value: _storage._optionalSfixed32, fieldNumber: 9)
}
if _storage._optionalSfixed64 != 0 {
try visitor.visitSingularSFixed64Field(value: _storage._optionalSfixed64, fieldNumber: 10)
}
if _storage._optionalFloat != 0 {
try visitor.visitSingularFloatField(value: _storage._optionalFloat, fieldNumber: 11)
}
if _storage._optionalDouble != 0 {
try visitor.visitSingularDoubleField(value: _storage._optionalDouble, fieldNumber: 12)
}
if _storage._optionalBool != false {
try visitor.visitSingularBoolField(value: _storage._optionalBool, fieldNumber: 13)
}
if !_storage._optionalString.isEmpty {
try visitor.visitSingularStringField(value: _storage._optionalString, fieldNumber: 14)
}
if !_storage._optionalBytes.isEmpty {
try visitor.visitSingularBytesField(value: _storage._optionalBytes, fieldNumber: 15)
}
if let v = _storage._optionalNestedMessage {
try visitor.visitSingularMessageField(value: v, fieldNumber: 18)
}
if let v = _storage._optionalForeignMessage {
try visitor.visitSingularMessageField(value: v, fieldNumber: 19)
}
if let v = _storage._optionalImportMessage {
try visitor.visitSingularMessageField(value: v, fieldNumber: 20)
}
if _storage._optionalNestedEnum != .zero {
try visitor.visitSingularEnumField(value: _storage._optionalNestedEnum, fieldNumber: 21)
}
if _storage._optionalForeignEnum != .foreignZero {
try visitor.visitSingularEnumField(value: _storage._optionalForeignEnum, fieldNumber: 22)
}
if !_storage._optionalStringPiece.isEmpty {
try visitor.visitSingularStringField(value: _storage._optionalStringPiece, fieldNumber: 24)
}
if !_storage._optionalCord.isEmpty {
try visitor.visitSingularStringField(value: _storage._optionalCord, fieldNumber: 25)
}
if let v = _storage._optionalPublicImportMessage {
try visitor.visitSingularMessageField(value: v, fieldNumber: 26)
}
if let v = _storage._optionalLazyMessage {
try visitor.visitSingularMessageField(value: v, fieldNumber: 27)
}
if !_storage._repeatedInt32.isEmpty {
try visitor.visitPackedInt32Field(value: _storage._repeatedInt32, fieldNumber: 31)
}
if !_storage._repeatedInt64.isEmpty {
try visitor.visitPackedInt64Field(value: _storage._repeatedInt64, fieldNumber: 32)
}
if !_storage._repeatedUint32.isEmpty {
try visitor.visitPackedUInt32Field(value: _storage._repeatedUint32, fieldNumber: 33)
}
if !_storage._repeatedUint64.isEmpty {
try visitor.visitPackedUInt64Field(value: _storage._repeatedUint64, fieldNumber: 34)
}
if !_storage._repeatedSint32.isEmpty {
try visitor.visitPackedSInt32Field(value: _storage._repeatedSint32, fieldNumber: 35)
}
if !_storage._repeatedSint64.isEmpty {
try visitor.visitPackedSInt64Field(value: _storage._repeatedSint64, fieldNumber: 36)
}
if !_storage._repeatedFixed32.isEmpty {
try visitor.visitPackedFixed32Field(value: _storage._repeatedFixed32, fieldNumber: 37)
}
if !_storage._repeatedFixed64.isEmpty {
try visitor.visitPackedFixed64Field(value: _storage._repeatedFixed64, fieldNumber: 38)
}
if !_storage._repeatedSfixed32.isEmpty {
try visitor.visitPackedSFixed32Field(value: _storage._repeatedSfixed32, fieldNumber: 39)
}
if !_storage._repeatedSfixed64.isEmpty {
try visitor.visitPackedSFixed64Field(value: _storage._repeatedSfixed64, fieldNumber: 40)
}
if !_storage._repeatedFloat.isEmpty {
try visitor.visitPackedFloatField(value: _storage._repeatedFloat, fieldNumber: 41)
}
if !_storage._repeatedDouble.isEmpty {
try visitor.visitPackedDoubleField(value: _storage._repeatedDouble, fieldNumber: 42)
}
if !_storage._repeatedBool.isEmpty {
try visitor.visitPackedBoolField(value: _storage._repeatedBool, fieldNumber: 43)
}
if !_storage._repeatedString.isEmpty {
try visitor.visitRepeatedStringField(value: _storage._repeatedString, fieldNumber: 44)
}
if !_storage._repeatedBytes.isEmpty {
try visitor.visitRepeatedBytesField(value: _storage._repeatedBytes, fieldNumber: 45)
}
if !_storage._repeatedNestedMessage.isEmpty {
try visitor.visitRepeatedMessageField(value: _storage._repeatedNestedMessage, fieldNumber: 48)
}
if !_storage._repeatedForeignMessage.isEmpty {
try visitor.visitRepeatedMessageField(value: _storage._repeatedForeignMessage, fieldNumber: 49)
}
if !_storage._repeatedImportMessage.isEmpty {
try visitor.visitRepeatedMessageField(value: _storage._repeatedImportMessage, fieldNumber: 50)
}
if !_storage._repeatedNestedEnum.isEmpty {
try visitor.visitPackedEnumField(value: _storage._repeatedNestedEnum, fieldNumber: 51)
}
if !_storage._repeatedForeignEnum.isEmpty {
try visitor.visitPackedEnumField(value: _storage._repeatedForeignEnum, fieldNumber: 52)
}
if !_storage._repeatedStringPiece.isEmpty {
try visitor.visitRepeatedStringField(value: _storage._repeatedStringPiece, fieldNumber: 54)
}
if !_storage._repeatedCord.isEmpty {
try visitor.visitRepeatedStringField(value: _storage._repeatedCord, fieldNumber: 55)
}
if !_storage._repeatedLazyMessage.isEmpty {
try visitor.visitRepeatedMessageField(value: _storage._repeatedLazyMessage, fieldNumber: 57)
}
switch _storage._oneofField {
case .oneofUint32(let v)?:
try visitor.visitSingularUInt32Field(value: v, fieldNumber: 111)
case .oneofNestedMessage(let v)?:
try visitor.visitSingularMessageField(value: v, fieldNumber: 112)
case .oneofString(let v)?:
try visitor.visitSingularStringField(value: v, fieldNumber: 113)
case .oneofBytes(let v)?:
try visitor.visitSingularBytesField(value: v, fieldNumber: 114)
case nil: break
}
if let v = _storage._optionalLazyImportMessage {
try visitor.visitSingularMessageField(value: v, fieldNumber: 115)
}
}
try unknownFields.traverse(visitor: &visitor)
}
static func ==(lhs: Proto3ArenaUnittest_TestAllTypes, rhs: Proto3ArenaUnittest_TestAllTypes) -> Bool {
if lhs._storage !== rhs._storage {
let storagesAreEqual: Bool = withExtendedLifetime((lhs._storage, rhs._storage)) { (_args: (_StorageClass, _StorageClass)) in
let _storage = _args.0
let rhs_storage = _args.1
if _storage._optionalInt32 != rhs_storage._optionalInt32 {return false}
if _storage._optionalInt64 != rhs_storage._optionalInt64 {return false}
if _storage._optionalUint32 != rhs_storage._optionalUint32 {return false}
if _storage._optionalUint64 != rhs_storage._optionalUint64 {return false}
if _storage._optionalSint32 != rhs_storage._optionalSint32 {return false}
if _storage._optionalSint64 != rhs_storage._optionalSint64 {return false}
if _storage._optionalFixed32 != rhs_storage._optionalFixed32 {return false}
if _storage._optionalFixed64 != rhs_storage._optionalFixed64 {return false}
if _storage._optionalSfixed32 != rhs_storage._optionalSfixed32 {return false}
if _storage._optionalSfixed64 != rhs_storage._optionalSfixed64 {return false}
if _storage._optionalFloat != rhs_storage._optionalFloat {return false}
if _storage._optionalDouble != rhs_storage._optionalDouble {return false}
if _storage._optionalBool != rhs_storage._optionalBool {return false}
if _storage._optionalString != rhs_storage._optionalString {return false}
if _storage._optionalBytes != rhs_storage._optionalBytes {return false}
if _storage._optionalNestedMessage != rhs_storage._optionalNestedMessage {return false}
if _storage._optionalForeignMessage != rhs_storage._optionalForeignMessage {return false}
if _storage._optionalImportMessage != rhs_storage._optionalImportMessage {return false}
if _storage._optionalNestedEnum != rhs_storage._optionalNestedEnum {return false}
if _storage._optionalForeignEnum != rhs_storage._optionalForeignEnum {return false}
if _storage._optionalStringPiece != rhs_storage._optionalStringPiece {return false}
if _storage._optionalCord != rhs_storage._optionalCord {return false}
if _storage._optionalPublicImportMessage != rhs_storage._optionalPublicImportMessage {return false}
if _storage._optionalLazyMessage != rhs_storage._optionalLazyMessage {return false}
if _storage._optionalLazyImportMessage != rhs_storage._optionalLazyImportMessage {return false}
if _storage._repeatedInt32 != rhs_storage._repeatedInt32 {return false}
if _storage._repeatedInt64 != rhs_storage._repeatedInt64 {return false}
if _storage._repeatedUint32 != rhs_storage._repeatedUint32 {return false}
if _storage._repeatedUint64 != rhs_storage._repeatedUint64 {return false}
if _storage._repeatedSint32 != rhs_storage._repeatedSint32 {return false}
if _storage._repeatedSint64 != rhs_storage._repeatedSint64 {return false}
if _storage._repeatedFixed32 != rhs_storage._repeatedFixed32 {return false}
if _storage._repeatedFixed64 != rhs_storage._repeatedFixed64 {return false}
if _storage._repeatedSfixed32 != rhs_storage._repeatedSfixed32 {return false}
if _storage._repeatedSfixed64 != rhs_storage._repeatedSfixed64 {return false}
if _storage._repeatedFloat != rhs_storage._repeatedFloat {return false}
if _storage._repeatedDouble != rhs_storage._repeatedDouble {return false}
if _storage._repeatedBool != rhs_storage._repeatedBool {return false}
if _storage._repeatedString != rhs_storage._repeatedString {return false}
if _storage._repeatedBytes != rhs_storage._repeatedBytes {return false}
if _storage._repeatedNestedMessage != rhs_storage._repeatedNestedMessage {return false}
if _storage._repeatedForeignMessage != rhs_storage._repeatedForeignMessage {return false}
if _storage._repeatedImportMessage != rhs_storage._repeatedImportMessage {return false}
if _storage._repeatedNestedEnum != rhs_storage._repeatedNestedEnum {return false}
if _storage._repeatedForeignEnum != rhs_storage._repeatedForeignEnum {return false}
if _storage._repeatedStringPiece != rhs_storage._repeatedStringPiece {return false}
if _storage._repeatedCord != rhs_storage._repeatedCord {return false}
if _storage._repeatedLazyMessage != rhs_storage._repeatedLazyMessage {return false}
if _storage._oneofField != rhs_storage._oneofField {return false}
return true
}
if !storagesAreEqual {return false}
}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Proto3ArenaUnittest_TestAllTypes.NestedEnum: SwiftProtobuf._ProtoNameProviding {
static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
-1: .same(proto: "NEG"),
0: .same(proto: "ZERO"),
1: .same(proto: "FOO"),
2: .same(proto: "BAR"),
3: .same(proto: "BAZ"),
]
}
extension Proto3ArenaUnittest_TestAllTypes.NestedMessage: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = Proto3ArenaUnittest_TestAllTypes.protoMessageName + ".NestedMessage"
static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "bb"),
]
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 1: try decoder.decodeSingularInt32Field(value: &self.bb)
default: break
}
}
}
func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if self.bb != 0 {
try visitor.visitSingularInt32Field(value: self.bb, fieldNumber: 1)
}
try unknownFields.traverse(visitor: &visitor)
}
static func ==(lhs: Proto3ArenaUnittest_TestAllTypes.NestedMessage, rhs: Proto3ArenaUnittest_TestAllTypes.NestedMessage) -> Bool {
if lhs.bb != rhs.bb {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Proto3ArenaUnittest_TestPackedTypes: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = _protobuf_package + ".TestPackedTypes"
static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
90: .standard(proto: "packed_int32"),
91: .standard(proto: "packed_int64"),
92: .standard(proto: "packed_uint32"),
93: .standard(proto: "packed_uint64"),
94: .standard(proto: "packed_sint32"),
95: .standard(proto: "packed_sint64"),
96: .standard(proto: "packed_fixed32"),
97: .standard(proto: "packed_fixed64"),
98: .standard(proto: "packed_sfixed32"),
99: .standard(proto: "packed_sfixed64"),
100: .standard(proto: "packed_float"),
101: .standard(proto: "packed_double"),
102: .standard(proto: "packed_bool"),
103: .standard(proto: "packed_enum"),
]
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 90: try decoder.decodeRepeatedInt32Field(value: &self.packedInt32)
case 91: try decoder.decodeRepeatedInt64Field(value: &self.packedInt64)
case 92: try decoder.decodeRepeatedUInt32Field(value: &self.packedUint32)
case 93: try decoder.decodeRepeatedUInt64Field(value: &self.packedUint64)
case 94: try decoder.decodeRepeatedSInt32Field(value: &self.packedSint32)
case 95: try decoder.decodeRepeatedSInt64Field(value: &self.packedSint64)
case 96: try decoder.decodeRepeatedFixed32Field(value: &self.packedFixed32)
case 97: try decoder.decodeRepeatedFixed64Field(value: &self.packedFixed64)
case 98: try decoder.decodeRepeatedSFixed32Field(value: &self.packedSfixed32)
case 99: try decoder.decodeRepeatedSFixed64Field(value: &self.packedSfixed64)
case 100: try decoder.decodeRepeatedFloatField(value: &self.packedFloat)
case 101: try decoder.decodeRepeatedDoubleField(value: &self.packedDouble)
case 102: try decoder.decodeRepeatedBoolField(value: &self.packedBool)
case 103: try decoder.decodeRepeatedEnumField(value: &self.packedEnum)
default: break
}
}
}
func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.packedInt32.isEmpty {
try visitor.visitPackedInt32Field(value: self.packedInt32, fieldNumber: 90)
}
if !self.packedInt64.isEmpty {
try visitor.visitPackedInt64Field(value: self.packedInt64, fieldNumber: 91)
}
if !self.packedUint32.isEmpty {
try visitor.visitPackedUInt32Field(value: self.packedUint32, fieldNumber: 92)
}
if !self.packedUint64.isEmpty {
try visitor.visitPackedUInt64Field(value: self.packedUint64, fieldNumber: 93)
}
if !self.packedSint32.isEmpty {
try visitor.visitPackedSInt32Field(value: self.packedSint32, fieldNumber: 94)
}
if !self.packedSint64.isEmpty {
try visitor.visitPackedSInt64Field(value: self.packedSint64, fieldNumber: 95)
}
if !self.packedFixed32.isEmpty {
try visitor.visitPackedFixed32Field(value: self.packedFixed32, fieldNumber: 96)
}
if !self.packedFixed64.isEmpty {
try visitor.visitPackedFixed64Field(value: self.packedFixed64, fieldNumber: 97)
}
if !self.packedSfixed32.isEmpty {
try visitor.visitPackedSFixed32Field(value: self.packedSfixed32, fieldNumber: 98)
}
if !self.packedSfixed64.isEmpty {
try visitor.visitPackedSFixed64Field(value: self.packedSfixed64, fieldNumber: 99)
}
if !self.packedFloat.isEmpty {
try visitor.visitPackedFloatField(value: self.packedFloat, fieldNumber: 100)
}
if !self.packedDouble.isEmpty {
try visitor.visitPackedDoubleField(value: self.packedDouble, fieldNumber: 101)
}
if !self.packedBool.isEmpty {
try visitor.visitPackedBoolField(value: self.packedBool, fieldNumber: 102)
}
if !self.packedEnum.isEmpty {
try visitor.visitPackedEnumField(value: self.packedEnum, fieldNumber: 103)
}
try unknownFields.traverse(visitor: &visitor)
}
static func ==(lhs: Proto3ArenaUnittest_TestPackedTypes, rhs: Proto3ArenaUnittest_TestPackedTypes) -> Bool {
if lhs.packedInt32 != rhs.packedInt32 {return false}
if lhs.packedInt64 != rhs.packedInt64 {return false}
if lhs.packedUint32 != rhs.packedUint32 {return false}
if lhs.packedUint64 != rhs.packedUint64 {return false}
if lhs.packedSint32 != rhs.packedSint32 {return false}
if lhs.packedSint64 != rhs.packedSint64 {return false}
if lhs.packedFixed32 != rhs.packedFixed32 {return false}
if lhs.packedFixed64 != rhs.packedFixed64 {return false}
if lhs.packedSfixed32 != rhs.packedSfixed32 {return false}
if lhs.packedSfixed64 != rhs.packedSfixed64 {return false}
if lhs.packedFloat != rhs.packedFloat {return false}
if lhs.packedDouble != rhs.packedDouble {return false}
if lhs.packedBool != rhs.packedBool {return false}
if lhs.packedEnum != rhs.packedEnum {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Proto3ArenaUnittest_TestUnpackedTypes: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = _protobuf_package + ".TestUnpackedTypes"
static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .standard(proto: "repeated_int32"),
2: .standard(proto: "repeated_int64"),
3: .standard(proto: "repeated_uint32"),
4: .standard(proto: "repeated_uint64"),
5: .standard(proto: "repeated_sint32"),
6: .standard(proto: "repeated_sint64"),
7: .standard(proto: "repeated_fixed32"),
8: .standard(proto: "repeated_fixed64"),
9: .standard(proto: "repeated_sfixed32"),
10: .standard(proto: "repeated_sfixed64"),
11: .standard(proto: "repeated_float"),
12: .standard(proto: "repeated_double"),
13: .standard(proto: "repeated_bool"),
14: .standard(proto: "repeated_nested_enum"),
]
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 1: try decoder.decodeRepeatedInt32Field(value: &self.repeatedInt32)
case 2: try decoder.decodeRepeatedInt64Field(value: &self.repeatedInt64)
case 3: try decoder.decodeRepeatedUInt32Field(value: &self.repeatedUint32)
case 4: try decoder.decodeRepeatedUInt64Field(value: &self.repeatedUint64)
case 5: try decoder.decodeRepeatedSInt32Field(value: &self.repeatedSint32)
case 6: try decoder.decodeRepeatedSInt64Field(value: &self.repeatedSint64)
case 7: try decoder.decodeRepeatedFixed32Field(value: &self.repeatedFixed32)
case 8: try decoder.decodeRepeatedFixed64Field(value: &self.repeatedFixed64)
case 9: try decoder.decodeRepeatedSFixed32Field(value: &self.repeatedSfixed32)
case 10: try decoder.decodeRepeatedSFixed64Field(value: &self.repeatedSfixed64)
case 11: try decoder.decodeRepeatedFloatField(value: &self.repeatedFloat)
case 12: try decoder.decodeRepeatedDoubleField(value: &self.repeatedDouble)
case 13: try decoder.decodeRepeatedBoolField(value: &self.repeatedBool)
case 14: try decoder.decodeRepeatedEnumField(value: &self.repeatedNestedEnum)
default: break
}
}
}
func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.repeatedInt32.isEmpty {
try visitor.visitRepeatedInt32Field(value: self.repeatedInt32, fieldNumber: 1)
}
if !self.repeatedInt64.isEmpty {
try visitor.visitRepeatedInt64Field(value: self.repeatedInt64, fieldNumber: 2)
}
if !self.repeatedUint32.isEmpty {
try visitor.visitRepeatedUInt32Field(value: self.repeatedUint32, fieldNumber: 3)
}
if !self.repeatedUint64.isEmpty {
try visitor.visitRepeatedUInt64Field(value: self.repeatedUint64, fieldNumber: 4)
}
if !self.repeatedSint32.isEmpty {
try visitor.visitRepeatedSInt32Field(value: self.repeatedSint32, fieldNumber: 5)
}
if !self.repeatedSint64.isEmpty {
try visitor.visitRepeatedSInt64Field(value: self.repeatedSint64, fieldNumber: 6)
}
if !self.repeatedFixed32.isEmpty {
try visitor.visitRepeatedFixed32Field(value: self.repeatedFixed32, fieldNumber: 7)
}
if !self.repeatedFixed64.isEmpty {
try visitor.visitRepeatedFixed64Field(value: self.repeatedFixed64, fieldNumber: 8)
}
if !self.repeatedSfixed32.isEmpty {
try visitor.visitRepeatedSFixed32Field(value: self.repeatedSfixed32, fieldNumber: 9)
}
if !self.repeatedSfixed64.isEmpty {
try visitor.visitRepeatedSFixed64Field(value: self.repeatedSfixed64, fieldNumber: 10)
}
if !self.repeatedFloat.isEmpty {
try visitor.visitRepeatedFloatField(value: self.repeatedFloat, fieldNumber: 11)
}
if !self.repeatedDouble.isEmpty {
try visitor.visitRepeatedDoubleField(value: self.repeatedDouble, fieldNumber: 12)
}
if !self.repeatedBool.isEmpty {
try visitor.visitRepeatedBoolField(value: self.repeatedBool, fieldNumber: 13)
}
if !self.repeatedNestedEnum.isEmpty {
try visitor.visitRepeatedEnumField(value: self.repeatedNestedEnum, fieldNumber: 14)
}
try unknownFields.traverse(visitor: &visitor)
}
static func ==(lhs: Proto3ArenaUnittest_TestUnpackedTypes, rhs: Proto3ArenaUnittest_TestUnpackedTypes) -> Bool {
if lhs.repeatedInt32 != rhs.repeatedInt32 {return false}
if lhs.repeatedInt64 != rhs.repeatedInt64 {return false}
if lhs.repeatedUint32 != rhs.repeatedUint32 {return false}
if lhs.repeatedUint64 != rhs.repeatedUint64 {return false}
if lhs.repeatedSint32 != rhs.repeatedSint32 {return false}
if lhs.repeatedSint64 != rhs.repeatedSint64 {return false}
if lhs.repeatedFixed32 != rhs.repeatedFixed32 {return false}
if lhs.repeatedFixed64 != rhs.repeatedFixed64 {return false}
if lhs.repeatedSfixed32 != rhs.repeatedSfixed32 {return false}
if lhs.repeatedSfixed64 != rhs.repeatedSfixed64 {return false}
if lhs.repeatedFloat != rhs.repeatedFloat {return false}
if lhs.repeatedDouble != rhs.repeatedDouble {return false}
if lhs.repeatedBool != rhs.repeatedBool {return false}
if lhs.repeatedNestedEnum != rhs.repeatedNestedEnum {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Proto3ArenaUnittest_NestedTestAllTypes: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = _protobuf_package + ".NestedTestAllTypes"
static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "child"),
2: .same(proto: "payload"),
3: .standard(proto: "repeated_child"),
]
fileprivate class _StorageClass {
var _child: Proto3ArenaUnittest_NestedTestAllTypes? = nil
var _payload: Proto3ArenaUnittest_TestAllTypes? = nil
var _repeatedChild: [Proto3ArenaUnittest_NestedTestAllTypes] = []
static let defaultInstance = _StorageClass()
private init() {}
init(copying source: _StorageClass) {
_child = source._child
_payload = source._payload
_repeatedChild = source._repeatedChild
}
}
fileprivate mutating func _uniqueStorage() -> _StorageClass {
if !isKnownUniquelyReferenced(&_storage) {
_storage = _StorageClass(copying: _storage)
}
return _storage
}
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
_ = _uniqueStorage()
try withExtendedLifetime(_storage) { (_storage: _StorageClass) in
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 1: try decoder.decodeSingularMessageField(value: &_storage._child)
case 2: try decoder.decodeSingularMessageField(value: &_storage._payload)
case 3: try decoder.decodeRepeatedMessageField(value: &_storage._repeatedChild)
default: break
}
}
}
}
func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
try withExtendedLifetime(_storage) { (_storage: _StorageClass) in
if let v = _storage._child {
try visitor.visitSingularMessageField(value: v, fieldNumber: 1)
}
if let v = _storage._payload {
try visitor.visitSingularMessageField(value: v, fieldNumber: 2)
}
if !_storage._repeatedChild.isEmpty {
try visitor.visitRepeatedMessageField(value: _storage._repeatedChild, fieldNumber: 3)
}
}
try unknownFields.traverse(visitor: &visitor)
}
static func ==(lhs: Proto3ArenaUnittest_NestedTestAllTypes, rhs: Proto3ArenaUnittest_NestedTestAllTypes) -> Bool {
if lhs._storage !== rhs._storage {
let storagesAreEqual: Bool = withExtendedLifetime((lhs._storage, rhs._storage)) { (_args: (_StorageClass, _StorageClass)) in
let _storage = _args.0
let rhs_storage = _args.1
if _storage._child != rhs_storage._child {return false}
if _storage._payload != rhs_storage._payload {return false}
if _storage._repeatedChild != rhs_storage._repeatedChild {return false}
return true
}
if !storagesAreEqual {return false}
}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Proto3ArenaUnittest_ForeignMessage: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = _protobuf_package + ".ForeignMessage"
static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "c"),
]
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 1: try decoder.decodeSingularInt32Field(value: &self.c)
default: break
}
}
}
func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if self.c != 0 {
try visitor.visitSingularInt32Field(value: self.c, fieldNumber: 1)
}
try unknownFields.traverse(visitor: &visitor)
}
static func ==(lhs: Proto3ArenaUnittest_ForeignMessage, rhs: Proto3ArenaUnittest_ForeignMessage) -> Bool {
if lhs.c != rhs.c {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Proto3ArenaUnittest_TestEmptyMessage: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = _protobuf_package + ".TestEmptyMessage"
static let _protobuf_nameMap = SwiftProtobuf._NameMap()
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let _ = try decoder.nextFieldNumber() {
}
}
func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
try unknownFields.traverse(visitor: &visitor)
}
static func ==(lhs: Proto3ArenaUnittest_TestEmptyMessage, rhs: Proto3ArenaUnittest_TestEmptyMessage) -> Bool {
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
File notes:
1. base_dic_full.dic
Hash-indexed dictionary; entries carry word-frequency and part-of-speech tags.
2. words_addons.dic
Entries prefixed "s" are stop words; "u" marks suffix words (place-name suffixes, units of measure, etc.); "n" marks leading words (surnames, Chinese numerals, etc.); "a" marks trailing words (regions, departments, etc.).
3. not-build/base_dic_full.txt
Uncompiled dictionary source.
4. To recompile the dictionary:
<?php
header('Content-Type: text/html; charset=utf-8');
require_once('phpanalysis.class.php');
$pa = new PhpAnalysis('utf-8', 'utf-8', false);
// Compile the plain-text source (item 3 above) into the binary dictionary.
$sourcefile = 'not-build/base_dic_full.txt';
$pa->MakeDict($sourcefile, 16, 'dict/base_dic_full.dic');
echo "OK";
?>
Jared Allen (quarterback)
Jared Allen (born August 26, 1981) is an American football coach and former player. He was the starting quarterback at FAU from 2001 to 2004 and also played professionally for the Amsterdam Admirals of NFL Europe in 2006.
High school career
Allen attended Edmond Santa Fe High School in Edmond, Oklahoma. As a senior, he completed 171 of 282 passes for 1,973 yards and 18 touchdowns. As a junior, he connected on 145 of 232 attempts for 1,502 yards and 9 touchdowns. He was named a 1999 Blue Chip athlete. He earned first-team All-Edmond Area, All-Metro Conference, All-District 6A-1, all-city (Oklahoma City) and Oklahoma Coaches Association all-state honors. Allen was named to the Jim Thorpe All-Star Game and was selected MVP. He was a two-sport athlete (football and basketball).
College career
Allen started four years at Florida Atlantic and was named the team's MVP in 2003, the offensive MVP in 2002, and the team MVP in 2001. He played in 47 games and started 44 times. Throughout his collegiate career, he completed 570 of 1,003 passes for 8,100 yards and 50 touchdowns. He was redshirted in 2000 and majored in political science.
Professional career
2005 season
Allen signed with the Tampa Bay Buccaneers as an undrafted free agent on May 5, 2005. He was released on August 31, 2005.
2006 season
Allen was re-signed on January 5, 2006, and was allocated to NFL Europe's Amsterdam Admirals. He shared backup quarterback duties with Reggie Robertson, playing one quarter per game every other week, until starting quarterback Gibran Hamdan broke his ankle. Allen was then granted the starting job, leading the Admirals to one win (away against the Frankfurt Galaxy) and two losses during the remaining regular-season games. The one win was sufficient to secure first place in the standings and a spot in the World Bowl, which the Admirals lost 7–22 to the Frankfurt Galaxy. Allen was released on August 29.
Coaching career
In January 2012, Allen was named the tight ends coach at Florida Atlantic University (FAU). After the 2013 season, Allen stepped back into an administrative role as director of player personnel and external relations. In 2015, he returned to position coaching, this time with the running backs. Following the 2016 season, Allen stepped away from coaching for the 2017 season.
External links
FAU profile
Category:1981 births
Category:Living people
Category:American football quarterbacks
Category:Amsterdam Admirals players
Category:Florida Atlantic Owls football coaches
Category:Florida Atlantic Owls football players
Category:Players of American football from Oklahoma
Category:Sportspeople from Edmond, Oklahoma
Derek Duncan
Derek Henry Junior Duncan (born 23 April 1987) is an English footballer who plays as a winger or as a left back for VCD Athletic.
Career
Duncan was signed by Grays Athletic on a one-year contract on 25 May 2007, following his release by Leyton Orient. The left-winger left Grays Athletic by mutual consent, just a month after he signed, after his agent offered him to other Football League clubs.
In the summer of 2007, Duncan was signed by Paul Lambert for Wycombe Wanderers, where he failed to make a league appearance before having his contract terminated by mutual consent in January 2009.
On the same day it was announced that he had left Wycombe Wanderers, it was announced that the winger had signed for Ebbsfleet United until the end of the 2008–09 season. The following day Duncan made his debut for Ebbsfleet in their 1–0 home league win over Rushden & Diamonds.
Duncan signed for AFC Wimbledon on 15 June 2009, but after one season at Kingsmeadow he signed for former club Ebbsfleet, on 6 July 2010.
On 29 July 2011, it was announced he had signed for Conference South side Woking.
At the start of 2012–13 season he signed for Conference South side Maidenhead United.
Isthmian League side VCD Athletic recruited Duncan for the 2016–17 season. He featured throughout the first part of the season before being sent off with a straight red card on 1 January 2017 against local rivals Phoenix Sports.
References
External links
Category:1987 births
Category:Living people
Category:English footballers
Category:Association football wingers
Category:Leyton Orient F.C. players
Category:Lewes F.C. players
Category:Grays Athletic F.C. players
Category:Wycombe Wanderers F.C. players
Category:Ebbsfleet United F.C. players
Category:AFC Wimbledon players
Category:Woking F.C. players
Category:Maidenhead United F.C. players
Category:Thamesmead Town F.C. players
Category:English Football League players
Category:National League (English football) players
Category:Isthmian League players
Category:Footballers from Upton Park, London
Rhône's 13th constituency
The 13th constituency of the Rhône (French: Treizième circonscription du Rhône) is a French legislative constituency in the Rhône département. Like the other 576 French constituencies, it elects one MP using the first past the post election system with a run-off.
Description
The 13th constituency of the Rhône lies to the east of Lyon and is largely suburban in character. The largest town in the constituency, Meyzieu, lies close to Lyon's main airport.
The constituency has been represented by both centre-right and centre-left politicians in recent years. At the 2017 election the PS vote collapsed to the extent that the party came 6th in the first round with under 5% of the vote.
Assembly Members
References
2017 XIXO Ladies Open Hódmezővásárhely – Doubles
Laura Pigossi and Nadia Podoroska were the defending champions, but both players chose not to participate.
Kotomi Takahata and Prarthana Thombare won the title after Ulrikke Eikeri and Tereza Mrdeža retired in the final at 1–0.
Seeds
Draw
References
Main Draw
---
abstract: 'Noisy measurements of a physical unclonable function (PUF) are used to store secret keys with reliability, security, privacy, and complexity constraints. A new set of low-complexity and orthogonal transforms with no multiplication is proposed to obtain bit-error probability results significantly better than all methods previously proposed for key binding with PUFs. The uniqueness and security performance of a transform selected from the proposed set is shown to be close to optimal. An error-correction code with a low-complexity decoder and a high code rate is shown to provide a block-error probability significantly smaller than provided by previously proposed codes with the same or smaller code rates.'
address: |
Information Theory and Applications Chair, Technische Universität Berlin\
    {guenlue, rafael.schaefer}@tu-berlin.de\
bibliography:
- 'references.bib'
title: |
LOW-COMPLEXITY AND RELIABLE TRANSFORMS FOR\
PHYSICAL UNCLONABLE FUNCTIONS
---
physical unclonable function (PUF), no multiplication transforms, secret key agreement, low complexity.
Introduction
============
Biometric identifiers such as fingerprints are useful to authenticate a user. Similarly, secret keys are traditionally stored in non-volatile memories (NVMs) to authenticate a physical device that contains the key. NVMs require hardware protection even when the device is turned off since an attacker can try to obtain the key at any time. A safe and cheap alternative to storing keys in NVMs is to use physical identifiers, e.g., fine variations of ring oscillator (RO) outputs, as a randomness source. Since invasive attacks to physical identifiers permanently change the identifier output, there is no need for continuous hardware protection for physical identifiers [@pufintheory].
Physical unclonable functions (PUFs) are physical identifiers with reliable and high-entropy outputs [@GassendThesis; @PappuThesis]. PUF outputs are unique to each device, so they are used for safe and low-complexity key storage in digital devices. These keys can be used for private authentication, secure computation, and encryption. Replacing such identifiers is expensive, so key-storage methods should limit the information the public data leak about the identifier outputs. Moreover, the same device should be able to reconstruct a secret key generated from the noiseless outputs by using the noisy outputs and public information. The ultimate secret-key vs. privacy-leakage rate tradeoffs are given in [@IgnaTrans; @LaiTrans; @benimdissertation]. The secret-key and privacy-leakage rate limits for a suboptimal chosen-secret (CS) model called *fuzzy commitment scheme* (FCS) [@FuzzyCommitment] are given in [@IgnatenkoFuzzy]. We consider the FCS to compare different post-processing methods applied to PUFs. Asymptotically optimal CS model constructions are given in [@bizimWZ] and similar comparison results can be obtained by using these constructions.
Physical identifier outputs are highly correlated and noisy, which are the two main problems in using PUFs. If errors in the extracted sequences are not corrected, PUF reliability is low. If correlations are not eliminated, machine learning algorithms can model the PUF outputs [@MLPUF]. To solve the two problems, the discrete cosine transform (DCT) is used in [@bizimtemperature] to generate a uniformly-distributed bit sequence from PUFs under varying environmental conditions. Similarly, the discrete Walsh-Hadamard transform (DWHT), discrete Haar transform (DHT), and Karhunen-Loève transform (KLT) are compared in [@bizimMDPI] in terms of the maximum secret-key length, decorrelation efficiency, reliability, security, and hardware cost. The DCT, DWHT, and DHT provide good reliability and security results, and a hardware implementation of the DWHT in [@bizimMDPI] shows that the DWHT requires a substantially smaller hardware area than the other transforms. There are two main reasons why the DWHT can be implemented efficiently. Firstly, the matrix that represents the DWHT has elements $1$ or $-1$, so there is no matrix multiplication. Secondly, an input-selection algorithm that is an extension of the algorithm in [@InputSelection] makes it possible to calculate the two-dimensional (2D) DWHT recursively. Based on these observations, we propose a new set of transforms that preserve these properties and that significantly improve the reliability of the sequences extracted from PUFs.
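To illustrate why the DWHT needs no multiplications, the following software sketch computes the transform with additions and subtractions only, via the standard butterfly recursion. It is an illustrative reference implementation, not the hardware design discussed above; `fwht2d` obtains the 2D transform by applying the 1D transform row-wise and then column-wise.

```python
def fwht(x):
    """Fast Walsh-Hadamard transform (natural/Hadamard ordering).

    Uses only additions and subtractions; len(x) must be a power of two.
    Returns a new list, leaving the input untouched.
    """
    x = list(x)
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):          # one butterfly stage
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def fwht2d(mat):
    """2D DWHT: transform every row, then every column."""
    rows = [fwht(r) for r in mat]
    cols = [fwht(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

For example, `fwht([1, 0, 1, 0])` yields `[2, 2, 0, 0]`, and applying `fwht` twice returns the input scaled by its length, reflecting orthogonality up to normalization.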
The FCS requires error-correction codes (ECCs) to achieve the realistic block-error probability of $\displaystyle P_\text{B}\!=\!10^{-9}$ for RO PUFs. The ECCs proposed in [@bizimMDPI] have better secret-key and privacy-leakage rates than previously proposed codes, but in some cases it is assumed that if multiple bits are extracted from each transform coefficient, each bit is affected by independent errors. This assumption is not valid in general. Thus, we extract only one bit from each transform coefficient. The contributions of this work are as follows.
- We propose a new set of 2D orthogonal transforms that have low-complexity hardware implementations and no matrix multiplications. The new set of transforms is shown to provide an average bit-error probability smaller than that of the most reliable transform considered in the PUF literature, i.e., the DCT.
- Bit sequences extracted using a transform selected from the new set of transforms are shown to give uniqueness and security results comparable to the state of the art.
- We propose a joint transform-quantizer-code design method for the new set of transforms in combination with the FCS to achieve a block-error probability substantially smaller than the common value of $10^{-9}$ with perfect secrecy.
This paper is organized as follows. In Section \[sec:fuzzycommitment\], we review the FCS. The transform-coding algorithm used to extract secure sequences from RO PUFs is explained in Section \[sec:commonsteps\]. A new set of orthogonal transforms that require a small hardware area and that result in bit-error probabilities smaller than those of previously considered transforms is proposed in Section \[sec:neworth\]. In Section \[sec:comparisons\], we compare the new transforms with previous methods and show that the proposed ECC provides a block-error probability for the new selected transform (ST) that is smaller than for previously considered transforms.
Review of the Fuzzy Commitment Scheme {#sec:fuzzycommitment}
=====================================
Fig. \[fig:fuzzycommitment\] shows the FCS, where an encoder ${\mathsf{Enc}}(\cdot)$ adds a codeword $\displaystyle C^N$, uniformly distributed over a set with cardinality $|\mathcal{S}|$, modulo-2 to the binary noiseless PUF-output sequence $\displaystyle X^N$ during enrollment. We show in Section \[sec:commonsteps\] that the sequence $X^N$ and its noisy version $Y^N$ can be obtained by applying the post-processing steps in Fig. \[fig:postprocessing\] to RO outputs $\widetilde{X}^L$ and its noisy version $\widetilde{Y}^L$, respectively. The sum $\displaystyle W^N=C^N{\mathbin{\oplus}}X^N$ is publicly sent through a noiseless and authenticated channel, and it is called *helper data*. The modulo-2 sum of $W^N$ and the noisy PUF-output sequence $Y^N =X^N {\mathbin{\oplus}}E^N$, where $E^N$ is the binary error vector, gives the noisy codeword $\displaystyle C^N{\mathbin{\oplus}}E^N$. Using the noisy codeword, a channel decoder $\displaystyle {\mathsf{Dec}}(\cdot)$ estimates the secret key $S$ during reconstruction. A reliable secret-key agreement is possible by using $X^N$, $Y^N$, and $W^N$ [@AhlswedeCsiz; @Maurer].
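The enrollment and reconstruction steps can be sketched end to end. The snippet below is a toy illustration (not from the paper) that substitutes a 3x repetition code for the ECC, so a single bit error per length-3 block is corrected:

```python
import secrets

# Toy FCS with a 3x repetition code standing in for a real ECC
# such as the BCH code discussed later; parameters are illustrative.
def rep_encode(bits):
    return [b for b in bits for _ in range(3)]

def rep_decode(bits):
    # Majority vote inside each length-3 block.
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def xor(a, b):
    return [u ^ v for u, v in zip(a, b)]

def enroll(x):
    """Enrollment: draw a secret key S, publish helper data W = Enc(S) xor X."""
    s = [secrets.randbits(1) for _ in range(len(x) // 3)]
    return s, xor(rep_encode(s), x)

def reconstruct(y, w):
    """Reconstruction: decode the noisy codeword Y xor W = Enc(S) xor E."""
    return rep_decode(xor(y, w))
```

When $X^N$ is uniform, $W^N$ acts as a one-time pad on the codeword, which is the intuition behind the perfect-secrecy condition below.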
One can achieve a (secret-key, privacy-leakage) rate pair $ (R_\text{s}\text{,}R_\ell)$ using the FCS with perfect secrecy if, given any $\epsilon\!>\!0$, there is some $N\!\geq\!1$, and an encoder and a decoder for which $\displaystyle R_\text{s}=\frac{\log|\mathcal{S}|}{N}$ and $$\begin{aligned}
{2}
&\Pr[S\ne\hat{S}] \leq \epsilon && (\text{reliability}) \label{eq:reliabilityconst}\\
&I\big(S;W^N\big)\!=\!0 && (\text{perfect secrecy})\label{eq:secrecyconst}\\
&\frac{1}{N}I\big(X^N;W^N\big) \leq R_\ell+\epsilon. \quad\quad\quad&&(\text{privacy}) \label{eq:privacyconst}\end{aligned}$$ Condition (\[eq:secrecyconst\]) ensures that the public side information $W^N$ does not leak any information about the secret key, so one achieves perfect secrecy. The normalized information that $W^N$ leaks about the PUF output sequence $X^N$ is considered in (\[eq:privacyconst\]). If one should asymptotically limit the unnormalized privacy leakage $I(X^N;W^N)$, private keys available during enrollment and reconstruction are necessary [@IgnaTrans], which is not realistic or practical; see the discussions in [@bizimWZ].
Suppose the measurement channel $P_{Y|X}$ is a binary symmetric channel (BSC) with crossover probability $p$, and $X$ is independent and identically distributed (i.i.d.) according to a uniform distribution. Define $\displaystyle H_b(p)\!=\!-p\log p-(1\!-p)\log(1\!-p)$ as the binary entropy function. The region $\displaystyle \mathcal{R}$ of all achievable (secret-key, privacy-leakage) rate pairs for the FCS with perfect secrecy is [@IgnatenkoFuzzy] $$\begin{aligned}
\mathcal{R}\! =\! \big\{ \left(R_\text{s},R_\ell\right)\!\colon\!\quad 0\leq R_\text{s}\leq 1-H_b(p),\quad R_\ell\geq 1\!-\!R_\text{s} \big\}.\label{eq:ls0}\end{aligned}$$ We plot this region in Section \[sec:comparisons\] to evaluate the secret-key and privacy-leakage rates achieved by the proposed ECC.
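For numerical evaluation of this region's corner point, a small helper suffices (illustrative code, not from the paper):

```python
from math import log2

def h_b(p):
    """Binary entropy H_b(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def fcs_corner(p):
    """Optimal FCS point for a BSC(p): (R_s, R_l) = (1 - H_b(p), H_b(p))."""
    rs = 1.0 - h_b(p)
    return rs, 1.0 - rs
```

For instance, the averaged crossover probability $p\approx 0.0088$ used in Section \[sec:comparisons\] yields $R_\text{s}^*\approx 0.93$ bits/source-bit.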
The FCS is a particular realization of the CS model. The region $\mathcal{R}_{\text{cs}}$ of all achievable (secret-key, privacy-leakage) rate pairs for the CS model, where a generic encoder is used to confidentially transmit an embedded secret key to a decoder that observes $Y^N$ and the helper data $W^N$, is given in [@IgnaTrans; @LaiTrans] as the union over all $P_{U|X}$ of the set of achievable rate pairs $\left(R_\text{s},R_\ell\right)$ such that $$\begin{aligned}
\Big\{0\leq R_\text{s}\leq I(U;Y),\qquad R_\ell\geq I(U;X)-I(U;Y)\!\Big\}\label{eq:chosensecret}\end{aligned}$$ where $P_X$ is the probability distribution of $X$ and the alphabet $\mathcal{U}$ of the auxiliary random variable $U$ can be limited to have the size $\displaystyle |\mathcal{U}|\!\leq\!|\mathcal{X}|+1$ as $U-X-Y$ forms a Markov chain. The FCS achieves a boundary point of $\mathcal{R}_{\text{cs}}$ for a BSC $P_{Y|X}$ only at the point $\displaystyle (R_\text{s}^*,R_\ell^*)\!=\!(1\!-\!H_b(p),H_b(p))$. To achieve the other points on the rate-region boundary, one should use a nested code construction as in [@bizimWZ] or a binning based construction as in [@MatthieuPolar], both of which require careful polar code [@Arikan] designs. This is not necessary to illustrate the gains from the new set of transforms and it suffices to combine the new set with the FCS.
Post-processing Steps {#sec:commonsteps}
=====================
We consider a 2D array of $r\!\times\!c$ ROs. Denote the continuous-valued outputs of $L\!=\!r\!\times\!c$ ROs as the vector random variable $\widetilde{X}^L$, distributed according to $\displaystyle f_{\widetilde{X}^L}$. Suppose that the noise component $\widetilde{E}_j$ on the $j$-th RO output is Gaussian distributed with zero mean for all $j=1,2,\ldots,L$ and that the noise components are mutually independent. Denote the noisy RO outputs as $\widetilde{Y}^L\!=\!\widetilde{X}^L\!+\!\widetilde{E}^L$. We extract binary vectors $X^N$ and $Y^N$ from $\widetilde{X}^L$ and $\widetilde{Y}^L$, respectively, and define binary error variables $\displaystyle E_i\!=\!X_i{\mathbin{\oplus}}Y_i$ for $i\!=\!1,2,\ldots,N$.
![The transform-coding steps.[]{data-label="fig:postprocessing"}](./Transformcodingmodel.eps){width="48.50050%" height="0.5005\textheight"}
The post-processing steps used during the enrollment (and reconstruction) to extract a bit sequence $X^N$ (and its noisy version $Y^N$) are depicted in Fig. \[fig:postprocessing\]. These steps are transformation, histogram equalization, quantization, Gray mapping, and concatenation. Since RO outputs $\widetilde{X}^L$ are correlated, we apply a transform $\emph{T}_{r\!\times\!c}(\cdot)$ for decorrelation. We model all transform coefficients and noise components as random variables with Gaussian marginal distributions. A transform-coefficient output $T$ that comes from a distribution with mean $\mu\neq 0$ and variance $\sigma^2\neq 1$ is converted into a standard Gaussian random variable during histogram equalization, which reduces the hardware area when multiple bits are extracted. Independent bits can be extracted from transform coefficients by setting the quantization boundaries of a $K$-bit quantizer to $$\label{eq:quantsteps}
b_k=Q^{-1}\left(1-\dfrac{k}{2^K}\right) \text{ for } k=0,1,\dots,2^K$$ where $Q(\cdot)$ is the $Q$-function. Quantizing a coefficient $\hat{T}$ to $k$ if $\displaystyle b_{k-1}\!<\!\hat{T}\!\leq\!b_k$ ensures that $X^N$ is uniformly distributed, which is necessary to achieve the rate point where the FCS is optimal.
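Since $Q(x)=1-\Phi(x)$, the boundaries in (\[eq:quantsteps\]) reduce to $b_k=\Phi^{-1}(k/2^K)$. A minimal sketch (not from the paper) using the standard-normal inverse CDF:

```python
import math
from statistics import NormalDist

def quantizer_boundaries(K):
    """b_k = Q^{-1}(1 - k/2^K) = Phi^{-1}(k/2^K) for k = 0, 1, ..., 2^K."""
    phi_inv = NormalDist().inv_cdf
    n = 2 ** K
    bounds = [-math.inf]
    bounds += [phi_inv(k / n) for k in range(1, n)]
    bounds.append(math.inf)
    return bounds
```

Each of the $2^K$ intervals then carries probability $2^{-K}$, so the extracted bits are uniformly distributed.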
One can use scalar quantizers without a performance loss in security if the RO output statistics satisfy certain constraints [@benimdissertation]. We do not use the first transform coefficient, i.e., DC coefficient, for bit extraction since it corresponds to the average over the RO array, known by an attacker [@benimdissertation]. Furthermore, Gray mapping ensures that the neighboring quantization intervals result in only one bit flip. This is a good choice as the noise components $E_i$ for all $i=1,2,\ldots,N$ have zero mean. The sequences extracted from transform coefficients are concatenated to obtain the sequence $X^N$ (or $Y^N$).
New Orthogonal Transforms {#sec:neworth}
=========================
A useful metric to measure the complexity of a transform is the number of operations required for computations. Consider only RO arrays of sizes $ r\!=\!c\!=\!8$ and $16$, which are powers of 2, so fast algorithms are available. In [@benimdissertation], the DWHT is suggested as the best candidate among the set of transforms {DCT, DHT, KLT, DWHT} for RO PUF applications with a low-complexity constraint such as internet of things (IoT) applications.
In [@bizimMDPI], we extend an input-selection algorithm to compute the 2D $16\times16$ DWHT by applying a $2\times2$ matrix operation recursively, illustrating that the DWHT requires a small hardware area in a field-programmable gate array (FPGA) since it does not require any multiplications. Following this observation, we propose a set of transforms that are orthogonal (to decorrelate the RO outputs better), that have matrix elements $1$ or $-1$ (to eliminate multiplications), and that have a size of $16\times16$ (to apply the input-selection algorithm given in [@bizimMDPI] to further reduce complexity). We show in the next section that these transforms provide higher reliability than the other transforms previously considered in the literature.
Orthogonal Transform Construction and Selection {#subsec:orthtransselection}
-----------------------------------------------
Consider an orthogonal matrix $A$ with elements $1$ or $-1$ and of size $k\times k$, i.e., $AA^{T}= I$, where $T$ is the matrix transpose and $I$ is the identity matrix of size $k\times k$. It is straightforward to show that the following matrices are also orthogonal: $$\begin{aligned}
&\Biggl[ \begin{matrix}
A&A\\ A&\!-\!A
\end{matrix} \Biggr],
\Biggl[ \begin{matrix}
A&A\\ \!-\!A&A
\end{matrix} \Biggr],
\Biggl[ \begin{matrix}
A&\!-\!A\\ A&A
\end{matrix} \Biggr],
\Biggl[ \begin{matrix}
\!-\!A&A\\ A&A
\end{matrix} \Biggr],\nonumber\\
\Biggl[ &\begin{matrix}
\!-\!A&\!-\!A\\\! -\!A&A
\end{matrix} \Biggr],
\Biggl[ \begin{matrix}
\!-\!A&\!-\!A\\ A&\!-\!A
\end{matrix} \Biggr],
\Biggl[ \begin{matrix}
\!-\!A&A\\ \!-\!A&\!-\!A
\end{matrix} \Biggr],
\Biggl[ \begin{matrix}
A&\!-\!A\\ \!-\!A&\!-\!A
\end{matrix} \Biggr].\label{eq1}\end{aligned}$$ Since $2^{k^2}$ possible matrices should be checked for orthogonality, we choose $k\!=\!4$ to keep the complexity of the exhaustive search for orthogonal matrices low. The result of the exhaustive search is a set of orthogonal matrices $A$ of size $4\!\times\! 4$. By applying the matrix construction methods in (\[eq1\]) twice consecutively, we obtain $12288$ unique orthogonal transforms of size $16\!\times\! 16$ with elements $1$ or $\displaystyle -1$.
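These constructions are easy to verify mechanically. The sketch below (illustrative; note that for $\pm1$ matrices orthogonality holds up to the scaling $AA^{T}=kI$, with the normalization factor $1/\sqrt{k}$ left implicit) builds the eight block matrices of (\[eq1\]) and checks the scaled-orthogonality condition:

```python
def gram_ok(rows):
    """Check A A^T = k I (rows orthogonal up to scaling, as for +/-1 matrices)."""
    k = len(rows)
    return all(
        sum(rows[i][m] * rows[j][m] for m in range(k)) == (k if i == j else 0)
        for i in range(k) for j in range(k)
    )

def doublings(A):
    """The eight block constructions of Eq. (1) applied to a matrix A."""
    nA = [[-x for x in row] for row in A]
    blocks = [(A, A, A, nA), (A, A, nA, A), (A, nA, A, A), (nA, A, A, A),
              (nA, nA, nA, A), (nA, nA, A, nA), (nA, A, nA, nA), (A, nA, nA, nA)]
    out = []
    for tl, tr, bl, br in blocks:
        top = [r1 + r2 for r1, r2 in zip(tl, tr)]
        bot = [r1 + r2 for r1, r2 in zip(bl, br)]
        out.append(top + bot)
    return out
```

Starting from the $2\times2$ Hadamard matrix and doubling reproduces $16\times16$ members of the family; the exhaustive search over the $2^{16}$ candidate $4\times4$ matrices proceeds the same way, calling `gram_ok` on every sign pattern.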
We apply these orthogonal transforms, one of which is the DWHT, to an RO dataset to select the orthogonal transform whose maximum bit-error probability over the transform coefficients is minimum. This selection method provides reliability guarantees for every transform coefficient. An ECC whose code dimension is higher than the one achievable according to the Gilbert-Varshamov (GV) bound [@GilbertGV; @varshamovGV] for the maximum error probability over the transform coefficients of the ST is given in Section \[subsec:codeselection\]. This illustrates that our selection method is conservative and that the block-error probability is substantially smaller than $10^{-9}$.
There are also other orthogonal transforms of size $16\times 16$ but we illustrate in the next section that the new set suffices to significantly increase the reliability of the extracted bits as compared to previously considered transforms and previous RO PUF methods.
Performance Evaluations {#sec:comparisons}
=======================
We use RO arrays of size $16\!\times \!16$ from the RO dataset in [@ROPUF] and apply the transform-coding steps in Fig. \[fig:postprocessing\] to compare the previously considered transforms with the new set of transforms in terms of their reliability, uniqueness, and security. We illustrate that a Bose-Chaudhuri-Hocquenghem (BCH) code can be used for error correction in combination with the FCS to achieve a block-error probability smaller than the common value of $10^{-9}$.
Transform Comparisons
---------------------
We compare the orthogonal transform selected from the new set, i.e., the ST, with the DCT and DWHT in terms of the bit-error probabilities of the $255$ transform coefficients obtained from the RO dataset in [@ROPUF]. Fig. \[fig:BERComparisonsofTrans\] illustrates the bit-error probabilities of the DCT, DWHT, and the ST. The mean bit-error probability of the ST is smaller than the means of the DCT and DWHT. Furthermore, the maximum bit-error probabilities of the DCT and the ST are almost equal and are smaller than the maximum error probability of the DWHT. Most importantly, the ST has a large set of transform coefficients with bit-error probabilities close to zero, so an ECC design for the maximum or mean bit-error probability of the ST would give pessimistic rate results. We propose in the next section an ECC for the ST to achieve a smaller block-error probability than the block-error probability for the DCT.
Uniqueness and Security {#subsec:uniqueness}
-----------------------
A common measure to check the randomness of a bit sequence is uniqueness, i.e., the average fractional Hamming distance (HD) between the sequences extracted from different RO PUFs [@bizimpaper]. The rate region in (\[eq:ls0\]) is valid if the extracted bit sequences are uniformly distributed, making the uniqueness a valid measure for the FCS.
Uniqueness results for the DCT, DWHT, KLT, and DHT have a mean HD of $0.5000$ and HD variances of approximately $\displaystyle 7\!\times \!10^{-4}$ [@bizimMDPI], which are close to optimal and better than previous RO PUF results. For the ST, we obtain a mean HD of $0.5001$ and a HD variance of $\displaystyle 2.69\!\times \!10^{-2}$. This suggests that the ST has good average uniqueness performance, but there might be a small set of RO PUFs from which slightly biased bit sequences are extracted. The latter can be avoided during manufacturing by considering uniqueness as a parameter in the yield analysis of the chip that embodies the PUF. We apply the National Institute of Standards and Technology (NIST) randomness tests [@NIST] to check whether there is a detectable deviation from the uniform distribution in the sequences extracted by using the ST. The bit sequences generated with the ST pass most of the randomness tests, which is considered to be an acceptable result [@NIST]. A correlation-thresholding approach from [@bizimtemperature] further improves security.
Code Selection {#subsec:codeselection}
--------------
Consider the scenario where secret keys are used as an input to the Advanced Encryption Standard (AES), a symmetric-key cryptosystem, with a key size of $128$ bits, so the code dimension of the ECC should be at least $128$ bits. The maximum error probability over the transform coefficients of the ST is $p_{\text{max}}=0.0149$, as shown in Fig. \[fig:BERComparisonsofTrans\]. Furthermore, assume that we use an ECC with a bounded minimum distance decoder (BMDD) to keep the complexity low. A BMDD can correct all error patterns with up to $\lfloor\frac{d_{\text{min}}-1}{2}\rfloor$ errors, where $d_{\text{min}}$ is the minimum distance of the code. It is straightforward to show that the ECC should have a minimum distance of at least $d_{\text{min}}=41$ to achieve a block-error probability of $P_\text{B}\leq 10^{-9}$ if all transform coefficients are assumed to have a bit-error probability of $p_{\text{max}}$. No binary BCH or Reed-Solomon (RS) code, although these families have good minimum-distance properties, can satisfy these parameters. Similarly, the GV bound computed for $p_{\text{max}}$ shows that there exists a linear binary ECC with code dimension $98$. Consider the binary BCH code with block length $255$, code dimension $131$, which is greater than the code dimension of $98$ given by the GV bound, and minimum distance $\displaystyle d_{\text{min,BCH}}=37$, which is close to the required value of $d_{\text{min}}=41$. We illustrate in the next section that this BCH code provides a block-error probability significantly smaller than $10^{-9}$.
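The required minimum distance is easy to verify numerically. The sketch below (illustrative, not from the paper) finds the smallest odd $d_{\text{min}}$ such that a BMDD correcting $\lfloor(d_{\text{min}}-1)/2\rfloor$ errors meets the target block-error probability when every coefficient is pessimistically assigned $p_{\text{max}}$:

```python
from math import comb

def binom_tail(n, p, t):
    """P(more than t errors) for n i.i.d. bit errors of probability p."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

def required_min_distance(n, p, target):
    """Smallest odd d so a BMDD correcting floor((d-1)/2) errors meets P_B <= target."""
    d = 1
    while binom_tail(n, p, (d - 1) // 2) > target:
        d += 2
    return d
```

With $n=255$, $p_{\text{max}}=0.0149$, and target $10^{-9}$ this reproduces the value $d_{\text{min}}=41$ stated above.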
Reliability, Privacy, and Secrecy Analysis of the Code
------------------------------------------------------
We now show that the proposed ECC satisfies the block-error probability constraint. The block-error probability $P_\text{B}$ for the $\text{BCH}(255,131,37)$ code with a BMDD is equal to the probability of having more than $18$ errors in the codeword, i.e., we have $$\begin{aligned}
P_\text{B} = \sum_{j=19}^{255}\Bigg[\sum_{\mathcal{D}\in\mathcal{F}_j}\prod_{i\in \mathcal{D}}p_{i}\,\bigcdot\prod_{i\in \mathcal{D}^{c}}(1-p_{i}) \Bigg] \label{eq:blockerrorforbch}\end{aligned}$$ where $p_{i}\leq p_{\text{max}}$ is the bit-error probability of the $i$-th transform coefficient, as in Fig. \[fig:BERComparisonsofTrans\], for $i\!=\!2,3,\ldots,256$, $\displaystyle \mathcal{F}_j$ is the set of all size-$j$ subsets of the set $\displaystyle\{2,3,\ldots,256\}$, and $\mathcal{D}^{c}$ denotes the complement of the set $\mathcal{D}$. The bit-error probabilities $p_{i}$ represent probabilities of independent events due to the mutual independence assumption for transform coefficients and one-bit quantizers used.
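Because the coefficients err independently, (\[eq:blockerrorforbch\]) is the tail of a Poisson-binomial distribution, which can also be evaluated exactly by convolving the per-coefficient error probabilities one at a time. The sketch below (illustrative code, not the DFT-CF implementation) runs in $O(N^2)$:

```python
def poisson_binomial_pmf(probs):
    """Exact PMF of the error count when bit i flips independently w.p. probs[i]."""
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, v in enumerate(pmf):
            nxt[k] += v * (1.0 - p)   # bit i correct
            nxt[k + 1] += v * p       # bit i in error
        pmf = nxt
    return pmf

def block_error_prob(probs, t):
    """P_B = P(more than t errors), i.e. a BMDD correcting up to t errors fails."""
    return sum(poisson_binomial_pmf(probs)[t + 1:])
```

For the $\text{BCH}(255,131,37)$ code one would call `block_error_prob` with the $255$ measured probabilities $p_i$ and $t=18$.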
The evaluation of (\[eq:blockerrorforbch\]) requires $ \sum_{j=0}^{18}{255\choose j}\approx 1.90\!\times\!10^{27}$ different calculations, which is not practical. We therefore apply the discrete Fourier transform - characteristic function (DFT-CF) method [@DFTCF] to (\[eq:blockerrorforbch\]) and obtain the result $P_\text{B}\!\approx\!2.860\!\times\!10^{-12}\!<\!10^{-9}$. This value is smaller than the block-error probability $P_{\text{B,DCT}}= 1.26\times 10^{-11}$ obtained in [@benimdissertation] for the DCT with the same code. The block-error probability constraint is thus satisfied by using the $\text{BCH}$ code although the conservative analysis suggests otherwise.
The rate regions given in (\[eq:ls0\]) and (\[eq:chosensecret\]) are asymptotic results, i.e., they assume $N\rightarrow \infty$. Since separate channel and secrecy coding is optimal for the FCS, we can use the finite length bounds for a BSC $P_{Y|X}$ with crossover probability $p\!=\! \frac{1}{L-1}\sum_{i=2}^Lp_{i}\!\approx\!0.0088$, i.e., the error probability averaged over all used coefficients. In [@benimdissertation], we show that the $\text{BCH}(255,131,37)$ code achieves $(R_{\text{s,BCH}},R_{\ell,\text{BCH}})\approx(0.514,\,0.486)$ bits/source-bit, significantly better than previously proposed codes in the RO PUF literature, so it suffices to compare the proposed code with the best possible finite-length results for the FCS. We use Mrs. Gerber’s lemma [@WZ], giving the optimal auxiliary random variable $U$ in (\[eq:chosensecret\]), to compute all points in the region $\mathcal{R}_{\text{cs}}$. We plot all achievable rate pairs, the (secret-key, privacy-leakage) rate pair of the proposed BCH code, and a finite-length bound for the block length of $N=255$ bits and $P_\text{B}\!=\!10^{-9}$ in Fig. \[fig:ratecomparison\].
The maximum secret-key rate is $R_\text{s}^*\!\approx\!0.9268$ bits/source-bit with a corresponding minimum privacy-leakage rate of $R_\ell^*\!\approx\!0.0732$ bits/source-bit. The gap between the points $(R_{\text{s,BCH}},R_{\ell,\text{BCH}})$ and $(R_{\text{s}}^*,R_\ell^*)$ can be partially explained by the short block length of the code and the small block-error probability. The finite-length bound given in [@Polyanskiy Theorem 52] shows that the rate pair $(R_\text{s},R_\ell)\!=\!(0.7029,0.2971)$ bits/source-bit is achievable by using the FCS, as depicted in Fig. \[fig:ratecomparison\]. One can thus improve the rate pairs by using better codes and decoders with higher hardware complexity, which is undesirable for IoT applications. Fig. \[fig:ratecomparison\] also illustrates the fact that there are operation points of the region $\mathcal{R}_{\text{cs}}$ that cannot be achieved by using the FCS and, e.g., a nested polar code construction from [@bizimWZ] should be used to achieve all points in $\mathcal{R}_{\text{cs}}$.
Conclusion {#sec:conclusion}
==========
We proposed a new set of transforms that are orthogonal (so that the decorrelation efficiency is high), that have elements $1$ or $-1$ (so that the hardware complexity is low), and that have a size of $k\times k$ where $k$ is a power of 2 (so that an input-selection algorithm can be applied to further decrease complexity). By using one-bit uniform quantizers for each transform coefficient obtained by applying the ST, we obtained bit-error probabilities that are on average smaller than the bit-error probabilities obtained from previously considered transforms. We proposed a BCH code as the ECC for RO PUFs in combination with the FCS. This code achieves the best rate pair in the RO PUF literature and it gives a block-error probability for the ST that is substantially smaller than for the DCT. We illustrated that the FCS cannot achieve all possible rate points. In future work, in combination with the new set of transforms, we will apply a joint vector quantization and error correction method by using nested polar codes to achieve rate pairs that cannot be achieved by the FCS.
Frederick Lohden
Frederick Charles Lohden OBE (13 June 1871 – 13 April 1954) was an English sportsman who played rugby union as a forward at international level for England in a single game during the 1893 Home Nations Championship. After retiring from playing sport he became a sports administrator, most notably as the chairman of the Lawn Tennis Association.
Personal history
Lohden was born in Hartlepool in the north of England on 13 June 1871 to Jacob and Mary Lohden, and christened at Christ Church, Hartlepool on 12 July of that year. He attended Durham School as a youth, completing his education in France and Germany. In 1898 he was married to Margaret Emily Marshall of Broadwater, Sussex.
With the outbreak of the First World War, Lohden, who already had military experience, was promoted to Lieutenant in the 4th Durham Volunteer Artillery. He later joined the East Surrey Regiment. In 1917 he was transferred to the Ministry of Shipping and was placed in charge of Standard Steamers, Russian Steamers and Oilers. He was awarded the Order of the British Empire in the 1919 New Year Honours for his work for the Ministry of Shipping. He later moved to Cheam on the border between London and Surrey where he worked as a shipping broker. Lohden later became the mayor of Sutton and Cheam, and was also made a Justice of the Peace.
Sporting history
Lohden showed promise as a sportsman while a youth, making the Durham School rugby XV while still a 15-year-old, the biggest forward in his team. On his return from education in mainland Europe he joined Hartlepool Rovers, and by the age of 19 he was selected to play at county level for Durham. By the 1892/93 season he was playing for one of England's premier clubs, Blackheath. While representing Blackheath he came to the attention of the English selectors and was chosen for the South of England team in the trials for the England squad. He was given his first and only cap in the opening game of the 1893 Home Nations Championship against Wales at the Cardiff Arms Park. The game started well for the English side, which opened a 7–0 lead in the first half, with Lohden scoring one of the two tries. A further England try at the start of the second half appeared to give England an overwhelming lead, only for an historic Welsh comeback, led by their talismanic captain Arthur Gould, to snatch victory from England in the final minutes. Although Lohden never played for England again, a series of minor injuries ending his career by 1896, he was selected for the invitational touring side, the Barbarians, in 1893, and also represented Surrey at county level. After retiring from playing he kept up his connection with the sport of rugby by being elected onto the Durham County Rugby Union committee, serving from 1896 to 1902.
As well as rugby, Lohden was a keen sports shooter, and won the Baltic Exchange 'miniature' Championship for three years running. On returning to civilian life after the war, Lohden became increasingly active in the world of racket sports. A skillful badminton player, he represented Surrey County, playing in four consecutive London Badminton doubles finals in 1920. This was followed by the title of Veterans' Doubles Champion of England in 1921. That year Lohden also set up the Surrey Badminton Association, becoming its first honorary secretary.
In 1907 Lohden put his sporting administrative abilities to further use when he was elected to the Surrey branch of the Lawn Tennis Association. He progressed to become the organisation's chairman, and then in 1911 he joined the Council of the LTA. In 1933 he became chairman of the LTA and a year later its vice-president.
References
Bibliography
Category:1871 births
Category:1954 deaths
Category:Rugby union forwards
Category:English rugby union players
Category:England international rugby union players
Category:Barbarian F.C. players
Category:Blackheath F.C. players
Category:Sportspeople from Hartlepool
Category:Officers of the Order of the British Empire
Category:People educated at Durham School
Category:British Army personnel of World War I
Category:East Surrey Regiment officers
Category:Tennis in the United Kingdom
---
layout: example
title: Wheat Plot Example
permalink: /examples/wheat-plot/index.html
spec: wheat-plot
image: /examples/img/wheat-plot.png
---
A [wheat plot](http://www.perceptualedge.com/articles/visual_business_intelligence/the_datavis_jitterbug.pdf) is an alternative to standard dot plots and histograms that incorporates aspects of both. The x-coordinate of a point is based on its exact value. The y-coordinate is determined by grouping points into histogram bins, then stacking them based on their rank order within each bin. While not scalable to large numbers of data points, wheat plots allow inspection of (and interaction with) individual points without overplotting. For a related approach, see [beeswarm plots](../beeswarm-plot/).
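Independent of the Vega spec, the coordinate rule can be sketched in a few lines of Python (illustrative, assuming fixed-width bins):

```python
from collections import defaultdict

def wheat_points(values, bin_width):
    """x = the exact value; y = the value's rank within its histogram bin."""
    bins = defaultdict(list)
    for v in sorted(values):
        bins[int(v // bin_width)].append(v)
    return sorted((v, rank) for vs in bins.values() for rank, v in enumerate(vs))
```

Plotting `rank` against the raw value gives the stacked wheat appearance while every point keeps its exact x position.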
{% include example spec=page.spec %}
[M. Nieto-Vesperinas[^1] and J. R. Arias-González[^2]]{}
[*$^1$Instituto de Ciencia de Materiales de Madrid, CSIC*]{}
[*$^2$Instituto Madrileño de Estudios Avanzados en Nanociencia*]{}
[*Cantoblanco, 28049 Madrid, Spain.*]{}
Introduction {#sec:introduction}
============
The purpose of this report is to present the theoretical foundations of the interaction of evanescent fields on an object. Evanescent electromagnetic waves are inhomogeneous components of the near field, bound to the surface of the scattering object. These modes travel along the illuminated sample surface and exponentially decrease outside it [@bornwolf99ch11; @nieto-vesperinas91; @mandel95], [*e.g.*]{}, either in the form of lateral waves [@tamir72a; @tamir72b] created by total internal reflection (TIR) at dielectric flat interfaces, whispering–gallery modes in dielectric tips and particles [@hill88; @owen81; @benincasa87; @collot93; @knight95; @weiss95; @nieto-vesperinas96] or of plasmon polaritons [@raether88] in metallic corrugated interfaces (see Section \[sec:PFM\]). The force exerted by these evanescent waves on particles near the surface is of interest for several reasons. On the one hand, evanescent waves convey high resolution of the scattered field signal, beyond the half wavelength limit. This is the essence of near–field scanning optical microscopy, abbreviated usually as NSOM [@pohl93; @paesler96]. These fields may present large concentrations and intensity enhancements in subwavelength regions near tips thus giving rise to large gradients that produce enhanced trapping forces that may enable one to handle particles within nanometric distances [@novotny97]. In addition, the large contribution of evanescent waves to the near field is the basis of the high resolution of signals obtained by transducing the force due to these waves on particles over surfaces when such particles are used as probes. On the other hand, evanescent waves have been used both to control the position of a particle above a surface and to estimate the interaction (colloidal force) between such a particle and the surface (see Chapter 6) [@sasaki97; @clapp99; @dogariu00].
The first experimental observation demonstrating the mechanical action of a single evanescent wave ([*i.e.*]{}, of the lateral wave produced by total internal reflection at a dielectric sapphire–water interface) on microspheres immersed in water over a dielectric surface was made in [@kawata92]. Further experiments, either over waveguides [@kawata96] or attaching the particle to the cantilever of an atomic force microscope (AFM) [@vilfan98], aimed at estimating the magnitude of this force.
The scattering of an evanescent electromagnetic wave by a dielectric sphere has been investigated by several authors using Mie’s scattering theory (addressing scattering cross sections [@chew79] and electromagnetic forces [@almaas95]), as well as using ray optics [@prieve93; @walz99]. In particular, [@walz99] made a comparison with [@almaas95]. However, no direct evaluation of either theoretical work against experimental results has been carried out yet, likely due to the lack of accurate, well-characterized and controlled experimental estimations of these observed TIR forces. In fact, to get an idea of the difficulty of obtaining accurate experimental data, one should consider the fluctuations of the particle position in its liquid environment, due to both Brownian movement and drift microcurrents, as well as the obliteration produced by the friction and van der Waals forces between particle and surface [@vilfan98; @almaas95]. This has led so far to discrepancies between experiment and theory.
In the next section we shall address the effect of these forces on particles from the point of view of the dipolar approximation, which is of considerable interpretative value to understand the contribution of horizontal and vertical forces. Then we shall show how the multiple scattering of waves between the surface and the particle introduces important modifications of the above mentioned forces, both for larger particles and when they are very close to substrates. Further, we shall investigate the interplay of these forces when there exists slight corrugation in the surface profile. Then the contribution of evanescent waves created under total internal reflection still being important, shares its effects with radiative propagating components that will exert scattering repulsive forces. Even so, the particle can be used in these cases as a scanning probe that transduces this force in a photonic force microscopy operation.
Force on a Small Particle: The Dipolar Approximation {#sec:dipapprox}
====================================================
Small polarizable particles, namely, those with radius $a\ll \lambda $, in the presence of an electromagnetic field experience a Lorentz force [@gordon73]:
$$\label{eq:lorentz}
{\bf F}=({\vec {\wp}} \cdot \nabla) {\vec{{\cal E}}}+ \frac{1}{c}\frac{
\partial {\vec \wp}}{\partial t} \times {\vec{{\cal B}}}.$$
In Equation (\[eq:lorentz\]) ${\vec{\wp}}$ is the induced dipole moment density of the particle, and ${\vec{{\cal E}}}$, ${\vec{{\cal B}}}$ are the electric and magnetic vectors, respectively.
At optical frequencies, used in most experiments, the observed magnitude of the electromagnetic force is the time–averaged value. Let the electromagnetic field be time–harmonic, so that ${\vec{{\cal E}}}({\bf r}
,t)=\Re e \{ {\bf E}({\bf r})\exp (-i\omega t) \}$, ${\vec{{\cal B}}}({\bf r
},t)=\Re e \{ {\bf B}({\bf r})\exp (-i\omega t) \}$, ${\vec{\wp}}({\bf r}
,t)=\Re e \{ {\bf p}({\bf r})\exp (-i\omega t) \}$; ${\bf E}({\bf r})$, $
{\bf B}({\bf r})$ and ${\bf p}({\bf r})$ being complex functions of position in the space, and $\Re e$ denoting the real part. Then, the time–averaged Lorentz force over a time interval $T$ large compared to $2\pi /\omega$ [@bornwolf99pp34] is
$$\langle {\bf F}({\bf r})\rangle =\frac{1}{4T}\int_{-T/2}^{T/2}dt\left[ {{{{{(
{\bf p}+{\bf p}^{\ast })\cdot \nabla ({\bf E}+{\bf E}^{\ast })+\frac{1}{c}
\left( \frac{\partial {\bf p}}{\partial t}+\frac{\partial {\bf p}^{\ast }}{
\partial t}\right) \times ({\bf B}+{\bf B}^{\ast })}}}}}\right] ,
\label{eq:averaging}$$
where $\ast $ denotes complex conjugate. On substituting in Equation (\[eq:averaging\]) ${\bf E}$, ${\bf B}$, and ${\bf p}$ by their time harmonic expressions given above and performing the integral, one obtains for each
$j^{th}$ Cartesian component of the force
$$\langle F_{j}({\bf r})\rangle =\frac{1}{2}\Re e \left\{ p_{k}\frac{\partial
E_{j}^{\ast }({\bf r})}{\partial x_{k}}+\frac{1}{c}\epsilon _{jkl}\frac{
\partial p_{k}}{\partial t}B_{l}^{\ast }\right\} . \label{eq:chaumetinicio}$$
In Equation (\[eq:chaumetinicio\]) $j=1,2,3$, $\epsilon _{jkl}$ is the completely antisymmetric Levi–Civita tensor. On using the Maxwell equation ${\bf B}
=(c/i\omega )\nabla \times {\bf E}$ and the relationships ${\bf p}=\alpha
{\bf E}$ and $\partial {\bf p}/\partial t=-i\omega {\bf p}$, $\alpha $ being the particle polarizability, Equation (\[eq:chaumetinicio\]) transforms into
$$\langle F_{j}({\bf r})\rangle =\frac{1}{2}\Re e
\left\{ {{{{{\alpha \left( E_{k}
\frac{\partial E_{j}^{\ast }({\bf r})}{\partial x_{k}}+\epsilon
_{jkl}~\epsilon _{lmn}E_{k}\frac{\partial E_{n}^{\ast}}{\partial x_{m}}
\right) }}}}}\right\} . \label{eq:chaumetmedio}$$
Since $\epsilon _{jkl}\epsilon _{lmn}=\delta _{jm}\delta _{kn}-\delta
_{jn}\delta _{km}$, one can finally express the time–averaged Lorentz force on the small particle as [@chaumet00c]
$$\langle F_{j}({\bf r})\rangle =\frac{1}{2}\Re e \left\{ {{{{{\alpha E_{k}\frac{
\partial E_{k}^{\ast }({\bf r})}{\partial x_{j}}}}}}}\right\} .
\label{eq:chaumetfin}$$
Equation (\[eq:chaumetfin\]) constitutes the expression of the time–averaged force on a particle in an arbitrary time–harmonic electromagnetic field.
For a dipolar particle, the polarizability is [@draine88]
$$\alpha =\frac{\alpha _{0}}{1-\frac{2}{3}ik^{3}\alpha _{0}}. \label{eq:alfa}$$
In Equation (\[eq:alfa\]) $\alpha _{0}$ is given by: $\alpha
_{0}=a^{3}(\epsilon -1)/(\epsilon +2)$, $\epsilon=\epsilon_2/\epsilon_0$ being the dielectric permittivity contrast between the particle, $\epsilon_2$, and the surrounding medium, $\epsilon_0$; and $k=\sqrt{\epsilon_0} k_0$, $k_0=\omega /c$. For $ka\ll 1$, one can approximate $\alpha $ by: $\alpha =\alpha _{0}(1+\frac{2}{3}ik^{3}|\alpha _{0}|^{2})$. The imaginary part in this expression of $\alpha $ constitutes the radiation–reaction term.
If the light field has a paraxial form, [*e.g*]{}., it is a beam or a plane wave, either propagating or evanescent, so that it has a main propagation direction along ${\bf k}$, then the light electric vector can be described by
$${\bf E}({\bf r})={\bf E}_{0}({\bf r})\exp (i{\bf k}\cdot {\bf r}).
\label{eq:oplana}$$
Substituting Equation (\[eq:oplana\]) into Equation (\[eq:chaumetfin\]), one obtains for the force
$$\langle {\bf F}\rangle =\frac{1}{4}\Re e \{ \alpha \} \nabla |{\bf E}
_{0}|^{2}+\frac{1}{2}{\bf k}\Im m \{ \alpha \} |{\bf E}_{0}|^{2}-\frac{1}{2}
\Im m \{ \alpha \} \Im m \{ {\bf E}_{0}\cdot \nabla {\bf E}_{0}^{\ast }\},
\label{eq:finforce}$$
where $\Im m$ denotes imaginary part. The first term is the gradient force acting on the particle, whereas the second term represents the radiation pressure contribution to the scattering force that, on substituting the above approximation for $\alpha $, namely, $\alpha =\alpha _{0}+\frac{2}{3}
ik^{3}|\alpha _{0}|^{2}$, can also be expressed for a Rayleigh particle ($ka\ll 1$) as [@vandehulst81] $(|{\bf E}_{0}|^{2}/8\pi )C{\bf k}/k$, where $C$ is the particle scattering cross section given by $C=(8/3)\pi
k^{4}|\alpha _{0}|^{2}$. Notice that the last term of Equation (\[eq:finforce\]) is only zero when either $\alpha $ or ${\bf E}_{0}$ is real. (This is the case for a plane propagating or evanescent wave but not for a beam, in general.)
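The decomposition above can be checked numerically: applying Equation (\[eq:chaumetfin\]) with finite-difference derivatives to an $x$-polarized propagating plane wave must reproduce the radiation-pressure term of Equation (\[eq:finforce\]), since for this field the gradient and third terms vanish. In this sketch the polarizability value is a hypothetical illustration, not a value taken from the text.

```python
import numpy as np

wavelength = 632.8                        # nm
k = 2 * np.pi / wavelength
alpha = 6.35e4 * (1 + 0.0415j)            # hypothetical complex polarizability (nm^3)
E0 = 1.0

def E_field(r):
    # x-polarized plane wave propagating along +z: E = E0 xhat exp(ikz)
    return np.array([E0 * np.exp(1j * k * r[2]), 0.0, 0.0])

def force(r, h=1e-3):
    """Eq. (chaumetfin): <F_j> = (1/2) Re{ alpha E_k dE_k^*/dx_j },
    with the derivatives taken by central differences of step h."""
    F = np.zeros(3)
    E = E_field(r)
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        dEc = (np.conj(E_field(r + dr)) - np.conj(E_field(r - dr))) / (2 * h)
        F[j] = 0.5 * np.real(alpha * np.dot(E, dEc))
    return F

F = force(np.array([0.0, 0.0, 10.0]))
# Eq. (finforce) for this field: only the radiation-pressure term survives
F_rp = 0.5 * k * np.imag(alpha) * E0**2
```

For a constant-amplitude plane wave the transverse components vanish and the longitudinal component equals $(k/2)\,\Im m\{\alpha\}\,|E_0|^2$, as expected.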
Force on a Dipolar Particle due to an Evanescent Wave {#sec:dipevanescent}
=====================================================
Let the small particle be exposed to the electromagnetic field of an evanescent wave, whose electric vector is ${\bf E}={\bf T}\exp (-qz)\exp (i
{\bf K}\cdot {\bf R})$, where we have written ${\bf r}=({\bf R},z)$ and ${\bf k}=({\bf K},k_{z})$, ${\bf K}$ and $k_{z}$ satisfying $K^{2}+k_{z}^{2}=k^{2}$, $k^{2}=\omega ^{2}\epsilon _{0}/c^{2}$, with $k_{z}=iq=i\sqrt{K^{2}-k^{2}}$. This field is created under total internal reflection at a flat interface ($z=$ constant, below the particle) between two media of dielectric permittivity ratio $\epsilon_0/\epsilon_1$ (see also inset of Figure \[fig:dielec\](a)). The incident wave, either $s$ or $p$ polarized, ([*i.e*]{}., with the electric vector either perpendicular or in the plane of incidence: the plane formed by the incident wavevector ${\bf k}
_{i}$ at the interface and the surface normal $\hat{z}$) enters from the denser medium at $z<0$. The particle is in the medium at $z>0$. Without loss of generality, we shall choose the incidence plane as $OXZ$, so that ${\bf K}
=(K,0)$. Let $T_{\perp }$ and $T_{\parallel }$ be the transmitted amplitudes into $z>0$ for $s$ and $p$ polarizations, respectively. The electric vector is:
$${\bf E}=(0,1,0){T_{\perp }}\exp (iKx)\exp (-qz), \label{eq:evanescent1}$$
for $s$ polarization, and
$${\bf E}=(-iq,0,K)\frac{T_{\parallel }}{k}\exp (iKx)\exp (-qz).
\label{eq:evanescent2}$$
for $p$ polarization.
By introducing the above expressions for the electric vector ${\bf E}$ into Equation (\[eq:finforce\]), we readily obtain the average total force on the particle split into the scattering and gradient forces. The scattering force is contained in the $OXY$–plane (that is, the plane containing the propagation wavevector of the evanescent wave), namely,
$$\label{eq:evanescent3}
\langle F_x \rangle = \frac{|T|^2}{2} K \Im m \{ \alpha \} \exp(-2qz);$$
For the gradient force, which is purely directed along $OZ$, one has
$$\label{eq:evanescent4}
\langle F_z \rangle = -\frac{|T|^2}{2}q \Re e \{ \alpha \} \exp(-2qz).$$
In Equations (\[eq:evanescent3\]) and (\[eq:evanescent4\]) $T$ stands for either $T_{\perp }$ or $T_{\parallel }$, depending on whether the polarization is $s$ or $p$, respectively.
For an absorbing particle, on introducing Equation (\[eq:alfa\]) for $\alpha$ into Equations (\[eq:evanescent3\]) and (\[eq:evanescent4\]), one gets for the scattering force
$$\label{eq:fx1}
\langle F_x \rangle = \frac{|T|^2}{2} K \exp(-2qz)\frac{\Im m \{ \alpha_0 \}
+(2/3)k^3 |\alpha_0|^2}{1+(4/9)k^6|\alpha_0|^2},$$
and for the gradient force
$$\label{eq:fz1}
\langle F_z \rangle = -\frac{|T|^2}{2}q \frac{\Re e \{ \alpha_0 \} }{1+(4/9)k^6|
\alpha_0|^2} \exp(-2qz).$$
It should be remarked that, except near resonances, $\Im m \{ \alpha _{0} \}$ is in general a positive quantity; therefore the scattering force in Equation (\[eq:fx1\]) is positive along the propagation direction $K$ of the evanescent wave, thus pushing the particle parallel to the surface, whereas the gradient force of Equation (\[eq:fz1\]) is negative or positive along $OZ$, therefore attracting or repelling the particle towards the surface, according to whether $\Re e \{ \alpha \} > 0$ or $\Re e \{ \alpha \} < 0$, respectively. The magnitudes of these forces increase as the distance to the interface decreases, and they are larger for $p$ polarization since in this case the dipoles induced by the electric vector at both the particle and the surface are oriented parallel to each other, thus resulting in a smaller interaction than when these dipoles are induced along $OY$ ($s$ polarization) [@alonsofinn68].
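Equations (\[eq:fx1\]) and (\[eq:fz1\]) are simple to evaluate. The minimal sketch below (Python, lengths in $nm$, vacuum above the interface, unit $|T|^2$ for brevity) illustrates the push along $K$ and the attraction towards the surface for a glass sphere in the geometry used throughout the text; the function name and parameter values are illustrative.

```python
import numpy as np

def evanescent_forces(z, a, eps, T2, theta0_deg, eps1=2.25, wavelength=632.8):
    """Eqs. (fx1) and (fz1) for a dipolar sphere in vacuum above the interface.

    T2 stands for |T|^2 of the chosen polarization; lengths in nm."""
    k = 2 * np.pi / wavelength                                # k = sqrt(eps0) k0, eps0 = 1
    K = np.sqrt(eps1) * k * np.sin(np.radians(theta0_deg))    # parallel wavevector
    q = np.sqrt(K**2 - k**2)                                  # evanescent decay constant (K > k)
    alpha0 = a**3 * (eps - 1) / (eps + 2)
    denom = 1 + (4 / 9) * k**6 * abs(alpha0)**2
    common = 0.5 * T2 * np.exp(-2 * q * z)
    Fx = common * K * (np.imag(alpha0) + (2 / 3) * k**3 * abs(alpha0)**2) / denom
    Fz = -common * q * np.real(alpha0) / denom
    return Fx, Fz

# glass sphere, a = 60 nm, theta_0 = 42 deg > theta_c = 41.8 deg, unit |T|^2
Fx, Fz = evanescent_forces(z=10.0, a=60.0, eps=2.25, T2=1.0, theta0_deg=42.0)
```

As stated above, $F_x>0$ (scattering force along $K$) and $F_z<0$ (attraction, since $\Re e\{\alpha_0\}>0$ for glass), and both decay with the gap $z$.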
In particular, if $ka\ll 1$, Equation (\[eq:fx1\]) becomes
$$\label{eq:fx2}
\langle F_x \rangle = \frac{|T|^2}{2} K \exp(-2qz) \left[a^3 \Im m \left\{
\frac{\epsilon-1} {\epsilon+2}\right \} + \frac{2}{3}k^3 a^6 \left |\frac{
\epsilon-1}{\epsilon+2} \right|^2 \right];$$
The first term of Equation (\[eq:fx2\]) is the radiation pressure of the evanescent wave on the particle due to absorption, whereas the second term corresponds to scattering. This expression can be rewritten as
$$\label{eq:fx3}
\langle F_x \rangle = \frac{|T|^2}{8 \pi} \frac{K}{k} \exp(-2qz)~C_{ext},$$
where the particle extinction cross section $C_{ext}$ has been introduced as
$$\label{eq:eficaz}
C_{ext}=4\pi k a^3 \Im m \left \{ \frac{\epsilon-1}{\epsilon+2} \right\}
+\frac{8
\pi}{3}k^4 a^6 \left |\frac{\epsilon-1}{\epsilon+2} \right|^2.$$
Notice that Equation (\[eq:eficaz\]) coincides with the value obtained from Mie’s theory for small particles in the low–order expansion of the size parameter $ka$ of the extinction cross section [@vandehulst81].
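As a quick consistency check, Equation (\[eq:fx3\]) written through $C_{ext}$ of Equation (\[eq:eficaz\]) must reproduce Equation (\[eq:fx2\]) exactly, term by term. A short numerical verification (parameter values, including the silicon-like permittivity, are illustrative):

```python
import numpy as np

def Fx_rayleigh(z, a, eps, T2, K, k, q):
    """Eq. (fx2) directly, and Eq. (fx3) through C_ext of Eq. (eficaz);
    both must coincide in the Rayleigh limit ka << 1."""
    m = (eps - 1) / (eps + 2)
    fx2 = 0.5 * T2 * K * np.exp(-2 * q * z) * (
        a**3 * np.imag(m) + (2 / 3) * k**3 * a**6 * abs(m)**2)
    C_ext = (4 * np.pi * k * a**3 * np.imag(m)
             + (8 * np.pi / 3) * k**4 * a**6 * abs(m)**2)
    fx3 = (T2 / (8 * np.pi)) * (K / k) * np.exp(-2 * q * z) * C_ext
    return fx2, fx3

k0 = 2 * np.pi / 632.8
# absorbing (silicon-like) sphere of a = 20 nm just above the critical angle
f2, f3 = Fx_rayleigh(z=5.0, a=20.0, eps=15 + 0.14j, T2=1.0,
                     K=1.004 * k0, k=k0, q=0.09 * k0)
```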
Although the above equations account neither for multiple scattering as described by Mie’s theory for larger particles nor for multiple interactions of the wave between the particle and the dielectric surface, they are useful to understand the fundamentals of the force effects induced by a single evanescent wave on a particle. It should be remarked, however, that, as shown at the end of this section, once Mie’s theory becomes necessary, multiple scattering with the surface demands that its contribution be taken into account.
![ Forces in the $Z$ direction and in the $X$ direction (as insets) acting on a sphere with radius $a=60$ $nm$, in the dipolar approximation. The angle of incidence is $\protect\theta _0=42^o$, larger than the critical angle $\protect\theta _c=41.8^o$ (for the glass–air interface). $\protect
\lambda=632.8$ $nm$. Solid lines: $s$ polarization, dashed lines: $p$ polarization. The sphere material is: (a): glass, (b): silicon, (c): gold.[]{data-label="fig:dipolo"}](dipolo.eps){width="\linewidth"}
Figure \[fig:dipolo\] shows the evolution of the scattering and gradient forces on three kinds of particles, namely, glass ($\epsilon_2 =2.25$), silicon ($\epsilon_2 =15+i0.14$) and gold ($\epsilon_2 =-5.65+i0.75$), all of radius $a=60$ $nm,$ as functions of the gap distance $d$ between the particle and the surface at which the evanescent wave is created. The illuminating evanescent wave is due to refraction of an incident plane wave of power $P=\frac{c\sqrt{\epsilon_1 }}{8\pi }|A|^{2}=1.9\times 10^{-2}$ $mW/\mu m^{2}$, equivalent to $150$ $mW$ over a circular section of radius $50$ $\mu m$, on a glass–air interface at angle of incidence $\theta
_{0}=42^{o} $ and $\lambda =632.8$ $nm$ (the critical angle is $\theta
_{c}=41.8^{o}$), both at $s$ and $p$ polarization (electric vector perpendicular and parallel, respectively, to the incidence plane at the glass–air interface, namely, $|T_{\perp }|^{2}=4\epsilon_1 \cos ^{2}\theta
_{0}|A|^{2}/(\epsilon_1 -1) $, $|T_{\parallel }|^{2}=4\epsilon_1 \cos ^{2}\theta
_{0}|A|^{2}/[(\epsilon_1 -1)((1+\epsilon_1 )\sin ^{2}\theta _{0}-1)]$).
These values of forces are consistent with the magnitudes obtained on similar particles by applying Maxwell’s stress tensor (to be discussed in Section \[sec:CDM\]) via Mie’s scattering theory [@almaas95]. However, as shown in the next section, as the size of the particle increases, the multiple interaction of the illuminating wave between the particle and the substrate cannot be neglected. Therefore, the above results, although of interpretative value, should be taken with care at distances smaller than $10$ $nm$, since in that case multiple scattering makes the force stronger. This will be seen next.
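The transmitted intensities quoted above with Figure \[fig:dipolo\] are easily evaluated. The sketch below reproduces, for the glass–vacuum interface just past the critical angle, the fact that the evanescent intensity is larger for $p$ than for $s$ polarization; the function name is illustrative and $A2=|A|^2$ is taken as unity.

```python
import numpy as np

def evanescent_intensities(eps1, theta0_deg, A2=1.0):
    """|T_perp|^2 and |T_par|^2 above the interface under TIR, from the
    expressions quoted with Figure [fig:dipolo] (denser medium eps1,
    vacuum above); A2 = |A|^2 is the incident intensity."""
    s2 = np.sin(np.radians(theta0_deg))**2
    c2 = np.cos(np.radians(theta0_deg))**2
    T_perp2 = 4 * eps1 * c2 * A2 / (eps1 - 1)
    T_par2 = 4 * eps1 * c2 * A2 / ((eps1 - 1) * ((1 + eps1) * s2 - 1))
    return T_perp2, T_par2

# glass-vacuum interface at theta_0 = 42 deg (theta_c = 41.8 deg)
T_perp2, T_par2 = evanescent_intensities(2.25, 42.0)
```

The larger $|T_{\parallel}|^2$ is consistent with the stronger forces found for $p$ polarization in the dipolar calculations above.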
Influence of Interaction with the Substrate {#sec:substrate}
===========================================
Among the several studies of the forces exerted by evanescent waves on particles, various models calculate the forces from Maxwell’s stress tensor, using Mie’s theory to determine the scattered field in and around the sphere, without, however, taking into account the multiple scattering between the sphere and the interface at which the evanescent wave is created [@almaas95; @chang94; @walz99]. We shall next see that, except at certain distances, this multiple interaction cannot be neglected.
The equations satisfied by the electric and magnetic vectors in a non–magnetic medium are
$$\begin{aligned}
\nabla \times \nabla \times {\bf E}-k^{2}{\bf E} & = & 4\pi k^{2}{\bf P},
\label{eq:green1}
\\ [+3mm]
\nabla \times \nabla \times {\bf H}-k^{2}{\bf H} & = &
-i4\pi k\nabla \times {\bf P},
\label{eq:green2}\end{aligned}$$
where ${\bf P}$ is the polarization vector. The solutions to Equations (\[eq:green1\]) and (\[eq:green2\]) are written in integral form as
$$\begin{aligned}
{\bf E}({\bf r}) & = & k^{2}\int d^{3}r^{\prime }~{\bf P}({\bf r}^{\prime })
\cdot\overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r}^{\prime }),
\label{eq:green3}
\\ [+3mm]
{\bf H}({\bf r}) & = & -ik\int d^{3}r^{\prime }~\nabla \times {\bf P}({\bf r}
^{\prime })\cdot \overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r}
^{\prime }),
\label{eq:green4}\end{aligned}$$
In Equations (\[eq:green3\]) and (\[eq:green4\]) $\overset{
\leftrightarrow }{{\cal G}}({\bf r},{\bf r}^{\prime })$ is the outgoing Green’s dyadic or field created at ${\bf r}$ by a point dipole at ${\bf r}
^{\prime }$. It satisfies the equation
$$\nabla \times \nabla \times \overset{\leftrightarrow }{{\cal G}}({\bf r},
{\bf r}^{\prime })-k^{2}\overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r}
^{\prime })=4\pi \delta ({\bf r}-{\bf r}^{\prime })\overset{\leftrightarrow }
{{\cal I}}. \label{eq:green5}$$
Let us introduce the electric displacement vector ${\bf D} = {\bf E} + 4 \pi {\bf P}$. Then,
$$\nabla \times \nabla \times {\bf E} = \nabla \times \nabla \times {\bf D} -
4 \pi \nabla \times \nabla \times {\bf P}. \label{eq:identity1}$$
Using the vectorial identity $\nabla \times \nabla \times {\bf D} =
\nabla (\nabla \cdot {\bf D}) - \nabla ^2 {\bf D}$, and the fact that in the absence of free charges, $\nabla \cdot {\bf D}=0$, it is easy to obtain
$$\begin{aligned}
\nabla \times \nabla \times {\bf E} & = & - \nabla ^2 {\bf D} - 4 \pi
\left[ \nabla (\nabla \cdot {\bf P}) - \nabla ^2 {\bf P} \right] \\
[+3mm]
& = &
- \nabla ^2 {\bf E} - 4 \pi \nabla (\nabla \cdot {\bf P}),
\label{eq:identity2}\end{aligned}$$
which straightforwardly transforms Equation (\[eq:green1\]) into:
$$\nabla ^{2}{\bf E}+k^{2}{\bf E}=-4\pi \lbrack k^{2}{\bf P}+\nabla (\nabla
\cdot {\bf P})], \label{eq:green6}$$
whose solution is
$${\bf E}({\bf r})=\int d^{3}r^{\prime }~[k^{2}{\bf P}+\nabla (\nabla \cdot
{\bf P})]({\bf r}^{\prime })~G({\bf r},{\bf r}^{\prime }) \, .
\label{eq:electrico}$$
In a homogeneous infinite space the function $G$ of Equation (\[eq:electrico\]) is $G_{0}({\bf r},{\bf r}
^{\prime }|)/|{\bf r}-{\bf r}^{\prime }|$, namely, a spherical wave, or scalar Green’s function, corresponding to radiation from a point source at ${\bf r}^{\prime }$.
To determine $\overset{\leftrightarrow }{{\cal G}}$, we consider the case in which the radiation comes from a dipole of moment ${\bf p}$, situated at ${\bf r}^{\prime }$; the polarization vector ${\bf P}$ is expressed as
$${\bf P}({\bf r})={\bf p}\delta ({\bf r}-{\bf r}^{\prime }).
\label{eq:polarizacion}$$
Introducing Equation (\[eq:polarizacion\]) into Equation (\[eq:electrico\]) one obtains the well known expression for the electric field radiated by a dipole
$${\bf E}({\bf r})=[k^{2}{\bf p}+\nabla ({\bf p}\cdot \nabla )]\frac{\exp (ik|
{\bf r}-{\bf r}^{\prime }|)}{|{\bf r}-{\bf r}^{\prime }|}.
\label{eq:dipole1}$$
On the other hand, if Equation (\[eq:polarizacion\]) is introduced into Equation (\[eq:green3\]) one obtains
$${\bf E}({\bf r})=k^{2}{\bf p}\cdot \overset{\leftrightarrow }{{\cal G}}_{0}(
{\bf r},{\bf r}^{\prime }). \label{eq:dipole2}$$
On comparing Equations (\[eq:dipole1\]) and (\[eq:dipole2\]), since both give identical value for ${\bf E}$, we get
$$k^{2}{\bf p}\cdot \overset{\leftrightarrow }{{\cal G}}
_{0}({\bf r},{\bf r}^{\prime })=[k^{2}{\bf p}+\nabla ({\bf p}\cdot \nabla
)]G_{0}({\bf r},{\bf r}^{\prime }), \label{eq:dyadic1}$$
[*i.e*]{}., the tensor Green’s function in a homogeneous infinite space is
$$\overset{\leftrightarrow }{{\cal G}}_{0}({\bf r},{\bf r}^{\prime })=\left(
\overset{\leftrightarrow }{{\cal I}}+\frac{1}{k^{2}}\nabla \nabla \right)
G_{0}({\bf r},{\bf r}^{\prime }). \label{eq:dyadic2}$$
A remark is in order here. When applying Equation (\[eq:dyadic2\]) in calculations one must take into account the singularity at ${\bf r}={\bf r}
^{\prime }$; this is accounted for by writing $\overset{\leftrightarrow }{
{\cal G}}_{0}$ as [@yaghjian80]:
$$\overset{\leftrightarrow }{{\cal G}}_{0}({\bf r},{\bf r}^{\prime })={\cal P}
\left[ {{{{{{{{\ \left( \overset{\leftrightarrow }{{\cal I}}+\frac{1}{k^{2}}
\nabla \nabla \right) G_{0}({\bf r},{\bf r}^{\prime })}}}}}}}}\right] -\frac{
1}{k^{2}}\delta ({\bf r}-{\bf r}^{\prime })\overset{\leftrightarrow }{{\bf L}
}_{v}. \label{eq:dyadic3}$$
In Equation (\[eq:dyadic3\]) ${\cal P}$ represents the principal value and $\overset{\leftrightarrow }{{\bf L}}_{v}$ is a dyadic that describes the singularity and corresponds to an exclusion volume around ${\bf r}={\bf r}
^{\prime }$, on whose shape it depends [@yaghjian80].
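Away from the singular point ${\bf r}={\bf r}^{\prime }$, Equation (\[eq:dyadic2\]) can be evaluated in closed form; the sketch below implements the standard expression (a known result, not derived in the text) and is meant only as an aid for numerical work. Lengths are in $nm$ and the evaluation point is arbitrary.

```python
import numpy as np

def G0_dyadic(k, r):
    """Free-space dyadic Green's function of Eq. (dyadic2),
    (I + grad grad / k^2) exp(ikr)/r, in closed form (valid for r != 0)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    g = np.exp(1j * k * rn) / rn
    A = 1 + (1j * k * rn - 1) / (k * rn)**2       # coefficient of the identity
    B = -1 + (3 - 3j * k * rn) / (k * rn)**2      # coefficient of rhat rhat
    return g * (A * np.eye(3) + B * np.outer(rhat, rhat))

k = 2 * np.pi / 632.8                             # nm^-1
G = G0_dyadic(k, np.array([500.0, 300.0, 400.0]))
```

In the far zone the dyadic tends to $(\overset{\leftrightarrow}{{\cal I}}-\hat{r}\hat{r})\,e^{ikr}/r$, i.e., a transverse spherical wave.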
The Coupled Dipole Method {#sec:CDM}
-------------------------
Among the several methods of calculating multiple scattering between bodies of arbitrary shape ([*e.g*]{}., transition matrix, finite–difference time domain, integral procedures, discrete dipole approximation, [*etc*]{}.) we shall next address the [*coupled dipole method*]{} (Purcell and Pennypacker [@purcell73]). This procedure is especially suitable for multiple scattering between a sphere and a flat interface.
Let us return to the problem of determining the interaction of the incident wave with the substrate and the sphere. The scattered electromagnetic field is obtained from the contribution of all polarizable elements of the system under the action of the illuminating wave. The electric vector above the interface is given by the sum of the incident field ${\bf E}_{i}$ and that expressed by Equation (\[eq:green3\]) with the dyadic Green function $\overset{\leftrightarrow }{{\cal G}}$ being given by
$$\overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r}^{\prime })=\overset{
\leftrightarrow }{{\cal G}}_{0}({\bf r},{\bf r}^{\prime })+\overset{
\leftrightarrow }{{\cal G}}_{s}({\bf r},{\bf r}^{\prime }).
\label{eq:sumdyadic}$$
In Equation (\[eq:sumdyadic\]) $\overset{\leftrightarrow }{{\cal G}}_{0}$ is given by Equation (\[eq:dyadic2\]) and, as such, it corresponds to the field created by a dipole in a homogeneous infinite space. On the other hand, $\overset{\leftrightarrow }{{\cal G}}_{s}$ represents the field from the dipole after reflection at the interface.
![ Normalized force in the $Z$ direction acting on a glass sphere on a glass–vacuum interface. The angle of incidence $\protect\theta_ 0=42 ^o$ is larger than the critical angle $\protect\theta_ c=41.8 ^o$. $\protect
\lambda=632.8$ $nm$. Thin lines: $S$ polarization, Thick lines: $P$ polarization. (a): $a=10$ $nm$, full line: dipole approximation, dashed line: CDM–A, dotted line: CDM–B. The inset shows the scattering geometry. (b): $a=100$ $nm$, full line: calculation with CDM–B, dashed line: static approximation. (From Ref. [@chaumet00a]). []{data-label="fig:dielec"}](dielec.eps){width="\linewidth"}
The polarization vector ${\bf P}$ is represented by the collection of $N$ dipole moments ${\bf p}_j$ corresponding to the $N$ polarizable elements of all materials included in the illuminated system, namely,
$$\label{eq:sumpolarizacion}
{\bf P}({\bf r})=\sum_{j}^{N}{\bf p}_j\delta({\bf r}-{\bf r}_j).$$
The relationship between the $k^{{\rm th}}$ dipole moment ${\bf p}_{k}$ and the exciting electric field is, as before, given by ${\bf p}_{k}=\alpha _{k}
{\bf E}({\bf r}_{k})$, with $\alpha _{k}$ expressed by Equation (\[eq:alfa\]). Then, Equations (\[eq:electrico\]), (\[eq:sumdyadic\]) and (\[eq:sumpolarizacion\]) yield
$${\bf E}({\bf r}_{j})={\bf E}_{i}({\bf r}_{j})+k^{2}\sum_{k}^{N}\alpha _{k}[
\overset{\leftrightarrow }{{\cal G}}_{0}({\bf r}_{j},{\bf r}_{k})+\overset{
\leftrightarrow }{{\cal G}}_{s}({\bf r}_{j},{\bf r
}_{k})]\cdot {\bf E}({\bf r}_{k}). \label{eq:electricdip}$$
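This self-consistent system is linear in the fields at the dipole positions and can be solved directly. The sketch below keeps only the homogeneous-space term (the surface dyadic $\overset{\leftrightarrow}{{\cal G}}_{s}$ is omitted, so it is a sketch of the coupled dipole method in free space, not of the full substrate calculation); the self-term at $k=j$ is excluded, its singular part being absorbed into the polarizability. Positions and polarizability values are illustrative.

```python
import numpy as np

def G0(k, r):
    # free-space dyadic Green's function of Eq. (dyadic2), r != 0
    rn = np.linalg.norm(r)
    rhat = r / rn
    g = np.exp(1j * k * rn) / rn
    A = 1 + (1j * k * rn - 1) / (k * rn)**2
    B = -1 + (3 - 3j * k * rn) / (k * rn)**2
    return g * (A * np.eye(3) + B * np.outer(rhat, rhat))

def solve_cdm(k, positions, alphas, E_inc):
    """Self-consistent fields of Eq. (electricdip) with the surface term
    G_s omitted (dipoles in homogeneous space) -- a minimal sketch only.
    positions: (N,3); alphas: (N,) polarizabilities; E_inc: (N,3) incident
    field at the dipoles. Returns the (N,3) total field at each dipole."""
    N = len(positions)
    M = np.eye(3 * N, dtype=complex)
    for j in range(N):
        for l in range(N):
            if j != l:  # self-term excluded; its singular part is in alpha
                M[3*j:3*j+3, 3*l:3*l+3] -= (
                    k**2 * alphas[l] * G0(k, positions[j] - positions[l]))
    return np.linalg.solve(M, E_inc.reshape(-1)).reshape(N, 3)

k = 2 * np.pi / 632.8                                   # nm^-1
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 80.0]])     # two spheres 80 nm apart
alphas = np.array([6.4e4 + 0j, 6.4e4 + 0j])             # ~ glass, a = 60 nm (nm^3)
E_inc = np.tile([1.0 + 0j, 0.0, 0.0], (2, 1))
E_tot = solve_cdm(k, pos, alphas, E_inc)
```

For a single dipole the total field reduces to the incident one, while at subwavelength separations the mutual coupling modifies the local fields appreciably, which is precisely the multiple-scattering effect discussed in this section.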
The determination of $\overset{\leftrightarrow }{{\cal G}}_{s}$ either above or below the flat interface is discussed next (one can find more details in Ref. [@agarwal75a]). Let us summarize the derivation of its expression above the surface. The field ${\bf E}$ in the half–space $z>0$, from a dipole situated in this region, is the sum of that from the dipole in free space and the field ${\bf E}_{r}$ produced on reflection of the latter at the interface. Taking Equation (\[eq:dipole2\]) into account, this is therefore
$${\bf E}({\bf r})=k^{2}{\bf p}\cdot \overset{\leftrightarrow }{{\cal G}}_{0}(
{\bf r},{\bf r}^{\prime })+{\bf E}_{r}({\bf r}),\text{ \ \ \ }z>0 \, .
\label{eq:electricdipfin}$$
Both the spherical wave $G_{0}$ and ${\bf E}_{r}$ are expanded into plane waves. The former is, according to Weyl’s representation [@banios66ch; @nieto-vesperinas91],
$$G_{0}({\bf r},{\bf r}^{\prime })=\frac{i}{2\pi }\int_{-\infty }^{\infty }
\frac{d^{2}K}{k_z ({\bf K})}\exp [i({\bf K}\cdot ({\bf R}-{\bf R}^{\prime
})+k_z |z-z^{\prime }|)] \, . \label{eq:weyl}$$
On the other hand, ${\bf E}_{r}$ is expanded as an angular spectrum of plane waves [@nieto-vesperinas91; @mandel95]
$${\bf E}_{r}({\bf r})=\int_{-\infty }^{\infty }d^{2}K~{\bf A}_{r}({\bf K}
)\exp [i({\bf K}\cdot {\bf R}+k_z z)]. \label{eq:angularspectrum}$$
Introducing Equation (\[eq:weyl\]) into Equation (\[eq:sumdyadic\]) one obtains a plane wave expansion for $\overset{\leftrightarrow }{{\cal G}}_{0}$. This gives the plane wave components ${\bf A}_{h}({\bf K})$ of the first term of Equation (\[eq:electricdipfin\]). Then, the plane wave components ${\bf A}_{r}({\bf K})$ of the second term of Equation (\[eq:electricdipfin\]) are given by
$${\bf A}_{r}({\bf K})=r({\bf K}){\bf A}_{h}({\bf K}), \label{eq:amplitude}$$
In Equation (\[eq:amplitude\]) $r({\bf K})$ is the Fresnel reflection coefficient corresponding to the polarization of ${\bf A}_{h}$. The result is therefore that $\overset{\leftrightarrow }{{\cal G}}$, Equation (\[eq:sumdyadic\]), is
$$\overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r}^{\prime })=\frac{1}{
4\pi ^{2}}\int_{-\infty }^{\infty }d^{2}K~{\overset{\leftrightarrow }{{\bf S}
}}^{-1}({\bf K})\cdot \overset{\leftrightarrow }{{\bf g}}({\bf K}
,z,z^{\prime })\cdot \overset{\leftrightarrow }{{\bf S}}({\bf K})\exp [i{\bf
K}\cdot ({\bf R}-{\bf R}^{\prime })], \label{eq:CDMdyadic}$$
where [@agarwal75a; @agarwal75b; @keller93]
$$\overset{\leftrightarrow }{{\bf S}}({\bf K})=\frac{1}{K}
\left( \begin{array}{ccc}
k_{x} & k_{y} & 0 \\
-k_{y} & k_{x} & 0 \\
0 & 0 & K
\end{array}
\right) , \label{eq:Stensor}$$
and the dyadic $\overset{\leftrightarrow }{{\bf g}}$ has the elements [@greffet97]
$$\begin{aligned}
g_{11} & = & \frac{-ik_z^{(0)}}{2\epsilon _{0}k_0^{2}}\left[ \frac{\epsilon
_{0}k_z^{(1)}-\epsilon _{1}k_z^{(0)}}
{\epsilon _{0}k_z^{(1)}+\epsilon _{1}k_z^{(0)}}\exp
[ik_z^{(0)}(z+z^{\prime })]+\exp (ik_z^{(0)}|z-z^{\prime }|)\right] ,
\label{eq:g11}
\\ [+3mm]
g_{22} & = & \frac{-i}{2k_z^{(0)}}\left[ {{{{{\frac{k_z^{(0)}-k_z^{(1)}}
{k_z^{(0)}+k_z^{(1)}}\exp
[ik_z^{(0)}(z+z^{\prime })]+\exp (ik_z^{(0)}|z-z^{\prime }|)}}}}}\right] ,
\label{eq:g22}
\\ [+3mm]
g_{33} & = & \frac{iK^{2}}{2\epsilon _{0}k_z^{(0)}k_0^{2}}\left[ \frac{\epsilon
_{0}k_z^{(1)}-\epsilon _{1}k_z^{(0)}}{\epsilon _{0}k_z^{(1)}+
\epsilon _{1}k_z^{(0)}}\exp
[ik_z^{(0)}(z+z^{\prime })]-\exp (ik_z^{(0)}|z-z^{\prime }|)\right]
\nonumber \\ [+3mm]
& + &
\frac{1}{\epsilon _{0}k_0^{2}}\delta (z-z^{\prime }),
\label{eq:g33}
\\ [+3mm]
g_{12} & = & 0, \label{eq:g12}
\\ [+3mm]
g_{13} & = & \frac{-iK}{2\epsilon _{0}k_0^{2}}\left[ \frac{\epsilon
_{0}k_z^{(1)}-\epsilon _{1}k_z^{(0)}}{\epsilon _{0}k_z^{(1)}+\epsilon _{1}k_z^{(
0)}}\exp
[ik_z^{(0)}(z+z^{\prime })]-\exp (ik_z^{(0)}|z-z^{\prime }|)\right] ,
\label{eq:g13}
\\ [+3mm]
g_{31} & = & \frac{iK}{2\epsilon _{0}k_0^{2}}\left[ \frac{\epsilon
_{0}k_z^{(1)}-\epsilon _{1}k_z^{(0)}}{\epsilon _{0}k_z^{(1)}+\epsilon _{1}k_z^{(0)}}\exp
[ik_z^{(0)}(z+z^{\prime })]+\exp (ik_z^{(0)}|z-z^{\prime }|)\right] .
\label{eq:g31}\end{aligned}$$
We have used $k_z^{(j)}=iq_{j}$, $q_{j}=(K^{2}-\epsilon _{j}k_{0}^{2})^{1/2}$, $j=0,1$, and $k_0=\omega/c$. To determine the force acting on the particle, we also need the magnetic field. This is found from the relationship ${\bf B}({\bf r})=-i/k\nabla
\times {\bf E}({\bf r})$. Then the time–averaged force obtained from Maxwell’s stress tensor $\overset{\leftrightarrow }
{{\bf T}}$ [@stratton41; @jackson75] is
$$\langle {\bf F}\rangle =\int_{S}d^{2}r~\left\langle \overset{\leftrightarrow }{
{\bf T}}({\bf r})\right\rangle \cdot {\bf n}. \label{eq:totalforce}$$
Equation (\[eq:totalforce\]) represents the flow of the time–average of Maxwell’s stress tensor $\langle \overset{\leftrightarrow }{{\bf T}}\rangle$ across a surface $S$ enclosing the particle, ${\bf n}$ being the local outward normal. The elements $T_{\alpha \beta }$ are [@jackson75]
$$\left\langle T_{\alpha \beta }({\bf r})\right\rangle
=\frac{1}{8\pi }\left[ E_{\alpha
}E_{\beta }^{\ast }+B_{\alpha }B_{\beta }^{\ast }-\frac{1}{2}({\bf E}\cdot
{\bf E}^{\ast }+{\bf B}\cdot {\bf B}^{\ast })\delta _{\alpha \beta }\right]
,(\alpha ,\beta =1,2,3). \label{eq:maxwellstresstensor}$$
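Equation (\[eq:maxwellstresstensor\]) translates directly into code. The sketch below evaluates the time-averaged tensor at a point from complex field amplitudes (Gaussian units, as in the text); the surface integration of Equation (\[eq:totalforce\]) would then sum $\langle\overset{\leftrightarrow}{{\bf T}}\rangle\cdot{\bf n}\,dS$ over a discretized enclosing surface. The real part is taken so that the tensor elements come out real and symmetric.

```python
import numpy as np

def stress_tensor(E, B):
    """Time-averaged Maxwell stress tensor, Eq. (maxwellstresstensor),
    for complex field amplitudes E, B at one point (Gaussian units)."""
    EE = np.real(np.vdot(E, E))      # E . E*
    BB = np.real(np.vdot(B, B))      # B . B*
    T = np.zeros((3, 3))
    for a in range(3):
        for b in range(3):
            T[a, b] = (np.real(E[a] * np.conj(E[b]) + B[a] * np.conj(B[b]))
                       - 0.5 * (EE + BB) * (a == b)) / (8 * np.pi)
    return T

# linearly polarized plane wave: E along x, B along y, unit amplitudes
T = stress_tensor(np.array([1.0 + 0j, 0.0, 0.0]),
                  np.array([0.0, 1.0 + 0j, 0.0]))
```

For this plane wave the only nonvanishing element is $T_{zz}=-1/(8\pi)$, the momentum flux along the propagation direction.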
For dipolar particles, one can use instead of Maxwell’s stress tensor the expression given by Equation (\[eq:chaumetfin\]) directly. In fact, for dielectric spheres of radii smaller than $5\times 10^{-2}\lambda $ there is no appreciable difference between using Equation (\[eq:chaumetfin\]) or Equation (\[eq:maxwellstresstensor\]) except at distances from the flat substrate smaller than $10^{-3}\lambda $.
Figure \[fig:dielec\] shows the normalized $Z$–force for two glass particles ($\epsilon =2.25$) at $\lambda =632.8$ $nm$, one with $a=10$ $nm$ (Figure \[fig:dielec\](a)) and the other with $a=100$ $nm$ (Figure \[fig:dielec\](b)); the flat interface is illuminated from the dielectric side at $\theta_0 =42^{o}$ (the critical angle is $\theta_c =41.8^{o}$). Two calculation procedures are shown: a multiple scattering evaluation of the field via Equations (\[eq:electricdip\])–(\[eq:g31\]) and then either use of Equation (\[eq:chaumetfin\]), integrated over all induced dipoles (CDM–B), or of Equation (\[eq:totalforce\]) (CDM–A). The forces have been normalized by dividing them by $\exp (-2qz)$; thus, as seen in these curves, the force tends, as $d$ increases, to the constant value given by Equation (\[eq:evanescent4\]): $-(|T|^{2}/2)q\Re e \{ \alpha \}$. The incident power is $1.19$ $mW$ distributed over a surface of $10$ $\mu m^{2}$, so that the force on a sphere of $a=10$ $nm$ is $2.7991\times 10^{-10}$ $pN$ [@chaumet00a]. We see, therefore, the effect on the vertical force of the multiple interaction of the scattered wave with the substrate: as the particle gets closer to the flat interface at which the evanescent wave is created, the magnitude of the attractive force increases beyond the value predicted by neglecting this interaction. As the distance to the surface grows, the force tends to the value given by Equation (\[eq:evanescent4\]), in which no multiple scattering with the substrate takes place. Also, due to the standing wave patterns that appear in the field intensity distribution between the sphere and the substrate, the magnitude of this force oscillates as $d$ varies. This is appreciable for larger particles (Figure \[fig:dielec\](b)), but not for very small particles (Figure \[fig:dielec\](a)), whose scattering cross section is not large enough to produce noticeable interferences. On the other hand, the horizontal force on the particle is of the form given by Equation (\[eq:evanescent3\]) and always has the characteristics of a scattering force.
![ (a): From top to bottom: the first three curves represent the polarizability of a silver sphere with radius $a=10$ $nm$ versus the wavelength. The fourth curve is the force on this particle in free space. Plain line: Mie calculation, dashed line: polarizability of Eq. (\[eq:alfa\]), symbol $+$: Dungey and Bohren’s polarizability [@dungey91]. (b): Force along the $Z$ direction on a silver sphere with $a=100$ $nm$ versus distance $d$ with $\protect\theta_0=50^o$ for the following wavelengths: Plain line: $\protect
\lambda=255$ $nm$, dashed line: $\protect\lambda=300$ $nm$, and dotted line: $\protect\lambda=340$ $nm$. Thin lines: $S$ polarization, thick lines: $P$ polarization. (From Ref. [@chaumet00b]). []{data-label="fig:metallic"}](metallic.eps){width="\linewidth"}
As regards a metallic particle, we notice that $\Re e \{ \alpha \}$ may have negative values near plasmon resonances (Figure \[fig:metallic\](a), where we have plotted two models: that of Draine [@draine88] and that of Ref. [@dungey91] (see also [@chaumet00c])) and thus the gradient force, or force along $OZ$, may now be repulsive, namely, positive (Figure \[fig:metallic\](b)) [@chaumet00c]. We also observe that this force is larger at the plasmon polariton resonance excitation ($\lambda =350$ $nm$). We shall later return to this fact (Section \[sec:resonances\]). We next illustrate how, no matter how small the particle is, the continuous approach to the surface makes the multiple scattering noticeable.
Corrugated Surfaces: Integral Equations for Light Scattering from Arbitrary Bodies {#sec:corrugated}
----------------------------------------------------------------------------------
At corrugated interfaces, the phenomenon of TIR is weakened, and the contribution of propagating components to the transmitted field becomes important, arising either from the conversion of evanescent waves into radiating waves or from propagating waves directly generated by scattering at the surface defects. The size of the asperities is important in this respect. However, TIR effects are still strong at slightly rough interfaces, namely, those whose defect size is much smaller than the wavelength. Then the contribution of evanescent components is dominant for small particles ([*i.e*]{}., of radius no larger than $0.1$ wavelengths). On the other hand, the use of such particles as probes in near–field microscopy may allow high resolution of surface details as they scan above the surface. We shall next study the resulting force signal effects due to corrugation, as a model of photonic force microscopy under TIR conditions.
When the surface in front of the sphere is corrugated, finding the Green’s function components is not as straightforward as in the previous section. We shall instead employ an integral method that we summarize next.
Let an electromagnetic field, with electric and magnetic vectors ${\bf E}
^{(inc)}({\bf r})$ and ${\bf H}^{(inc)}({\bf r})$, respectively, be incident on a medium of permittivity $\epsilon $ occupying a volume $V$, constituted by two scattering volumes $V_{1}$ and $V_{2}$, each being limited by a surface $S_{1}$ and $S_{2}$, respectively. Let ${\bf r}^{<}$ be the position vector of a generic point inside the volume $V_{j}$, and by ${\bf r}^{>}$ that of a generic point in the volume $\hat{V}$, which is outside all volumes $V_{j}$. The electric and magnetic vectors of a monochromatic field satisfy, respectively, the wave equations, [*i.e*]{}., Equations (\[eq:green1\]) and (\[eq:green2\]).
The vector form of Green’s theorem for two vectors ${\bf P}$ and ${\bf Q}$ well behaved in a volume $V$ surrounded by a surface $S$ reads [@morsefeshbach53]
$$\begin{aligned}
\int_{V}d^{3}r~({\bf Q}\cdot \nabla \times \nabla \times {\bf P}-{\bf P}
\cdot \nabla \times \nabla \times {\bf Q}) & = & \nonumber \\
\int_{S}d^{2}r~({\bf P}\times
\nabla \times {\bf Q}-{\bf Q}\times \nabla \times {\bf P})\cdot {\bf n},
\label{eq:greentheorem}\end{aligned}$$
with ${\bf n}$ being the unit outward normal.
Let us now apply Equation (\[eq:greentheorem\]) to the vectors ${\bf P}=
\overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r}^{\prime })\cdot {\bf C}$ , (${\bf C}$ being a constant vector) and ${\bf Q}={\bf E}({\bf r})$. Taking Equations (\[eq:green1\]) and (\[eq:green5\]) into account, we obtain
$$\int_{V}d^{3}r^{\prime }~{\bf E}({\bf r}^{\prime })\delta ({\bf r}-{\bf r}
^{\prime })=k^{2}\int_{V}d^{3}r^{\prime }~{\bf P}({\bf r}^{\prime })\cdot
\overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r}^{\prime })-\frac{1}{
4\pi }{\bf S}_{e}({\bf r}), \label{eq:ET1}$$
where ${\bf S}_{e}$ is
$${\bf S}_{e}({\bf r})=\nabla \times \nabla \times \int_{S}d^{2}r^{\prime
}\left( {\bf E}({\bf r}^{\prime })\frac{\partial G({\bf r},{\bf r}^{\prime })
}{\partial {\bf n}}-G({\bf r},{\bf r}^{\prime })\frac{\partial {\bf E}({\bf r
}^{\prime })}{\partial {\bf n}}\right) . \label{eq:ET2}$$
Equation (\[eq:ET2\]) adopts different forms depending on whether the points ${\bf r}$ and ${\bf r}^{\prime }$ are considered in $V$ or in $\hat{V}
$. By means of straightforward calculations one obtains the following:
- If ${\bf r}$ and ${\bf r}^{\prime }$ belong to any of the volumes $V_{j}$, $(j=1,2)$, namely, $V$ becomes either of the volumes $V_{j}$:
$${\bf E}({\bf r}^{<})=k^{2}\int_{V_{j}}d^{3}r^{\prime }~{\bf P}({\bf r}
^{\prime })\cdot \overset{\leftrightarrow }{{\cal G}}({\bf r}^{<},{\bf r}
^{\prime })-\frac{1}{4\pi }{\bf S}_{j}^{(in)}({\bf r}^{<}), \label{eq:ET3}$$
where
$$\begin{aligned}
{\bf S}_{j}^{(in)}({\bf r}^{<}) & = & \nabla \times \nabla \times
\nonumber \\ [+3mm]
& &
\int_{S_{j}}d^{2}r^{\prime }\left( {\bf E}_{in}({\bf r}^{\prime })\frac{
\partial G({\bf r}^{<},{\bf r}^{\prime })}{\partial {\bf n}}-G({\bf r}^{<},
{\bf r}^{\prime })\frac{\partial {\bf E}_{in}({\bf r}^{\prime })}{\partial
{\bf n}}\right) \, . \, \, \, \, \, \, \, \, \, \,
\label{eq:ET4}\end{aligned}$$
In Equation (\[eq:ET4\]) ${\bf E}_{in}$ represents the limiting value of the electric vector on the surface $S_{j}$ taken from inside the volume $V_{j}$. Equation (\[eq:ET3\]) shows that the field inside each of the scattering volumes $V_{j}$ does not depend on the sources generated in the other volumes.
- If ${\bf r}$ belongs to any of the volumes $V_{j}$, namely, $V$ becomes $V_{j}$, and ${\bf r}^{\prime }$ belongs to $\hat{V}$:
$$0={\bf S}_{ext}({\bf r}^{<}). \label{eq:ET5}$$
In Equation (\[eq:ET5\]) ${\bf S}_{ext}$ is
$${\bf S}_{ext}({\bf r}^{<})=\sum_{j}{\bf S}_{j}^{(out)}({\bf r}^{<})-{\bf S}
_{\infty }({\bf r}^{<}), \label{eq:ET6}$$
where
$$\begin{aligned}
{\bf S}_{j}^{(out)}({\bf r}^{<}) & = & \nabla \times \nabla \times
\nonumber \\ [+3mm]
& &
\int_{S_{j}}d^{2}r^{\prime }\left( {\bf E}({\bf r}^{\prime })\frac{\partial
G({\bf r}^{<},{\bf r}^{\prime })}{\partial {\bf n}}-G({\bf r}^{<},{\bf r}
^{\prime })\frac{\partial {\bf E}({\bf r}^{\prime })}{\partial {\bf n}}
\right) \, . \, \, \, \, \, \,
\label{eq:ET7}\end{aligned}$$
In Equation (\[eq:ET7\]) the surface values of the electric vector are taken from the volume $\hat{V}$. The normal ${\bf n}$ now points towards the interior of each of the volumes $V_{j}$.
Also, ${\bf S}_{\infty }$ is defined as in Equation (\[eq:ET7\]), the surface of integration now being a large sphere whose radius eventually tends to infinity. It is not difficult to see that $-{\bf S}_{\infty }$ in Equation (\[eq:ET6\]) equals $4\pi $ times the incident field ${\bf E}^{(inc)}({\bf r}^{<})$ ([*cf*]{}. Refs. [@nieto-vesperinas91] and [@pattanayak76a; @pattanayak76b]). Therefore Equation (\[eq:ET5\]) finally becomes
$$0={\bf E}^{(inc)}({\bf r}^{<})+\frac{1}{4\pi }\sum_{j}{\bf S}_{j}^{(out)}({\bf
r}^{<}). \label{eq:ET8}$$
Note that when Equation (\[eq:ET8\]) is used as a non–local boundary condition, the unknown [*sources*]{} to be determined, given by the limiting values of ${\bf E}({\bf r}^{\prime })$ and $\partial {\bf E}({\bf r}^{\prime
})/\partial {\bf n}$ on each of the surfaces $S_{j}$ ([*cf*]{}. Equation (\[eq:ET7\])), appear coupled to the corresponding sources on the other surface $S_{k}$, $k\neq j$.
Following similar arguments, one obtains:
- For ${\bf r}$ belonging to $\hat{V}$ and ${\bf r}^{\prime }$ belonging to either volume $V_{j}$, $(j=1,2)$, namely, $V$ becoming $V_{j}$
$$0=k^2 \int_{V_{j}}d^{3}r^{\prime }~{\bf P}({\bf r}^{\prime
})\cdot \overset{\leftrightarrow }{{\cal G}}({\bf r}^{>},{\bf r}^{\prime })-
\frac{1}{4\pi }{\bf S}_{j}^{(in)}({\bf r}^{>}), \label{eq:ET9}$$
with ${\bf S}_{j}^{(in)}$ given by Equation (\[eq:ET4\]), this time evaluated at ${\bf r}^{>}$.
- For both ${\bf r}$ and ${\bf r}^{\prime }$ belonging to $\hat{V}$
$${\bf E}({\bf r}^{>})={\bf E}^{(inc)}({\bf r}^{>})+\frac{1}{4\pi }\sum_{j}{\bf S
}_{j}^{(out)}({\bf r}^{>}). \label{eq:ET10}$$
Hence, the exterior field is the sum of the fields emitted from each scattering surface $S_{j}$ $(j=1,2)$ with sources resulting from the coupling involved in Equation (\[eq:ET9\]).
One important case corresponds to a penetrable, optically homogeneous, isotropic, non–magnetic and spatially nondispersive medium (this applies to a real metal as well as to a pure dielectric). In this case, Equations (\[eq:ET3\]) and (\[eq:ET9\]) become, respectively,
$$\begin{aligned}
{\bf E}({\bf r}^{<}) &=&-\frac{1}{4\pi k_{0}^{2}\epsilon }\nabla \times
\nabla \times \nonumber \\
&&\int_{S_{j}}d^{2}r^{\prime }\left( {\bf E}_{in}({\bf r}^{\prime })\frac{
\partial G^{(in)}({\bf r}^{<},{\bf r}^{\prime })}{\partial {\bf n}}-G^{(in)}(
{\bf r}^{<},{\bf r}^{\prime })\frac{\partial {\bf E}_{in}({\bf r}^{\prime })
}{\partial {\bf n}}\right) , \label{eq:ET11}\end{aligned}$$
$$\begin{aligned}
0 &=&{\bf E}^{(inc)}({\bf r}^{<})+\frac{1}{4\pi k_{0}^{2}}\nabla \times \nabla
\times \nonumber \label{eq:ET12} \\
&&\sum_{j}\int_{S_{j}}d^{2}r^{\prime }\left( {\bf E}({\bf r}^{\prime })\frac{
\partial G({\bf r}^{<},{\bf r}^{\prime })}{\partial {\bf n}}-G({\bf r}^{<},
{\bf r}^{\prime })\frac{\partial {\bf E}({\bf r}^{\prime })}{\partial {\bf n}
}\right) ,\end{aligned}$$
whereas Equations (\[eq:ET5\]) and (\[eq:ET10\]) yield
$$\begin{aligned}
0 &=&\frac{1}{4\pi k_{0}^{2}}\nabla \times \nabla \times \nonumber
\label{eq:ET13} \\
&&\int_{S_{j}}d^{2}r^{\prime }\left( {\bf E}_{in}({\bf r}^{\prime })\frac{
\partial G^{(in)}({\bf r}^{>},{\bf r}^{\prime })}{\partial {\bf n}}-G^{(in)}(
{\bf r}^{>},{\bf r}^{\prime })\frac{\partial {\bf E}_{in}({\bf r}^{\prime })
}{\partial {\bf n}}\right) ,\end{aligned}$$
$$\begin{aligned}
{\bf E}({\bf r}^{>}) &=&{\bf E}^{(inc)}({\bf r}^{>})+\frac{1}{4\pi
k_{0}^{2}\epsilon }\nabla \times \nabla \times \nonumber \label{eq:ET14} \\
&&\sum_{j}\int_{S_{j}}d^{2}r^{\prime }\left( {\bf E}({\bf r}^{\prime })\frac{
\partial G({\bf r}^{>},{\bf r}^{\prime })}{\partial {\bf n}}-G({\bf r}^{>},
{\bf r}^{\prime })\frac{\partial {\bf E}({\bf r}^{\prime })}{\partial {\bf n}
}\right) .\end{aligned}$$
In Equations (\[eq:ET11\]) and (\[eq:ET13\]) the superscript $(in)$ means that the limiting values on the surface are taken from inside the volume $V_{j}$; note that this implies for both $G^{(in)}$ and ${\bf E}_{in}$ that $k=k_{0}\sqrt{\epsilon }$.
The continuity conditions
$${\bf n}\times \lbrack {\bf E}_{in}({\bf r}^{<})-{\bf E}({\bf r}
^{>})]=0,\,\,\,\,\,\,{\bf n}\times \lbrack {\bf H}_{in}({\bf r}^{<})-{\bf H}(
{\bf r}^{>})]=0 \, , \label{eq:ET15}$$
and the use of Maxwell’s equations lead to (cf. Ref. [@jackson75], Section I.5, or Ref. [@bornwolf99], Section 1.1):
$$\begin{aligned}
\left. E _{in} ({\bf r}) \right| _{{\bf r} \in S_j^{(-)}} & = &
\left. E ({\bf r}) \right| _{{\bf r} \in S_j^{(+)}} \, ,
\label{eq:continuity1}
\\ [+5mm]
\left. \frac {\partial E _{in}({\bf r})} {\partial {\bf n}}
\right|
_{{\bf r} \in S_j^{(-)}} & = &
\left. \frac {\partial E ({\bf r})} {\partial {\bf n}}
\right|
_{{\bf r} \in S_j^{(+)}} \, ,
\label{eq:continuity2}\end{aligned}$$
where $S_j^{(+)}$ and $S_j^{(-)}$ denote the surface profile when approached from outside or inside the volume $V_j$, respectively. Equations (\[eq:continuity1\]) and (\[eq:continuity2\]) permit finding both ${\bf E}$ and $\partial {\bf E}/\partial {\bf n}$ from either the pair of Equations (\[eq:ET13\]) and (\[eq:ET14\]) or, equivalently, from the pair of Equations (\[eq:ET11\]) and (\[eq:ET12\]), as both ${\bf r}^{>}$ and ${\bf r}^{<}$ tend to a point in $S_{j}$. The scattered field outside the medium is then given by the second term of Equation (\[eq:ET14\]).
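The boundary integrals above involve the scalar Green’s function $G$ and its normal derivative evaluated between surface points. As a concrete illustration (ours, not from the original work), the following sketch evaluates the two–dimensional free–space Green’s function relevant to the 2D calculations of Section \[sec:PFM\], assuming the $-4\pi\delta$ normalization implied by the $1/4\pi$ prefactors above, for which $G({\bf r},{\bf r}^{\prime })=i\pi H_{0}^{(1)}(k|{\bf r}-{\bf r}^{\prime }|)$:

```python
import numpy as np
from scipy.special import hankel1

def green_2d(r, rp, k):
    """2D free-space Green's function G(r, r') = i*pi*H0^(1)(k|r - r'|),
    normalized so that (laplacian + k^2) G = -4*pi*delta(r - r')."""
    R = np.linalg.norm(np.asarray(r, float) - np.asarray(rp, float))
    return 1j * np.pi * hankel1(0, k * R)

def normal_derivative(r, rp, k, n):
    """dG/dn' = grad_{r'} G . n, the kernel appearing in the surface
    integrals; uses d/dz H0^(1)(z) = -H1^(1)(z)."""
    d = np.asarray(r, float) - np.asarray(rp, float)
    R = np.linalg.norm(d)
    # grad_{r'} G = i*pi*k*H1^(1)(kR) * (r - r')/R, since dR/dr' = -(r - r')/R
    return 1j * np.pi * k * hankel1(1, k * R) * np.dot(d / R, n)
```

A surface integral equation solver would discretize each profile $S_j$ and assemble a linear system from these kernel evaluations; the quadrature and the treatment of the logarithmic singularity at ${\bf r}={\bf r}^{\prime}$ are omitted in this sketch.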
In the next section, we apply this theory to find the near–field distribution of light scattered from a small particle in front of a corrugated dielectric surface, when illumination takes place from the dielectric half–space at angles of incidence larger than the critical angle. The non–local boundary conditions that we shall use are Equations (\[eq:ET13\]) and (\[eq:ET14\]).
Photonic Force Microscopy of Surfaces with Defects {#sec:PFM}
==================================================
The [*Photonic Force Microscope*]{} (PFM) is a technique in which a probe particle held in an optical trap is used to image soft surfaces. The PFM [@ghislain93; @florin96; @wada00] was conceived as a scanning probe device to measure ultrasmall forces, in the range from a few to several hundredths of a $pN/nm$, with laser powers of a few $mW$, between colloidal particles [@crocker94], or in soft matter components such as cell membranes [@stout97] and protein or other macromolecule bonds [@smith96]. In such a system, a dielectric particle of a few hundred nanometers, held in an optical tweezer [@ashkin86; @clapp99; @sugiura93; @dogariu00], scans the object surface. The spring constant of the laser trap is three or four orders of magnitude smaller than that of AFM cantilevers, and the probe position can be measured with a resolution of a few nanometers on time scales of a few microseconds [@florin96]. As in AFM, surface topography imaging can be realized with a PFM by transducing the optical force induced by the near field on the probe [@horber01bookproc]. As in near–field scanning optical microscopy (NSOM[^3]) [@pohl93], the resolution is given by the size of the particle and its proximity to the surface. It is well known, however [@nieto-vesperinas91; @greffet97], that multiple scattering effects and artifacts often hinder NSOM images, so that they do not bear resemblance to the actual topography. This has constituted one of the leading basic problems in NSOM [@hecht96]. Numerical simulations [@yo01b; @yo01c; @yo99b; @yo00; @yo01bookproc] based on the theory of Section \[sec:corrugated\] show that detection of the optical force on the particle yields topographic images, and thus they provide a method of prediction and interpretation for monitoring the force–signal variation with the topography, particle position and illumination conditions. This underpins the fundamentals of PFM operation.
An important feature is the signal enhancement arising from the excitation of [*Mie resonances*]{} of the particle, which we shall discuss next. This allows one to decrease the probe size down to the nanometric scale, thus increasing the resolution both of force magnitudes and of spatial detail.
Nanoparticle Resonances {#sec:resonances}
-----------------------
Electromagnetic eigenmodes of small particles are of importance in several areas of research. On the one hand, experiments on the linewidth of surface plasmons in metallic particles [@klar98] and on the evolution of their near fields, both in isolated particles and in arrays [@krenn99], seek a basic understanding and possible applications of their optical properties.
Mie resonances of particles are often called [*morphology–dependent resonances*]{} ([*MDR*]{}). They depend on the particle shape, permittivity, and the [*size parameter*]{} $x=2\pi a/\lambda $. In dielectric particles, they are known as [*whispering–gallery modes*]{} ([*WGM*]{}) [@owen81; @barber82; @benincasa87; @hill88; @barber88; @barber90]. In metallic particles, on the other hand, they become [*surface plasmon resonances*]{} ([*SPR*]{}), arising from electron plasma oscillations [@raether88]. All these resonances are associated with surface waves that decay exponentially away from the particle boundary.
Morphology–dependent resonances in dielectric particles are interpreted as waves propagating around the object, confined by total internal reflection, returning in phase to the starting point. A [*Quality factor*]{} is also defined as $Q=2\pi $ (Stored energy) $/$ (Energy lost per cycle) $=\omega
_{0}/\delta \omega $, where $\omega _{0}$ is the resonance frequency and $\delta \omega $ the resonance full width. The first theoretical studies of [*MDR*]{} were performed by Gustav Mie, in his well–known scattering theory for spheres. The scattered field, both outside and inside the particle, is decomposed into a sum of partial waves. Each partial wave is weighted by a coefficient whose poles explain the existence of peaks in the scattering cross section. These poles correspond to complex frequencies, but the true resonances ([*i.e*]{}., the real frequencies at which the coefficient peaks occur) have a size parameter close to the real part of the complex poles. The imaginary part of the complex frequency accounts for the width of the resonance peak. [*MDR*]{}’s are classified by three integer numbers: one related to the partial wave ([*order number*]{}), another accounting for the several poles that can be present in the same coefficient ([*mode number*]{}), and a third accounting for the degeneracy of a resonance ([*azimuthal mode number*]{}). In the first experimental check at optical frequencies, the variation of the radiation pressure (due to [*MDR*]{}) on highly transparent, low–vapor–pressure silicone oil drops (index $1.4-1.53$) was measured by Ashkin [@ashkin77]. The drops were optically levitated, and the incident beam was focused at either the edge or the axis of the particles, showing the creeping nature of the surface waves.
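The pole structure described above can be made concrete with a few lines of code. The sketch below (our illustration, not taken from the original work) evaluates the Mie partial–wave coefficient $b_{n}$ of a homogeneous sphere in the Bohren–Huffman convention, for a lossless relative index $m$, and scans the size parameter $x=2\pi a/\lambda$ to locate the lowest $n=1$ resonance; for a real index, $|b_{n}|\leq 1$ and $|b_{n}|$ reaches unity at each resonance. (The text deals with 2D cylinders and absorbing materials, for which the coefficients differ, but the pole interpretation is identical.)

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def mie_bn(n, m, x):
    """Mie partial-wave coefficient b_n (Bohren & Huffman convention)
    for a homogeneous sphere of real relative index m and size
    parameter x = 2*pi*a/lambda.  Poles of b_n at complex x are the
    morphology-dependent resonances discussed in the text."""
    mx = m * x
    # Riccati-Bessel functions: psi(r) = r*j_n(r), xi(r) = r*(j_n + i*y_n)(r)
    psi  = lambda r: r * spherical_jn(n, r)
    dpsi = lambda r: spherical_jn(n, r) + r * spherical_jn(n, r, derivative=True)
    xi   = lambda r: r * (spherical_jn(n, r) + 1j * spherical_yn(n, r))
    dxi  = lambda r: (spherical_jn(n, r) + 1j * spherical_yn(n, r)) \
                     + r * (spherical_jn(n, r, derivative=True)
                            + 1j * spherical_yn(n, r, derivative=True))
    num = psi(mx) * dpsi(x) - m * psi(x) * dpsi(mx)
    den = psi(mx) * dxi(x) - m * xi(x) * dpsi(mx)
    return num / den

# Locate the lowest n = 1 resonance by scanning the size parameter.
xs = np.linspace(0.5, 2.6, 400)
bn = np.abs([mie_bn(1, 2.0, x) for x in xs])
x_res = xs[np.argmax(bn)]   # roughly where m*x ~ pi for this mode
```

Scanning $x$ at fixed real frequency is equivalent to scanning the frequency at fixed radius, which is how the resonance wavelengths quoted below (e.g., $\lambda=638$ $nm$ for the $a=60$ $nm$ silicon cylinder) are found.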
Also noteworthy, as regards resonances, are enhanced directional scattering effects such as the [*Glory*]{} [@bryant66; @fahlen68; @khare77] found in water droplets. The Glory theory accounts for the backscattering intensity enhancements observed in such droplets. These enhancements are associated with rays grazing the surface of the droplet, involving hundreds of circumvolutions (surface effects); axial rays (geometrical effects) also contribute. They have been observed for large particle sizes ($x>10^{2}$), whereas no Glory effects have been found for sizes in the range $x\sim 1$. These backscattering intensity enhancements cannot be associated with a single partial wave, but with a superposition of several partial waves.
Distributions of Forces on Dielectric Particles Over Corrugated Surfaces Illuminated Under TIR {#sec:forcedielec}
----------------------------------------------------------------------------------------------
We now model a rough interface separating a dielectric of permittivity $\epsilon _{1}=2.3104$, similar to that of glass, from air. We have addressed (Figure \[fig:ol1\], left) the profile consisting of two protrusions described by $z=h[\exp (-(x-X_{0})^{2}/\sigma ^{2})+\exp (-(x+X_{0})^{2}/\sigma ^{2})]$ on a plane surface $z=0$. (It should be noted that in actual experiments the particle is immersed in water, which changes the particle’s relative refractive index only weakly; the phenomena shown here remain, with the interesting features occurring at slightly different wavelengths.) Illumination, linearly polarized, takes place from the dielectric side under TIR (critical angle $\theta _{c}=41.14^{o}$) at $\theta _{0}=60^{o}$, with a Gaussian beam of half–width at half–maximum $W=4000$ $nm$ at wavelength $\lambda $ (in air). To save computing time and memory, the calculation is done in two dimensions (2D). This retains the main physical features of the full 3D configuration as far as the multiple interaction of the field with the surface and the probe is concerned [@lester99]. The particle is then a cylinder of radius $a$, permittivity $\epsilon _{2}$, and axis $OY$, whose center moves at constant height $z=d+a$. Maxwell’s stress tensor is used to calculate the force on the particle resulting from the scattered near–field distribution created by the multiple interaction of light between the surface and the particle. Since the configuration is 2D, the incident power and the force are expressed in $mW/nm$ and $pN/nm$, respectively, namely, as power and force magnitudes per unit length (in $nm$) in the transversal direction, [*i.e*]{}., that of the cylinder axis. We shall further discuss how these magnitudes are consistent with three–dimensional (3D) experiments.
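The numbers quoted in this paragraph are easy to verify. The short sketch below (a check of ours, assuming the standard flat–interface expression for the evanescent decay, which the corrugation only perturbs locally) reproduces the critical angle and estimates the normal decay length of the transmitted evanescent field at $\theta_0=60^o$ and $\lambda=638$ $nm$, the resonant wavelength used below; the decay length turns out to be comparable to the probe–surface distances considered.

```python
import numpy as np

# Interface and illumination parameters taken from the text
eps1 = 2.3104          # glass-like dielectric permittivity
n1 = np.sqrt(eps1)     # = 1.52
wavelength = 638.0     # nm, in air
theta0 = np.deg2rad(60.0)

# Critical angle for total internal reflection at the glass-air interface
theta_c = np.rad2deg(np.arcsin(1.0 / n1))

# Normal decay of the evanescent field transmitted under TIR:
# E ~ exp(-kappa * z), with kappa = k0 * sqrt(eps1 * sin^2(theta0) - 1)
k0 = 2.0 * np.pi / wavelength
kappa = k0 * np.sqrt(eps1 * np.sin(theta0) ** 2 - 1.0)
decay_length = 1.0 / kappa   # ~ 119 nm, comparable to the scan height d
```
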
![Near–field intensity distribution maps, with the cylinder center placed at $(0, 192.6)$ $nm$ (Fig. \[fig:ol1\](a)), and at $(191.4, 192.6)$ $nm$ (Fig. \[fig:ol1\](b)). The wavelength ($\protect\lambda=638$ $nm$) excites the $(n,l)$ Mie resonance. (From Ref. [@yo01b]). []{data-label="fig:ol1"}](ol1.eps){width="\linewidth"}
A silicon cylinder of radius $a=60$ $nm$ in front of a flat dielectric surface, with the same value of $\epsilon _{1}$ as considered here, has a Mie resonance excited by the transmitted evanescent wave at $\lambda =638$ $nm$ ($\epsilon _{2}=14.99+i0.14$) [@yo00]. These eigenmodes are characterized by $n=0$, $l=1$, for $p$ polarization, and $n=1$, $l=1$, for $s$ polarization. We consider the two–protrusion interface (Figure \[fig:ol1\], left). The insets of Figures \[fig:ol1\](a) and \[fig:ol1\](b), corresponding to a $p$–polarized incident beam, show the electromagnetic force on the particle as it scans horizontally above the flat surface with two protrusions of parameters $\sigma =63.8$ $nm$, $h=127.6$ $nm$ and $X_{0}=191.4$ $nm$, both at the resonant $\lambda $ and out of resonance ($\lambda =538$ $nm$). The particle scans at $d=132.6$ $nm$. Inset (a) shows the force along the $OX$ axis. As seen, the force is positive and, at resonance, it has two remarkable maxima corresponding to the two protrusions, even though they appear slightly shifted due to surface propagation of the evanescent waves transmitted under TIR, which produces the Goos–Hänchen shift of the reflected beam. The vertical force on the particle, on the other hand, is negative, namely attractive ([*cf*]{}. Inset (b)), and it has two narrow peaks in $x$ just at the positions of the protrusions. The signal is again remarkably stronger under resonant illumination. Similar force–signal enhancements are observed for $s$ polarization. In this connection, it was recently found that this attractive force on such small dielectric particles monotonically increases as they approach a flat dielectric interface [@chaumet00a].
![Near–field intensity distribution maps at two positions of the cylinder, its center in the second case being at $(638, 642)$ $nm$ (Fig. \[fig:ol2\](b)). The wavelength ($\protect\lambda=919$ $nm$) excites the $(n,l)$ Mie resonance. (From Ref. [@yo01b]). []{data-label="fig:ol2"}](ol2.eps){width="\linewidth"}
It should be remarked that, by contrast and as expected [@nieto-vesperinas91; @greffet97], the near–field intensity distribution of the magnetic field $H$, normalized to the incident one $H_{0}$, has many more components and interference fringes than the force signal, and thus the resemblance of its image to the interface topography is worse. This is shown in Inset (b) (thin solid line), where we have plotted this distribution in the absence of the particle at $z=d+a$, for the same illumination conditions and parameters as before. This is the distribution that, ideally, the scanning particle would detect in NSOM.
It is also interesting to investigate the near–field intensity distribution map. Figures \[fig:ol1\](a) and \[fig:ol1\](b) show this for the magnetic field $H$ in $p$ polarization under resonant illumination, $\lambda =638$ $nm$, at two different positions of the cylinder, corresponding to $x=0$ and $191.4$ $nm$, respectively. We notice, first, the strong field concentration inside the particle, corresponding to the excitation of the $(n=0,l=1)$ eigenmode. When the particle is over one bump, the variation of the near–field intensity is larger in the region closer to it, this being responsible for the stronger force signal on the particle at this position. Similar results are observed for $s$–polarized waves; in this case, the $(n=1,l=1)$ eigenmode of the cylinder is excited and one can appreciate remarkable fringes along the whole interface due to interference of the surface wave, transmitted under TIR, with the waves scattered from both the protrusions and the cylinder.
An increase of the particle size yields stronger force signals at the expense of losing some resolution. Figures \[fig:ol2\](a) and \[fig:ol2\](b) show the near electric field intensity distribution for $s$ polarization at two different positions of a cylinder of radius $a=200$ $nm$, the parameters of the topography now being $\sigma =212.7$ $nm$, $h=425.3$ $nm$ and $X_{0}=\pm 638$ $nm$. The distance is $d=442$ $nm$. The resonant wavelength is now $\lambda =919$ $nm$ ($\epsilon _{2}=13.90+i0.07$). The insets of Figures \[fig:ol2\](a) and \[fig:ol2\](b) illustrate the force distribution as the cylinder moves along $OX$. The force peaks at the resonant wavelength are positive, because the scattering force on this particle of larger scattering cross section now exceeds the gradient force. They also appear shifted with respect to the protrusion positions, once again due to the surface travelling waves under TIR. The peaks are weaker, or absent, at non–resonant $\lambda $. Similar results occur for a $p$–polarized beam at the resonant wavelength $\lambda =759$ $nm$ ($\epsilon _{2}=13.47+i0.04$). For both polarizations the $(n=3,l=1)$ Mie eigenmode of the cylinder is now excited. The field distribution is well localized inside the particle and has the characteristic standing–wave structure resulting from interference between counterpropagating whispering–gallery modes circumnavigating the cylinder surface. Remarkably, this structure appears as produced by the excitation of propagating waves incident on the particle [@yo00], these being due to the coupling of the incident and TIR surface waves with radiating components of the transmitted field, created by scattering at the interface protrusions.
Although not shown here, we should remark that illumination at non–resonant wavelengths does not produce such a field concentration within the particle; the field then extends throughout space, with maxima attached to the flat portions of the interface (evanescent wave) and along certain directions departing from the protrusions (radiating waves from scattering at these surface defects).
Evanescent components of the electromagnetic field and multiple scattering among several objects are often difficult to handle in an experiment. However, many physical situations involve these phenomena. In this section we have seen that the use of the field inhomogeneity, combined with (and produced by) morphology–dependent resonances and multiple scattering, permits imaging a surface with defects. Whispering–gallery modes in dielectric particles, on the other hand, also produce evanescent fields on the particle surface which enhance the strength of the force signal. The next section studies metallic particles in the same situation as previously discussed, now exciting plasmon resonances on the objects.
Distributions of Forces on Metallic Particles Over Corrugated Surfaces Illuminated Under TIR {#sec:forcemetal}
--------------------------------------------------------------------------------------------
Dielectric particles experience intensity–gradient forces under light illumination due to radiation pressure, which permit one to hold and manipulate them by means of optical tweezers [@ashkin77] in a variety of applications such as spectroscopy [@sasaki91; @misawa92; @misawa91], phase transitions in polymers [@hotta98], and light force microscopy of cells [@pralle99; @pralle98] and biomolecules [@smith96]. Metallic particles, however, were initially reported to suffer [*repulsive*]{} electromagnetic scattering forces due to their higher cross sections [@ashkin92], although it was later shown [@svoboda94] that nanometric metallic particles (with diameters smaller than $50$ $nm$) can be held in the focal region of a laser beam. Further, it was demonstrated in an experiment [@sasaki00] that metallic particles illuminated by an evanescent wave, created under TIR at a substrate, experience a vertical attractive force towards the plate, while they are pushed horizontally in the direction of propagation of the evanescent wave along the surface. Forces in the $fN$ range were measured.
![Near–field intensity maps $|H/H_{0}|^{2}$. Fig. \[fig:prb3\](a): $\protect\lambda =387$ $nm$ (on resonance), $\protect\theta_o=0^o$. \[fig:prb3\](b): $\protect\lambda =387$ $nm$ (on resonance), $\protect\theta_o=66^o$. \[fig:prb3\](c): $\protect\lambda =316$ $nm$ (off resonance), $\protect\theta_o=66^o$. \[fig:prb3\](d): $\protect\lambda=387$ $nm$ (on resonance), $\protect\theta_o=66^o$. The cylinder center is placed at $(0, 192.6)$ nm in \[fig:prb3\](a), \[fig:prb3\](b) and \[fig:prb3\](c), and at $(191.4, 192.6)$ nm in \[fig:prb3\](d). (From Ref. [@yo01c]). []{data-label="fig:prb3"}](prb3.eps){width="9.5cm"}
Plasmon resonances in metallic particles are not as efficiently excited by incident evanescent waves as morphology–dependent resonances in non–absorbing high–refractive–index dielectric particles ([*e.g*]{}., see Refs. [@yo01a; @yo00]). The distance from the particle to the surface must be very small, since the evanescent wave decays in the direction normal to the surface, [*i.e*]{}., normal to its propagation direction along it. In this section we address the same configuration as before, using water as the immersion medium. The critical angle for the glass–water interface is $\theta _{c}=61.28^{o}$. A silver cylinder of radius $a$, with its center at height $d+a$ above the flat portion of the surface, is now studied.
In Figure \[fig:prb3\] we plot the near–field intensity distribution $|H/H_{0}|^{2}$ corresponding to the configuration of the inset in Figure \[fig:ol1\]. A silver cylinder of radius $a=60$ $nm$ scans at constant distance $d=162.6$ $nm$ above the interface. The system is illuminated by a $p$–polarized Gaussian beam ($W=4000$ $nm$) at $\theta _{0}=0^{o}$ and $\lambda =387$ $nm$ ($\epsilon _{2}=-3.22+i0.70$). The surface protrusions are positioned at $X_{0}=\pm 191.4$ $nm$, with height $h=127.6$ $nm$ and $\sigma =63.8$ $nm$. Figure \[fig:prb3\](a) shows the aforementioned distribution when the particle is centered between the protrusions. The plasmon resonance is excited, as manifested by the field enhancement on the cylinder surface, which is higher in its lower portion. At this resonant wavelength, the main Mie coefficient contributor is $n=2$, which can also be deduced from the interference pattern formed along the particle surface: the number of lobes along this surface must be $2n$ [@owen81]. Figure \[fig:prb3\](b) shows the same situation but with $\theta _{0}=66^{o}$. The field intensity close to the particle is higher in Figure \[fig:prb3\](a), because in Figure \[fig:prb3\](b) the distance $d$ is large enough to suppress the resonance excitation, due to the decay of the evanescent wave created by TIR [@yo01a]. However, the field intensity is markedly different from the one shown in Figure \[fig:prb3\](c), in which the wavelength has been changed to $\lambda =316$ $nm$ ($\epsilon _{2}=0.78+i1.07$), so that there is no particle resonance excitation at all. Figure \[fig:prb3\](d) shows the same situation as Figure \[fig:prb3\](b) but at a different $X$–position of the particle.
In Figure \[fig:prb3\](c), the interference in the scattered near field due to the presence of the particle is rather weak; the field distribution is now mainly concentrated at low $z$, as an evanescent wave travelling along the interface, and it does not substantially change as the particle moves over the surface at constant $z$. By contrast, in Figures \[fig:prb3\](b) and \[fig:prb3\](d) the intensity map is strongly perturbed by the presence of the particle. As we shall see, this is the main reason why optical force microscopy is possible at resonant conditions with such small metallic particles used as nanoprobes, and not so efficient at non–resonant wavelengths. In connection with these intensity maps ([*cf*]{}. Figures \[fig:prb3\](b) and \[fig:prb3\](d)), we should point out the interference pattern on the left side of the cylinder, formed between the evanescent wave and the waves strongly reflected from the particle, which in resonant conditions behaves as a strongly radiating antenna [@yo01a; @yo00; @krenn99]. This can also be envisaged as due to the much larger scattering cross section of the particle on resonance, which reflects backwards a higher intensity and thus enhances the interference with the evanescent incident field. The fringe spacing is $\lambda /2$ ($\lambda $ being the corresponding wavelength in water). This is explained as follows: the interference pattern formed by two evanescent waves travelling along the surface in opposite directions, with the same amplitude and no dephasing, is proportional to $\exp (-2\kappa z)\cos ^{2}(n_{1}k_{0}\sin \theta _{0}x)$, with $\kappa =k_{0}(n_{1}^{2}\sin ^{2}\theta _{0}-n_{0}^{2})^{1/2}$. The distance between maxima is $\Delta x=\lambda /(2n_{1}\sin \theta _{0})$.
For the angles of incidence used in this work under TIR ($\theta _{0}=66^{o}$ and $72^{o}$), $\sin \theta _{0}\approx 0.9$, and taking into account the refractive indices of water and glass, one can express this distance as $\Delta x\approx \lambda /2n_{0}$. The quantity $\Delta x$ is similar to the fringe period below the particle in Figure \[fig:prb3\](a), now attributed to the interference between two counterpropagating plane waves, namely, the one transmitted through the interface and the one reflected back from the particle.
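The approximation $\Delta x\approx \lambda /2n_{0}$ can be checked numerically. The sketch below is ours; the indices $n_{1}=1.52$ and $n_{0}=1.33$ are the glass and water values assumed from the permittivities quoted earlier. It evaluates the exact fringe spacing for both angles of incidence and compares it with the $\lambda /2n_{0}$ estimate:

```python
import numpy as np

def fringe_spacing(wavelength, n1, theta0_deg):
    """Period of the intensity maxima formed by two counterpropagating
    evanescent waves on the surface: dx = lambda / (2 * n1 * sin(theta0)),
    with `wavelength` the vacuum wavelength."""
    return wavelength / (2.0 * n1 * np.sin(np.deg2rad(theta0_deg)))

n1, n0 = 1.52, 1.33        # assumed glass and water refractive indices
lam = 387.0                # nm, the resonant wavelength used above
approx = lam / (2.0 * n0)  # the lambda/(2*n0) estimate, ~145 nm
exact = {th: fringe_spacing(lam, n1, th) for th in (66.0, 72.0)}
```

For both angles the exact spacing stays within about ten percent of $\lambda /2n_{0}$, which is why the text treats the two periods as similar.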
![Cartesian components of the electromagnetic force on the cylinder as it scans the surface. Fig. \[fig:prb5\](a): Horizontal force. \[fig:prb5\](b): Vertical force. Solid curves: $\protect\lambda=387$ $nm$ (on resonance); broken curves: $\protect\lambda= 316$ $nm$ (off resonance). Thin lines in \[fig:prb5\](b) show $|H/H_0|^2$ (in arbitrary units), averaged over the perimeter of the cylinder cross section, while it scans the surface. The actual magnitude of the intensity in the resonant case is almost seven times larger than in the non–resonant one. (From Ref. [@yo01c]). []{data-label="fig:prb5"}](prb5.eps){width="\linewidth"}
![Near–field intensity maps for a silver cylinder of radius $a=200$ $nm$. Fig. \[fig:prb8\](a): $\protect\lambda =441$ $nm$ (on resonance), $\protect\theta_o=0^o$ and the cylinder center placed at $(-1276, 642)$ $nm$. \[fig:prb8\](b): $\protect\lambda =441$ $nm$ (on resonance), $\protect\theta_o=66^o$ and the cylinder center placed at $(1276, 642)$ $nm$. \[fig:prb8\](c): $\protect\lambda =316$ $nm$ (off resonance), $\protect\theta_o=66^o$ and the cylinder center placed at $(1276, 642)$ $nm$. (From Ref. [@yo01c]). []{data-label="fig:prb8"}](prb8.eps){width="11cm"}
Figure \[fig:prb5\] shows the variation of the Cartesian components of the electromagnetic force ($F_x$, Fig. \[fig:prb5\](a), and $F_z$, Fig. \[fig:prb5\](b)) on scanning the particle at constant distance $d$ above the interface, either at plasmon resonance excitation ($\lambda=387$ $nm$, solid lines) or off resonance ($\lambda=316$ $nm$, broken lines). The incident beam power (per unit length) is $3.9320 \times 10^{-6}$ $mW/nm$ on resonance, and $3.9327\times 10^{-6}$ $mW/nm$ at $\lambda=316$ $nm$. The incidence is done with a $p$–polarized Gaussian beam of $W=4000$ $nm$ at $\theta_0=66^o$. These curves show that the force distributions resemble the surface topography under resonant conditions, with a signal remarkably larger than off resonance. This feature is especially manifest in the $Z$ component of the force, in which the two protrusions are clearly distinguished from the remaining interference ripples, as explained above. Figure \[fig:prb5\](b) also shows (thin lines) the scan that conventional near–field microscopy would measure in this configuration, namely, the normalized magnetic near–field intensity, averaged over the cylinder cross section. These intensity curves are shown in arbitrary units; in fact, the curve corresponding to plasmon resonant conditions is almost seven times larger than the one off resonance. The force curves show, moreover, that resonant conditions also enhance the contrast of the surface topography image. Thus, the images obtained from the electromagnetic force follow the topography more faithfully than those from the near–field intensity. This is also observed with other profiles, including surface–relief gratings. When the sign of the parameter $h$ is inverted in the interface profile shown on the left of Fig. \[fig:ol1\], the vertical component of the force distribution presents an inverted contrast.
On the whole, one observes from these results that both the positions and sign of the defect height can be distinguished by the optical force scanning.
: Horizontal force. \[fig:prb9\](b): Vertical force. Solid curves: $\protect\lambda= 441$ $nm$ (on resonance), broken curves: $\protect\lambda= 316$ $nm$ (off resonance). Thin solid curves: $\protect\lambda= 441$ $nm$ (on resonance) at $\protect\theta_0=72^o$, thin broken curves: $\protect\lambda= 316$ $nm$ (off resonance) at $\protect\theta_0=72^o$. (From Ref. [@yo01c]). []{data-label="fig:prb9"}](prb9.eps){width="\linewidth"}
Figure \[fig:prb8\] displays near–field intensity maps for a larger particle ($a=200$ $nm$). Figure \[fig:prb8\](a) corresponds to $\theta _{0}=0^{o}$ and a resonant wavelength $\lambda =441$ $nm$ ($\epsilon _{2}=-5.65+i0.75$), with the particle placed to the left of both protrusions. Figure \[fig:prb8\](b) corresponds to $\theta _{0}=66^{o}$ (TIR illumination conditions), at the same resonant wavelength, the particle now being to the right of the protrusions. Figure \[fig:prb8\](c) corresponds to $\theta _{0}=66^{o}$ (TIR incidence), at the non–resonant wavelength $\lambda =316$ $nm$ ($\epsilon _{2}=0.78+i1.07$), the particle being placed to the right of the protrusions. The incident beam is $p$–polarized with $W=4000$ $nm$. The surface protrusions are positioned at $X_{0}=\pm 638$ $nm$ with height $h=425.3$ $nm$ and $\sigma =212.7$ $nm$. All the relevant size parameters are now comparable to the wavelength, and hence to the decay length of the evanescent wave. That is why the plasmon resonance can now no longer be strongly excited. When a non–resonant wavelength is used, the intensity interference fringes due to the presence of the particle are weaker. On the other hand, Figure \[fig:prb8\](a) shows the structure of the near field scattered under $\theta _{0}=0^{o}$. There are three objects that scatter the field: the two protrusions and the particle. They create an interference pattern with period $\lambda /2$ (with $\lambda $ being the wavelength in water). Besides, the particle shows an interference pattern around its surface due to the two counterpropagating plasmon waves which circumnavigate it [@yo01a; @yo00]. The number of lobes along the surface is nine, which reflects that the contribution to the field enhancement at this resonant wavelength comes from Mie’s coefficients $n=5$ and $n=4$. Figure \[fig:prb8\](b) shows weaker excitation of the same plasmon resonance under TIR conditions. 
Now, the interference pattern at the incident side of the configuration is also evident. This pattern again has a period $\lambda /2$ ($\lambda $ being the wavelength in water). If non–resonant illumination conditions are used, the particle is too far from the surface to substantially perturb the transmitted evanescent field; the intensity distribution of this field then remains closely attached to the interface, and it is scattered by the surface protrusions. The field felt by the particle in this situation is not sufficient to yield a well–resolved image of the surface topography, as shown next for this same configuration.
Figure \[fig:prb9\] shows the components of the force ($F_{x}$, Figure \[fig:prb9\](a) and $F_{z}$, Figure \[fig:prb9\](b)) for either plasmon excitation conditions ($\lambda =441$ $nm$, solid lines), or off–resonance ($\lambda =316$ $nm$, broken lines), as the cylinder scans at constant distance $d$ above the surface. The incidence is done with a $p$–polarized Gaussian beam of $W=4000$ $nm$ at either $\theta _{0}=66^{o}$ (thick curves) or $\theta _{0}=72^{o}$ (thin curves). The incident beam power (per unit length) is $3.9313\times 10^{-6}$ $mW/nm$ on resonance and $3.9327\times 10^{-6}$ $mW/nm$ at $\lambda =316$ $nm$ when $\theta _{0}=66^{o}$, and $3.9290\times 10^{-6}$ $mW/nm$ on resonance and $3.9315\times 10^{-6}$ $mW/nm$ at $\lambda =316$ $nm$ when $\theta _{0}=72^{o}$. As before, resonant conditions provide a better image of the surface topography, making the two protrusions distinguishable with a contrast higher than the one obtained without plasmon excitation. The surface image corresponding to the force distribution is better when the protrusions (not shown here) are inverted, because then the particle can be kept closer to the interface. Again, the curve contrast yielded by protrusions and by grooves is inverted with respect to each other. The positions of the force distribution peaks corresponding to the protrusions now appear appreciably shifted with respect to the actual protrusions’ positions. This shift is attributed to the Goos–Hänchen effect of the evanescent wave [@yo01b]. We observe that the distance between these peaks in the $F_{z}$ curve is approximately $2X_{0}$. This shift becomes more noticeable in the force distribution as the probe size increases[^4]. Again, the $F_{z}$ force distribution has a higher contrast at the (shifted) position of the protrusions. The force signal with these bigger particles is larger, but the probe has to be placed farther from the surface in constant-height scanning. This affects the strength of the signal. Finally, it is important to state that the angle of incidence (supposed to be larger than the critical angle $\theta _{c}$) influences both the contrast and the strength of the force: the contrast decreases as the angle of incidence increases. At the same time, the strength of the force signal also diminishes.
As seen in the force figures for both sizes of particles, most curves contain tiny ripples. They are due to the field intensity interference pattern as shown in Figures \[fig:prb3\] and \[fig:prb8\], and discussed above. As the particle moves, the force on it is affected by this interference. As a matter of fact, it can be noted in the force curves that these tiny ripples are mainly present at the left side of the particle, which is the region where stronger interference takes place.
It is worth remarking, however, that these oscillations are less marked in the force distribution ([*cf*]{}. their tiny ripples), than in the near field intensity distribution, where the interference patterns present much higher contrast.
As stated in the previous section, evanescent fields and multiple scattering are fruitful for extracting information in a detection setup. The latter is, at the same time, somewhat troublesome, as it cannot be neglected at will. This inconvenience is well known in NSOM, but it is diminished in PFM, as remarked before. The smoother signal provided by the force is underlined by two facts: one is the averaging process over the particle surface, quantitatively interpreted from the field surface integration involved in Maxwell’s stress tensor. The other is the local character of the force acting at each point of the particle surface. Metallic particles are better candidates as probes for PFM in comparison to dielectric particles, since the force signal is not only enhanced under resonance conditions but is also larger and offers better resolution. However, dielectric particles are preferred when the distance to the interface is large, since the weak evanescent field present at these distances couples better to the whispering–gallery modes than to the surface plasmon waves of metallic particles.
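To make the averaging argument explicit, recall the standard expression for the time-averaged force as a surface integral of Maxwell’s stress tensor over a surface $S$ enclosing the particle (quoted here for reference in SI units for time–harmonic fields in vacuum; in the host medium the permittivity of water enters accordingly):

```latex
\langle \mathbf{F}\rangle=\oint_{S}\langle \overleftrightarrow{T}\rangle\cdot \mathbf{n}\,dS,
\qquad
\langle T_{ij}\rangle=\frac{\epsilon_{0}}{2}\,\mathrm{Re}\!\left[E_{i}E_{j}^{*}+c^{2}B_{i}B_{j}^{*}
-\frac{1}{2}\delta_{ij}\left(|\mathbf{E}|^{2}+c^{2}|\mathbf{B}|^{2}\right)\right].
```

Because the integrand is evaluated over the whole probe surface, rapid spatial oscillations of the near field are partially averaged out, which is the origin of the smoother force signal discussed above.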
On the Attractive and Repulsive Nature of Vertical Forces and Their Orders of Magnitude {#sec:discussion}
---------------------------------------------------------------------------------------
The horizontal forces acting on the particle are scattering forces due to the radiation pressure of both the incident evanescent wave and the field scattered by the protrusions; thus these forces are positive in all the cases studied. As for the vertical forces, two effects compete in determining their sign. The first is the influence of the polarizability [@chaumet00b; @chaumet00a], which depends on the polarization of the illumination. On the other hand, it is well known that an evanescent wave produces only gradient forces in the vertical direction. For silver cylinders, the force at wavelength $\lambda =387$ $nm$ ($\epsilon_{2}=-3.22+i0.70$) and at $\lambda =441$ $nm$ ($\epsilon _{2}=-5.65+i0.75$) must be attractive, while at $\lambda =316$ $nm$ ($\epsilon _{2}=0.78+i1.07$), the real part of the polarizability changes its sign, and so does the gradient force, which thus becomes repulsive (for cylinders of not very large sizes, as here). However, in the cases studied here, not only the multiple scattering of light between the cylinder and the flat portion of the interface, but also the surface defects, produce scattered waves that are both propagating (into $z>0$) and evanescent under TIR conditions. Thus, the scattering forces also contribute to the $z$–component of the force. This affects the sign of the forces, and it becomes more significant as the size of the objects increases. For larger cylinders and defects ([*cf*]{}. Figure \[fig:prb9\]), the gradient force is weaker than the scattering force, thus making $F_{z}$ repulsive on scanning at $\lambda =441$ $nm$ (plasmon excited). On the other hand, for the smaller silver cylinders studied ([*cf*]{}. Figure \[fig:prb5\]), the gradient force is greater than the scattering force at $\lambda =387$ $nm$ (plasmon excited), and thus the force is attractive in this scanning. Also, as the distance between the particle and the surface decreases, the gradient force becomes more attractive [@chaumet00b; @chaumet00a]. This explains the dips and change of contrast in the vertical force distribution on scanning both protrusions and grooves. At $\lambda =316$ $nm$ (no plasmon excited), both scattering and gradient forces act cooperatively in the vertical direction, making the force repulsive regardless of the size of the cylinder. For the silicon cylinder, as shown, the vertical forces acting under TIR conditions are attractive in the absence of surface interaction (for both polarizations and the wavelengths used). However, this interaction is able to turn the vertical force repulsive for $S$ polarization at $\lambda =538$ $nm$, due to the scattering force.
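The sign argument above can be checked with a quick quasi-static estimate. The Python sketch below (an illustration, not part of the original calculation) evaluates the real part of the quasi-static polarizability factor of a cylinder with the field normal to its axis, $(\epsilon_2-\epsilon_1)/(\epsilon_2+\epsilon_1)$, for the three permittivities quoted in the text; the host permittivity $\epsilon_1 \approx 1.77$ (water) is an assumption:

```python
# Quasi-static estimate (illustrative only): the sign of
# Re[(eps2 - eps1)/(eps2 + eps1)] controls whether the gradient force on a
# small cylinder is attractive (positive) or repulsive (negative).
eps1 = 1.77  # host medium (water), assumed value

def polarizability_factor(eps2, eps1=eps1):
    """Quasi-static polarizability factor of a cylinder (field normal to axis)."""
    return (eps2 - eps1) / (eps2 + eps1)

# Permittivities of silver at the three wavelengths quoted in the text.
for lam, eps2 in [(387, -3.22 + 0.70j), (441, -5.65 + 0.75j), (316, 0.78 + 1.07j)]:
    re = polarizability_factor(eps2).real
    kind = "attractive" if re > 0 else "repulsive"
    print(f"lambda = {lam} nm: Re(factor) = {re:+.2f} -> gradient force {kind}")
```

With these numbers, the factor is positive at both resonant wavelengths (387 and 441 $nm$) and negative at 316 $nm$, consistent with the sign change of the gradient force described in the text.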
This study also reveals the dependence of the attractive or repulsive nature of the forces on the size of the objects (probe and defects of the surface), apart from the polarizability of the probe and the distance to the interface, when illumination under total internal reflection is considered. The competition between the strength of the scattering and the gradient force determines this nature.
The order of magnitude of the forces obtained in the preceding 2D calculations is consistent with that of forces in experiments and 3D calculations of Refs. [@pohl93; @guntherodt95; @depasse92; @sugiura93; @kawata92; @dereux94; @girard94; @almaas95; @novotny97; @hecht96; @okamoto99; @chaumet00b] and [@chaumet00a]. Suppose a truncated cylinder with axial length $L=10$ $\mu m$, and a Gaussian beam with $2W\sim 10$ $\mu m$. Then, a rectangular section of $L\times 2W=10^{2}$ $\mu m^{2}$ is illuminated on the interface. For an incident power $P_{0}\sim 1$ $mW$, spread over this rectangular section, the incident intensity is $I_{0}\sim 10^{-2}$ $mW/\mu m^{2}$, and the force range from our calculations is $F\sim 10^{-2}-10^{-1}$ $pN$. Thus, the forces obtained in Figures \[fig:prb9\](b) and \[fig:prb9\](d) are consistent with those presented, for example, in Ref. [@kawata92].
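The order-of-magnitude estimate in the preceding paragraph can be reproduced step by step; the short sketch below (illustrative, using the same nominal numbers as the text) converts the incident power into an intensity over the illuminated strip:

```python
# Order-of-magnitude check of the illuminated area and incident intensity
# quoted in the text (nominal values; illustrative only).
L = 10.0      # axial length of the truncated cylinder, micrometers
two_W = 10.0  # beam width 2W, micrometers
P0 = 1.0      # incident power, mW

area = L * two_W  # illuminated rectangular section, micrometers^2
I0 = P0 / area    # incident intensity, mW / micrometer^2

print(f"area = {area:.0f} um^2, I0 = {I0:.2f} mW/um^2")
# -> area = 100 um^2, I0 = 0.01 mW/um^2
```

This recovers $I_0 \sim 10^{-2}$ $mW/\mu m^2$, the intensity scale behind the quoted force range of $10^{-2}$–$10^{-1}$ $pN$.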
Concluding Remarks & Future Prospects {#sec:concluding}
=====================================
The forces exerted by both propagating and evanescent fields on small particles are the basis for understanding structural characteristics of time–harmonic fields. The simplest evanescent field that can be created is the one we have illustrated in transmission at a dielectric interface when TIR conditions occur. Both dielectric and metallic particles are pushed along the direction of propagation of the evanescent field, independently of their size (scattering and absorption forces). By contrast, forces along the decay direction (gradient forces) behave differently on dielectric and on metallic particles, as studied for dipolar–sized particles in Section \[sec:dipapprox\]. The analysis carried out in the presence of a rough interface, with particles able to interact with it, shows that scattering, absorption and gradient forces act both in the amplitude and phase directions when multiple scattering takes place. Moreover, the excitation of particle resonances enhances this interaction and, at the same time, generates evanescent fields (surface waves) on the particle surface, which makes this mixing among force components even more complex. Thus, an analysis based on an isolated small particle is not feasible due to the high inhomogeneity of the field.
It is, however, this inhomogeneity that provides a way of imaging a surface with structural features, such as topography. The possibility offered by the combination of evanescent fields and Mie resonances is, however, not unique. Since inhomogeneous fields (which can be analytically decomposed into propagating and evanescent components [@nieto-vesperinas91]) play an important role in the mechanical action of the electromagnetic wave on dielectric particles (either on or off resonance), they can be used to operate at the nanometric scale on such entities, for instance to assist the formation of ordered particle structures [@burns89; @burns90; @antonoyiannakis97; @antonoyiannakis99; @malley98; @bayer98; @barnes02], with the help of these resonances. Forces created by evanescent fields on particles, together with morphology–dependent resonances, are the keys to controlling optical binding and the formation of photonic molecules. Also, when a particle is used as a nanodetector, these forces constitute the signal in a scheme of photonic force microscopy, as modeled in this article. It has been shown that evanescent field forces and plasmon resonance excitations permit the manipulation of metallic particles [@novotny97; @chaumet01; @chaumet02], as well as the realization of such microscopy [@yo01c]. Nevertheless, controlled experiments on force magnitudes, due to both evanescent and propagating waves, are still scarce, and it is thus desirable to foster them.
The concepts presented in this article open a wide window for the investigation of soft-matter components, such as cells and molecules in biology. Most folding processes involve small forces that must be detected and controlled without altering the process itself, so that one can actuate on these systems and extract information from them.
General Annotated References {#sec:references .unnumbered}
============================
Complementary information and sources for some of the contents treated in this report can be found in the following bibliography:
- Electromagnetic optics: there are many books where the basis of electromagnetic theory and optics can be found. We cite here the most common: [@jackson75; @bornwolf99]. Maxwell’s stress tensor is analyzed in [@jackson75; @stratton41]. The mathematical level of these textbooks is similar to that of this report.
- Mie theory can be found in [@vandehulst81; @kerker69; @bohren83]. In these textbooks, the optics of particles is developed with little mathematics, and all of them are comparable in contents.
- Resonances can be understood from the textbooks above, but more detailed information, with applications and implications in many topics, can be found in the following references: [@yophd; @hill88; @barber90]. The last two references are mainly centered on dielectric particles. The first one compiles some of the information in these two references together with additional material from scientific papers. Surface plasmons, on the other hand, are studied in depth in [@raether88]. They are easy to understand from general physics.
- Integral equations in scattering theory and angular spectrum representation (for the decomposition of time–harmonic fields in propagating and evanescent components) are treated in [@nieto-vesperinas91]. The mathematical level is similar to the one in this report.
- The Coupled Dipole Method can be found in the scientific papers cited in Section \[sec:CDM\]. More didactic references are [@chaumetphd; @rahmaniphd]. The information given in this report on the CDM is extended in these references.
- The dipolar approximation, in the context of optical forces and evanescent fields, can be complemented with the scientific papers [@gordon73; @chaumet00c; @yo02b] and with the monograph [@novotny00].
- A more detailed discussion of the sign of optical forces for dipolar particles, as well as for larger elongated particles, can be found in the scientific papers [@yo02a; @yo02b; @chaumet00b].
- Monographs on NSOM and tweezers: [@nieto-vesperinas96; @sheetz97].
We thank P. C. Chaumet and M. Lester for work that we have shared through the years. Grants from DGICYT and European Union, as well as a fellowship of J. R. Arias-González from Comunidad de Madrid, are also acknowledged.
[Ba[ñ]{}os 66a]{}
G. S. Agarwal. .Phys. Rev. A, 11, pages 230–242, 1975.
G. S. Agarwal. .Phys. Rev. A, 12, pages 1475–1497, 1975.
M. Allegrini, N. García and O. Marti, editors.Nanometer scale science and technology, Amsterdam, 2001. Società Italiana di Fisica, IOS PRESS.in [*Proc. Int. Sch. E. Fermi, Varenna*]{}.
E. Almaas and I. Brevik. . J. Opt. Soc. Am. B, 12, pages 2429–2438, 1995.
M. Alonso and E. J. Finn. .Addison–Wesley Series in Physics, Addison–Wesley, Reading, MA, 1968.
M. I. Antonoyiannakis and J. B. Pendry. . Europhys. Lett., 40, pages 613–618, 1997.
M. I. Antonoyiannakis and J. B. Pendry. . Phys. Rev. B, 60, pages 2363–2374, 1999.
J. R. Arias-González, M. Nieto-Vesperinas and A. Madrazo. .J. Opt. Soc. Am. A, 16, pages 2928–2934, 1999.
J. R. Arias-González and M. Nieto-Vesperinas. .Opt. Lett., 25, pages 782–784, 2000.
J. R. Arias-González, P. C. Chaumet and M. Nieto-Vesperinas. .In Nanometer Scale Science and Technology, 2001. [@allegrinigarciamarti01].
J. R. Arias-González and M. Nieto-Vesperinas. .J. Opt. Soc. Am. A, 18, pages 657–665, 2001.
J. R. Arias-González, M. Nieto-Vesperinas and M. Lester.. Phys. Rev. B, 65, page 115402, 2002.
J. R. Arias-González and M. Nieto-Vesperinas.. Opt. Lett., submitted, 2002.
J. R. Arias-González and M. Nieto-Vesperinas.. J. Opt. Soc. Am. A, submitted, 2002.
J. R. Arias-González . Universidad Complutense de Madrid, Spain, 2002.
A. Ashkin and J. M. Dziedzic. . Phys. Rev. Lett., 38, pages 1351–1354, 1977.
A. Ashkin, J. M. Dziedzic, J. E. Bjorkholm and S. Chu. .Opt. Lett., 11, pages 288–290, 1986.
A. Ashkin and J. M. Dziedzic. .Appl. Phys. Lett., 24, pages 586–589, 1992.
A. Ba[ñ]{}os.1966.in [@banios66], chapter 2.
A. Ba[ñ]{}os. .Pergamon Press, Oxford, 1966.
P. W. Barber, J. F. Owen and R. K. Chang. .IEEE Trans. Antennas Propagat., 30, pages 168–172, 1982.
P. W. Barber and R. K. Chang, editors. .World Scientific, Singapore, 1988.
P. W. Barber and S. C. Hill. .World Scientific, Singapore, 1990.
M. D. Barnes, S. M. Mahurin, A. Mehta, B. G. Sumpter and D. W. Noid. . Phys. Rev. Lett., 88, page 015508, 2002.
M. Bayer, T. Gutbrod, J. P. Reithmaier, A. Forchel, T. L. Reinecke, P. A. Knipp, A. A. Dremin and V. D. Kulakovskii. . Phys. Rev. Lett., 81, pages 2582–2585 , 1998.
D.S. Benincasa, P.W. Barber, J-Z. Zhang, W-F. Hsieh and R.K. Chang. .Appl. Opt., 26, pages 1348–1356, 1987.
C. F. Bohren and D. R. Huffman. . Wiley–Interscience Publication, New York, 1983.
M. Born and E. Wolf.1999. in [@bornwolf99], section 11.4.2.
M. Born and E. Wolf.1999. in [@bornwolf99], pp 34.
M. Born and E. Wolf. .Cambridge University Press, Cambridge, 7th edition, 1999.
H. C. Bryant and A. J. Cox. .J. Opt. Soc. Am. A, 56, pages 1529–1532, 1966.
M. M. Burns, J.-M. Fournier and J. A. Golovchenco. . Science, 249, pages 749–754, 1990.
M. M. Burns, J.-M. Fournier and J. A. Golovchenco. . Phys. Rev. Lett., 63, pages 1233-1236, 1989.
S. Chang, J. H. Jo and S. S. Lee. .Opt. Commun., 108, pages 133–143, 1994.
P. C. Chaumet and M. Nieto-Vesperinas. .Phys. Rev. B, 61, pages 14119–14127, 2000.
P. C. Chaumet and M. Nieto-Vesperinas. .Phys. Rev. B, 62, pages 11185–11191, 2000.
P. C. Chaumet and M. Nieto-Vesperinas. .Opt. Lett., 25, pages 1065–1067, 2000.
P. C. Chaumet and M. Nieto-Vesperinas. . Phys. Rev. B, 64, page 035422, 2001.
P. C. Chaumet, A. Rahmani and M. Nieto-Vesperinas. . Phys. Rev. Lett., 88, page 123601, 2002.
P. C. Chaumet. . Université de Bourgogne, France, 1998.
H. W. Chew, D.-S. Wang and M. Kerker.. Appl. Opt., 18, 2679, 1979.
A. R. Clapp, A. G. Ruta and R. B. Dickinson. .Rev. Sci. Instr., 70, pages 2627–2636, 1999.
L. Collot, V. Lefèvre-Seguin, M. Brune, J.M. Raimond and S. Haroche. .Europhys. Lett., 23, pages 327–334, 1993.
J. C. Crocker and D. G. Grier. .Phys. Rev. Lett., 73, pages 352–355, 1994.
F. Depasse and D. Courjon.Opt. Commun., 87, 79, 1992.
A. Dereux, C. Girard, O. J. F. Martin and M. Devel. .Europhys. Lett., 26, pages 37–42, 1994.
A. C. Dogariu and R. Rajagopalan..Langmuir, 16, pages 2770–2778, 2000.
B. T. Draine. .Astrophys. J., 333, pages 848–872, 1988.
C. E. Dungey and C. F. Bohren. .J. Opt. Soc. Am. A, 8, pages 81–87, 1991.
T. S. Fahlen and H. C. Bryant. .J. Opt. Soc. Am., 58, pages 304–310, 1968.
E.-L. Florin, J. K. H. Hörber and E. H. K. Stelzer. . Appl. Phys. Lett., 69, pages 446–448, 1996.
L. P. Ghislain and W. W. Webb..Opt. Lett., 18, pages 1678–1680, 1993.
C. Girard, A. Dereux and O. J. F. Martin. .Phys. Rev. B, 49, pages 13872–13881, 1994.
J. P. Gordon. .Phys. Rev. A, 8, pages 14–21, 1973.
J. J. Greffet and R. Carminati..Prog. Surf. Sci., 56, pages 133–235, 1997.
H.-J. Güntherodt, D. Anselmetti and E. Meyer, editors., Dordrecht, 1995. NATO ASI Series, Kluwer Academic Publishing.
B. Hecht, H. Bielefeldt, L. Novotny, Y. Inouye and D. W. Pohl. .Phys. Rev. Lett., 77, pages 1889–1892, 1996.
S. C. Hill and R. E. Benner. Morphology–dependent resonances, chapitre 1.World Scientific, 1988. [@barber88].
J. K. H. Hörber. .In Nanometer Scale Science and Technology, 2001. [@allegrinigarciamarti01].
J. Hotta, K. Sasaki, H. Masuhara and Y. Morishima. .J. Phys. Chem. B, 102, pages 7687–7690, 1998.
J. D. Jackson. .Wiley–Interscience Publication, New York, 2nd edition, 1975.
S. Kawata and T. Sugiura. .Opt. Lett., 17, pages 772–774, 1992.
S. Kawata and T. Tani. . Opt. Lett., 21, pages 1768–1770, 1996.
S. Kawata, editor. Near-field Optics and Surface Plasmon Polaritons, Topics in Applied Physics, Springer–Verlag, Berlin, 2000.
O. Keller, M. Xiao and S. Bozhevolnyi. .Surf. Sci., 280, pages 217–230, 1993.
M. Kerker. . Academic Press, New York, 1969.
V. Khare and H. M. Nussenzveig. .Phys. Rev. Lett., 38, pages 1279–1282, 1977.
T. Klar, M. Perner, S. Grosse, G.V. Plessen, W. Spirkl and J. Feldmann. .Phys. Rev. Lett., 80, pages 4249–4252, 1998.
J.C. Knight, N. Dubreuil, V. Sandoghdar, J. Hare, V. Lefèvre-Seguin, J.M. Raimond and S. Haroche. . Opt. Lett., 20, pages 1515–1517, 1995.
J. R. Krenn, A. Dereux, J. C. Weeber, E. Bourillot, Y. Lacroute, J. P. Goudonnet, G. Schider, W. Gotschy, A. Leitner, F. R. Aussenegg and C. Girard. . Phys. Rev. Lett., 82, pages 2590–2593, 1999.
M. Lester and M. Nieto-Vesperinas.. Opt. Lett., 24, pages 936–938, 1999.
M. Lester, J. R. Arias-González and M. Nieto-Vesperinas. .Opt. Lett., 26, pages 707–709, 2001.
L. E. Malley, D. A. Pommet and M. A. Fiddy. . J. Opt. Soc. Am. B, 15, pages 1590–1595, 1998.
L. Mandel and E. Wolf. .Cambridge University Press, Cambridge, 1995.
H. Misawa, M. Koshioka, K. Sasaki, N. Kitamura and H. Masuhara. .J. Appl. Phys., 70, pages 3829–3836, 1991.
H. Misawa, K. Sasaki, M. Koshioka, N. Kitamura and H. Masuhara. .Appl. Phys. Lett., 60, pages 310–312, 1992.
P. M. Morse and H. Feshbach..McGraw–Hill, New York, 1953.
M. Nieto-Vesperinas. .John Wiley & Sons, Inc, New York, 1991.
M. Nieto-Vesperinas and N. García, editors., Dordrecht, 1996. NATO ASI Series, Kluwer Academic Publishing.
L. Novotny, R. X. Bian and X. S. Xie. .Phys. Rev. Lett., 79, pages 645–648, 1997.
L. Novotny. . In Near-field Optics and Surface Plasmon Polaritons, Topics in Applied Physics, 81, pages 123–141, 2000. [@kawata00].
K. Okamoto and S. Kawata. . Phys. Rev. Lett., 83, pages 4534–4537, 1999.
J. F. Owen, R. K. Chang and P. W. Barber..Opt. Lett., 6, pages 540–542, 1981.
M. A. Paesler and P. J. Moyer. .John Wiley & Sons, Inc, New York, 1996.
D. N. Pattanayak and E. Wolf. .Phys. Rev. E, 13, pages 2287–2290, 1976.
D. N. Pattanayak and E. Wolf. .Phys. Rev. E, 13, pages 913–923, 1976.
D. W. Pohl and D. Courjon, editors., Dordrecht, 1993. NATO ASI Series, Kluwer Academic Publishing.
A. Pralle, E.-L. Florin, E. H. K. Stelzer and J. K. H. Hörber. .Appl. Phys. A, 66, pages S71–S73, 1998.
A. Pralle, M. Prummer, E.-L. Florin, E. H. K. Stelzer and J. K. H. Hörber. .Microsc. Res. Tech., 44, pages 378–386, 1999.
D. C. Prieve and J. Y. Walz. .Appl. Opt.-LP, 32, 1629, 1993.
E. M. Purcell and C. R. Pennypacker. .Astrophys. J., 186, pages 705–714, 1973.
H. Raether.. Springer–Verlag, Berlin Heidelberg, 1988.
A. Rahmani and F. de Fornel. . Eyrolles and France Télécom-CNET, Paris, 2000.
K. Sasaki, M. Koshioka, H. Misawa, N. Kitamura and H. Masuhara. .Opt. Lett., 16, pages 1463–1465, 1991.
K. Sasaki, M. Tsukima and H. Masuhara. .Appl. Phys. Lett., 71, pages 37–39, 1997.
K. Sasaki, J. Hotta, K. Wada and H. Masuhara. .Opt. Lett., 25, pages 1385–1387, 2000.
M. P. Sheetz, editor. .Academic Press, San Diego, CA, 1997.
S. B. Smith, Y. Cui and C. Bustamante..Science, 271, pages 795–799, 1996.
A. L. Stout and W. W. Webb. .Methods Cell Biol., 55, 99, 1997. in [@sheetz97].
J. A. Stratton. .McGraw–Hill, New York, 1941.
T. Sugiura and S. Kawata. .Bioimaging, 1, pages 1–5, 1993.
K. Svoboda and S. M. Block. .Opt. Lett., 19, pages 13–15, 1994.
T. Tamir. .Optik, 36, pages 209–232, 1972.
T. Tamir. .Optik, 37, pages 204–228, 1972.
H. C. van de Hulst. .Dover, New York, 1981.
M. Vilfan, I. Mus[ě]{}vi[č]{} and M. [Č]{}opi[č]{}. .Europhys. Lett., 43, pages 41–46, 1998.
K. Wada, K. Sasaki and H. Masuhara. .Appl. Phys. Lett., 76, pages 2815–2817, 2000.
J. Y. Walz. .Appl. Opt., 38, pages 5319–5330, 1999.
D. S. Weiss, V. Sandoghdar, J. Hare, V. Lefèvre-Seguin, J.M. Raimond and S. Haroche. .Opt. Lett., 20, pages 1835–1837, 1995.
A. D. Yaghjian. .Proc. IEEE, 68, pages 248–263, 1980.
[^1]: mnieto@@icmm.csic.es
[^2]: ricardo.arias@@imdea.org
[^3]: NSOM is also called SNOM, abbreviation for scanning near–field optical microscopy.
[^4]: For a better picture of this shift, see the grating case in Ref. [@yo01b].
|
{
"pile_set_name": "arxiv"
}
|
Marianna Csörnyei
Marianna Csörnyei (born October 8, 1975 in Budapest) is a Hungarian mathematician who works as a professor at the University of Chicago. She does research in real analysis, geometric measure theory, and geometric nonlinear functional analysis. She proved the equivalence of the main notions of measure-zero sets in infinite-dimensional Banach spaces.
Education and career
Csörnyei received her doctorate from Eötvös Loránd University in 1999, supervised by György Petruska. She was a professor at the Mathematics Department of University College London between 1999–2011, and spent the 2009–2010 academic year at Yale University as visiting professor. Currently, she is at the University of Chicago.
She is contributing editor of the mathematical journal Real Analysis Exchange.
Awards and honors
Csörnyei won a 2002 Whitehead Prize and a Royal Society Wolfson Research Merit Award that same year.
She was also awarded the Philip Leverhulme Prize for Mathematics and Statistics in 2008 for her work in geometric measure theory.
She was an invited sectional speaker at the International Congress of Mathematicians, in 2010.
External links
Csörnyei's faculty page at the University of Chicago
References
Category:1975 births
Category:Living people
Category:21st-century Hungarian mathematicians
Category:Mathematical analysts
Category:Academics of University College London
Category:Royal Society Wolfson Research Merit Award holders
Category:Whitehead Prize winners
Category:21st-century women mathematicians
|
{
"pile_set_name": "wikipedia_en"
}
|
/** @file
Intel Processor Power Management ACPI Code.
Copyright (c) 2018 - 2019, Intel Corporation. All rights reserved.<BR>
SPDX-License-Identifier: BSD-2-Clause-Patent
**/
#include "CpuPowerMgmt.h"
DefinitionBlock (
  "CPU0PSD.aml",
  "SSDT",
  0x02,
  "PmRef",
  "Cpu0Psd",
  0x3000
  )
{
  External(\PC00, IntObj)
  External(\TCNT, FieldUnitObj)
  External(\_SB.CFGD, FieldUnitObj)
  External(\_SB.PR00, DeviceObj)

  Scope(\_SB.PR00)
  {
    Name(HPSD, Package() // HW_ALL
    {
      Package() {
        5,    // NumEntries. Current Value is 5.
        0,    // Revision. Current Value is 0.
        0,    // Domain.
        0xFE, // Coordination type 0xFE = HW_ALL
        0x80  // Number of processors.
      }
    })

    Name(SPSD, Package() // SW_ALL
    {
      Package() {
        5,    // NumEntries. Current Value is 5.
        0,    // Revision. Current Value is 0.
        0,    // Domain.
        0xFC, // Coordination type 0xFC = SW_ALL
        0x80  // Number of processors.
      }
    })

    //
    // The _PSD object provides information to the OSPM related
    // to P-State coordination between processors in multi-processor
    // configurations.
    //
    Method(_PSD, 0)
    {
      If (And(\_SB.CFGD, PPM_TURBO_BOOST_MAX)) // Intel Turbo Boost Max 3.0
      {
        Store (0, Index(DerefOf(Index(HPSD, 0)), 2)) // Domain
        Store (1, Index(DerefOf(Index(HPSD, 0)), 4)) // Number of processors belonging to the domain.
      } Else {
        Store (TCNT, Index(DerefOf(Index(HPSD, 0)), 4))
        Store (TCNT, Index(DerefOf(Index(SPSD, 0)), 4))
      }
      If (And(PC00, 0x0800)) // If Hardware co-ordination of P states
      {
        Return(HPSD)
      }
      Return(SPSD)
    }
  } // End of Scope(\_SB.PR00)
} // End of Definition Block
|
{
"pile_set_name": "github"
}
|
Manhattan Airport Foundation
The Manhattan Airport Foundation is a parody advocacy organization lobbying, as part of a hoax, for the development of an international airport replacing Central Park between 59th Street and 110th Street in Manhattan. The Foundation claims to have been founded in 2006 and to be composed of members of civic, environmental and community groups, as well as elected officials and city and state agencies.
The Foundation states that its proposed 'Manhattan International Airport' would be the largest public works project undertaken in New York since the creation of Central Park, and that, once built, the airport would provide a much-needed international air hub offering vital transportation access to individuals living and working in the center of Manhattan.
See also
Aviation in the New York metropolitan area
References
External links
The Manhattan Airport Foundation website
Curbed
Monogocoro
Gothamist
Treehugger
U.S. News & World Report
Category:Hoaxes in the United States
Category:Parodies
Category:Central Park
|
{
"pile_set_name": "wikipedia_en"
}
|
---
abstract: 'We analyze heat and charge transport through a single-level quantum dot coupled to two BCS superconductors at different temperatures to first order in the tunnel coupling. In order to describe the system theoretically, we extend a real-time diagrammatic technique that allows us to capture the interplay between superconducting correlations, strong Coulomb interactions and nonequilibrium physics. We find that a thermoelectric effect can arise due to the superconducting proximity effect on the dot. In the nonlinear regime, the thermoelectric current can also flow at the particle-hole symmetric point due to a level renormalization caused by virtual tunneling between the dot and the leads. The heat current through the quantum dot is sensitive to the superconducting phase difference. In the nonlinear regime, the system can act as a thermal diode.'
author:
- Mathias Kamp
- Björn Sothmann
title: 'Phase-dependent heat and charge transport through superconductor-quantum dot hybrids'
---
\[sec:intro\]Introduction
=========================
Understanding, manipulating and managing heat flows at the nanoscale is of crucial importance for modern electronics, where Joule heating constitutes a major nuisance in the operation of computer chips. Heat transport can occur via electrons [@giazotto_opportunities_2006], phonons [@li_colloquium:_2012] and photons [@meschke_single-mode_2006; @ronzani_tunable_2018]. A promising direction to achieve control over thermal transport by electrons is phase-coherent caloritronics [@martinez-perez_coherent_2014; @fornieri_towards_2017] in superconducting circuits. Phase-coherent caloritronics is based on the observation that not only does the charge current depend on the phase difference across the junction via the Josephson effect [@josephson_possible_1962], but the heat current is also sensitive to the phase difference [@maki_entropy_1965; @maki_entropy_1966; @guttman_phase-dependent_1997; @guttman_thermoelectric_1997; @guttman_interference_1998; @zhao_phase_2003; @zhao_heat_2004]. The phase-dependent contribution to the heat current arises from Andreev-like processes in which an incident electronlike quasiparticle above the superconducting gap is reflected as a holelike quasiparticle and vice versa.
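Schematically, in the Maki–Griffin line of work cited above, the heat current carried by quasiparticles across a thermally biased Josephson junction splits into a phase-independent quasiparticle term and a phase-dependent interference term (the precise coefficients depend on the junction; the form below is quoted for orientation only):

```latex
J(\varphi) = J_{\text{qp}}(T_\text{L}, T_\text{R}) - J_{\text{int}}(T_\text{L}, T_\text{R}) \cos\varphi ,
```

so that the thermal conductance of the junction can be modulated by the phase difference $\varphi$, which is the working principle of the caloritronic devices discussed next.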
Recently, phase-coherent heat transport in superconducting circuits has been observed experimentally [@giazotto_josephson_2012]. The possibility to control heat currents via magnetic fields has led to a number of proposals for phase-coherent caloritronic devices such as heat interferometers [@giazotto_phase-controlled_2012; @martinez-perez_fully_2013] and diffractors [@giazotto_coherent_2013; @guarcello_coherent_2016], thermal rectifiers [@giazotto_thermal_2013; @martinez-perez_efficient_2013; @fornieri_normal_2014; @fornieri_electronic_2015], transistors [@giazotto_proposal_2014; @fornieri_negative_2016], switches [@sothmann_high-efficiency_2017] and circulators [@hwang_phase-coherent_2018], thermometers [@giazotto_ferromagnetic-insulator-based_2015; @guarcello_non-linear_2018] as well as heat engines [@marchegiani_self-oscillating_2016; @hofer_autonomous_2016; @vischi_coherent_2018] and refrigerators [@solinas_microwave_2016; @marchegiani_-chip_2017]. Experimentally, heat interferometers [@giazotto_josephson_2012; @fornieri_nanoscale_2016; @fornieri_0_2017], the quantum diffraction of heat [@martinez-perez_quantum_2014], thermal diodes [@martinez-perez_rectification_2015] and a thermal router [@timossi_phase-tunable_2018] have been realized so far. Apart from potential applications in caloritronic and thermal logic [@paolucci_phase-tunable_2018], phase-coherent heat transport can also serve as a diagnostic tool that allows one, e.g., to probe the existence of topological Andreev bound states [@sothmann_fingerprint_2016].
![\[fig:model\]Schematic sketch of our setup. A single-level quantum dot is tunnel coupled to two superconducting electrodes at temperatures $T_\text{L}$ and $T_\text{R}$.](Systemsetup.pdf){width="\columnwidth"}
So far, the theoretical and experimental investigation of phase-coherent heat transport has been restricted to systems such as tunnel barriers and point contacts where the effects of electron-electron interactions can be neglected. While such setups already offer a lot of interesting physics, this raises the question of how Coulomb interactions can affect phase-dependent heat currents. In this paper, we address this important question by analyzing phase-coherent heat and charge transport through a thermally biased hybrid structure consisting of a strongly interacting single-level quantum dot tunnel coupled to superconducting electrodes, cf. Fig. \[fig:model\].
Superconductor-quantum dot hybrids have received a lot of attention, see Ref. [@de_franceschi_hybrid_2010] and [@martin-rodero_josephson_2011] for recent reviews on experiments and theory, respectively. In particular, there are investigations of the Josephson effect through quantum dots [@van_dam_supercurrent_2006; @jarillo-herrero_quantum_2006; @jorgensen_critical_2007; @baba_superconducting_2015; @szombati_josephson_2016; @probst_signatures_2016], multiple Andreev reflections [@levy_yeyati_resonant_1997; @buitelaar_multiple_2003; @cuevas_full_2003; @nilsson_supercurrent_2011; @rentrop_nonequilibrium_2014; @hwang_hybrid_2016], the interplay between superconducting correlations and the Kondo effect [@clerk_loss_2000; @buitelaar_quantum_2002; @avishai_superconductor-quantum_2003; @eichler_even-odd_2007; @lopez_josephson_2007; @karrasch_josephson_2008], the generation of unconventional superconducting correlations in quantum dots [@sothmann_unconventional_2014; @kashuba_majorana_2017; @weiss_odd-triplet_2017; @hwang_odd-frequency_2017], Cooper pair splitting [@recher_andreev_2001; @hofstetter_cooper_2009; @herrmann_carbon_2010; @hofstetter_finite-bias_2011; @das_high-efficiency_2012; @schindele_near-unity_2012] and the generation of Majorana fermions [@leijnse_parity_2012; @sothmann_fractional_2013; @fulga_adaptive_2013; @deng_majorana_2016]. Thermoelectric effects in superconductor-quantum dot hybrids have been studied in the absence of Coulomb interactions [@kleeorin_large_2016]. Here, we use a superconductor-quantum dot hybrid as a playground to investigate the interplay between superconductivity, strong Coulomb interactions and thermal nonequilibrium. Compared to tunnel junctions, quantum dots offer additional tunability of their level position by gate voltages. 
We extend a real-time diagrammatic approach [@konig_zero-bias_1996; @konig_resonant_1996; @schoeller_transport_1997; @konig_quantum_1999; @governale_real-time_2008; @governale_erratum:_2008] to describe thermally-driven transport which allows us to treat Coulomb interactions exactly and to perform a systematic expansion in the tunnel coupling between the dot and the superconducting leads. It allows for a treatment of superconducting correlations induced on the dot via the proximity effect and captures renormalization effects due to virtual tunneling which affect transport already in lowest order of perturbation theory. We evaluate charge and heat currents both in linear and nonlinear response. In particular, we find a thermoelectric effect in the vicinity of the particle-hole symmetric point which arises from the proximity effect. Furthermore, our device can act as an efficient thermal diode in nonlinear response.
The paper is organized as follows. In Sec. \[sec:model\], we introduce the model of our setup. The real-time diagrammatic transport theory used to investigate transport is introduced in Sec. \[sec:method\]. We present the results of our analysis in Sec. \[ssec:linear\] for the linear and in Sec. \[ssec:nonlinear\] for the nonlinear transport regime. Conclusions are drawn in Sec. \[sec:conclusion\].
\[sec:model\]Model
==================
We consider a single-level quantum dot weakly tunnel coupled to two conventional superconducting electrodes. Both superconductors are kept at the same chemical potential $\mu=0$ but at different temperatures $T_\text{L}$ and $T_\text{R}$ resulting in a nonequilibrium situation. The system is described by the total Hamiltonian $$H=\sum_{\eta=\text{L,R}}\left(H_\eta+H_{\text{tun},\eta}\right)+H_\text{dot},$$ where $\eta$ denotes the left (L) and right (R) superconductor. The superconducting leads are characterized by the mean-field BCS Hamiltonian $$\label{eq:BCS}
H_\eta=\sum_{{\mathbf{k}}\sigma} \varepsilon_{\eta{\mathbf{k}}} a^\dagger_{\eta{\mathbf{k}}\sigma} a_{\eta{\mathbf{k}}\sigma}+\Delta_\eta e^{i\phi_\eta }\sum_{{\mathbf{k}}}a_{\eta -{\mathbf{k}}{\uparrow}}a_{\eta {\mathbf{k}}{\downarrow}}+\text{H.c.},$$ where $a_{\eta{\mathbf{k}}\sigma}^\dagger$ ($a_{\eta{\mathbf{k}}\sigma}$) denotes the creation (annihilation) operator of an electron with momentum ${\mathbf{k}}$, spin $\sigma$ and kinetic energy $\varepsilon_{\eta {\mathbf{k}}}$ in lead $\eta$. The second term on the right-hand side of Eq. describes the BCS pair interaction on a mean-field level. The two superconducting order parameters are characterized by their absolute value $\Delta_\eta$ and their phase $\phi_\eta$. The temperature dependence of $\Delta_\eta$ is determined by the solution of the self-consistency equation for the order parameter which can be found only numerically. However, it can be approximated with an accuracy of better than 2% by $$\Delta_\eta(T_\eta)=\Delta_{0} \tanh \left(1.74 \sqrt{\frac{T_{c}}{T_\eta}-1}\right),$$ in the whole temperature range from 0 to the critical temperature $T_{c}$. The latter is connected to the superconducting order parameter at zero temperature via ${k_\text{B}}T_{c}\approx 0.568 \Delta_{0}$.
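The tanh interpolation formula for $\Delta_\eta(T_\eta)$ is straightforward to evaluate numerically. The following is a minimal sketch in units where $\Delta_0=1$ and $k_\text{B}=1$; the function and variable names are ours, purely for illustration:

```python
import math

def bcs_gap(T, Delta0=1.0):
    """Approximate BCS gap Delta(T) from the tanh interpolation formula.

    Units: energies in Delta0, temperatures in Delta0/kB (kB = 1).
    The critical temperature follows from kB*Tc ~ 0.568*Delta0.
    """
    Tc = 0.568 * Delta0
    if T <= 0.0:
        return Delta0            # zero-temperature gap
    if T >= Tc:
        return 0.0               # normal state above Tc
    return Delta0 * math.tanh(1.74 * math.sqrt(Tc / T - 1.0))
```

The gap stays close to $\Delta_0$ up to roughly $T_c/2$ and then drops steeply to zero at $T_c$; this strong temperature dependence is what is exploited later for thermal rectification.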
The single-level quantum dot is described by the Hamiltonian $$H_\text{dot}=\sum_\sigma \varepsilon c_\sigma^\dagger c_\sigma+U c_{\uparrow}^\dagger c_{\uparrow}c_{\downarrow}^\dagger c_{\downarrow}.$$ While the first term describes the energy of the dot level $\varepsilon$ that can be tuned by applying a gate voltage, the second term denotes the Coulomb interaction that has to be supplied in order to occupy the dot with two electrons at the same time. We remark that the dot spectrum is particle-hole symmetric at $\varepsilon=-U/2$. For later convenience, we introduce the detuning $\delta=2\varepsilon+U$ from the particle-hole symmetric point.
The tunneling Hamiltonian which couples the dot to the superconducting leads is given by $$H_\text{tun}=\sum_{\eta {\mathbf{k}}\sigma}t_\eta a_{\eta{\mathbf{k}}\sigma}^\dagger c_\sigma+\text{H.c.}$$ Here, $t_\eta$ denotes a tunnel matrix element which we assume to be energy and momentum independent. It is connected to the tunnel coupling strength $\Gamma_\eta=2\pi|t_\eta|^2\rho_\eta$ where $\rho_\eta$ denotes the density of states of lead $\eta$ in the normal state.
\[sec:method\]Real-time diagrammatic transport theory
=====================================================
In order to describe transport through the quantum-dot setup, we make use of a real-time diagrammatic technique [@konig_zero-bias_1996; @konig_resonant_1996; @schoeller_transport_1997; @konig_quantum_1999] for systems with superconducting leads with a finite gap [@governale_real-time_2008; @governale_erratum:_2008]. It allows us to treat nonequilibrium physics, superconducting correlations and strong Coulomb interactions exactly while performing a systematic expansion in the dot-lead couplings. In the following, we are going to extend this diagrammatic framework to allow for the calculation of thermally-driven charge and heat currents through quantum dot-superconductor hybrids on equal footing.
The central idea of the diagrammatic approach is to integrate out the noninteracting leads and to describe the remaining quantum dot system by its reduced density matrix. The reduced density matrix $\rho_\text{red}$ has matrix elements $P^{\chi_1}_{\chi_2}={\langle \chi_1|}\rho_\text{red}{|\chi_2\rangle}$. For the system under investigation, the nonvanishing density matrix elements are given by the probability to find the quantum dot empty, $P_0$, occupied with a single electron with spin $\sigma$, $P_\sigma$, or doubly occupied, $P_d$. Furthermore, the coupling to the superconductors gives rise to finite off-diagonal density matrix elements $P^d_0$ and $P^0_d$ that describe the coherent superposition of the dot being empty and occupied with two electrons. The generation of these coherent superpositions is a hallmark of the superconducting proximity effect on the quantum dot.
The time evolution of the reduced density matrix is given by the generalized master equation which in the stationary limit reads $$0=-i(E_{\chi_1}-E_{\chi_2})P^{\chi_1}_{\chi_2}+\sum_{\chi_1'\chi_2'}W^{\chi_1\chi_1'}_{\chi_2\chi_2'}P^{\chi_1'}_{\chi_2'},$$ where $E_\chi$ is the energy of the many-body dot state $\chi$. The first term describes the coherent evolution of the dot states. The second term arises due to the dissipative coupling to the superconductors. The generalized transition rates $W^{\chi_1\chi_1'}_{\chi_2\chi_2'}$ are obtained from irreducible self-energy diagrams of the dot propagator on the Keldysh contour [@governale_real-time_2008; @governale_erratum:_2008], cf. also Appendix \[app:RTD\] for a detailed explanation of the connection between diagrams and physical processes. By expanding both the density matrix elements as well as the generalized transition rates up to first order in the tunnel couplings, we find that the coherent superpositions $P^0_d$ and $P^d_0$ are finite to lowest order in $\Gamma_\eta$ only if the empty and doubly occupied dot states are nearly degenerate, $\delta\lesssim\Gamma_\eta$ [@sothmann_influence_2010]. For this reason, we are going to restrict ourselves to the analysis of transport in the vicinity of the particle-hole symmetric point to first order in the tunnel coupling in the following.
The generalized master equation can be brought into a physically intuitive form by introducing the probabilities to find the dot occupied with an even and odd number of electrons, $${\mathbf{P}}=\left(\begin{array}{c} P_\text{e}\\P_\text{o} \end{array}\right)=\left(\begin{array}{c} P_0+P_d \\ P_{\uparrow}+P_{\downarrow}\end{array}\right),$$ as well as a pseudospin degree of freedom that characterizes the coherences between empty and doubly occupied dot and, thus, the superconducting proximity effect on the quantum dot $$\begin{aligned}
I_x&=\frac{P^0_d+P^d_0}{2},\\
I_y&=i\frac{P^0_d-P^d_0}{2},\\
I_z&=\frac{P_0-P_d}{2}.\end{aligned}$$
The generalized master equation can be decomposed into one set of equations that arises from the time evolution of the dot occupations and another set due to the pseudospin. The former is given by $$\label{eq:MEP}
0
=\sum_\eta \left[\left(\begin{array}{cc} -Z^-_\eta & Z^+_\eta \\ Z^-_\eta & -Z^+_\eta \end{array}\right){\mathbf{P}}
+
\left(\begin{array}{c} 4X^-_\eta \\ -4X^-_\eta \end{array}\right){\mathbf{I}}\cdot {\mathbf{n}}_\eta\right],$$ where $$X^\pm_\eta=\pm\frac{\Gamma_\eta}{\hbar}\frac{\Delta_\eta \Theta(U/2-\Delta_\eta)}{\sqrt{(U/2)^2-\Delta_\eta^2}}f_\eta(\pm U/2),$$ $$Z^\pm_\eta=\frac{\Gamma_\eta}{\hbar}\frac{U\Theta(U/2-\Delta_\eta)}{\sqrt{(U/2)^2-\Delta_\eta^2}}f_\eta(\pm U/2),$$ with the Fermi function $f_\eta(\omega)=[\exp(\omega/({k_\text{B}T}_\eta))+1]^{-1}$. ${\mathbf{n}}_\eta=(\cos\phi_\eta,\sin\phi_\eta,0)$ denotes a unit vector whose direction is determined by the phase of the superconducting order parameters. Interestingly, in Eq. the dot occupations are coupled to the pseudospin degree of freedom. This is in direct analogy to the case of a quantum dot weakly coupled to ferromagnetic electrodes where the dot occupations are linked to the spin accumulation in the dot [@konig_interaction-driven_2003; @braun_theory_2004]. The second set of equations is given by a Bloch-type equation for the pseudospin, $$ 0
=\left(\frac{d{\mathbf{I}}}{dt}\right)_\text{acc}-\frac{{\mathbf{I}}}{\tau_\text{rel}}+{\mathbf{I}}\times{\mathbf{B}}.$$ The first term, $$\left(\frac{d{\mathbf{I}}}{dt}\right)_\text{acc}=\sum_\eta \left(X^-_\eta P_\text{e}+X^+_\eta P_\text{o}\right){\mathbf{n}}_\eta,$$ describes the accumulation of pseudospin on the dot due to tunneling in and out of electrons. The second term characterizes the relaxation of the pseudospin due to electron tunneling on a time scale given by $\tau_\text{rel}^{-1}=\sum_\eta Z^-_\eta$. Finally, the last term gives rise to a precession of the pseudospin in an effective exchange field, $${\mathbf{B}}=B_\text{L}{\mathbf{n}}_\text{L}+B_\text{R}{\mathbf{n}}_\text{R}+\delta {\mathbf{e}}_z,$$ which arises from virtual charge fluctuations on the dot as well as from a detuning away from the particle-hole symmetric point. The exchange field contribution from the two leads is given by $$\label{eq:Bex}
B_\eta=\frac{2\Gamma_\eta}{\pi\hbar}\int'd\omega\frac{\Delta_\eta\Theta(|\omega|-\Delta_\eta)}{\sqrt{\omega^2-\Delta_\eta^2}}\frac{f_\eta(\omega)}{\omega+U/2}\operatorname{sign}\omega,$$ where the prime indicates the principal value. The integral can be solved analytically as an infinite sum over Matsubara frequencies, see Appendix \[app:Bex\] for details. The interplay of pseudospin accumulation, pseudospin relaxation and pseudospin precession in the exchange field leads to a nontrivial pseudospin dynamics on the dot which acts back on the dot occupations via Eq. . It is this nontrivial pseudospin behavior that gives rise to interesting transport properties of the system under investigation.
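For orientation, the rates $X^\pm_\eta$ and $Z^\pm_\eta$ entering the master equation above can be sketched in code. This is a minimal illustration in units $\hbar=k_\text{B}=1$, not the full master-equation solver; the Heaviside factor $\Theta(U/2-\Delta)$ encodes that sequential tunneling requires quasiparticle states at the addition energy $U/2$ outside the gap:

```python
import math

def fermi(E, T):
    """Fermi function f(E) at temperature T (kB = 1)."""
    return 1.0 / (math.exp(E / T) + 1.0)

def rates(Gamma, Delta, U, T):
    """Rates (X+, X-, Z+, Z-) for one lead, in units hbar = kB = 1.

    All rates carry the BCS factor 1/sqrt((U/2)^2 - Delta^2) and
    vanish when the addition energy U/2 lies inside the gap.
    """
    if U / 2 <= Delta:
        return 0.0, 0.0, 0.0, 0.0   # transport blocked by the gap
    dos = 1.0 / math.sqrt((U / 2) ** 2 - Delta ** 2)
    Xp = Gamma * Delta * dos * fermi(U / 2, T)
    Xm = -Gamma * Delta * dos * fermi(-U / 2, T)
    Zp = Gamma * U * dos * fermi(U / 2, T)
    Zm = Gamma * U * dos * fermi(-U / 2, T)
    return Xp, Xm, Zp, Zm
```

In the normal-state limit $\Delta\to0$ the anomalous rates $X^\pm_\eta$ vanish, while the ratio $Z^+_\eta/Z^-_\eta=e^{-U/(2k_\text{B}T_\eta)}$ recovers detailed balance between the even and odd charge sectors.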
The charge on the quantum dot is related to the $z$ component of the pseudospin via $Q_\text{dot}=e(1-2I_z)$. This allows us to connect the time evolution of $I_z$ directly to the charge current flowing between the dot and lead $\eta$ via $$\label{eq:Ic}
I^e_\eta=-2e(Z_\eta^-I_z-I_xB_{\eta,y}+I_yB_{\eta,x}).$$ We remark that the real-time diagrammatic approach conserves charge currents automatically. Therefore, we define $I^e=I^e_\text{L}=-I^e_\text{R}$ in the following. In analogy to the charge, we can relate the average dot energy to the probability to find the dot with an odd occupation, $E_\text{dot}=-UP_\text{o}/2$, to derive for the heat current between the dot and lead $\eta$ $$I^h_\eta=-\frac{U}{2}\left(Z^+_\eta P_\text{o}-Z^-_\eta P_\text{e}+4X^-_\eta {\mathbf{I}}\cdot{\mathbf{n}}_\eta\right).$$ We remark that in the absence of any bias voltage there is no Joule heating and, hence, heat and energy currents are equal to each other. This implies that heat currents are conserved such that we can define $I^h=I^h_\text{L}=-I^h_\text{R}$.
\[sec:results\]Results
======================
In this section, we are going to analyze the charge and heat currents flowing through the system in response to an applied temperature bias. We will first focus on the linear-response regime and then turn to a discussion of nonlinear transport.
\[ssec:linear\]Linear response
------------------------------
For the sake of concreteness, we consider a symmetric quantum-dot setup. To this end, we define the temperatures of the superconducting leads as $T_\eta=T+\Delta T_\eta$ with the reference temperature $T$ and the temperature bias $\Delta T_\text{L}=-\Delta T_\text{R}\equiv \Delta T/2$. The tunnel couplings are chosen equal, $\Gamma_\text{L}=\Gamma_\text{R} \equiv\Gamma/2$. Furthermore, we assume that the two superconducting order parameters have the same absolute value, $\Delta_\text{L}(T)=\Delta_\text{R}(T)=\Delta$, and set their phases as $\phi_\text{L}=-\phi_\text{R}\equiv\phi/2$.
To zeroth order in $\Delta T$, i.e., in thermal equilibrium, the occupation probabilities of the dot are given by Boltzmann factors $P_\chi^{(0)}\propto e^{-E_\chi/{k_\text{B}T}}$. At the same time, the pseudospin accumulation on the dot vanishes exactly. In consequence, neither a charge nor a heat current flows through the system. Since we consider only tunnel events that are first order in the tunnel coupling, there is no supercurrent through the quantum dot [@governale_real-time_2008]. The latter would manifest itself as a phase-dependent equilibrium contribution to the charge current. It requires, however, the coherent transfer of Cooper pairs through the dot and, hence, higher-order tunnel processes.
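These equilibrium occupations follow directly from the dot spectrum $E_0=0$, $E_\sigma=\varepsilon$, $E_d=2\varepsilon+U$; a minimal sketch in units $k_\text{B}=1$ (purely illustrative, the names are ours):

```python
import math

def equilibrium_probs(eps, U, T):
    """Zeroth-order occupations P_chi ~ exp(-E_chi/T) with kB = 1.

    Dot spectrum: E_0 = 0, E_up = E_down = eps, E_d = 2*eps + U.
    """
    energies = {"0": 0.0, "up": eps, "down": eps, "d": 2 * eps + U}
    weights = {k: math.exp(-E / T) for k, E in energies.items()}
    Z = sum(weights.values())   # partition function
    return {k: w / Z for k, w in weights.items()}
```

At the particle-hole symmetric point $\varepsilon=-U/2$ the empty and doubly occupied states are degenerate, $P_0^{(0)}=P_d^{(0)}$, which is precisely the regime where the coherences $P^0_d$ and $P^d_0$ matter.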
![\[fig:Iclin\]Linear-response charge current $I^e$ as a function of (a) phase difference $\phi$ and (b) detuning $\delta$. Parameters are $U=4{k_\text{B}T}$ and $\Delta=1.75{k_\text{B}T}$.](Icharge_lin_phi.pdf "fig:"){width="\columnwidth"} ![\[fig:Iclin\]Linear-response charge current $I^e$ as a function of (a) phase difference $\phi$ and (b) detuning $\delta$. Parameters are $U=4{k_\text{B}T}$ and $\Delta=1.75{k_\text{B}T}$.](Icharge_lin_delta.pdf "fig:"){width="\columnwidth"}
A finite temperature bias $\Delta T$ generates a finite pseudospin accumulation on the dot. To first order in $\Delta T$ the accumulation is along the direction ${\mathbf{n}}_\text{L}-{\mathbf{n}}_\text{R}$, i.e., a finite pseudospin component $I^{(1)}_y$ is generated due to nonequilibrium tunneling of electrons. The magnitude of the pseudospin accumulation is limited by the pseudospin relaxation term $-{\mathbf{I}}/\tau_\text{rel}$. In addition, the effective exchange field ${\mathbf{B}}$ gives rise to a precession of the accumulated pseudospin and leads to finite pseudospin components $I^{(1)}_x$ and $I^{(1)}_z$. According to Eq. , the pseudospin accumulation leads to a finite charge current given by $$\label{eq:Ielin}
I^e=-e\frac{2B_0 X_1^-Z_0^-\sin^2\frac{\phi}{2}}{Z_0^-\frac{\delta}{\hbar}+2[(Z_0^-)^2+B_0^2\cos^2\frac{\phi}{2}]\tan\beta}\frac{\Delta T}{T}.$$ Here, we introduced the expansions $$\begin{aligned}
X^\pm_\eta&=X^\pm_0+X^\pm_1\frac{\Delta T_\eta}{T}+\mathcal O(\Delta T_\eta^2),\\
Z^\pm_\eta&=Z^\pm_0+Z^\pm_1\frac{\Delta T_\eta}{T}+\mathcal O(\Delta T_\eta^2),\\
B_\eta&=B_0+B_1\frac{\Delta T_\eta}{T}+\mathcal O(\Delta T_\eta^2),\end{aligned}$$ as well as the angle $\beta=\arctan (I^{(1)}_y/I^{(1)}_x)$ which can be written as $$\tan\beta=\frac{2\hbar}{\delta Z^-_0}\left[(Z_0^-)^2-4(X_0^-)^2\cos^2\frac{\phi}{2}\right].$$ The thermoelectric charge current Eq. arises in the vicinity of the particle-hole symmetric point. It relies crucially on the superconducting proximity effect and the resulting pseudospin accumulation on the dot because the Fermi functions in the generalized transition rates $W^{\chi_1\chi_1'}_{\chi_2\chi_2'}$ are evaluated at the particle-hole symmetric point $\delta=0$ and, therefore, do not lead to any thermoelectric effect. It is, thus, the pseudospin accumulation that introduces a nontrivial $\delta$ dependence into the master equation via the effective exchange field ${\mathbf{B}}$. In consequence, the thermoelectric current vanishes for $\Delta\to0$, i.e., in the absence of superconductivity in the leads.
In Fig. \[fig:Iclin\] (a), the charge current is shown as a function of the phase difference $\phi$. At zero phase difference, the charge current vanishes independently of the detuning $\delta$ because there is no pseudospin accumulation on the quantum dot. In contrast, at $\phi=\pi$ the charge current becomes maximal due to the strong pseudospin accumulation on the dot. Figure \[fig:Iclin\] (b) shows the charge current as a function of the detuning $\delta$. For $\delta=0$ the charge current vanishes due to particle-hole symmetry. For positive (negative) values of the detuning the charge current takes positive (negative) values indicating electron (hole) transport. The maximal current occurs for a phase difference of $\phi=\pi$ and detuning $\delta=\pm2\hbar Z^-_0$ and takes the value $I^e=-(e B_0 X_1^-\Delta T)/(2Z_0^- T)$. The maximum current is exponentially suppressed in $U/({k_\text{B}T})$ due to the requirement of thermally excited quasiparticles. At the same time, it is *not* enhanced by the divergence of the superconducting density of states close to the gap. For large detunings, the strong exchange field along the $z$ direction averages out the pseudospin accumulation along the $x$ and $y$ direction. As a consequence, the charge current tends to zero.
![\[fig:Ihline\]Linear-response heat current $I^h$ as a function of (a) phase difference $\phi$ and (b) detuning $\delta$. Parameters as in Fig. \[fig:Iclin\].](Iheat_lin_phi.pdf "fig:"){width="\columnwidth"} ![\[fig:Ihline\]Linear-response heat current $I^h$ as a function of (a) phase difference $\phi$ and (b) detuning $\delta$. Parameters as in Fig. \[fig:Iclin\].](Iheat_lin_delta.pdf "fig:"){width="\columnwidth"}
The heat current driven by a finite temperature bias $\Delta T$ is given by $$I^h=-\frac{U}{2}\left(Z^+_1+4I^{(1)}_yX_0^-\sin\frac{\phi}{2}\right)\frac{\Delta T}{T}.$$ It consists of two contributions. The first one is independent of the phase difference $\phi$ and depends only on the tunnel coupling $\Gamma$, the Coulomb interaction $U$ and the superconducting order parameter $\Delta$. In contrast, the second contribution is sensitive to the phase difference $\phi$ and, thus, gives rise to a phase-coherent flow of heat which arises from the superconducting proximity effect on the dot. In consequence, it vanishes in the limit $\Delta\to0$. Interestingly, the phase-dependent part of the heat current is proportional to $I^{(1)}_y$, i.e., it provides in principle direct information about the pseudospin accumulation on the dot. We remark that, just like the charge current, the heat current is also exponentially suppressed in $U/{k_\text{B}T}$. At the same time, however, it is enhanced by the increased superconducting density of states close to the gap. Hence, for this system, heat currents in units of $\Gamma U/\hbar$ tend to be much larger than charge currents in units of $e\Gamma/\hbar$.
The phase dependence of the heat current is shown in Fig. \[fig:Ihline\](a). At $\phi=0$, the heat current is maximal and takes the value $I^h=-UZ_1^+ \Delta T/(2T)$. The minimal heat current occurs at $\phi=\pi$ since $X^-_0$ is negative while the pseudospin accumulation $I^{(1)}_y$ is positive. This $\phi$ dependence of the thermal conductance differs from that of a tunneling Josephson junction which exhibits a maximum of the thermal conductance at $\phi=\pi$ [@maki_entropy_1965; @maki_entropy_1966]. It rather resembles the phase-dependent thermal conductance of a transparent or topological Josephson junction which also has a minimum at $\phi=\pi$ [@zhao_phase_2003; @zhao_heat_2004; @sothmann_fingerprint_2016]. The ratio between the minimal and maximal heat current is given by $1-4\Delta^2/U^2$, i.e., the relative modulation of the heat current becomes largest when the superconducting gap is tuned via the average temperature such that $2\Delta$ approaches the Coulomb energy $U$.
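As a quick numerical check of this ratio (a trivial helper, kept here only to make the parameter dependence explicit):

```python
def heat_modulation_ratio(Delta, U):
    """Ratio of minimal to maximal heat current, 1 - 4*Delta**2/U**2."""
    return 1.0 - 4.0 * Delta**2 / U**2
```

For the parameters of Fig. \[fig:Ihline\], $\Delta=1.75\,{k_\text{B}T}$ and $U=4\,{k_\text{B}T}$, this gives about $0.23$, i.e., the heat current at $\phi=\pi$ is suppressed to roughly a quarter of its $\phi=0$ value. The ratio tends to zero as $2\Delta\to U$ and to one (no modulation) as $\Delta\to0$.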
The $\delta$ dependence of the heat current is depicted in Fig. \[fig:Ihline\](b). The largest modulation of the heat current occurs for $\delta=0$. In this case, the exchange field component along the $z$ axis vanishes which would otherwise reduce $I^{(1)}_y$ and thus the modulation amplitude. For the same reason, the modulation of the thermal conductance is strongly suppressed for large detunings $\delta\gg\Gamma$.
\[ssec:nonlinear\]Nonlinear response
------------------------------------
![\[fig:Icnonlinear\]Charge current $I^e$ in units of $10^{-3}e\Gamma/\hbar$ as a function of phase difference $\phi$ and detuning $\delta$. The red line indicates a vanishing charge current. Parameters are $\Delta_0=2.32{k_\text{B}T}_\text{L}$, $U=5{k_\text{B}T}_\text{L}$, $\Gamma_\text{R}=4\Gamma_\text{L}$ and $T_\text{R}=T_\text{L}/2$.](Icharge_nonlin.pdf){width="\columnwidth"}
We now turn to a discussion of transport in the nonlinear regime where a large temperature bias is applied across the system. The resulting charge current is shown as a function of phase difference and detuning in Fig. \[fig:Icnonlinear\]. Interestingly, for a phase difference $\phi\neq 0,\pi$, there is a finite charge current at the particle-hole symmetric point $\delta=0$.
This finite thermoelectric effect can be understood as follows. If the dot is empty (doubly occupied), electrons can virtually tunnel on (off) the dot and back. These virtual tunneling events give rise to a renormalization of the dot level energies which is captured by the real-time diagrammatic technique. Importantly, in the presence of Coulomb interactions, the renormalization is different for the empty and doubly occupied state and, thus, can break particle-hole symmetry effectively. Hence, similarly to charge transport in quantum-dot spin valves [@konig_interaction-driven_2003; @braun_theory_2004; @hell_spin_2015], thermoelectric effects in superconductor-quantum dot hybrids constitute an important case where interaction-induced renormalization effects have a drastic impact on transport properties. Using Eq. the condition for a vanishing current can be cast into the compact form $$\frac{Z^-_\text{L}}{Z^-_\text{R}}=\frac{B_\text{L}\sin\left(\varphi-\frac{\phi}{2}\right)}{B_\text{R}\sin\left(\varphi+\frac{\phi}{2}\right)},$$ where $\varphi$ denotes the $\delta$-dependent angle between the pseudospin and the $x$ axis. It illustrates the interplay between pseudospin relaxation and precession that influences the nonlinear charge current in a nontrivial way and is indicated by the red line in Fig. \[fig:Icnonlinear\].
The nonlinear heat current behaves qualitatively similarly to the linear-response case, i.e., it exhibits a minimum at phase difference $\phi=\pi$ and detuning $\delta=0$. We remark that the amplitude of the heat current oscillation is reduced in the nonlinear regime because the heat current at $\phi=\pi$ increases more strongly with the temperature bias than the heat current at $\phi=0$.
![\[fig:heatdiode\]Nonlinear heat current as a function of the asymmetry $a$. Parameters are $\Delta_0=2.32 {k_\text{B}T}_\text{L}$, $U=4.64{k_\text{B}T}_\text{L}$, $\delta=10\Gamma$ and $T_\text{R}=0.1T_\text{L}$.](Iheat_asym.pdf){width="\columnwidth"}
In the nonlinear regime, an asymmetric quantum-dot setup with $\Gamma_\text{L}\neq\Gamma_\text{R}$ can act as a thermal diode where the heat currents in the forward and backward direction are different. To discuss this effect in more detail, we introduce the asymmetry of tunnel couplings as $a=(\Gamma_\text{L}-\Gamma_\text{R})/(\Gamma_\text{L}+\Gamma_\text{R})$. The heat current in the forward direction is given by $I^h(a)$ while in the backward direction it is given by $I^h(-a)$. This definition is equivalent to denoting the forward (backward) direction as the one for which $T_\text{L}>T_\text{R}$ ($T_\text{L}<T_\text{R}$) at fixed tunnel couplings as long as $\Delta_{0,\text{L}}=\Delta_{0,\text{R}}$.
Figure \[fig:heatdiode\] shows the nonlinear heat current as a function of the asymmetry parameter $a$. For negative values of $a$, the heat current increases with $a$ while for positive values of $a$ it has a pronounced maximum. This nontrivial dependence on $a$ is most pronounced when the Coulomb energy is slightly larger than the superconducting gap. Since the heat current is not an even function of $a$, the system can rectify heat with a large heat current in the forward direction and a small heat current in the backward direction. For the chosen parameters we find that rectification efficiencies $I^h(a)/I^h(-a)\approx50$ can be achieved at the maximum forward heat current.
In order to understand the mechanism behind the thermal rectification, let us first consider the case of a single-level quantum dot coupled to two normal metal electrodes. At the particle-hole symmetric point, the heat current depends on the tunnel couplings via $\Gamma_\text{L}\Gamma_\text{R}/(\Gamma_\text{L}+\Gamma_\text{R})$. Hence, the heat current is an even function of the asymmetry $a$, $I^h(+a)=I^h(-a)$, such that thermal rectification does not occur.
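With $\Gamma_\text{L,R}=(1\pm a)\Gamma/2$, the evenness of this coupling factor in $a$ is immediate; a minimal check with $\Gamma=1$ (our own naming):

```python
def series_coupling(a):
    """Normal-metal coupling factor G_L*G_R/(G_L+G_R) vs. asymmetry a.

    Uses G_L = (1+a)/2, G_R = (1-a)/2, i.e. Gamma = G_L + G_R = 1.
    """
    gl, gr = (1.0 + a) / 2.0, (1.0 - a) / 2.0
    return gl * gr / (gl + gr)
```

Since the factor satisfies $I^h(+a)=I^h(-a)$ for any asymmetry, forward and backward heat currents coincide and no rectification occurs in the normal-metal case.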
For the superconducting system, the dependence of the heat current on the tunnel barriers is modified by the BCS density of states and is given by $$\frac{\Gamma_\text{L}\Gamma_\text{R}}{\Gamma_\text{L}\sqrt{U^2-4\Delta_\text{R}^2}+\Gamma_\text{R}\sqrt{U^2-4\Delta_\text{L}^2}}.$$ Hence, due to the temperature dependence of the superconducting gap, the heat current exhibits a nontrivial dependence on the asymmetry $a$, which forms the basis of the heat rectification mechanism. In addition, the coherent pseudospin dynamics of the dot can enhance the thermal diode effect for a finite phase difference $\phi$. As can be seen in Fig. \[fig:heatdiode\], it can increase the rectification efficiency by nearly a factor of 4 if the tunnel coupling asymmetry is adjusted to maximize the heat current in the forward direction. We remark that the enhancement of the rectification efficiency comes at the price of a slightly reduced heat current in the forward direction compared to the case $\phi=0$.
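By contrast, inserting $\Gamma_\text{L,R}=(1\pm a)\Gamma/2$ into this expression yields a factor that is no longer even in $a$ once $\Delta_\text{L}\neq\Delta_\text{R}$. A minimal sketch; the gap values below are illustrative assumptions chosen only to satisfy $2\Delta_\eta<U$, not the values underlying Fig. \[fig:heatdiode\]:

```python
import math

def sc_coupling(a, U=4.64, DL=0.5, DR=2.0):
    """Superconducting coupling factor vs. asymmetry a (Gamma = 1).

    DL (hot lead) < DR (cold lead) models the temperature-dependent
    gaps; both must satisfy 2*D < U for quasiparticle transport.
    """
    gl, gr = (1.0 + a) / 2.0, (1.0 - a) / 2.0
    return gl * gr / (gl * math.sqrt(U**2 - 4.0 * DR**2)
                      + gr * math.sqrt(U**2 - 4.0 * DL**2))
```

For equal gaps the factor is again even in $a$; for unequal gaps the forward and backward values differ, which is the rectification mechanism described above.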
\[sec:conclusion\]Conclusions
=============================
We have analyzed thermally-driven transport through a superconductor-quantum dot hybrid in the sequential tunneling regime. We find that in linear response a finite thermoelectric effect can be generated close to the particle-hole symmetric point due to the superconducting proximity effect on the dot. In addition, there is a phase-dependent heat current through the quantum dot which in linear response is sensitive to the pseudospin accumulation in the dot, i.e., it provides direct access to information about the proximity effect on the dot. In nonlinear response, an interaction-induced level renormalization due to virtual tunneling gives rise to a finite thermoelectric response at the particle-hole symmetric point. Furthermore, the system can act as a thermal diode which is based on the temperature-dependence of the superconducting gap as well as the superconducting proximity effect.
Finally, we comment on potential experimental realizations of our proposal. For superconducting electrodes based on Al, the zero-temperature gap is $\Delta_0\approx 180\,\mu\text{eV}$, while the critical temperature is $T_c\approx 1.2\,\text{K}$. Hence, the device should be operated at sub-kelvin temperatures, while the Coulomb interaction should be of the order of the superconducting gap. Assuming furthermore weak tunnel couplings, we estimate charge and heat currents that are within the reach of present experimental technology [@fornieri_nanoscale_2016; @timossi_phase-tunable_2018; @dutta_thermal_2017].
We thank Fred Hucht for valuable discussions and Stephan Weiss and Sun-Yong Hwang for feedback on the manuscript. We acknowledge financial support from the Ministry of Innovation NRW via the “Programm zur Förderung der Rückkehr des hochqualifizierten Forschungsnachwuchses aus dem Ausland”. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
\[app:RTD\]Real-time diagrammatics
==================================
![\[fig:diagrams\]Diagrams corresponding to different transitions in our setup. Horizontal lines describe the forward and backward propagation of the dot on the Keldysh contour. Dots indicate tunneling vertices. Dashed lines correspond to tunneling lines which arise from Wick contractions of reservoir operators. Due to the presence of superconducting leads there are both normal (a), (b) and anomalous (c), (d) tunneling lines.](Diagrams-1.pdf "fig:"){width=".2\textwidth"} ![](Diagrams-2.pdf "fig:"){width=".2\textwidth"} ![](Diagrams-3.pdf "fig:"){width=".2\textwidth"} ![](Diagrams-4.pdf "fig:"){width=".2\textwidth"}
In this Appendix, we discuss the connection between real-time diagrams and the underlying physical processes. For details of the diagrammatic theory for superconducting systems we refer the reader to Ref. [@governale_erratum:_2008].
Real-time diagrams consist of horizontal lines describing the forward and backward propagation of the quantum dot along the Keldysh contour. Dots on the Keldysh contour correspond to tunneling vertices where an electron is created (annihilated) on the dot and annihilated (created) in one of the superconductors. When we integrate out the noninteracting lead degrees of freedom, pairs of tunneling vertices get connected by tunneling lines. In superconducting systems, two different types of tunneling lines arise: (i) normal lines, which connect a vertex that creates an electron on the dot with a vertex that annihilates a dot electron, and (ii) anomalous lines, where two vertices that both annihilate (create) a dot electron are connected. The anomalous lines arise because the BCS Hamiltonian is diagonalized by Bogoliubov quasiparticles, which are superpositions of electrons and holes. Physically, they describe Andreev reflection processes where two electrons on the dot are created (annihilated) while a Cooper pair in the superconductor is annihilated (created).
Let us now focus on first order diagrams as depicted in Fig. \[fig:diagrams\]. Diagrams such as the one in Fig. \[fig:diagrams\](a) describe the transition between two diagonal density matrix elements. They correspond to the usual transition rates that are obtained via Fermi’s golden rule in conventional rate equation approaches. Diagrams such as shown in Fig. \[fig:diagrams\](b) yield the diagonal elements of the rate matrix. While in rate equation approaches they are typically set by hand to be $W_{\chi,\chi}^{\chi,\chi}=-\sum_{\chi'\neq\chi}W_{\chi',\chi}^{\chi',\chi}$ in order to ensure the conservation of probability, they appear naturally in the diagrammatic framework and, thus, provide an additional consistency check of the results.
In superconducting systems, additional diagrams involving anomalous tunneling lines such as the ones depicted in Fig. \[fig:diagrams\](c) and (d) appear. They give rise to finite off-diagonal density matrix elements describing coherent superpositions of the dot being empty and doubly occupied and, hence, capture the superconducting proximity effect on the quantum dot. We emphasize that the proximity effect occurs already in first order in the tunnel coupling via these diagrams, as they give rise to the coherent transfer of a Cooper pair between the dot and the superconductor. However, the proximity effect on the dot does not give rise to a supercurrent through the system in first order. A finite supercurrent relies on the coherent coupling between the two superconducting leads, which for our setup can occur only in second- and higher-order processes. This is different in the case of a simple superconducting tunnel junction, where a finite supercurrent occurs already in first order [@josephson_possible_1962]. Diagrams such as Fig. \[fig:diagrams\](d) give rise to a level renormalization of the empty and doubly occupied state relative to each other and, thus, contribute to the exchange field in Eq. .
\[app:Bex\]Exchange field integral
==================================
The integral appearing in the expression for the exchange field can be solved analytically by performing the substitution $\omega\operatorname{sign}\omega=\Delta\cosh\alpha$. Subsequently, the residue theorem can be applied to the rectangle with corner points $(-R,R,R+2\pi i, -R+2\pi i)$, taking the limit $R\to\infty$. While the contribution from the vertical edges vanishes, the top and bottom edges yield identical contributions. This allows us to express the exchange-field integral as the infinite sum
$$B_\eta=\sum_{n=0}^\infty 8\Gamma_\eta k_\text{B}T_\eta\,\frac{U}{4(2n+1)^2\pi^2(k_\text{B}T_\eta)^2+U^2}\,\frac{\Delta_\eta}{\sqrt{(2n+1)^2\pi^2(k_\text{B}T_\eta)^2+\Delta_\eta^2}}.$$
For our numerical results, we have evaluated the sum by taking into account the first 10,000 summands.
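The truncation converges quickly because the summands fall off as $n^{-3}$. A minimal sketch of the truncated evaluation, with $\omega_n=(2n+1)\pi k_\text{B}T_\eta$ and purely illustrative (dimensionless) parameter values not taken from the paper:

```python
import math

def exchange_field(Gamma, kBT, U, Delta, n_terms=10_000):
    """Truncated evaluation of the sum for B_eta given above,
    with omega_n = (2n+1) * pi * kB * T_eta."""
    B = 0.0
    for n in range(n_terms):
        wn = (2 * n + 1) * math.pi * kBT
        B += (8 * Gamma * kBT * U / (4 * wn**2 + U**2)
              * Delta / math.sqrt(wn**2 + Delta**2))
    return B

# Illustrative (dimensionless) parameters -- not values from the paper.
print(exchange_field(Gamma=1.0, kBT=0.1, U=1.0, Delta=0.5))
```

Doubling the number of retained summands changes the result only far below any physically relevant scale, which justifies the 10,000-term cutoff.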
- doi:10.1103/RevModPhys.78.217
- doi:10.1103/RevModPhys.84.1045
- doi:10.1038/nature05276
- doi:10.1038/s41567-018-0199-4
- doi:10.1007/s10909-014-1132-6
- doi:10.1038/nnano.2017.204
- doi:10.1016/0031-9163(62)91369-0
- doi:10.1103/PhysRevLett.15.921
- doi:10.1103/PhysRevLett.16.258
- doi:10.1103/PhysRevB.55.3849
- doi:10.1103/PhysRevB.55.12691
- doi:10.1103/PhysRevB.57.2717
- doi:10.1103/PhysRevLett.91.077003
- doi:10.1103/PhysRevB.69.134503
- doi:10.1038/nature11702
- doi:10.1063/1.4750068
- doi:10.1063/1.4794412
- doi:10.1103/PhysRevB.88.094506
- doi:10.1103/PhysRevB.94.054522
- doi:10.1063/1.4846375
- doi:10.1063/1.4804550
- doi:10.1063/1.4875917
- doi:10.1063/1.4915899
- doi:10.1063/1.4893443
- doi:10.1103/PhysRevB.93.134508
- doi:10.1088/1367-2630/aa60d4
- doi:10.1103/PhysRevApplied.10.044062
- doi:10.1103/PhysRevApplied.4.044016
- arXiv:1807.03186
- doi:10.1103/PhysRevApplied.6.054014
- doi:10.1103/PhysRevB.94.235420
- arXiv:1806.01568
- doi:10.1103/PhysRevB.93.224521
- doi:10.1209/0295-5075/124/48005
- doi:10.1038/nnano.2015.281
- doi:10.1038/nnano.2017.25
- doi:10.1038/ncomms4579
- doi:10.1038/nnano.2015.11
- doi:10.1021/acs.nanolett.7b04906
- doi:10.1103/PhysRevApplied.10.024003
- doi:10.1103/PhysRevB.94.081407
- doi:10.1038/nnano.2010.173
- doi:10.1080/00018732.2011.624266
- doi:10.1038/nature05018
- doi:10.1038/nature04550
- doi:10.1021/nl071152w
- doi:10.1063/1.4936888
- doi:10.1038/nphys3742
- doi:10.1103/PhysRevB.94.155445
- doi:10.1103/PhysRevB.55.R6137
- doi:10.1103/PhysRevLett.91.057005
- doi:10.1103/PhysRevLett.91.187001
- doi:10.1021/nl203380w
- doi:10.1103/PhysRevB.89.235110
- doi:10.1088/1367-2630/18/9/093024
- doi:10.1103/PhysRevB.61.9109
- doi:10.1103/PhysRevLett.89.256801
- doi:10.1103/PhysRevB.67.041301
- doi:10.1103/PhysRevLett.99.126602
- doi:10.1103/PhysRevB.75.045132
- doi:10.1103/PhysRevB.77.024517
- doi:10.1103/PhysRevB.90.220501
- doi:10.1103/PhysRevB.95.174516
- doi:10.1103/PhysRevB.96.064529
- doi:10.1103/PhysRevB.98.161408
- doi:10.1103/PhysRevB.63.165314
- doi:10.1038/nature08432
- doi:10.1103/PhysRevLett.104.026801
- doi:10.1103/PhysRevLett.107.136801
- doi:10.1038/ncomms2169
- doi:10.1103/PhysRevLett.109.157002
- doi:10.1103/PhysRevB.86.134528
- doi:10.1088/1367-2630/15/8/085018
- doi:10.1088/1367-2630/15/4/045020
- doi:10.1126/science.aaf3961
- doi:10.1038/srep35116
- doi:10.1103/PhysRevLett.76.1715
- doi:10.1103/PhysRevB.54.16820
- Habilitation thesis
- Book
- doi:10.1103/PhysRevB.77.134513
- doi:10.1103/PhysRevB.78.069902
- doi:10.1103/PhysRevB.82.205314
- doi:10.1103/PhysRevLett.90.166602
- doi:10.1103/PhysRevB.70.195345
- doi:10.1103/PhysRevB.91.195404
- doi:10.1103/PhysRevLett.119.077701
The two classes `KinesisRecorder` and `KinesisFirehoseRecorder` allow you to interface with Amazon Kinesis Data Streams and Amazon Kinesis Data Firehose to stream analytics data for real-time processing.
## What is Amazon Kinesis Data Streams?
[Amazon Kinesis Data Streams](http://aws.amazon.com/kinesis/) is a fully managed service for real-time processing of streaming data at massive scale. Amazon Kinesis can collect and process hundreds of terabytes of data per hour from hundreds of thousands of sources, so you can write applications that process information in real-time. With Amazon Kinesis applications, you can build real-time dashboards, capture exceptions and generate alerts, drive recommendations, and make other real-time business or operational decisions. You can also easily send data to other services such as Amazon Simple Storage Service, Amazon DynamoDB, and Amazon Redshift.
The Kinesis Data Streams `KinesisRecorder` client lets you store your Kinesis requests on disk and then send them all at once using the [PutRecords](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecords.html) API call of Kinesis. This is useful because many mobile applications that use Kinesis Data Streams will create multiple requests per second. Sending an individual request under `PutRecord` action could adversely impact battery life. Moreover, the requests could be lost if the device goes offline. Thus, using the high-level Kinesis Data Streams client for batching can preserve both battery life and data.
## What is Amazon Kinesis Data Firehose?
[Amazon Kinesis Data Firehose](http://aws.amazon.com/kinesis/firehose/) is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3) and Amazon Redshift. With Kinesis Data Firehose, you do not need to write any applications or manage any resources. You configure your data producers to send data to Firehose and it automatically delivers the data to the destination that you specified.
The Amazon Kinesis Data Firehose `KinesisFirehoseRecorder` client lets you store your Kinesis Data Firehose requests on disk and then send them using the [PutRecordBatch](https://docs.aws.amazon.com/firehose/latest/APIReference/API_PutRecordBatch.html) API call of Kinesis Data Firehose.
For more information about Amazon Kinesis Data Firehose, see [Amazon Kinesis Data Firehose](http://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html).
## Integrating Amazon Kinesis
Set up AWS Mobile SDK components by including the following libraries in your `app/build.gradle` dependencies list.
```groovy
dependencies {
implementation 'com.amazonaws:aws-android-sdk-kinesis:2.15.+'
implementation ('com.amazonaws:aws-android-sdk-mobile-client:2.15.+@aar') { transitive = true }
}
```
* `aws-android-sdk-kinesis` library enables sending analytics to Amazon Kinesis.
* `aws-android-sdk-mobile-client` library gives access to the AWS credentials provider and configurations.
Add the following imports to the main activity of your app.
```java
import com.amazonaws.mobileconnectors.kinesis.kinesisrecorder.*;
import com.amazonaws.mobile.client.AWSMobileClient;
import com.amazonaws.regions.Regions;
```
To use Kinesis Data Streams in an application, you must set the correct permissions. The following IAM policy allows the user to submit records to a specific data stream, which is identified by [ARN](http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
```json
{
"Statement": [{
"Effect": "Allow",
"Action": "kinesis:PutRecords",
"Resource": "arn:aws:kinesis:us-west-2:111122223333:stream/mystream"
}]
}
```
The following IAM policy allows the user to submit records to a specific Kinesis Data Firehose delivery stream.
```json
{
"Statement": [{
"Effect": "Allow",
"Action": "firehose:PutRecordBatch",
"Resource": "arn:aws:firehose:us-west-2:111122223333:deliverystream/mystream"
}]
}
```
This policy should be applied to roles assigned to the Amazon Cognito identity pool, but you need to replace the `Resource` value with the correct ARN for your Amazon Kinesis or Amazon Kinesis Data Firehose stream. You can apply policies at the [IAM console](https://console.aws.amazon.com/iam/). To learn more about IAM policies, see [Using IAM](http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_Introduction.html).
To learn more about Amazon Kinesis Data Streams policies, see [Controlling Access to Amazon Kinesis Data Streams Resources with IAM](http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-using-iam.html).
To learn more about Amazon Kinesis Data Firehose policies, see [Controlling Access with Amazon Kinesis Data Firehose](http://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html).
## Working with the API
You can use `AWSMobileClient` to set up the Amazon Cognito credentials that are required to authenticate your requests with Amazon Kinesis.
```java
AWSMobileClient.getInstance().initialize(getApplicationContext(), new Callback<UserStateDetails>() {
@Override
public void onResult(UserStateDetails userStateDetails) {
Log.i("INIT", userStateDetails.getUserState().toString());
}
@Override
public void onError(Exception e) {
Log.e("INIT", "Initialization error.", e);
}
}
);
```
Once you have credentials, you can use `KinesisRecorder` with Amazon Kinesis. The following snippet creates a directory and instantiates the `KinesisRecorder` client:
```java
String kinesisDirectory = "YOUR_UNIQUE_DIRECTORY";
KinesisRecorder recorder = new KinesisRecorder(
myActivity.getDir(kinesisDirectory, 0),
Regions.<YOUR-AWS-REGION>,
AWSMobileClient.getInstance()
);
// KinesisRecorder uses synchronous calls, so you shouldn't call KinesisRecorder methods on the main thread.
```
To use `KinesisFirehoseRecorder`, pass the constructor a directory where the streaming data will be saved. We recommend an app-private directory, because the data is not encrypted.
```java
KinesisFirehoseRecorder firehoseRecorder = new KinesisFirehoseRecorder(
context.getCacheDir(),
Regions.<YOUR-AWS-REGION>,
AWSMobileClient.getInstance());
```
Configure Kinesis:

You can configure `KinesisRecorder` or `KinesisFirehoseRecorder` through their properties. For example, you can set the maximum allowed storage via the `withMaxStorageSize()` method of `KinesisRecorderConfig`. You can retrieve the same information by getting the `KinesisRecorderConfig` object for the recorder and calling `getMaxStorageSize()`:
```java
KinesisRecorderConfig kinesisRecorderConfig = recorder.getKinesisRecorderConfig();
Long maxStorageSize = kinesisRecorderConfig.getMaxStorageSize();
// Do something with maxStorageSize
```
To check the number of bytes currently stored in the directory passed in to the `KinesisRecorder` constructor, call `getDiskBytesUsed()`:
```java
Long bytesUsed = recorder.getDiskBytesUsed();
// Do something with bytesUsed
```
To see how much space the `KinesisRecorder` client is allowed to use, you can call `getDiskByteLimit()`.
```java
Long byteLimit = recorder.getDiskByteLimit();
// Do something with byteLimit
```
With `KinesisRecorder` created and configured, you can use `saveRecord()` to save records and then send them in a batch.
```java
recorder.saveRecord(
"MyData".getBytes(),
"MyStreamName");
recorder.submitAllRecords();
```
For the `saveRecord()` request above to work, you must first create a stream named `MyStreamName`. You can create new streams in the [Amazon Kinesis console](https://console.aws.amazon.com/kinesis).
If `submitAllRecords()` is called while the app is online, requests are sent and removed from disk. If it is called while the app is offline, requests are kept on disk until `submitAllRecords()` is called while online. This applies even if you lose your internet connection midway through a submit: if you save ten requests, call `submitAllRecords()`, send five, and then lose the connection, the remaining five requests stay on disk and are sent the next time `submitAllRecords()` is invoked while online.
Here is a similar snippet for Amazon Kinesis Data Firehose:
```java
// Start to save data, either a String or a byte array
firehoseRecorder.saveRecord("Hello world!\n");
firehoseRecorder.saveRecord("Streaming data to Amazon S3 via Amazon Kinesis Data Firehose is easy.\n");
// Send previously saved data to Amazon Kinesis Data Firehose
// Note: submitAllRecords() makes network calls, so wrap it in an AsyncTask.
new AsyncTask<Void, Void, Void>() {
@Override
protected Void doInBackground(Void... v) {
try {
firehoseRecorder.submitAllRecords();
} catch (AmazonClientException ace) {
// Handle the error (for example, log it and retry later).
}
return null;
}
}.execute();
```
To learn more about working with Kinesis Data Streams, see the [Amazon Kinesis Data Streams resources](http://aws.amazon.com/kinesis/developer-resources/).
To learn more about the Kinesis Data Streams classes, see the [class reference for KinesisRecorder](https://aws-amplify.github.io/aws-sdk-android/docs/reference/com/amazonaws/mobileconnectors/kinesis/kinesisrecorder/KinesisRecorder.html).
To learn more about the Kinesis Data Firehose classes, see the [class reference for KinesisFirehoseRecorder](https://aws-amplify.github.io/aws-sdk-android/docs/reference/com/amazonaws/mobileconnectors/kinesis/kinesisrecorder/KinesisFirehoseRecorder.html).
---
abstract: 'We describe the first-principles design and subsequent synthesis of a new material with the specific functionalities required for a solid-state-based search for the permanent electric dipole moment of the electron. We show computationally that perovskite-structure europium barium titanate should exhibit the required large and pressure-dependent ferroelectric polarization, local magnetic moments, and absence of magnetic ordering at liquid helium temperature. Subsequent synthesis and characterization of Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ ceramics confirm the predicted desirable properties.'
author:
- 'K. Z. Rushchanskii'
- 'S. Kamba'
- 'V. Goian'
- 'P. Vaněk'
- 'M. Savinov'
- 'J. Prokleška'
- 'D. Nuzhnyy'
- 'K. Knížek'
- 'F. Laufek'
- 'S. Eckel'
- 'S. K. Lamoreaux'
- 'A. O. Sushkov'
- 'M. Ležaić'
- 'N. A. Spaldin'
title: 'First-principles design and subsequent synthesis of a material to search for the permanent electric dipole moment of the electron'
---
The Standard Model of particle physics incorporates the breaking of the discrete symmetries of parity ($P$) and the combined charge conjugation and parity ($CP$). It is thought however, that the $CP$-violation within the framework of the Standard Model is insufficient to explain the observed matter-antimatter asymmetry of the Universe [@Trodden1999], therefore a so far unknown source of $CP$-violation likely exists in nature. The existence of a non-zero permanent electric dipole moment (EDM) of a particle, such as an electron, neutron, or atom, would violate time reversal ($T$) symmetry (Fig. \[PT\_cartoon\]) and therefore imply $CP$-violation through the $CPT$ theorem [@Khriplovich1997]. In the Standard Model these EDMs are strongly suppressed, the theoretical predictions lying many orders of magnitude below the current experimental limits. However, many theories beyond the Standard Model, such as supersymmetry, contain a number of $CP$-violating phases that lead to EDM predictions within experimental reach [@Bernreuther1991]. Searching for EDMs therefore constitutes a background-free method of probing the $CP$-violating physics beyond the Standard Model.
A number of experimental EDM searches are currently under way or are being developed – systems studied in these experiments include diatomic molecules [@Hudson2002; @Kawall2004], diamagnetic atoms [@Griffith2009; @Guest2007; @Tardiff2007], molecular ions [@Stutz2004], cold atoms [@Weiss2003], neutrons [@Baker2006], liquids [@Ledbetter2005], and solids [@Heidenreich2005; @Bouchard2008] – with one of the most promising novel techniques being electric-field-correlated magnetization measurements in solids [@Shapiro1968; @Lamoreaux2002; @Budker2006]. This technique rests on the fact that, since spin is the only intrinsic vector associated with the electron, a non-vanishing electron EDM is either parallel or antiparallel to its spin and hence its magnetic moment. As a result, when an electric field, which lifts the degeneracy between electrons with EDMs parallel and antiparallel to it, is applied to a sample, the associated imbalance of electron populations generates a magnetization (Fig. \[Zeeman\]). The orientation of the magnetization is reversed when the electric field direction is switched; in our proposed experiment we will monitor this change in sample magnetization using a SQUID magnetometer [@Sushkov2009; @Sushkov2010]. Such [*magnetoelectric responses*]{} in materials with permanent [*macroscopic*]{} magnetizations and polarizations are of great current interest in the materials science community because of their potential for enabling novel devices that tune and control magnetism using electric fields[@Spaldin/Ramesh:2008].
Since the experiment aims to detect the intrinsic magnetoelectric response associated with the tiny electric dipole moment of the electron, the design constraints on the material are stringent. First, the solid must contain magnetic ions with unpaired spins, since the equal and opposite spins of paired electrons have corresponding equal and opposite EDMs and contribute no effect. Second, it must be engineered such that the [*conventional*]{} linear magnetoelectric tensor is zero; our approach to achieving this is to use a paramagnet in which the conventional effect is forbidden by time-reversal symmetry[@Fiebig:2005]. To reach the required sensitivity, a high atomic density of magnetic ions ($n\approx 10^{22}$ cm$^{-3}$) is needed, and these magnetic ions must reside at sites with broken inversion symmetry. The energy splitting $\Delta$ shown in Fig. \[Zeeman\] is proportional to the product of the effective electric field experienced by the electron, $E^*$, and its electric dipole moment, $d_e$. The effective electric field, which is equal to the electric field one would have to apply to a free electron to obtain the same energy splitting, is in turn determined by the displacement of the magnetic ion from the center of its coordination polyhedron; for a detailed derivation see Ref. [@Mukhamedjanov2003]. For example, in Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ ceramics (see below) with $\sim$1 $\mu$C/cm$^2$ remanent polarization, the mean displacement of the Eu$^{2+}$ ion with respect to its oxygen cage is 0.01 Å and this results in an effective electric field of $\sim$10 MV/cm, even when no external electric field is applied. We choose a ferroelectric so that it is possible to reverse the direction of the ionic displacements, and hence of the effective electric field, with a moderate applied electric field. Finally, the experiment will be performed inside liquid helium, so the material properties described above must persist at low temperature. 
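The size of the splitting $\Delta=d_e E^*$ and the resulting spin-population imbalance can be estimated with a short back-of-the-envelope script. The EDM value, effective field, and temperature below are illustrative assumptions (the EDM is set near current experimental limits, the field at the $\sim$10 MV/cm scale estimated above):

```python
import math

# Order-of-magnitude sketch of the spin-population imbalance produced by an
# electron EDM in an effective electric field. All numbers are illustrative
# assumptions, not measured values.
d_e   = 1e-28     # electron EDM in e*cm (near current experimental limits)
E_eff = 1e7       # effective electric field in V/cm (~10 MV/cm)
k_B   = 8.617e-5  # Boltzmann constant in eV/K
T     = 4.2       # liquid-helium temperature in K

splitting = d_e * E_eff                           # energy splitting in eV
imbalance = math.tanh(splitting / (2 * k_B * T))  # fractional spin imbalance
print(f"splitting ~ {splitting:.1e} eV, imbalance ~ {imbalance:.1e}")
```

The minuscule imbalance ($\sim 10^{-18}$ for these inputs) is why a high density of magnetic ions and a SQUID-limited magnetometry scheme are required.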
A detailed derivation of the dependence of the sensitivity on the material parameters is given in Ref. [@Sushkov2010]. Note that conventional impurities such as defects or domain walls are not detrimental to the experiment since they do not violate time-reversal symmetry. In summary, the following material specifications will allow a sensitive EDM search to be mounted: (i) The material should be ferroelectric, with a large electric polarization, and switchable at liquid He temperature. (ii) There should be a high concentration of ions with local magnetic moments that remain paramagnetic at liquid He temperature; both long-range order and freezing into a glassy state must be avoided. (iii) The local environment at each magnetic ion should be strongly modified by the ferroelectric switching, and (iv) the sample should be macroscopic. With these material properties, and optimal SQUID noise levels, the projected experimental sensitivity is 10$^{-28}$ e.cm after ten days of averaging[@Sushkov2010].
No known materials meet all the requirements. Indeed the contra-indication between ferroelectricity and magnetism has been studied extensively over the last decade in the context of multiferroics [@Hill:2000], where the goal has been to achieve simultaneous ferroelectric and ferromagnetic ordering at high temperature. In spite of extensive efforts, a multiferroic with large and robust ferroelectricity and magnetization at room temperature remains elusive. While the low-temperature constraints imposed here seem at first sight more straightforward, avoiding any magnetic ordering at low temperature while retaining a high concentration of magnetic ions poses a similarly demanding challenge. In addition, the problem of ferroelectric switchability at low temperature is challenging, since coercivities tend to increase as temperature is lowered [@Merz:1951].
We proceed by proposing a trial compound and calculating its properties using density functional theory to determine whether an experimental synthesis should be motivated. We choose an alloy of europium titanate, EuTiO$_3$ and barium titanate, BaTiO$_3$, with motivation as follows: To incorporate magnetism we require unfilled orbital manifolds of localized electrons; to avoid magnetic ordering the exchange interactions should be small. Therefore the tightly bound $4f$ electrons are likely to be the best choice. For conventional ferroelectricity we require transition metal ions with empty $d$ orbitals to allow for good hybridization with coordinating anions on off-centering [@Rondinelli/Eidelson/Spaldin:2009]. (Note that while here we use a conventional ferroelectric mechanism, many alternative routes to ferroelectricity that are compatible with magnetism – and which could form a basis for future explorations – have been recently identified; for a review see Ref. ). Both EuTiO$_3$ and BaTiO$_3$ form in the ABO$_3$ perovskite structure, with divalent Eu$^{2+}$ or Ba$^{2+}$ on the A site, and formally $d^0$ Ti$^{4+}$ on the B site. BaTiO$_3$ is a prototypical ferroelectric with a large room temperature polarization of 25 $\mu$C/cm$^2$.[@Wemple:1968] In the cubic paraelectric phase its lattice constant is 3.996 Å [@Miyake/Ueda:1947]. The Ba$^{2+}$ ion has an inert gas electron configuration and hence zero magnetic moment.
The lattice parameter of EuTiO$_3$ is 3.905 Å [@Katsufuji/Takagi:2001], notably smaller than that of BaTiO$_3$. It is not ferroelectric, but has a large dielectric constant ($\epsilon \approx 400$) at low temperature, indicative of proximity to a ferroelectric phase transition; indeed it has recently been reported to be a quantum paraelectric[@Katsufuji/Takagi:2001; @kamba:2007]. First-principles electronic structure calculations have shown that ferroelectricity should be induced along the elongation direction by either compressive or tensile strain [@Fennie/Rabe:2006]. The Eu$^{2+}$ ion has seven unpaired localized $4f$ electrons resulting in a large spin magnetization of 7 $\mu_B$, and EuTiO$_3$ is an antiferromagnet with $G$-type ordering at a low Néel temperature of $\sim$5.3K [@McGuire_et_al:1966; @Chien/DeBenedetti/Barros:1974]. (Independently of the study presented here, EuTiO$_3$ is of considerable current interest because its dielectric response is strongly affected by the magnetic ordering [@Katsufuji/Takagi:2001; @kamba:2007] and because of its unusual third order magnetoelectric response [@Shvartsman_et_al:2010]. These behaviors indicate coupling between the magnetic and dielectric orders caused by sensitivity of the polar soft mode to the magnetic ordering [@Fennie/Rabe:2006; @Goian:2009].)
Our hypothesis is that by alloying Ba on the A-site of EuTiO$_3$, the magnetic ordering temperature will be suppressed through dilution, and the tendency to ferroelectricity will be increased through the expansion of the lattice constant. Our hope is to identify an alloying range in which the magnetic ordering temperature is sufficiently low while the ferroelectric polarization and the concentration of magnetic ions remain sufficiently large. In addition, we expect that the polarization will be sensitive to the lattice constant, allowing its magnitude and consequently the coercivity, to be reduced with pressure.
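The expected lattice expansion on alloying can be motivated by a simple linear (Vegard's-law) interpolation between the experimental end-member lattice constants quoted above; the linearity is an assumption made only for this estimate:

```python
# Vegard's-law estimate of the alloy lattice constant from the experimental
# end members (an assumed linear interpolation, used only as a reference).
a_EuTiO3 = 3.905   # Angstrom
a_BaTiO3 = 3.996   # Angstrom
x = 0.5            # Eu fraction in Eu_x Ba_(1-x) TiO3

a_alloy = x * a_EuTiO3 + (1 - x) * a_BaTiO3
print(a_alloy)  # ~3.95 Angstrom
```

For the 50/50 composition this gives $\approx 3.95$ Å, larger than the EuTiO$_3$ value, consistent with the expectation that alloying with Ba pushes the system toward ferroelectricity.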
First-Principles Calculations
=============================
Taking the 50/50 (Eu,Ba)TiO$_3$ ordered alloy as our starting point (Fig. \[th\_phonons\] inset), we next calculate its properties using first-principles. For details of the computations see the Methods section.
We began by calculating the phonon dispersion for the high symmetry, cubic perovskite reference structure at a lattice constant of 3.95 Å (chosen, somewhat arbitrarily, for this first step because it is the average of the experimental BaTiO$_3$ and EuTiO$_3$ lattice constants), with the magnetic spins aligned ferromagnetically; our results are shown in Fig. \[th\_phonons\], plotted along the high symmetry lines of the Brillouin zone. Importantly we find a polar $\Gamma$-point instability with an imaginary frequency of 103$i$ cm$^{-1}$ which is dominated by relative oxygen – Ti/Eu displacements (the eigenmode displacements for Eu, Ba, Ti, O$_{\parallel}$ and O$_{\perp}$ are 0.234, -0.059, 0.394, -0.360 and -0.303 respectively); such polar instabilities are indicative of a tendency to ferroelectricity. The zone boundary rotational instabilities that often occur in perovskite oxides and lead to non-polar, antiferrodistortive ground states are notably absent (in fact the flat bands at $\sim$60 cm$^{-1}$ are stable rotational vibrations). Interestingly we find that the Eu ions have a significant amplitude in the soft-mode eigenvector, in contrast to the Ba ions both here and in the parent BaTiO$_3$.
Next we performed a structural optimization of both the unit cell shape and the ionic positions of our Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ alloy with the total volume constrained to that of the ideal cubic structure studied above (3.95$^3$ Å$^3$ per formula unit). Our main finding is that the Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ alloy is polar with large relative displacements of oxygen and both Ti and Eu relative to the high symmetry reference structure. Using the Berry phase method we obtain a ferroelectric polarization value of $P = 23$ $\mu$C/cm$^2$. Our calculated ground state is orthorhombic with the polarization oriented along a \[011\] direction and lattice parameters $a=3.94$ Å, $b=5.60$ Å and $c=5.59$ Å. As expected from our analysis of the soft mode, the calculated ground state is characterized by large oxygen – Ti/Eu displacements, and the absence of rotations or tilts of the oxygen octahedra. Importantly, the large Eu amplitude in the soft mode manifests as a large off-centering of the Eu from the center of its oxygen coordination polyhedron in the ground state structure. The origin of the large Eu displacement lies in its small ionic radius compared with that of divalent Ba$^{2+}$: The large coordination cage around the Eu ion which is imposed by the large lattice constant of the alloy results in under-bonding of the Eu that can be relieved by off-centering. Indeed, we find that in calculations for fully relaxed single phase EuTiO$_3$, the oxygen octahedra tilt to reduce the volume of the A site in a similar manner to those known to occur in SrTiO$_3$, in which the A cation size is almost identical. This Eu off-centering is desirable for the EDM experiment because the change in local environment at the magnetic ions on ferroelectric switching determines the sensitivity of the EDM measurement.
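The scale of this polarization can be rationalized with a simple point-charge-style estimate, $P \approx (e/V_{\rm cell})\sum_i Z^*_i u_i$, summed over one formula unit. The minimal sketch below uses illustrative Born effective charges and displacements of the magnitude typical for titanate perovskites; they are placeholder values, not the ones computed in this work.

```python
# Order-of-magnitude estimate of the polarization via the point-charge-like
# sum P = (e / V_cell) * sum_i Z*_i u_i over one 5-atom formula unit.
# Born charges and displacements are ILLUSTRATIVE placeholders only.
E_CHARGE = 1.602176634e-19        # elementary charge, C
V_CELL = 61.63e-24                # cell volume, cm^3 (3.95^3 Angstrom^3 / f.u.)

# ion: (Born effective charge in |e|, displacement along the polar axis in cm)
ions = {
    "Eu":    (2.6,  0.05e-8),
    "Ti":    (7.0,  0.06e-8),
    "O_ax":  (-5.5, -0.06e-8),
    "O_eq1": (-2.0, -0.03e-8),
    "O_eq2": (-2.0, -0.03e-8),
}

P = E_CHARGE / V_CELL * sum(z * u for z, u in ions.values())  # C/cm^2
print(f"P ~ {P * 1e6:.0f} uC/cm^2")
```

With cation off-centerings of a few hundredths of an Ångström this yields a few tens of $\mu$C/cm$^2$, the same scale as the Berry-phase result.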
V (Å$^3$/f.u.)        P ($\mu$C/cm$^2$)
--------------------- -------------------
61.63 (constrained)   23
62.30 (experimental)  28
64.63 (relaxed)       44

: Calculated ferroelectric polarizations, P, of Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ at three different volumes (Å$^3$ per formula unit).[]{data-label="PversusV"}
We note that the magnitude of the polarization is strongly dependent on the volume used in the calculation (Table \[PversusV\]). At the experimental volume (reported in the next section), which is only slightly larger than our constrained volume of $3.95^3$ Å$^3$, we obtain a polarization of 28 $\mu$C/cm$^2$. At full relaxation, where we find a larger volume close to that of BaTiO$_3$, we obtain a polarization of 44 $\mu$C/cm$^2$, almost certainly a substantial over-estimate. This volume dependence suggests that the use of pressure to reduce the lattice parameters and suppress the ferroelectric polarization could be a viable tool for reducing the coercivity at low temperatures. Indeed, our computations show that at a pressure of 2.8 GPa applied at the experimental volume, the theoretical structure is cubic, with both the polarization and the coercive field reduced to zero.
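The volume sensitivity in Table \[PversusV\] can be made semi-quantitative with a crude straight-line fit through the three computed points; this is only a local extrapolation for illustration, not a substitute for the full calculation.

```python
import numpy as np

# The three (volume, polarization) points from Table [PversusV];
# volumes in Angstrom^3 per formula unit, P in uC/cm^2.
V = np.array([61.63, 62.30, 64.63])
P = np.array([23.0, 28.0, 44.0])

# Least-squares linear fit P(V) ~ a*V + b as a crude local model of the
# volume sensitivity; np.polyfit returns the highest-order coefficient first.
a, b = np.polyfit(V, P, 1)
print(f"dP/dV ~ {a:.1f} uC/cm^2 per Angstrom^3")
print(f"P extrapolates to zero near V ~ {-b / a:.1f} Angstrom^3 per f.u.")
```

The fit extrapolates P to zero near 58 Å$^3$ per formula unit (a lattice constant of about 3.88 Å), consistent with the finding that a modest pressure suffices to suppress the polarization.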
Finally, to investigate the likelihood of magnetic ordering, we calculated the relative energies of the ferromagnetic state discussed above and of two antiferromagnetic arrangements: planes of ferromagnetically ordered spins coupled antiferromagnetically along either the pseudo-cubic $z$ axis or the $x$ or $y$ axes. (Note that these are degenerate in the high-symmetry cubic structure). For each magnetic arrangement we re-relaxed the lattice parameters and atomic positions. As expected for the highly localized Eu $4f$ electrons on their diluted sublattice, the energy differences between the different configurations are small – around 1 meV per 40 atom supercell – suggesting an absence of magnetic ordering down to low temperatures. While our calculations find the ferromagnetic state to be the lowest energy, this is likely a consequence of our A-site ordering and should not lead us to anticipate ferromagnetism at low temperature (Note that, after completing our study, we found a report of an early effort to synthesize (Eu,Ba)TiO$_3$[@Janes/Bodnar/Taylor:1978] in which a large magnetization, attributed to A-site ordering and ferromagnetism, was reported. A-site ordering is now known to be difficult to achieve in perovskite-structure oxides, however, and we find no evidence of it in our samples. Moreover the earlier work determined a tetragonal crystal structure in contrast to our refined orthorhombic structure.)
In summary, our predicted properties of the (Eu,Ba)TiO$_3$ alloy – large ferroelectric polarization, reducible with pressure, with large Eu displacements, and strongly suppressed magnetic ordering – meet the criteria for the electron electric dipole moment search and motivate the synthesis and characterization of the compound, described next.
Synthesis
=========
Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ was synthesized by solid-state reaction using mechanochemical activation before calcination. For details see the Methods section. The density of the sintered pellets was 86-88% of the theoretical density. X-ray diffraction at room temperature revealed the cubic perovskite $Pm\bar{3}m$ structure with a=3.9642(1)Å. At 100K we obtain an orthorhombic ground state with space group $Amm2$, in agreement with the GGA$+U$ prediction, and lattice parameters 3.9563(1), 5.6069(2) and 5.5998(2) Å.
Characterization
================
The final step in our study is the characterization of the samples, to check that the measured properties are indeed the same as those that we predicted and desired. Figure \[Fig3\] shows the temperature dependence of the complex permittivity between 1Hz and 1MHz, measured using an impedance analyzer ALPHA-AN (Novocontrol). The low-frequency data below 100kHz are affected above 150[$\,\mbox{K}$]{} by a small defect-induced conductivity and related Maxwell-Wagner polarization; the high-frequency data clearly show a maximum in the permittivity near $T_c$=213[$\,\mbox{K}$]{} indicating the ferroelectric phase transition. Two regions of dielectric dispersion – near 100[$\,\mbox{K}$]{} and below 75[$\,\mbox{K}$]{} – are seen in tan$\delta(T)$; these could originate from oxygen defects or from ferroelectric domain wall motion.
Measurement of the polarization was adversely affected by the sample conductivity above 150[$\,\mbox{K}$]{}, but at lower temperatures good quality ferroelectric hysteresis loops were obtained (Fig. \[Fig3\], inset). At 135[$\,\mbox{K}$]{} we obtain a saturation polarization of $\sim$8 $\mu$C/cm$^2$. The deviation from the predicted value could be the result of incomplete saturation as well as the strong volume dependence of the polarization combined with the well-known inaccuracies in GGA$+U$ volumes. As expected, at lower temperatures the coercive field strongly increases, and only partial polarization switching was possible even with an applied electric field of 18kV/cm (at higher electric field dielectric breakdown was imminent). The partial switching is responsible for the apparent decrease in saturation polarization below 40K.
Time-domain THz transmission and infrared reflectivity spectra (not shown here) reveal a softening of the polar phonon from $\sim$40[$\,\mbox{cm}^{-1}$]{} at 300[$\,\mbox{K}$]{} to $\sim$15[$\,\mbox{cm}^{-1}$]{} at $T_c$, and then its splitting into two components in the ferroelectric phase. Both components harden on cooling below $T_c$, with the lower frequency component remaining below 20[$\,\mbox{cm}^{-1}$]{} down to 10[$\,\mbox{K}$]{}, and the higher-frequency branch saturating near 70[$\,\mbox{cm}^{-1}$]{} at 10[$\,\mbox{K}$]{}. This behavior is reminiscent of the soft-mode behavior in BaTiO$_{3}$[@Hlinka:2008]. However, when we extract the contribution to the static permittivity that comes from the polar phonon, we find that it is considerably smaller than our measured value (Fig. \[Fig3\]) indicating an additional contribution to the dielectric relaxation. Our observations suggest that the phase transition is primarily soft-mode driven, but also exhibits some order-disorder character.
Finally, we measured the magnetic susceptibility $\chi$ at various static magnetic fields as a function of temperature down to 0.4K. (For details see the Methods section.) Our results are shown in Fig. \[Fig4\]. $\chi(T)$ peaks at $T\sim$1.9K indicating an absence of magnetic ordering above this temperature. The $\chi(T)$ data up to 300K show Curie-Weiss behavior $\chi(T)=\frac{C}{T+\theta}$ with $\theta$=-1.63K and $C = 0.017$ emuK/(gOe). The peak in susceptibility at 1.9K is frequency independent and not influenced by zero field heating measurements after field cooling, confirming antiferromagnetic order below $T_N = 1.9$K. As in pure EuTiO$_3$, the $\chi(T)$ peak is suppressed by a static external magnetic field, indicating stabilization of the paramagnetic phase [@Katsufuji/Takagi:2001]. Magnetization curves (Fig. \[Fig4\] inset) show saturation above $2\times10^4$ Oe at temperatures below $T_{N}$ and slower saturation at 5K. No open magnetic hysteresis loops were observed.
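The Curie-Weiss parameters quoted above follow from a straight-line fit of the inverse susceptibility: inverting $\chi(T)=C/(T+\theta)$ gives $1/\chi = T/C + \theta/C$. The sketch below demonstrates the procedure on noise-free synthetic data generated from the reported $C$ and $\theta$; it is a schematic of the analysis, not the measured data set.

```python
import numpy as np

# Curie-Weiss analysis: chi(T) = C / (T + theta).  A linear fit of
# 1/chi versus T yields slope = 1/C and intercept = theta/C.
C_true, theta_true = 0.017, -1.63        # values reported in the text
T = np.linspace(50.0, 300.0, 26)         # paramagnetic regime only
chi = C_true / (T + theta_true)          # synthetic, noise-free data

slope, intercept = np.polyfit(T, 1.0 / chi, 1)
C_fit = 1.0 / slope
theta_fit = intercept * C_fit
print(f"C = {C_fit:.4f} emu K/(g Oe), theta = {theta_fit:.2f} K")
```

On real data one restricts the fit to the high-temperature paramagnetic regime, well above the 1.9 K susceptibility peak.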
In summary, we have designed a new material – Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ – with the properties required to enable a measurement of the EDM to a higher accuracy than can currently be realized. Subsequent synthesis of Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ ceramics confirmed their desirable ferroelectric polarization and absence of magnetic ordering above 1.9K. The search for the permanent dipole moment of the electron using Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ is now underway. Initial measurements have already achieved an EDM upper limit of 5 $\times 10^{-23}$ e.cm, which is within a factor of 10 of the current record for a solid-state-based EDM search [@Heidenreich2005]. We are currently studying a number of systematic effects that may mask the EDM signal. The primary error originates from ferroelectric hysteresis-induced heating of the samples during polarization reversal. This heating gives rise to a change in magnetic susceptibility which, in a non-zero external magnetic field, leads to an undesirable sample magnetization response. We are working to control the absolute magnetic field at the location of the samples to the 0.1 $\mu$G level. Our projected sensitivity of 10$^{-28}$ e.cm should then be achievable.
Acknowledgments
===============
This work was supported by the US National Science Foundation under award number DMR-0940420 (NAS), by Yale University, by the Czech Science Foundation (Project Nos. 202/09/0682 and AVOZ10100520) and the Young Investigators Group Programme of Helmholtz Association, Germany, contract VH-NG-409. We thank O. Pacherova, R. Krupkova and G. Urbanova for technical assistance and Oleg Sushkov for invaluable discussions.
Author contributions
=====================
SKL supervised the EDM measurement effort at Yale. AOS and SE performed the analysis and made preliminary measurements, showing that these materials could be useful in an EDM experiment. ML and NAS selected (Eu,Ba)TiO$_3$ as the candidate material according to the experimental requirements and supervised the ab-initio calculations. KZR performed the ab-initio calculations. ML, NAS and KZR analysed the ab-initio results and wrote the theoretical component of the paper. The ceramics were prepared by PV. The crystal structure was determined by KK and FL. Dielectric measurements were performed by MS. JP investigated the magnetic properties of the ceramics. VG performed the infrared reflectivity studies. DN investigated the THz spectra. SK coordinated all experimental studies and wrote the synthesis and characterization parts of the manuscript. NAS coordinated the preparation of the manuscript.
Methods
=======
Computational details
---------------------
We performed first-principles density-functional calculations within the spin-polarized generalized gradient approximation (GGA) [@PBE:1996]. The strong on-site correlations of the Eu $4f$ electrons were treated using the GGA+$U$ method [@Anisimov/Aryasetiawan/Liechtenstein:1997] with the double counting treated within the Dudarev approach [@Dudarev_et_al:1998] and parameters $U=5.7$ eV and $J=1.0$ eV. For structural relaxation and lattice dynamics we used the Vienna *Ab Initio* Simulation Package (VASP) [@VASP_Kresse:1996] with the default projector augmented-wave (PAW) potentials [@Bloechl:1994] (valence-electron configurations Eu: $5s^2 5p^6 4f^{7}6s^{2}$, Ba: $5s^{2}5p^{6}6s^{2}$, Ti: $3s^{2}3p^{6}3d^{2}4s^{2}$ and O: $2s^{2}2p^{4}$.) Spin-orbit interaction was not included.
The 50/50 (Eu,Ba)TiO$_3$ alloy was represented by an ordered A-site structure with the Eu and Ba ions alternating in a checkerboard pattern (Fig. \[th\_phonons\], inset). Structural relaxations and total energy calculations were performed for a 40-atom supercell (consisting of two 5-atom perovskite unit cells in each cartesian direction) using a $4\times4\times4$ $\Gamma$-centered $k$-point mesh and a plane-wave cutoff of 500 eV. Ferroelectric polarizations and Born effective charges were calculated using the Berry phase method [@King-Smith:1993]. Lattice instabilities were investigated in the frozen-phonon scheme [@Kunc:1982; @Alfe:2009] for an 80 atom supercell using a $\Gamma$-centered $2\times2\times2$ $k$-point mesh and 0.0056 Å atomic displacements to extract the Hellman-Feynman forces.
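The frozen-phonon scheme amounts to extracting force constants from Hellmann-Feynman forces at small frozen displacements. The toy sketch below illustrates the central-difference step on an analytic stand-in for the DFT forces (the potential parameters and mass are arbitrary placeholders); a negative effective force constant would correspond to an imaginary (unstable) phonon frequency such as the 103$i$ cm$^{-1}$ soft mode above.

```python
import math

# Toy frozen-phonon calculation: extract a force constant from forces at
# small frozen displacements (central differences), then the harmonic
# frequency omega = sqrt(k_eff / m).  force() stands in for the DFT
# Hellmann-Feynman forces; it is an analytic placeholder, not DFT output.
def force(u):
    # anharmonic toy potential V(u) = (k/2) u^2 + (g/4) u^4, F = -dV/du
    k, g = 5.0, 0.3
    return -(k * u + g * u**3)

delta = 0.0056                                             # frozen displacement, as in the text
k_eff = -(force(delta) - force(-delta)) / (2.0 * delta)    # -dF/du at u = 0
m = 2.0                                                    # placeholder mass
omega = math.sqrt(k_eff / m) if k_eff > 0 else float("nan")
print(f"k_eff = {k_eff:.4f}, omega = {omega:.4f}")
```

In the real calculation the same finite-difference forces populate the full dynamical matrix, whose eigenvalues give the squared phonon frequencies at each wavevector.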
Synthesis
---------
Eu$_2$O$_3$, TiO$_2$ (anatase) and BaTiO$_3$ powders (all from Sigma-Aldrich) were mixed in stoichiometric ratio and then milled intensively in a Fritsch Pulverisette 7 planetary ball micro mill for 120 min in a dry environment, followed by 20 min in suspension with n-heptane. ZrO$_2$ grinding bowls (25 ml) and balls (12 mm diameter, acceleration 14 g) were used. The suspension was dried under an IR lamp and the dried powder was pressed in a uniaxial press (330 MPa, 3 min) into 13 mm diameter pellets. The pellets were calcined in a pure H$_2$ atmosphere at 1200[$\,{}^\circ$C]{} for 24 hr (to reduce Eu$^{3+}$ to Eu$^{2+}$), then milled and pressed by the same procedure as above and sintered at 1300[$\,{}^\circ$C]{} for 24 hr in an Ar+10%H$_2$ atmosphere. Note that pure H$_2$ cannot be used for sintering without adversely increasing the conductivity of the sample.
Characterization
----------------
Magnetic susceptibility was measured using a Quantum Design PPMS9 with a $^3$He insert equipped with a home-made induction coil that allows measurement of the ac magnetic susceptibility $\chi$ from 0.1 to 214 Hz.
The Doom Generation (1995)
October 25, 1995
FILM REVIEW;Gory Kitsch in a Parody of Teen-Age Road Movies
By JANET MASLIN
Production notes for Gregg Araki's "Doom Generation" say it is "Araki's first big-budget feature and marks the end of his film adolescence." Well, not exactly. After a promising debut with "The Living End" followed by the angrier, more marginalized "Totally F***ed Up," Mr. Araki is still sounding a note of self-congratulatory teen-age rebellion in a film gruesome and obvious enough to make "Natural Born Killers" look like a model of restraint.
It's not even much of a change to find "The Doom Generation" billed as "a heterosexual movie," since it shares the effective homoerotic energy of his earlier work. That this film includes a teen-age girl, Amy Blue (Rose McGowan), as part of its sexual menage only means one especially clear target of contempt ("Don't get your uterus all tied in a knot" is one of the more printable things anyone says to her) in a film overflowing with it. Amy's insolence and Anna Karina hairdo (like Uma Thurman's in "Pulp Fiction") may offer a touch of Godard. But this film's satire of teen-age-wasteland cinema is so coarsely exaggerated that any homage is beside the point.
Using outlaw characters named Red, White and Blue to condemn all aspects of unhip America, Mr. Araki indulges in such broad parody that thinking it clumsy means failure to get the joke. Though visibly more polished than his earlier films, "The Doom Generation" clings to a midnight movie sensibility founded on deliberate kitsch. So Amy is a one-note, rude, sulky heroine, saying things like "Life is lonely, boring and dumb" while the two men she's sleeping with enjoy an obvious attraction to each other. Not content to leave this as subtext, Mr. Araki throws in the occasional bumper sticker: "Ditch the bitch. Make the switch."
Voluptuous Xavier Red (Johnathon Schaech) is way ahead of charmingly dim Jordan White (James Duval) in getting the hint about this, but it doesn't matter: "The Doom Generation" leads them both to a gory demonstration of America's intolerance toward sexual nonconformists. Obscured by strobe lights and boosted by the alternative-rock soundtrack that's sure to help sell the movie, this already notorious castration sequence is one of several gross-out epiphanies here. Others include the severing of a head that still talks, and even vomits, after it is removed from a vein-spurting body, and a blink-of-the-eye cameo by Heidi Fleiss.
The genuine enthusiasm Mr. Araki brings to this film's bedroom scenes, with their whimsical sets and jokey porn ambiance, is matched by the occasionally workable black humor in his screenplay. ("You murdered two people tonight. Doesn't that faze you at all?" "Yeah, I'm bummed. To the max.") But sledgehammer direction, heavy irony and the easiest imaginable targets hardly show talent off to good advantage.
THE DOOM GENERATION
Written, edited and directed by Gregg Araki; director of photography, Jim Fealy; music by the Jesus and Mary Chain, Nine Inch Nails, Slowdive, Curve, Meat Beat Manifesto, Pizzicato Five, Cocteau Twins and others; production designer, Therese Deprez; produced by Andrea Sperling, Mr. Araki and Why Not Productions (France); released by Trimark Pictures. At the Angelika Film Center, Mercer and Houston Streets. Running time: 90 minutes. This film is not rated.
// This file is part of Eigen, a lightweight C++ template library
// for linear algebra.
//
// Copyright (C) 2012 Gael Guennebaud <gael.guennebaud@inria.fr>
//
// This Source Code Form is subject to the terms of the Mozilla
// Public License v. 2.0. If a copy of the MPL was not distributed
// with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
#ifndef EIGEN_REF_H
#define EIGEN_REF_H
namespace Eigen {
template<typename Derived> class RefBase;
template<typename PlainObjectType, int Options = 0,
typename StrideType = typename internal::conditional<PlainObjectType::IsVectorAtCompileTime,InnerStride<1>,OuterStride<> >::type > class Ref;
/** \class Ref
* \ingroup Core_Module
*
* \brief A matrix or vector expression mapping an existing expression
*
* \tparam PlainObjectType the equivalent matrix type of the mapped data
* \tparam Options specifies whether the pointer is \c #Aligned, or \c #Unaligned.
* The default is \c #Unaligned.
* \tparam StrideType optionally specifies strides. By default, Ref implies a contiguous storage along the inner dimension (inner stride==1),
* but accept a variable outer stride (leading dimension).
* This can be overridden by specifying strides.
* The type passed here must be a specialization of the Stride template, see examples below.
*
* This class makes it possible to write non-template functions taking Eigen objects as parameters while limiting the number of copies.
* A Ref<> object can represent either a const expression or a l-value:
* \code
* // in-out argument:
* void foo1(Ref<VectorXf> x);
*
* // read-only const argument:
* void foo2(const Ref<const VectorXf>& x);
* \endcode
*
* In the in-out case, the input argument must satisfy the constraints of the actual Ref<> type, otherwise a compilation issue will be triggered.
* By default, a Ref<VectorXf> can reference any dense vector expression of float having a contiguous memory layout.
* Likewise, a Ref<MatrixXf> can reference any column-major dense matrix expression of float whose column's elements are contiguously stored with
* the possibility to have a constant space in between each column, i.e.: the inner stride must be equal to 1, but the outer stride (or leading dimension)
* can be greater than the number of rows.
*
* In the const case, if the input expression does not match the above requirement, then it is evaluated into a temporary before being passed to the function.
* Here are some examples:
* \code
* MatrixXf A;
* VectorXf a;
* foo1(a.head(3));             // OK
* foo1(A.col(3));              // OK
* foo1(A.row(3));              // compilation error because here innerstride!=1
* foo2(A.row(3));              // The row is copied into a contiguous temporary
* foo2(2*a);                   // The expression is evaluated into a temporary
* foo2(A.col(3).segment(2,4)); // No temporary
* \endcode
*
* The range of inputs that can be referenced without temporary can be enlarged using the last two template parameters.
* Here is an example accepting an innerstride!=1:
* \code
* // in-out argument:
* void foo3(Ref<VectorXf,0,InnerStride<> > x);
* foo3(A.row(3)); // OK
* \endcode
* The downside here is that the function foo3 might be significantly slower than foo1 because it won't be able to exploit vectorization, and will involve more
* expensive address computations even if the input is contiguously stored in memory. To overcome this issue, one can provide overloads internally calling a
* template function, e.g.:
* \code
* // in the .h:
* void foo(const Ref<MatrixXf>& A);
* void foo(const Ref<MatrixXf,0,Stride<> >& A);
*
* // in the .cpp:
* template<typename TypeOfA> void foo_impl(const TypeOfA& A) {
* ... // crazy code goes here
* }
* void foo(const Ref<MatrixXf>& A) { foo_impl(A); }
* void foo(const Ref<MatrixXf,0,Stride<> >& A) { foo_impl(A); }
* \endcode
*
*
* \sa PlainObjectBase::Map(), \ref TopicStorageOrders
*/
namespace internal {
template<typename _PlainObjectType, int _Options, typename _StrideType>
struct traits<Ref<_PlainObjectType, _Options, _StrideType> >
: public traits<Map<_PlainObjectType, _Options, _StrideType> >
{
typedef _PlainObjectType PlainObjectType;
typedef _StrideType StrideType;
enum {
Options = _Options,
Flags = traits<Map<_PlainObjectType, _Options, _StrideType> >::Flags | NestByRefBit
};
template<typename Derived> struct match {
enum {
HasDirectAccess = internal::has_direct_access<Derived>::ret,
StorageOrderMatch = PlainObjectType::IsVectorAtCompileTime || ((PlainObjectType::Flags&RowMajorBit)==(Derived::Flags&RowMajorBit)),
InnerStrideMatch = int(StrideType::InnerStrideAtCompileTime)==int(Dynamic)
|| int(StrideType::InnerStrideAtCompileTime)==int(Derived::InnerStrideAtCompileTime)
|| (int(StrideType::InnerStrideAtCompileTime)==0 && int(Derived::InnerStrideAtCompileTime)==1),
OuterStrideMatch = Derived::IsVectorAtCompileTime
|| int(StrideType::OuterStrideAtCompileTime)==int(Dynamic) || int(StrideType::OuterStrideAtCompileTime)==int(Derived::OuterStrideAtCompileTime),
AlignmentMatch = (_Options!=Aligned) || ((PlainObjectType::Flags&AlignedBit)==0) || ((traits<Derived>::Flags&AlignedBit)==AlignedBit),
MatchAtCompileTime = HasDirectAccess && StorageOrderMatch && InnerStrideMatch && OuterStrideMatch && AlignmentMatch
};
typedef typename internal::conditional<MatchAtCompileTime,internal::true_type,internal::false_type>::type type;
};
};
template<typename Derived>
struct traits<RefBase<Derived> > : public traits<Derived> {};
}
template<typename Derived> class RefBase
: public MapBase<Derived>
{
typedef typename internal::traits<Derived>::PlainObjectType PlainObjectType;
typedef typename internal::traits<Derived>::StrideType StrideType;
public:
typedef MapBase<Derived> Base;
EIGEN_DENSE_PUBLIC_INTERFACE(RefBase)
inline Index innerStride() const
{
return StrideType::InnerStrideAtCompileTime != 0 ? m_stride.inner() : 1;
}
inline Index outerStride() const
{
return StrideType::OuterStrideAtCompileTime != 0 ? m_stride.outer()
: IsVectorAtCompileTime ? this->size()
: int(Flags)&RowMajorBit ? this->cols()
: this->rows();
}
RefBase()
: Base(0,RowsAtCompileTime==Dynamic?0:RowsAtCompileTime,ColsAtCompileTime==Dynamic?0:ColsAtCompileTime),
// Stride<> does not allow default ctor for Dynamic strides, so let' initialize it with dummy values:
m_stride(StrideType::OuterStrideAtCompileTime==Dynamic?0:StrideType::OuterStrideAtCompileTime,
StrideType::InnerStrideAtCompileTime==Dynamic?0:StrideType::InnerStrideAtCompileTime)
{}
EIGEN_INHERIT_ASSIGNMENT_OPERATORS(RefBase)
protected:
typedef Stride<StrideType::OuterStrideAtCompileTime,StrideType::InnerStrideAtCompileTime> StrideBase;
template<typename Expression>
void construct(Expression& expr)
{
if(PlainObjectType::RowsAtCompileTime==1)
{
eigen_assert(expr.rows()==1 || expr.cols()==1);
::new (static_cast<Base*>(this)) Base(expr.data(), 1, expr.size());
}
else if(PlainObjectType::ColsAtCompileTime==1)
{
eigen_assert(expr.rows()==1 || expr.cols()==1);
::new (static_cast<Base*>(this)) Base(expr.data(), expr.size(), 1);
}
else
::new (static_cast<Base*>(this)) Base(expr.data(), expr.rows(), expr.cols());
::new (&m_stride) StrideBase(StrideType::OuterStrideAtCompileTime==0?0:expr.outerStride(),
StrideType::InnerStrideAtCompileTime==0?0:expr.innerStride());
}
StrideBase m_stride;
};
template<typename PlainObjectType, int Options, typename StrideType> class Ref
: public RefBase<Ref<PlainObjectType, Options, StrideType> >
{
typedef internal::traits<Ref> Traits;
public:
typedef RefBase<Ref> Base;
EIGEN_DENSE_PUBLIC_INTERFACE(Ref)
#ifndef EIGEN_PARSED_BY_DOXYGEN
template<typename Derived>
inline Ref(PlainObjectBase<Derived>& expr,
typename internal::enable_if<bool(Traits::template match<Derived>::MatchAtCompileTime),Derived>::type* = 0)
{
Base::construct(expr);
}
template<typename Derived>
inline Ref(const DenseBase<Derived>& expr,
typename internal::enable_if<bool(internal::is_lvalue<Derived>::value&&bool(Traits::template match<Derived>::MatchAtCompileTime)),Derived>::type* = 0,
int = Derived::ThisConstantIsPrivateInPlainObjectBase)
#else
template<typename Derived>
inline Ref(DenseBase<Derived>& expr)
#endif
{
Base::construct(expr.const_cast_derived());
}
EIGEN_INHERIT_ASSIGNMENT_OPERATORS(Ref)
};
// this is the const ref version
template<typename TPlainObjectType, int Options, typename StrideType> class Ref<const TPlainObjectType, Options, StrideType>
: public RefBase<Ref<const TPlainObjectType, Options, StrideType> >
{
typedef internal::traits<Ref> Traits;
public:
typedef RefBase<Ref> Base;
EIGEN_DENSE_PUBLIC_INTERFACE(Ref)
template<typename Derived>
inline Ref(const DenseBase<Derived>& expr)
{
// std::cout << match_helper<Derived>::HasDirectAccess << "," << match_helper<Derived>::OuterStrideMatch << "," << match_helper<Derived>::InnerStrideMatch << "\n";
// std::cout << int(StrideType::OuterStrideAtCompileTime) << " - " << int(Derived::OuterStrideAtCompileTime) << "\n";
// std::cout << int(StrideType::InnerStrideAtCompileTime) << " - " << int(Derived::InnerStrideAtCompileTime) << "\n";
construct(expr.derived(), typename Traits::template match<Derived>::type());
}
protected:
template<typename Expression>
void construct(const Expression& expr,internal::true_type)
{
Base::construct(expr);
}
template<typename Expression>
void construct(const Expression& expr, internal::false_type)
{
m_object.lazyAssign(expr);
Base::construct(m_object);
}
protected:
TPlainObjectType m_object;
};
} // end namespace Eigen
#endif // EIGEN_REF_H
How do I edit my profile?
You have a profile on this site; it was created for you on registration. Having a profile means other users can recognize you when you leave a reply or like a comment. Please keep it up to date and make sure all the fields are filled in.
To edit your profile simply click on your name in the top right corner.
Fill in any missing fields and make sure to click ‘Save Changes’ when you are finished.
About Grand Slam Fishing Charters
As a family owned business we know how important it is that your trip becomes the best memory of your vacation. We are proud of our islands, our waters and our crew, and we are eager to show you the best possible time during your stay. We cannot guarantee fish every time, but we can guarantee you a great time! “The biggest perk of our job is seeing so many of our customers become close friends.”
A Great Way To Make New Friends!
Our dockside parties are a great way to make new friends! Everyone is welcome!
Andrea runs the whole operation, from discussing your initial needs by phone or email through to ensuring you have sufficient potato chips. Andrea has worked as a concierge for many international resorts and fully understands the high expectations of international visitors.
“Life’s A Game But Fishing Is Serious!”
Unlike many tour operators, our crew are highly valued and have been with us since day 1. Each has their own personality and sense of humour, and all understand the importance of making your day perfect. For us the saying is true: “Life’s a game but fishing is serious!”
TRIP ADVISOR
Plan Your Trip!
AJ and Earl were excellent. My son and I did a half day deep sea trip and though the fish weren’t too cooperative, they did everything to try to get something to bite. Very knowledgeable about the waters and my son was able to land a nice barracuda. The next day my wife, daughter, son […]
When we arrived the crew made us feel right at home. They made us feel comfortable and answered all questions. The crew worked hard all day to put us on fish. We were successful in landing a nice size Wahoo even though the weather did not cooperate the entire day was enjoyable. I highly recommend […]
/**
* ScriptDev2 is an extension for mangos providing enhanced features for
* area triggers, creatures, game objects, instances, items, and spells beyond
* the default database scripting in mangos.
*
* Copyright (C) 2006-2013 ScriptDev2 <http://www.scriptdev2.com/>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* World of Warcraft, and all World of Warcraft or Warcraft art, images,
* and lore are copyrighted by Blizzard Entertainment, Inc.
*/
/**
* ScriptData
* SDName: bug_trio
* SD%Complete: 75
* SDComment: Summon Player spell NYI; Poison Cloud damage spell NYI; Timers need adjustments
* SDCategory: Temple of Ahn'Qiraj
* EndScriptData
*/
#include "precompiled.h"
#include "temple_of_ahnqiraj.h"
enum
{
// kri
SPELL_CLEAVE = 26350,
SPELL_TOXIC_VOLLEY = 25812,
SPELL_SUMMON_CLOUD = 26590, // summons 15933
// vem
SPELL_CHARGE = 26561,
SPELL_VENGEANCE = 25790,
SPELL_KNOCKBACK = 26027,
// yauj
SPELL_HEAL = 25807,
SPELL_FEAR = 26580,
NPC_YAUJ_BROOD = 15621
};
struct MANGOS_DLL_DECL boss_kriAI : public ScriptedAI
{
boss_kriAI(Creature* pCreature) : ScriptedAI(pCreature)
{
m_pInstance = (ScriptedInstance*)pCreature->GetInstanceData();
Reset();
}
ScriptedInstance* m_pInstance;
uint32 m_uiCleaveTimer;
uint32 m_uiToxicVolleyTimer;
void Reset() override
{
m_uiCleaveTimer = urand(4000, 8000);
m_uiToxicVolleyTimer = urand(6000, 12000);
}
void JustDied(Unit* /*pKiller*/) override
{
// poison cloud on death
DoCastSpellIfCan(m_creature, SPELL_SUMMON_CLOUD, CAST_TRIGGERED);
if (!m_pInstance)
{
return;
}
// If the other 2 bugs are still alive, make unlootable
if (m_pInstance->GetData(TYPE_BUG_TRIO) != DONE)
{
m_creature->RemoveFlag(UNIT_DYNAMIC_FLAGS, UNIT_DYNFLAG_LOOTABLE);
m_pInstance->SetData(TYPE_BUG_TRIO, SPECIAL);
}
}
void JustReachedHome() override
{
if (m_pInstance)
{
m_pInstance->SetData(TYPE_BUG_TRIO, FAIL);
}
}
void UpdateAI(const uint32 uiDiff) override
{
// Return since we have no target
if (!m_creature->SelectHostileTarget() || !m_creature->getVictim())
{
return;
}
// Cleave_Timer
if (m_uiCleaveTimer < uiDiff)
{
if (DoCastSpellIfCan(m_creature->getVictim(), SPELL_CLEAVE) == CAST_OK)
{
m_uiCleaveTimer = urand(5000, 12000);
}
}
else
{ m_uiCleaveTimer -= uiDiff; }
// ToxicVolley_Timer
if (m_uiToxicVolleyTimer < uiDiff)
{
if (DoCastSpellIfCan(m_creature, SPELL_TOXIC_VOLLEY) == CAST_OK)
{
m_uiToxicVolleyTimer = urand(10000, 15000);
}
}
else
{ m_uiToxicVolleyTimer -= uiDiff; }
DoMeleeAttackIfReady();
}
};
struct MANGOS_DLL_DECL boss_vemAI : public ScriptedAI
{
boss_vemAI(Creature* pCreature) : ScriptedAI(pCreature)
{
m_pInstance = (ScriptedInstance*)pCreature->GetInstanceData();
Reset();
}
ScriptedInstance* m_pInstance;
uint32 m_uiChargeTimer;
uint32 m_uiKnockBackTimer;
void Reset() override
{
m_uiChargeTimer = urand(15000, 27000);
m_uiKnockBackTimer = urand(8000, 20000);
}
void JustDied(Unit* /*pKiller*/) override
{
// Enrage the other bugs
DoCastSpellIfCan(m_creature, SPELL_VENGEANCE, CAST_TRIGGERED);
if (!m_pInstance)
{
return;
}
// If the other 2 bugs are still alive, make unlootable
if (m_pInstance->GetData(TYPE_BUG_TRIO) != DONE)
{
m_creature->RemoveFlag(UNIT_DYNAMIC_FLAGS, UNIT_DYNFLAG_LOOTABLE);
m_pInstance->SetData(TYPE_BUG_TRIO, SPECIAL);
}
}
void JustReachedHome() override
{
if (m_pInstance)
{
m_pInstance->SetData(TYPE_BUG_TRIO, FAIL);
}
}
void UpdateAI(const uint32 uiDiff) override
{
// Return since we have no target
if (!m_creature->SelectHostileTarget() || !m_creature->getVictim())
{
return;
}
// Charge_Timer
if (m_uiChargeTimer < uiDiff)
{
if (Unit* pTarget = m_creature->SelectAttackingTarget(ATTACKING_TARGET_RANDOM, 0))
{
if (DoCastSpellIfCan(pTarget, SPELL_CHARGE) == CAST_OK)
{
m_uiChargeTimer = urand(8000, 16000);
}
}
}
else
{ m_uiChargeTimer -= uiDiff; }
// KnockBack_Timer
if (m_uiKnockBackTimer < uiDiff)
{
if (DoCastSpellIfCan(m_creature, SPELL_KNOCKBACK) == CAST_OK)
{
if (m_creature->GetThreatManager().getThreat(m_creature->getVictim()))
{
m_creature->GetThreatManager().modifyThreatPercent(m_creature->getVictim(), -80);
}
m_uiKnockBackTimer = urand(15000, 25000);
}
}
else
{ m_uiKnockBackTimer -= uiDiff; }
DoMeleeAttackIfReady();
}
};
struct MANGOS_DLL_DECL boss_yaujAI : public ScriptedAI
{
boss_yaujAI(Creature* pCreature) : ScriptedAI(pCreature)
{
m_pInstance = (ScriptedInstance*)pCreature->GetInstanceData();
Reset();
}
ScriptedInstance* m_pInstance;
uint32 m_uiHealTimer;
uint32 m_uiFearTimer;
void Reset() override
{
m_uiHealTimer = urand(25000, 40000);
m_uiFearTimer = urand(12000, 24000);
}
void JustDied(Unit* /*Killer*/) override
{
// Spawn 10 yauj brood on death
float fX, fY, fZ;
for (int i = 0; i < 10; ++i)
{
m_creature->GetRandomPoint(m_creature->GetPositionX(), m_creature->GetPositionY(), m_creature->GetPositionZ(), 10.0f, fX, fY, fZ);
m_creature->SummonCreature(NPC_YAUJ_BROOD, fX, fY, fZ, 0.0f, TEMPSUMMON_TIMED_OOC_DESPAWN, 30000);
}
if (!m_pInstance)
{
return;
}
// If the other 2 bugs are still alive, make unlootable
if (m_pInstance->GetData(TYPE_BUG_TRIO) != DONE)
{
m_creature->RemoveFlag(UNIT_DYNAMIC_FLAGS, UNIT_DYNFLAG_LOOTABLE);
m_pInstance->SetData(TYPE_BUG_TRIO, SPECIAL);
}
}
void JustReachedHome() override
{
if (m_pInstance)
{
m_pInstance->SetData(TYPE_BUG_TRIO, FAIL);
}
}
void UpdateAI(const uint32 uiDiff) override
{
// Return since we have no target
if (!m_creature->SelectHostileTarget() || !m_creature->getVictim())
{
return;
}
// Fear_Timer
if (m_uiFearTimer < uiDiff)
{
if (DoCastSpellIfCan(m_creature, SPELL_FEAR) == CAST_OK)
{
DoResetThreat();
m_uiFearTimer = 20000;
}
}
else
{ m_uiFearTimer -= uiDiff; }
// Heal
if (m_uiHealTimer < uiDiff)
{
if (Unit* pTarget = DoSelectLowestHpFriendly(100.0f))
{
if (DoCastSpellIfCan(pTarget, SPELL_HEAL) == CAST_OK)
{
m_uiHealTimer = urand(15000, 30000);
}
}
}
else
{ m_uiHealTimer -= uiDiff; }
DoMeleeAttackIfReady();
}
};
CreatureAI* GetAI_boss_yauj(Creature* pCreature)
{
return new boss_yaujAI(pCreature);
}
CreatureAI* GetAI_boss_vem(Creature* pCreature)
{
return new boss_vemAI(pCreature);
}
CreatureAI* GetAI_boss_kri(Creature* pCreature)
{
return new boss_kriAI(pCreature);
}
void AddSC_bug_trio()
{
Script* pNewScript;
pNewScript = new Script;
pNewScript->Name = "boss_kri";
pNewScript->GetAI = &GetAI_boss_kri;
pNewScript->RegisterSelf();
pNewScript = new Script;
pNewScript->Name = "boss_vem";
pNewScript->GetAI = &GetAI_boss_vem;
pNewScript->RegisterSelf();
pNewScript = new Script;
pNewScript->Name = "boss_yauj";
pNewScript->GetAI = &GetAI_boss_yauj;
pNewScript->RegisterSelf();
}
---
abstract: |
We give a general construction of debiased/locally robust/orthogonal (LR) moment functions for GMM, where the derivative with respect to first step nonparametric estimation is zero and equivalently first step estimation has no effect on the influence function. This construction consists of adding an estimator of the influence function adjustment term for first step nonparametric estimation to identifying or original moment conditions. We also give numerical methods for estimating LR moment functions that do not require an explicit formula for the adjustment term.
LR moment conditions have reduced bias and so are important when the first step is machine learning. We derive LR moment conditions for dynamic discrete choice based on first step machine learning estimators of conditional choice probabilities.
We provide simple and general asymptotic theory for LR estimators based on sample splitting. This theory uses the additive decomposition of LR moment conditions into an identifying condition and a first step influence adjustment. Our conditions require only mean square consistency and a few (generally either one or two) readily interpretable rate conditions.
LR moment functions have the advantage of being less sensitive to first step estimation. Some LR moment functions are also doubly robust meaning they hold if one first step is incorrect. We give novel classes of doubly robust moment functions and characterize double robustness. For doubly robust estimators our asymptotic theory only requires one rate condition.
Keywords: Local robustness, orthogonal moments, double robustness, semiparametric estimation, bias, GMM.
JEL classification:
: C13; C14; C21; D24
author:
- |
Victor Chernozhukov\
*MIT*
- |
Juan Carlos Escanciano\
*Indiana University*
- |
Hidehiko Ichimura\
*University of Tokyo*
- |
Whitney K. Newey\
*MIT*
- |
James M. Robins\
*Harvard University*
date: April 2018
title: Locally Robust Semiparametric Estimation
---
Introduction
============
There are many economic parameters that depend on nonparametric or large dimensional first steps. Examples include dynamic discrete choice, games, average consumer surplus, and treatment effects. This paper shows how to construct moment functions for GMM estimators that are debiased/locally robust/orthogonal (LR), where moment conditions have a zero derivative with respect to the first step. We show that LR moment functions can be constructed by adding the influence function adjustment for first step estimation to the original moment functions. This construction can also be interpreted as a decomposition of LR moment functions into identifying moment functions and a first step influence function term. We use this decomposition to give simple and general conditions for root-n consistency and asymptotic normality, with different properties being assumed for the identifying and influence function terms. The conditions are easily interpretable mean square consistency and second order remainder conditions based on estimated moments that use cross-fitting (sample splitting). We also give numerical estimators of the influence function adjustment.
LR moment functions have several advantages. LR moment conditions provide a bias correction that eliminates the large biases from plugging in first step machine learning estimators, as found in Belloni, Chernozhukov, and Hansen (2014). LR moment functions can be used to construct debiased/double machine learning (DML) estimators, as in Chernozhukov et al. (2017, 2018).
We illustrate by deriving LR moment functions for dynamic discrete choice estimation based on conditional choice probabilities. We provide a DML estimator for dynamic discrete choice that uses first step machine learning of conditional choice probabilities. We find that it performs well in a Monte Carlo example. Such structural models provide a potentially important application of DML, because of potentially high dimensional state spaces. Adding the first step influence adjustment term provides a general way to construct LR moment conditions for structural models so that machine learning can be used for first step estimation of conditional choice probabilities, state transition distributions, and other unknown functions on which structural estimators depend.
LR moment conditions also have the advantage of being relatively insensitive to small variation away from the first step true function. This robustness property is appealing in many settings where it may be difficult to get the first step completely correct. Many interesting and useful LR moment functions have the additional property that they are doubly robust (DR), meaning moment conditions hold when one first step is not correct. We give novel classes of DR moment conditions, including for average linear functionals of conditional expectations and probability densities. The construction of adding the first step influence function adjustment to an identifying moment function is useful to obtain these moment conditions. We also give necessary and sufficient conditions for a large class of moment functions to be DR. We find DR moments have simpler and more general conditions for asymptotic normality, which helps motivate our consideration of DR moment functions as special cases of LR ones. LR moment conditions also help minimize sensitivity to misspecification as in Bonhomme and Weidner (2018).
LR moment conditions have smaller bias from first step estimation. We show that they have the small bias property of Newey, Hsieh, and Robins (2004), that the bias of the moments is of smaller order than the bias of the first step. This bias reduction leads to substantial improvements in finite sample properties in many cases relative to just using the original moment conditions. For dynamic discrete choice we find large bias reductions, moderate variance increases and even reductions in some cases, and coverage probabilities substantially closer to nominal. For machine learning estimators of the partially linear model, Chernozhukov et al. (2017, 2018) found bias reductions so large that the LR estimator is root-n consistent but the estimator based on the original moment condition is not. Substantial improvements were previously also found for density weighted averages by Newey, Hsieh, and Robins (2004, NHR). The twicing kernel estimators in NHR are numerically equal to LR estimators based on the original (before twicing) kernel, as shown in Newey, Hsieh, Robins (1998), and the twicing kernel estimators were shown to have smaller mean square error in large samples. Also, a Monte Carlo example in NHR finds that the mean square error (MSE) of the LR estimator has a smaller minimum and is flatter as a function of bandwidth than the MSE of Powell, Stock, and Stoker’s (1989) density weighted average derivative estimator. We expect similar finite sample improvements from LR moments in other cases.
LR moment conditions have appeared in earlier work. They are semiparametric versions of Neyman (1959) C-alpha test scores for parametric models. Hasminskii and Ibragimov (1978) suggested LR estimation of functionals of a density and argued for their advantages over plug-in estimators. Pfanzagl and Wefelmeyer (1981) considered using LR moment conditions for improving the asymptotic efficiency of functionals of distribution estimators. Bickel and Ritov (1988) gave a LR estimator of the integrated squared density that attains root-n consistency under minimal conditions. The Robinson (1988) semiparametric regression and Ichimura (1993) index regression estimators are LR. Newey (1990) showed that LR moment conditions can be obtained as residuals from projections on the tangent set in a semiparametric model. Newey (1994a) showed that derivatives of an objective function where the first step has been “concentrated out” are LR, including the efficient score of a semiparametric model. NHR (1998, 2004) gave estimators of averages that are linear in density derivative functionals with remainder rates that are as fast as those in Bickel and Ritov (1988). Doubly robust moment functions have been constructed by Robins, Rotnitzky, and Zhao (1994, 1995), Robins and Rotnitzky (1995), Scharfstein, Rotnitzky, and Robins (1999), Robins, Rotnitzky, and van der Laan (2000), Robins and Rotnitzky (2001), Graham (2011), and Firpo and Rothe (2017). They are widely used for estimating treatment effects, e.g. Bang and Robins (2005). Van der Laan and Rubin (2006) developed targeted maximum likelihood to obtain a LR estimating equation based on the efficient influence function of a semiparametric model. Robins et al. (2008, 2017) showed that efficient influence functions are LR, characterized some doubly robust moment conditions, and developed higher order influence functions that can reduce bias. 
Belloni, Chernozhukov, and Wei (2013), Belloni, Chernozhukov, and Hansen (2014), Farrell (2015), Kandasamy et al. (2015), Belloni, Chernozhukov, Fernandez-Val, and Hansen (2016), and Athey, Imbens, and Wager (2017) gave LR estimators with machine learning first steps in several specific contexts.
A main contribution of this paper is the construction of LR moment conditions from any moment condition and first step estimator that can result in a root-n consistent estimator of the parameter of interest. This construction is based on the limit of the first step when a data observation has a general distribution that allows for misspecification, similarly to Newey (1994). LR moment functions are constructed by adding to identifying moment functions the influence function of the true expectation of the identifying moment functions evaluated at the first step limit, i.e. by adding the influence function term that accounts for first step estimation. The addition of the influence adjustment “partials out” the first order effect of the first step on the moments. This construction of LR moments extends those cited above for first step density and distribution estimators to *any first step,* including instrumental variable estimators. Also, this construction is *estimator based* rather than model based as in van der Laan and Rubin (2006) and Robins et al. (2008, 2017). The construction depends only on the moment functions and the first step rather than on a semiparametric model. Also, we use the fundamental Gateaux derivative definition of the influence function to show LR rather than an embedding in a regular semiparametric model.
The focus on the functional that is the true expected moments evaluated at the first step limit is the key to this construction. This focus should prove useful for constructing LR moments in many settings, including those where it has already been used to find the asymptotic variance of semiparametric estimators, such as Newey (1994a), Pakes and Olley (1995), Hahn (1998), Ai and Chen (2003), Hirano, Imbens, and Ridder (2003), Bajari, Hong, Krainer, and Nekipelov (2010), Bajari, Chernozhukov, Hong, and Nekipelov (2009), Hahn and Ridder (2013, 2016), Ackerberg, Chen, Hahn, and Liao (2014), and Hahn, Liao, and Ridder (2016). One can construct LR moment functions in each of these settings by adding the first step influence function derived for each case as an adjustment to the original, identifying moment functions.
Another contribution is the development of LR moment conditions for dynamic discrete choice. We derive the influence adjustment for first step estimation of conditional choice probabilities as in Hotz and Miller (1993). We find encouraging Monte Carlo results when various machine learning methods are used to construct the first step. We also give LR moment functions for conditional moment restrictions based on orthogonal instruments.
An additional contribution is to provide general estimators of the influence adjustment term that can be used to construct LR moments without knowing their form. These methods estimate the adjustment term numerically, thus avoiding the need to know its form. It is beyond the scope of this paper to develop machine learning versions of these numerical estimators. Such estimators are developed by Chernozhukov, Newey, and Robins (2018) for average linear functionals of conditional expectations.
Further contributions include novel classes of DR estimators, including linear functionals of nonparametric instrumental variables and density estimators, and a characterization of (necessary and sufficient conditions for) double robustness. We also give related, novel partial robustness results where original moment conditions are satisfied even when the first step is not equal to the truth.
A main contribution is simple and general asymptotic theory for LR estimators that use cross-fitting in the construction of the average moments. This theory is based on the structure of LR moment conditions as an identifying moment condition depending on one first step plus an influence adjustment that can depend on an additional first step. We give a remainder decomposition that leads to mean square consistency conditions for first steps plus a few readily interpretable rate conditions. For DR estimators there is only one rate condition, on a product of sample remainders from two first step estimators, leading to particularly simple conditions. This simplicity motivates our inclusion of results for DR estimators. This asymptotic theory is also useful for existing moment conditions that are already known to be LR. Whenever the moment condition can be decomposed into an identifying moment condition depending on one first step and an influence function term that may depend on two first steps the simple and general regularity conditions developed here will apply.
LR moments reduce the smoothing bias that results from first step nonparametric estimation relative to original moment conditions. There are other sources of bias arising from nonlinearity of moment conditions in the first step and the empirical distribution. Cattaneo and Jansson (2017) and Cattaneo, Jansson, and Ma (2017) give useful bootstrap and jackknife methods that reduce nonlinearity bias. Newey and Robins (2017) show that one can also remove this bias by cross fitting in some settings. We allow for cross-fitting in this paper.
Section 2 describes the general construction of LR moment functions for semiparametric GMM. Section 3 gives LR moment conditions for dynamic discrete choice. Section 4 shows how to estimate the first step influence adjustment. Section 5 gives novel classes of DR moment functions and characterizes double robustness. Section 6 gives an orthogonal instrument construction of LR moments based on conditional moment restrictions. Section 7 provides simple and general asymptotic theory for LR estimators.
Locally Robust Moment Functions
===============================
The subject of this paper is GMM estimators of parameters where the sample moment functions depend on a first step nonparametric or large dimensional estimator. We refer to these estimators as semiparametric. We could also refer to them as GMM where first step estimators are plugged in the moments. This terminology seems awkward though, so we simply refer to them as semiparametric GMM estimators. We denote such an estimator by $\hat{\beta}$, which is a function of the data $z_{1},...,z_{n}$ where $n$ is the number of observations. Throughout the paper we will assume that the data observations $z_{i}$ are i.i.d. We denote the object that $\hat{\beta}$ estimates as $\beta_{0}$, the subscript referring to the parameter value under the distribution $F_{0}$ of $z_{i}$.
To describe semiparametric GMM let $m(z,\beta,\gamma)$ denote an $r\times1$ vector of functions of the data observation $z,$ parameters of interest $\beta$, and a function $\gamma$ that may be vector valued. The function $\gamma$ can depend on $\beta$ and $z$ through those arguments of $m.$ Here the function $\gamma$ represents some possible first step, such as an estimator, its limit, or a true function. A GMM estimator can be based on a moment condition where $\beta_{0}$ is the unique parameter vector satisfying$$E[m(z_{i},\beta_{0},\gamma_{0})]=0, \label{moments}$$ and $\gamma_{0}$ is the true $\gamma$. We assume that this moment condition identifies $\beta.$ Let $\hat{\gamma}$ denote some first step estimator of $\gamma_{0}$. Plugging in $\hat{\gamma}$ to obtain $m(z_{i},\beta,\hat{\gamma
})$ and averaging over $z_{i}$ results in the estimated sample moments $\hat{m}(\beta)=\sum_{i=1}^{n}m(z_{i},\beta,\hat{\gamma})/n.$ For $\hat{W}$ a positive semi-definite weighting matrix a semiparametric GMM estimator is$$\tilde{\beta}=\arg\min_{\beta\in B}\hat{m}(\beta)^{T}\hat{W}\hat{m}(\beta),$$ where $A^{T}$ denotes the transpose of a matrix $A$ and $B$ is the parameter space for $\beta$. Such estimators have been considered by, e.g. Andrews (1994), Newey (1994a), Newey and McFadden (1994), Pakes and Olley (1995), Chen and Liao (2015), and others.
Locally robust (LR) moment functions can be constructed by adding the influence function adjustment for the first step estimator $\hat{\gamma}$ to the identifying or original moment functions $m(z,\beta,\gamma).$ To describe this influence adjustment let $\gamma(F)$ denote the limit of $\hat{\gamma}$ when $z_{i}$ has distribution $F,$ where we restrict $F$ only in that $\gamma(F)$ exists and possibly other regularity conditions are satisfied. That is, $\gamma(F)$ is the limit of $\hat{\gamma}$ under possible misspecification, similar to Newey (1994). Let $G$ be some other distribution and $F_{\tau}=(1-\tau)F_{0}+\tau G$ for $0\leq\tau\leq1,$ where $F_{0}$ denotes the true distribution of $z_{i}.$ We assume that $G$ is chosen so that $\gamma(F_{\tau})$ is well defined for $\tau>0$ small enough and possibly other regularity conditions are satisfied, similarly to Ichimura and Newey (2017). The influence function adjustment will be the function $\phi
(z,\beta,\gamma,\lambda)$ such that for all such $G,$$$\frac{d}{d\tau}E[m(z_{i},\beta,\gamma(F_{\tau}))]=\int\phi(z,\beta,\gamma
_{0},\lambda_{0})G(dz),E[\phi(z_{i},\beta,\gamma_{0},\lambda_{0})]=0,
\label{infdef}$$ where $\lambda$ is an additional nonparametric or large dimensional unknown object on which $\phi(z,\beta,\gamma,\lambda)$ depends and the derivative is from the right (i.e. for positive values of $\tau$) and at $\tau=0.$ This equation is the well known definition of the influence function $\phi
(z,\beta,\gamma_{0},\lambda_{0})$ of $\mu(F)=E[m(z_{i},\beta,\gamma(F))]$ as the Gateaux derivative of $\mu(F),$ e.g. Huber (1981). The restriction of $G$ so that $\gamma(F_{\tau})$ exists allows $\phi(z,\beta,\gamma_{0},\lambda
_{0})$ to be the influence function when $\gamma(F)$ is only well defined for certain types of distributions, such as when $\gamma(F)$ is a conditional expectation or density. The function $\phi(z,\beta,\gamma,\lambda)$ will generally exist when $E[m(z_{i},\beta,\gamma(F))]$ has a finite semiparametric variance bound. Also $\phi(z,\beta,\gamma,\lambda)$ will generally be unique because we are not restricting $G$ very much. Also, note that $\phi
(z,\beta,\gamma,\lambda)$ will be the influence adjustment term from Newey (1994a), as discussed in Ichimura and Newey (2017).
LR moment functions can be constructed by adding $\phi(z,\beta,\gamma
,\lambda)$ to $m(z,\beta,\gamma)$ to obtain new moment functions$$\psi(z,\beta,\gamma,\lambda)=m(z,\beta,\gamma)+\phi(z,\beta,\gamma,\lambda).
\label{momadj}$$ Let $\hat{\lambda}$ be a nonparametric or large dimensional estimator having limit $\lambda(F)$ when $z_{i}$ has distribution $F,$ with $\lambda
(F_{0})=\lambda_{0}.$ Also let $\hat{\psi}(\beta)=\sum_{i=1}^{n}\psi
(z_{i},\beta,\hat{\gamma},\hat{\lambda})/n.$ A LR GMM estimator can be obtained as$$\hat{\beta}=\arg\min_{\beta\in B}\hat{\psi}(\beta)^{T}\hat{W}\hat{\psi}(\beta). \label{lrgmm}$$ As usual a choice of $\hat{W}$ that minimizes the asymptotic variance of $\sqrt{n}(\hat{\beta}-\beta_{0})$ will be a consistent estimator of the inverse of the asymptotic variance $\Omega$ of $\sqrt{n}\hat{\psi}(\beta
_{0}).$ As we will further discuss, $\psi(z,\beta,\gamma,\lambda)$ being LR will mean that the estimation of $\gamma$ and $\lambda$ does not affect $\Omega$, so that $\Omega=E[\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})^{T}].$ An optimal $\hat{W}$ also gives an efficient estimator in the wider sense shown in Ackerberg, Chen, Hahn, and Liao (2014), making $\hat{\beta}$ efficient in a semiparametric model where the only restrictions imposed are equation (\[moments\]).
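For concreteness, here is a standard example from the treatment effects literature (Robins, Rotnitzky, and Zhao, 1994), stated in the notation above rather than taken from this section: estimating the mean $\beta_{0}=E[y_{i}]$ of an outcome $y_{i}$ that is observed only when $d_{i}=1,$ with selection on covariates $x_{i}.$ Take $\gamma_{0}(x)=E[y|x,d=1]$ and $\lambda_{0}(x)=1/\pi_{0}(x)$ for the propensity score $\pi_{0}(x)=\Pr(d=1|x).$ Then$$m(z,\beta,\gamma)=\gamma(x)-\beta,\text{ \ \ }\phi(z,\beta,\gamma,\lambda)=\lambda(x)d\{y-\gamma(x)\},$$ and the LR moment function $\psi=m+\phi$ is the augmented inverse probability weighted moment$$\psi(z,\beta,\gamma,\lambda)=\gamma(x)-\beta+\frac{d\{y-\gamma(x)\}}{\pi(x)},$$ which is also doubly robust: it has zero expectation if either $\gamma=\gamma_{0}$ or $\lambda=\lambda_{0}.$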
The LR property we consider is that the derivative of the true expectation of the moment function with respect to the first step is zero, for a Gateaux derivative like that for the influence function in equation (\[infdef\]). Define $F_{\tau}=(1-\tau)F_{0}+\tau G$ as before where $G$ is such that both $\gamma(F_{\tau})$ and $\lambda(F_{\tau})$ are well defined. The LR property is that for all $G$ as specified,$$\frac{d}{d\tau}E[\psi(z_{i},\beta,\gamma(F_{\tau}),\lambda(F_{\tau}))]=0.
\label{lrdef}$$ Note that this condition is the same as that of Newey (1994a) for the presence of $\hat{\gamma}$ and $\hat{\lambda}$ to have no effect on the asymptotic distribution, when each $F_{\tau}$ is a regular parametric submodel. Consequently, the asymptotic variance of $\sqrt{n}\hat{\psi}(\beta_{0})$ will be $\Omega$ as in the last paragraph.
To show LR of the moment functions $\psi(z,\beta,\gamma,\lambda)=m(z,\beta
,\gamma)+\phi(z,\beta,\gamma,\lambda)$ from equation (\[momadj\]) we use the fact that the second, zero expectation condition in equation (\[infdef\]) must hold for all possible true distributions. For any given $\beta$ define $\mu(F)=E[m(z_{i},\beta,\gamma(F))]$ and $\phi(z,F)=\phi(z,\beta
,\gamma(F),\lambda(F)).$
<span style="font-variant:small-caps;">Theorem 1:</span> *If i)* $d\mu(F_{\tau})/d\tau=\int\phi
(z,F_{0})G(dz)$*, ii)* $\int\phi(z,F_{\tau})F_{\tau}(dz)=0$ *for all* $\tau\in\lbrack0,\bar{\tau}),$ *and iii)* $\int\phi(z,F_{\tau
})F_{0}(dz)$ *and* $\int\phi(z,F_{\tau})G(dz)$ *are continuous at* $\tau=0$ *then*$$\frac{d}{d\tau}E[\phi(z_{i},F_{\tau})]=-\frac{d\mu(F_{\tau})}{d\tau}.
\label{thm1con}$$
The proofs of this result and others are given in Appendix B. Assumptions i) and ii) of Theorem 1 require that both parts of equation (\[infdef\]) hold with the second, zero mean condition being satisfied when $F_{\tau}$ is the true distribution. Assumption iii) is a regularity condition. The LR property follows from Theorem 1 by adding $d\mu(F_{\tau})/d\tau$ to both sides of equation (\[thm1con\]) and noting that the sum of derivatives is the derivative of the sum. Equation (\[thm1con\]) shows that the addition of $\phi(z,\beta,\gamma,\lambda)$ “partials out” the effect of the first step $\gamma$ on the moment by “cancelling” the derivative of the identifying moment $E[m(z_{i},\beta,\gamma(F_{\tau}))]$ with respect to $\tau$. This LR result for $\psi(z,\beta,\gamma,\lambda)$ differs from the literature in its Gateaux derivative formulation and in the fact that it is not a semiparametric influence function but is the hybrid sum of an identifying moment function $m(z,\beta,\gamma)$ and an influence function adjustment $\phi(z,\beta
,\gamma,\lambda).$
Another zero derivative property of LR moment functions is useful. If the sets $\Gamma$ and $\Lambda$ of possible limits $\gamma(F)$ and $\lambda(F)$, respectively, are linear, $\gamma(F)$ and $\lambda(F)$ can vary separately from one another, and certain functional differentiability conditions hold then LR moment functions will have the property that for any $\gamma\in\Gamma
$, $\lambda\in\Lambda$, and $\bar{\psi}(\gamma,\lambda)=E[\psi(z_{i},\beta
_{0},\gamma,\lambda)]$, $$\frac{\partial}{\partial\tau}\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma
,\lambda_{0})=0,\frac{\partial}{\partial\tau}\bar{\psi}(\gamma_{0},(1-\tau)\lambda_{0}+\tau\lambda)=0. \label{lrdef2}$$ That is, the expected value of the LR moment function will have a zero Gateaux derivative with respect to each of the first steps $\gamma$ and $\lambda.$ This property will be useful for several results to follow. Under still stronger smoothness conditions this zero derivative condition will result in the existence of a constant $C$ such that for a function norm $\left\Vert
\cdot\right\Vert $,$$\left\vert \bar{\psi}(\gamma,\lambda_{0})\right\vert \leq C\left\Vert
\gamma-\gamma_{0}\right\Vert ^{2},\text{ }\left\vert \bar{\psi}(\gamma
_{0},\lambda)\right\vert \leq C\left\Vert \lambda-\lambda_{0}\right\Vert ^{2},
\label{nlremainder}$$ when $\left\Vert \gamma-\gamma_{0}\right\Vert $ and $\left\Vert \lambda
-\lambda_{0}\right\Vert $ are small enough. In Appendix B we give smoothness conditions that are sufficient for LR to imply equations (\[lrdef2\]) and (\[nlremainder\]). When formulating regularity conditions for particular moment functions and first step estimators it may be more convenient to work directly with equation (\[lrdef2\]) and/or (\[nlremainder\]).
The approach of constructing LR moment functions by adding the influence adjustment differs from the model based approach of using an efficient influence function or score for a semiparametric model as moment functions. The approach here is *estimator based* rather than model based. The influence adjustment $\phi(z,\beta,\gamma,\lambda)$ is determined by the limit $\gamma(F)$ of the first step estimator $\hat{\gamma}$ and the moment functions $m(z,\beta,\gamma)$ rather than by some underlying semiparametric model. This estimator based approach has proven useful for deriving the influence function of a wide variety of semiparametric estimators, as mentioned in the Introduction. Here this estimator based approach provides a general way to construct LR moment functions. For any moment function $m(z,\beta,\gamma)$ and first step estimator $\hat{\gamma}$ a corresponding LR estimator can be constructed as in equations (\[momadj\]) and (\[lrgmm\]).
The addition of $\phi(z,\beta,\gamma,\lambda)$ does not affect identification of $\beta$ because $\phi(z,\beta,\gamma_{0},\lambda_{0})$ has expectation zero for any $\beta$ and true $F_{0}.$ Consequently, the LR GMM estimator will have the same asymptotic variance as the original GMM estimator $\tilde{\beta}$ when $\sqrt{n}(\tilde{\beta}-\beta_{0})$ is asymptotically normal, under appropriate regularity conditions. The addition of $\phi(z,\beta
,\gamma,\lambda)$ will change other properties of the estimator. As discussed in Chernozhukov et al. (2017, 2018), it can even remove enough bias so that the LR estimator is root-n consistent and the original estimator is not.
If $F_{\tau}$ was modified so that $\tau$ is a function of a smoothing parameter, e.g. a bandwidth, and $\tau$ gives the magnitude of the smoothing bias of $\gamma(F_{\tau}),$ then equation (\[lrdef\]) is a small bias condition, equivalent to$$E[\psi(z_{i},\beta_{0},\gamma(F_{\tau}),\lambda(F_{\tau}))]=o(\tau).$$ Here $E[\psi(z_{i},\beta_{0},\gamma(F_{\tau}),\lambda(F_{\tau}))]$ is a bias in the moment condition resulting from smoothing that shrinks faster than $\tau.$ In this sense LR GMM estimators have the small bias property considered in NHR. This interpretation is also one sense in which LR GMM is “debiased.”
In some cases the original moment functions $m(z,\beta,\gamma)$ are already LR and the influence adjustment will be zero. An important class of moment functions that are LR are those where $m(z,\beta,\gamma)$ is the derivative with respect to $\beta$ of an objective function where nonparametric parts have been concentrated out. That is, suppose that there is a function $q(z,\beta,\zeta)$ such that $m(z,\beta,\gamma)=\partial q(z,\beta,\zeta
(\beta))/\partial\beta$ where $\zeta(\beta)=\arg\max_{\zeta}E[q(z_{i},\beta,\zeta)]$, where $\gamma$ includes $\zeta(\beta)$ and possibly additional functions. Proposition 2 of Newey (1994a) and Lemma 2.5 of Chernozhukov et al. (2018) then imply that $m(z,\beta,\gamma)$ will be LR. This class of moment functions includes various partially linear regression models where $\zeta$ represents a conditional expectation. It also includes the efficient score for a semiparametric model, Newey (1994a, pp. 1358-1359).
Cross fitting, also known as sample splitting, has often been used to improve the properties of semiparametric and machine learning estimators; e.g. see Bickel (1982), Schick (1986), and Powell, Stock, and Stoker (1989). Cross fitting removes a source of bias and can be used to construct estimators with remainder terms that converge to zero as fast as is known to be possible, as in NHR and Newey and Robins (2017). Cross fitting is also useful for double machine learning estimators, as outlined in Chernozhukov et al. (2017, 2018). For these reasons we allow for cross-fitting, where sample moments have the form$$\hat{\psi}(\beta)=\frac{1}{n}\sum_{i=1}^{n}\psi(z_{i},\beta,\hat{\gamma}_{i},\hat{\lambda}_{i}),$$ with $\hat{\gamma}_{i}$ and $\hat{\lambda}_{i}$ being formed from observations other than the $i^{th}.$ This kind of cross fitting removes an “own observation” bias term and is useful for showing root-n consistency when $\hat{\gamma}_{i}$ and $\hat{\lambda}_{i}$ are machine learning estimators.
One version of cross-fitting with good properties in examples in Chernozhukov et al. (2018) can be obtained by partitioning the observation indices into $L$ groups $I_{\ell},(\ell=1,...,L),$ forming $\hat{\gamma}_{\ell}$ and $\hat{\lambda}_{\ell}$ from observations not in $I_{\ell}$, and constructing$$\hat{\psi}(\beta)=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\psi
(z_{i},\beta,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell}). \label{cfit}$$ Further bias reductions may be obtained in some cases by using different sets of observations for computing $\hat{\gamma}_{\ell}$ and $\hat{\lambda}_{\ell
},$ leading to remainders that converge to zero as rapidly as known possible in interesting cases; see Newey and Robins (2017). The asymptotic theory of Section 7 focuses on this kind of cross fitting.
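As an illustration of the cross-fitting construction in equation (\[cfit\]), the following minimal Python sketch (not part of the paper's formal development; the nuisance fitters `fit_gamma` and `fit_lambda` are hypothetical placeholders for any first step estimators) partitions the observation indices into $L$ groups and evaluates the moment functions on each group using nuisance estimates trained only on the complement:

```python
import numpy as np

def cross_fit_moments(z, psi, fit_gamma, fit_lambda, beta, L=5, seed=0):
    """Cross-fit sample moments psi_hat(beta) as in equation (cfit):
    the nuisance estimators gamma_l and lambda_l are trained on
    observations outside fold I_l and evaluated only inside it."""
    n = len(z)
    rng = np.random.default_rng(seed)
    folds = rng.permutation(n) % L          # balanced random fold assignment
    total = 0.0
    for l in range(L):
        in_fold = folds == l
        gamma_l = fit_gamma(z[~in_fold])    # trained off-fold
        lambda_l = fit_lambda(z[~in_fold])
        total += psi(z[in_fold], beta, gamma_l, lambda_l).sum(axis=0)
    return total / n
```

Because each $\hat{\gamma}_{\ell}$ and $\hat{\lambda}_{\ell}$ never sees the observations it is evaluated on, the "own observation" bias term described above drops out.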
As an example we consider a bound on average equivalent variation. Let $\gamma_{0}(x)$ denote the conditional expectation of quantity $q$ conditional on $x=(p^{T},y)$ where $p=(p_{1},p_{2}^{T})^{T}$ is a vector of prices and $y$ is income$.$ The object of interest is a bound on average equivalent variation for a price change from $\bar{p}_{1}$ to $\check{p}_{1}$ given by$$\beta_{0}=E[\int\ell(p_{1},y_{i})\gamma_{0}(p_{1},p_{2i},y_{i})dp_{1}],\ell(p_{1},y)=w(y)1(\bar{p}_{1}\leq p_{1}\leq\check{p}_{1})\exp
\{-B(p_{1}-\bar{p}_{1})\},$$ where $w(y)$ is a function of income and $B$ a constant. It follows from Hausman and Newey (2016) that if $B$ is a lower (upper) bound on the income effect for all individuals then $\beta_{0}$ is an upper (lower) bound on the equivalent variation for a price change from $\bar{p}_{1}$ to $\check{p}_{1},$ averaged over heterogeneity, other prices $p_{2i},$ and income $y_{i}$. The function $w(y)$ allows for averages over income in specific ranges, as in Hausman and Newey (2017).
A moment function that could be used to estimate $\beta_{0}$ is$$m(z,\beta,\gamma)=\int\ell(p_{1},y)\gamma(p_{1},p_{2},y)dp_{1}-\beta.$$ Note that $$E[m(z_{i},\beta_{0},\gamma)]+\beta_{0}=E[\int\ell(p_{1},y_{i})\gamma
(p_{1},p_{2i},y_{i})dp_{1}]=E[\lambda_{0}(x_{i})\gamma(x_{i})],\lambda
_{0}(x)=\frac{\ell(p_{1},y)}{f_{0}(p_{1}|p_{2},y)},$$ where $f_{0}(p_{1}|p_{2},y)$ is the conditional pdf of $p_{1i}$ given $p_{2i}$ and $y_{i}$. Then by Proposition 4 of Newey (1994) the influence function adjustment for any nonparametric estimator $\hat{\gamma}(x)$ of $E[q_{i}|x_{i}=x]$ is$$\phi(z,\beta,\gamma,\lambda)=\lambda(x)[q-\gamma(x)].$$ Here $\lambda_{0}(x)$ is an example of an additional unknown function that is included in $\phi(z,\beta,\gamma,\lambda)$ but not in the original moment functions $m(z,\beta,\gamma)$. Let $\hat{\gamma}_{i}(x)$ be an estimator of $E[q_{i}|x_{i}=x]$ that can depend on $i$ and $\hat{\lambda}_{i}(x)$ be an estimator of $\lambda_{0}(x)$, such as $\hat{f}_{i}(p_{1}|p_{2},y)^{-1}\ell(p_{1},y)$ for an estimator $\hat{f}_{i}(p_{1}|p_{2},y).$ The LR estimator obtained by solving $\hat{\psi}(\beta)=0$ for $m(z,\beta,\gamma)$ and $\phi(z,\beta,\gamma,\lambda)$ as above is$$\hat{\beta}=\frac{1}{n}\sum_{i=1}^{n}\left\{ \int\ell(p_{1},y_{i})\hat
{\gamma}_{i}(p_{1},p_{2i},y_{i})dp_{1}+\hat{\lambda}_{i}(x_{i})[q_{i}-\hat{\gamma}_{i}(x_{i})]\right\} . \label{exlr}$$
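The estimator in equation (\[exlr\]) can be sketched numerically as follows. This is an illustrative Python implementation under simplifying assumptions: a simple Riemann approximation to the $p_{1}$ integral, cross-fitting omitted for brevity, and `gamma_hat`, `lambda_hat` standing in for any plug-in estimators of $E[q|x]$ and $\ell(p_{1},y)/f_{0}(p_{1}|p_{2},y)$:

```python
import numpy as np

def lr_surplus_bound(p1, p2, y, q, gamma_hat, lambda_hat, p1_lo, p1_hi, B,
                     w=lambda yy: np.ones_like(yy), n_grid=200):
    """Locally robust bound on average equivalent variation, eq. (exlr).
    The indicator 1(p1_lo <= p1 <= p1_hi) in l(p1, y) is handled by
    integrating only over [p1_lo, p1_hi]."""
    grid = np.linspace(p1_lo, p1_hi, n_grid)
    # l(p1, y_i) on the grid: w(y_i) * exp(-B (p1 - p1_lo)), shape (n_grid, n)
    ell = w(y)[None, :] * np.exp(-B * (grid[:, None] - p1_lo))
    # gamma_hat evaluated at each grid point, holding (p2_i, y_i) fixed
    vals = np.stack([gamma_hat(t * np.ones_like(p2), p2, y) for t in grid])
    # plug-in term: Riemann approximation of the p1 integral, per observation
    plug_in = (ell * vals).mean(axis=0) * (p1_hi - p1_lo)
    # influence adjustment: lambda_hat(x_i) * (q_i - gamma_hat(x_i))
    adjust = lambda_hat(p1, p2, y) * (q - gamma_hat(p1, p2, y))
    return np.mean(plug_in + adjust)
```

The adjustment term makes the sample average insensitive to small errors in `gamma_hat`, which is the double robustness property discussed later in the paper.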
Machine Learning for Dynamic Discrete Choice
============================================
A challenging problem when estimating dynamic structural models is the dimensionality of state spaces. Machine learning addresses this problem via model selection to estimate high dimensional choice probabilities. These choice probability estimators can then be used in conditional choice probability (CCP) estimators of structural parameters, following Hotz and Miller (1993). In order for CCP estimators based on machine learning to be root-n consistent they must be based on orthogonal (i.e. LR) moment conditions, see Chernozhukov et al. (2017, 2018). Adding the adjustment term provides a way to construct LR moment conditions from known moment conditions for CCP estimators. In this Section we do so for Rust’s (1987) model of dynamic discrete choice.
We consider an agent choosing among $J$ discrete alternatives by maximizing the expected present discounted value of utility. We assume that the per-period utility function for an agent making choice $j$ in period $t$ is given by$$U_{jt}=u_{j}(x_{t},\beta_{0})+\epsilon_{jt},(j=1,...,J;t=1,2,...).$$ The vector $x_{t}$ consists of the observed state variables of the problem (*e.g.* work experience, number of children, wealth) and $\beta$ is a vector of unknown parameters. The disturbances $\epsilon_{t}=\{\epsilon_{1t},...,\epsilon_{Jt}\}$ are not observed by the econometrician. As in much of the literature we assume that $\epsilon_{t}$ is i.i.d. over time with known CDF that has support $R^{J}$ and is independent of $x_{t},$ and that $x_{t}$ is first-order Markov.
To describe the agent’s choice probabilities let $\delta$ denote a time discount parameter, $\bar{v}(x)$ the expected value function, $y_{jt}\in\{0,1\}$ the indicator that choice $j$ is made and $\bar{v}_{j}(x_{t})=u_{j}(x_{t},\beta_{0})+\delta E[\bar{v}(x_{t+1})|x_{t},j]$ the expected value function conditional on choice $j.$ As in Rust (1987), we assume that in each period the agent makes the choice $j$ that maximizes the expected present discounted value of utility $\bar{v}_{j}(x_{t})+\epsilon
_{jt}.$ The probability of choosing $j$ in period $t$ is then$$P_{j}(\bar{v}_{t})=\Pr(\bar{v}_{j}(x_{t})+\epsilon_{jt}\geq\bar{v}_{k}(x_{t})+\epsilon_{kt};k=1,...,J),\bar{v}_{t}=(\bar{v}_{1}(x_{t}),...,\bar
{v}_{J}(x_{t}))^{\prime}. \label{choice prob}$$
These choice probabilities have a useful relationship to the structural parameters $\beta$ when there is a renewal choice, where the conditional distribution of $x_{t+1}$ given the renewal choice and $x_{t}$ does not depend on $x_{t}.$ Without loss of generality suppose that the renewal choice is $j=1.$ Let $\tilde{v}_{jt}$ denote $\tilde{v}_{j}(x_{t})=\bar{v}_{j}(x_{t})-\bar{v}_{1}(x_{t}),$ so that $\tilde{v}_{1t}\equiv0$. As usual, subtracting $\bar{v}_{1t}$ from each $\bar{v}_{jt}$ in $P_{j}(\bar{v}_{t})$ does not change the choice probabilities, so that they depend only on $\tilde{v}_{t}=(\tilde{v}_{2t},...,\tilde{v}_{Jt}).$
The renewal nature of $j=1$ leads to a specific formula for $\tilde{v}_{jt}$ in terms of the per period utilities $u_{jt}=u_{j}(x_{t},\beta_{0})$ and the choice probabilities $P_{t}=P(\tilde{v}_{t})=(P_{1}(\bar{v}_{t}),...P_{J}(\bar{v}_{t}))^{\prime}.$ As in Hotz and Miller (1993), there is a function $\mathcal{P}^{-1}(P)$ such that $\tilde{v}_{t}=\mathcal{P}^{-1}(P_{t}).$ Let $H(P)$ denote the function such that $$H(P_{t})=E[\max_{1\leq j\leq J}\{\mathcal{P}^{-1}(P_{t})_{j}+\epsilon
_{jt}\}|x_{t}]=E[\max_{1\leq j\leq J}\{\tilde{v}_{jt}+\epsilon_{jt}\}|x_{t}].$$ For example, for multinomial logit $H(P_{t})=.5772-\ln(P_{1t}).$ Note that by $j=1$ being a renewal we have $E[\bar{v}(x_{t+1})|x_{t},1]=C$ for a constant $C$, so that$$\bar{v}(x_{t})=\bar{v}_{1t}+H(P_{t})=u_{1t}+\delta C+H(P_{t}).$$ It then follows that$$\bar{v}_{jt}=u_{jt}+\delta E[\bar{v}(x_{t+1})|x_{t},j]=u_{jt}+\delta
E[u_{1,t+1}+H(P_{t+1})|x_{t},j]+\delta^{2}C,(j=1,...,J).$$ Subtracting then gives$$\tilde{v}_{jt}=u_{jt}-u_{1t}+\delta\{E[u_{1,t+1}+H(P_{t+1})|x_{t},j]-E[u_{1,t+1}+H(P_{t+1})|1]\}. \label{value}$$ This expression for the choice specific value function $\tilde{v}_{jt}$ depends only on $u_{j}(x_{t},\beta),$ $H(P_{t+1})$, and conditional expectations given the state and choice, and so can be used to form semiparametric moment functions.
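For the multinomial logit special case noted above, the Hotz and Miller (1993) inversion $\mathcal{P}^{-1}$ and the function $H$ have closed forms: $\tilde{v}_{j}=\ln P_{j}-\ln P_{1}$ and $H(P)=.5772-\ln(P_{1})$. A minimal illustrative sketch (not from the paper; the renewal choice $j=1$ is stored in column 0):

```python
import numpy as np

EULER = 0.5772156649  # Euler's constant, from E[max] for type-I extreme value errors

def hotz_miller_logit(P):
    """Invert choice probabilities to value differences for multinomial
    logit: v_tilde_j = ln P_j - ln P_1 (so v_tilde_1 = 0 by construction),
    and H(P) = Euler - ln P_1, the expected-maximum correction in the text."""
    P = np.asarray(P, dtype=float)
    v_tilde = np.log(P) - np.log(P[..., :1])   # column 0 is the renewal choice j=1
    H = EULER - np.log(P[..., 0])
    return v_tilde, H
```

Applying the map to probabilities generated by a logit model recovers the value differences exactly, which is the round-trip property the CCP approach relies on.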
To describe those moment functions let $\gamma_{1}(x)$ denote the vector of possible values of the choice probabilities $E[y_{t}|x_{t}=x],$ where $y_{t}=(y_{1t},...,y_{Jt})^{\prime}.$ Also let $\gamma_{j}(x_{t},\beta
,\gamma_{1}),(j=2,...,J)$ denote a possible $E[u_{1}(x_{t+1},\beta
)+H(\gamma_{1}(x_{t+1}))|x_{t},j]$ as a function of $\beta$, $x_{t}$ and $\gamma_{1},$ and $\gamma_{J+1}(\beta,\gamma_{1})$ a possible value of $E[u_{1}(x_{t+1},\beta)+H(\gamma_{1}(x_{t+1}))|1].$ Then a possible value of $\tilde{v}_{jt}$ is given by $$\tilde{v}_{j}(x_{t},\beta,\gamma)=u_{j}(x_{t},\beta)-u_{1}(x_{t},\beta
)+\delta\lbrack\gamma_{j}(x_{t},\beta,\gamma_{1})-\gamma_{J+1}(\beta
,\gamma_{1})],(j=2,...,J).$$ These value function differences are semiparametric, depending on the function $\gamma_{1}$ of choice probabilities and the conditional expectations $\gamma_{j}$, $(j=2,...,J).$ Let $\tilde{v}(x_{t},\beta,\gamma)=(\tilde{v}_{2}(x_{t},\beta,\gamma),...,\tilde{v}_{J}(x_{t},\beta,\gamma))^{\prime}$ and $A(x_{t})$ denote a matrix of functions of $x_{t}$ with $J$ columns. Semiparametric moment functions are given by$$m(z,\beta,\gamma)=A(x)[y-P(\tilde{v}(x,\beta,\gamma))].$$
LR moment functions can be constructed by adding the adjustment term for the presence of the first step $\gamma.$ This adjustment term is derived in Appendix A. It takes the form $$\phi(z,\beta,\gamma,\lambda)=\sum_{j=1}^{J+1}\phi_{j}(z,\beta,\gamma
,\lambda),$$ where $\phi_{j}(z,\beta,\gamma,\lambda)$ is the adjustment term for $\gamma_{j}$ holding all other components $\gamma$ fixed at their true values. To describe it define$$\begin{aligned}
P_{\tilde{v}j}(\tilde{v}) & =\partial P(\tilde{v})/\partial\tilde{v}_{j},\text{ }\pi_{1}=\Pr(y_{t1}=1),\text{ }\lambda_{10}(x)=E[y_{1t}|x_{t+1}=x],\label{ddcdef}\\
\lambda_{j0}(x) & =E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}|x_{t+1}=x],(j=2,...,J).\nonumber\end{aligned}$$ Then for $w_{t}=x_{t+1}$ and $z=(y,x,w)$ let$$\begin{aligned}
\phi_{1}(z,\beta,\gamma,\lambda) & =-\delta\left( \sum_{j=2}^{J}\{\lambda_{j}(x)-E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})]\pi_{1}^{-1}\lambda_{1}(x)\}\right) [\partial H(\gamma_{1}(x))/\partial P]^{\prime
}\{y-\gamma_{1}(x)\}\\
\phi_{j}(z,\beta,\gamma,\lambda) & =-\delta A(x)P_{\tilde{v}j}(\tilde
{v}(x,\beta,\gamma))\frac{y_{j}}{P_{j}(\tilde{v}(x,\beta,\gamma))}\{u_{1}(w,\beta)+H(\gamma_{1}(w))-\gamma_{j}(x,\beta,\gamma_{1})\},(j=2,...,J),\\
\phi_{J+1}(z,\beta,\gamma,\lambda) & =\delta\left( \sum_{j=2}^{J}E[A(x_{t})P_{\tilde{v}j}(\tilde{v}(x_{t},\beta,\gamma))]\right) \pi_{1}^{-1}y_{1}\{u_{1}(w,\beta)+H(\gamma_{1}(w))-\gamma_{J+1}(\beta,\gamma_{1})\}.\end{aligned}$$
<span style="font-variant:small-caps;">Theorem 2:</span> *If the marginal distribution of* $x_{t}$ *does not vary with* $t$ *then LR moment functions for the dynamic discrete choice model are*$$\psi(z,\beta,\gamma,\lambda)=A(x_{t})[y_{t}-P(\tilde{v}(x_{t},\beta,\gamma))]+\sum_{j=1}^{J+1}\phi_{j}(z,\beta,\gamma,\lambda).$$
The form of $\psi(z,\beta,\gamma,\lambda)$ is amenable to machine learning. A machine learning estimator of the conditional choice probability vector $\gamma
_{10}(x)$ is straightforward to compute and can then be used throughout the construction of the orthogonal moment conditions everywhere $\gamma_{1}$ appears. If $u_{1}(x,\beta)$ is linear in $x,$ say $u_{1}(x,\beta
)=x_{1}^{\prime}\beta_{1}$ for subvectors $x_{1}$ and $\beta_{1}$ of $x$ and $\beta$ respectively, then machine learning estimators can be used to obtain $\hat{E}[x_{1,t+1}|x_{t},j]$ and $\hat{E}[\hat{H}_{t+1}|x_{t},j],$ $(j=2,...,J),$ and a sample average used to form $\hat{\gamma}_{J+1}(\beta,\hat{\gamma}_{1})$. The value function differences can then be estimated as$$\tilde{v}_{j}(x_{t},\beta,\hat{\gamma})=u_{j}(x_{t},\beta)-u_{1}(x_{t},\beta)+\delta\{\hat{E}[x_{1,t+1}|x_{t},j]^{\prime}\beta_{1}-\hat{E}[x_{1,t+1}|1]^{\prime}\beta_{1}+\hat{E}[\hat{H}_{t+1}|x_{t},j]-\hat{E}[\hat{H}_{t+1}|1]\}.$$ Furthermore, denominator problems can be avoided by using structural probabilities (rather than the machine learning estimators) in all denominator terms.
The challenging part of the machine learning for this estimator is the dependence on $\beta$ of the reverse conditional expectations in $\lambda
_{1}(x)$. It may be computationally prohibitive and possibly unstable to redo machine learning for each $\beta.$ One way to deal with this complication is to update $\beta$ periodically, with more frequent updates near convergence. It is important that at convergence the $\beta$ in the reverse conditional expectations is the same as the $\beta$ that appears elsewhere.
With data $z_{i}$ that is i.i.d. over individuals these moment functions can be used for any $t$ to estimate the structural parameters $\beta.$ Also, for data for a single individual we could use a time average $\sum_{t=1}^{T-1}\psi(z_{t},\beta,\gamma)/(T-1)$ to estimate $\beta.$ It will be just as important to use LR moments for estimation with a single individual as it is with a cross section of individuals, although our asymptotic theory will not apply to that case.
Bajari, Chernozhukov, Hong, and Nekipelov (2009) derived the influence adjustment for dynamic discrete games of imperfect information. Locally robust moment conditions for such games could be formed using their results. We leave that formulation to future work.
As an example of the finite sample performance of the LR GMM we report a Monte Carlo study of the LR estimator of this Section. The design of the experiment is loosely like the bus replacement application of Rust (1987). Here $x_{t}$ is a state variable meant to represent the lifetime of a bus engine. The transition density is $$x_{t+1}=\left\{
\begin{array}
[c]{c}x_{t}+N(.25,1)^{2},\text{ }y_{t}=1,\\
1+N(.25,1)^{2},\text{ }y_{t}=0,
\end{array}
\right.$$ where $y_{t}=0$ corresponds to replacement of the bus engine and $y_{t}=1$ to nonreplacement. We assume that the agent chooses $y_{t}$ contingent on the state to maximize$$\sum_{t=1}^{\infty}\delta^{t-1}[y_{t}(\alpha\sqrt{x_{t}}+\varepsilon
_{t})+(1-y_{t})RC],\alpha=-.3,RC=-4.$$ The unconditional probability of replacement in this model is about $1/8,$ which is substantially higher than that estimated in Rust (1987). The sample used for estimation was $1000$ observations for a single decision maker. We carried out $10,000$ replications.
We estimate the conditional choice probabilities by kernel and series nonparametric regression and by logit lasso, random forest, and boosted tree machine learning methods. Logit conditional choice probabilities and derivatives were used in the construction of $\hat{\lambda}_{j}$ wherever they appear in order to avoid denominator issues. The unknown conditional expectations in the $\hat{\lambda}_{j}$ were estimated by series regressions throughout. Kernel regression was also tried but did not work particularly well and so results are not reported.
Table 1 reports the results of the experiment: bias, standard deviation, and coverage probability of asymptotic 95 percent confidence intervals.
Table 1

| Estimator       | Bias $\alpha$ | Bias RC | SD $\alpha$ | SD RC | Cov. $\alpha$ | Cov. RC |
|-----------------|---------------|---------|-------------|-------|---------------|---------|
| Two step kernel | -.24          | .08     | .08         | .32   | .01           | .86     |
| LR kernel       | -.05          | .02     | .06         | .32   | .95           | .92     |
| Two step quad   | -.00          | .14     | .049        | .33$^{\ast}$ | .91    | .89     |
| LR quad         | -.00          | .01     | .085        | .39   | .95           | .92     |
| Logit Lasso     | -.12          | .25     | .06         | .28   | .74           | .84     |
| LR Logit Lasso  | -.09          | .01     | .08         | .36   | .93           | .95     |
| Random Forest   | -.15          | -.44    | .09         | .50   | .91           | .98     |
| LR Ran. For.    | .00           | .00     | .06         | .44   | 1.0           | .98     |
| Boosted Trees   | -.10          | -.28    | .08         | .50   | .99           | .99     |
| LR Boost Tr.    | .03           | .09     | .07         | .47   | .99           | .97     |
Here we find bias reduction from the LR estimator in all cases. We also find variance reduction from LR estimation when the first step is kernel estimation, random forests, and boosted trees. The LR estimator also leads to actual coverage of confidence intervals being closer to the nominal coverage. The results for random forests and boosted trees seem noisier than the others, with higher standard deviations and confidence interval coverage probabilities farther from nominal. Overall, we find substantial improvements from using LR moments rather than only the identifying, original moments.
Estimating the Influence Adjustment
===================================
Construction of LR moment functions requires an estimator $\hat{\phi}(z,\beta)$ of the adjustment term. The form of $\phi(z,\beta,\gamma,\lambda)$ is known for some cases from the semiparametric estimation literature. Powell, Stock, and Stoker (1989) derived the adjustment term for density weighted average derivatives. Newey (1994a) gave the adjustment term for mean square projections (including conditional expectations), densities, and their derivatives. Hahn (1998) and Hirano, Imbens, and Ridder (2003) used those results to obtain the adjustment term for treatment effect estimators, where the LR estimator will be the doubly robust estimator of Robins, Rotnitzky, and Zhao (1994, 1995). Bajari, Hong, Krainer, and Nekipelov (2010) and Bajari, Chernozhukov, Hong, and Nekipelov (2009) derived adjustment terms in some game models. Hahn and Ridder (2013, 2016) derived adjustments in models with generated regressors including control functions. These prior results can be used to obtain LR estimators by adding the adjustment term with nonparametric estimators plugged in.
For new cases it may be necessary to derive the form of the adjustment term. Also, it is possible to numerically estimate the adjustment term based on series estimators and other nonparametric estimators. In this Section we describe how to construct estimators of the adjustment term in these ways.
Deriving the Formula for the Adjustment Term
--------------------------------------------
One approach to estimating the adjustment term is to derive a formula for $\phi(z,\beta,\gamma,\lambda)$ and then plug $\hat{\gamma}$ and $\hat{\lambda}$ into that formula. A formula for $\phi(z,\beta,\gamma
,\lambda)$ can be obtained as in Newey (1994a). Let $\gamma(F)$ be the limit of the nonparametric estimator $\hat{\gamma}$ when $z_{i}$ has distribution $F.$ Also, let $F_{\tau}$ denote a regular parametric model of distributions with $F_{\tau}=F_{0}$ at $\tau=0$ and score (derivative of the log likelihood at $\tau=0)$ equal to $S(z)$. Then under certain regularity conditions $\phi(z,\beta,\gamma_{0},\lambda_{0})$ will be the unique solution to$$\left. \frac{\partial\int m(z,\beta,\gamma(F_{\tau}))F_{0}(dz)}{\partial\tau
}\right\vert _{\tau=0}=E[\phi(z_{i},\beta,\gamma_{0},\lambda_{0})S(z_{i})],E[\phi(z_{i},\beta,\gamma_{0},\lambda_{0})]=0, \label{funeq}$$ as $\{F_{\tau}\}$ and the corresponding score $S(z)$ are allowed to vary over a family of parametric models where the set of scores for the family has mean square closure that includes all mean zero functions with finite variance. Equation (\[funeq\]) is a functional equation that can be solved to find the adjustment term, as was done in many of the papers cited in the previous paragraph.
The influence adjustment can be calculated by taking a limit of the Gateaux derivative as shown in Ichimura and Newey (2017). Let $\gamma(F)$ be the limit of $\hat{\gamma}$ when $F$ is the true distribution of $z_{i}$, as before. Let $G_{z}^{h}$ be a family of distributions that approaches a point mass at $z$ as $h\longrightarrow0.$ If $\phi(z_{i},\beta,\gamma_{0},\lambda_{0})$ is continuous in $z_{i}$ with probability one then$$\phi(z,\beta,\gamma_{0},\lambda_{0})=\lim_{h\longrightarrow0}\left( \left.
\frac{\partial E[m(z_{i},\beta,\gamma(F_{\tau}^{h}))]}{\partial\tau
}\right\vert _{\tau=0}\right) ,F_{\tau}^{h}=(1-\tau)F_{0}+\tau G_{z}^{h}.
\label{derlim}$$ This calculation is more constructive than equation (\[funeq\]) in the sense that the adjustment term here is a limit of a derivative rather than the solution to a functional equation. In Sections 5 and 6 we use those results to construct LR estimators when the first step is a nonparametric instrumental variables (NPIV) estimator.
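Equation (\[derlim\]) suggests a simple numerical recipe: contaminate the empirical distribution with a small mass at $z$, recompute $\gamma$, and difference the averaged moment. The sketch below is illustrative only; the limit $h\rightarrow0$ is replaced by an exact point mass (appropriate when $\gamma(F)$ is smooth in $F$), and `gamma_of_F` and `m_of_gamma` are hypothetical user-supplied maps:

```python
import numpy as np

def numeric_influence(z_point, sample, gamma_of_F, m_of_gamma, tau=1e-4):
    """Gateaux-derivative approximation to the influence adjustment,
    eq. (derlim).  gamma_of_F(values, weights) -> gamma computed from a
    weighted distribution; m_of_gamma(gamma) -> scalar E_n[m(z_i, beta, gamma)]."""
    n = len(sample)
    w0 = np.full(n, 1.0 / n)
    g0 = gamma_of_F(sample, w0)
    # contaminated distribution (1 - tau) F_n + tau * delta_{z_point}
    vals = np.append(sample, z_point)
    w_tau = np.append((1 - tau) * w0, tau)
    g_tau = gamma_of_F(vals, w_tau)
    return (m_of_gamma(g_tau) - m_of_gamma(g0)) / tau
```

For the simple case $\gamma(F)=E_{F}[z]$ and $m(z,\beta,\gamma)=\gamma-\beta$, this finite difference reproduces the textbook influence function $z-E[z]$ exactly, since the functional is linear in $F$.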
With a formula for $\phi(z,\beta,\gamma,\lambda)$ in hand from either solving the functional equation in equation (\[funeq\]) or from calculating the limit of the derivative in equation (\[derlim\]), one can estimate the adjustment term by plugging estimators $\hat{\gamma}$ and $\hat{\lambda}$ into $\phi(z,\beta,\gamma,\lambda).$ This approach to estimating LR moments can be used to construct LR moments for the average surplus described near the end of Section 2. There the adjustment term depends on the conditional density of $p_{1i}$ given $p_{2i}$ and $y_{i}$. Let $\hat{f}_{\ell}(p_{1}|p_{2},y)$ be some estimator of the conditional pdf of $p_{1i}$ given $p_{2i}$ and $y_{i}.$ Plugging that estimator into the formula for $\lambda_{0}(x)$ gives $\hat{\lambda}_{\ell}(x)=\frac{\ell(p_{1},y)}{\hat{f}_{\ell}(p_{1}|p_{2},y)}.$ This $\hat{\lambda}_{\ell}(x)$ can then be used in equation (\[exlr\]).
Estimating the Influence Adjustment for First Step Series Estimators
--------------------------------------------------------------------
Estimating the adjustment term is relatively straightforward when the first step is a series estimator. The adjustment term can be estimated by treating the first step estimator as if it were parametric and applying a standard formula for the adjustment term for parametric two-step estimators. Suppose that $\hat{\gamma}_{\ell}$ depends on the data through a $K\times1$ vector $\hat{\zeta}_{\ell}$ of parameter estimators that has true value $\zeta_{0}$. Let $m(z,\beta,\zeta)$ denote $m(z,\beta,\gamma)$ as a function of $\zeta.$ Suppose that there is a $K\times1$ vector of functions $h(z,\zeta)$ such that $\hat{\zeta}_{\ell}$ satisfies$$\frac{1}{\sqrt{\bar{n}_{\ell}}}\sum_{i\in\bar{I}_{\ell}}h(z_{i},\hat{\zeta
}_{\ell})=o_{p}(1),$$ where $\bar{I}_{\ell}$ is a subset of observations, none of which are included in $I_{\ell},$ and $\bar{n}_{\ell}$ is the number of observations in $\bar
{I}_{\ell}.$ Then a standard calculation for parametric two-step estimators (e.g. Newey, 1984, and Murphy and Topel, 1985) gives the parametric adjustment term$$\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})=\hat{\Psi}_{\ell}(\beta)h(z_{i},\hat{\zeta}_{\ell}),\hat{\Psi}_{\ell}(\beta)=-\sum_{j\in\bar
{I}_{\ell}}\frac{\partial m(z_{j},\beta,\hat{\zeta}_{\ell})}{\partial\zeta
}\left( \sum_{j\in\bar{I}_{\ell}}\frac{\partial h(z_{j},\hat{\zeta}_{\ell})}{\partial\zeta}\right) ^{-1},i\in I_{\ell}.$$ In many cases $\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$ approximates the true adjustment term $\phi(z,\beta,\gamma_{0},\lambda_{0}),$ as shown by Newey (1994a, 1997) and Ackerberg, Chen, and Hahn (2012) for estimating the asymptotic variance of functions of series estimators. Here this approximation is used for estimation of $\beta$ instead of just for variance estimation. The estimated LR moment function will be$$\psi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})=m(z_{i},\beta
,\hat{\zeta}_{\ell})+\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell}).
\label{lr series}$$ We note that if $\hat{\zeta}_{\ell}$ were computed from the whole sample then $\hat{\phi}(\beta)=0$. This degeneracy does not occur when cross-fitting is used, which removes “own observation” bias and is important for first step machine learning estimators, as noted in Section 2.
We can apply this approach to construct LR moment functions for an estimator of the average surplus bound example that is based on series regression. Here the first step estimator of $\gamma_{0}(x)=E[q_{i}|x_{i}=x]$ will be that from an ordinary least squares regression of $q_{i}$ on a vector $a(x_{i})$ of approximating functions. The corresponding $m(z,\beta,\zeta)$ and $h(z,\zeta)$ are$$m(z,\beta,\zeta)=A(x)^{\prime}\zeta-\beta,h(z,\zeta)=a(x)[q-a(x)^{\prime}\zeta],A(x)=\int\ell(p_{1},y)a(p_{1},p_{2},y)dp_{1}.$$ Let $\hat{\zeta}_{\ell}$ denote the least squares coefficients from regressing $q_{i}$ on $a(x_{i})$ for observations that are not included in $I_{\ell}$. Then the estimator of the locally robust moments given in equation (\[lr series\]) is $$\begin{aligned}
\psi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell}) & =A(x_{i})^{\prime
}\hat{\zeta}_{\ell}-\beta+\hat{\Psi}_{\ell}a(x_{i})[q_{i}-a(x_{i})^{\prime
}\hat{\zeta}_{\ell}],\\
\hat{\Psi}_{\ell} & =\sum_{j\in\bar{I}_{\ell}}A(x_{j})^{\prime}\left(
\sum_{j\in\bar{I}_{\ell}}a(x_{j})a(x_{j})^{\prime}\right) ^{-1}.\end{aligned}$$ It can be shown similarly to Newey (1994a, p. 1369) that $\hat{\Psi}_{\ell}$ estimates the population least squares coefficients from a regression of $\lambda_{0}(x_{i})$ on $a(x_{i}),$ so that $\hat{\lambda}_{\ell}(x_{i})=\hat{\Psi}_{\ell}a(x_{i})$ estimates $\lambda_{0}(x_{i}).$ In comparison the LR estimator described in the previous subsection was based on an explicit nonparametric estimator of $f_{0}(p_{1}|p_{2},y),$ while this $\hat{\lambda
}_{\ell}(x)$ implicitly estimates the inverse of that pdf via a mean-square approximation of $\lambda_{0}(x_{i})$ by $\hat{\Psi}_{\ell}a(x_{i}).$
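The series construction in equation (\[lr series\]) applied to this example can be sketched as follows. This is an illustrative Python implementation (not from the paper); the `_out` arrays play the role of the off-fold set $\bar{I}_{\ell}$ used to fit $\hat{\zeta}_{\ell}$ and $\hat{\Psi}_{\ell}$, and the `_in` arrays the role of the fold $I_{\ell}$ where the moment is evaluated:

```python
import numpy as np

def lr_series_moment(beta, q_in, a_in, A_in, q_out, a_out, A_out):
    """Estimated LR moment for the surplus example with a series first step.
    a: basis a(x_i) (n x K); A: A(x_i) = int l(p1, y) a(p1, p2, y) dp1 (n x K);
    q: outcomes. Returns psi_i for each in-fold observation."""
    # first-step least squares coefficients zeta_hat from off-fold data
    zeta, *_ = np.linalg.lstsq(a_out, q_out, rcond=None)
    # Psi_hat = (sum_j A(x_j))' (sum_j a(x_j) a(x_j)')^{-1}
    Psi = A_out.sum(axis=0) @ np.linalg.inv(a_out.T @ a_out)
    # psi_i = A(x_i)' zeta_hat - beta + Psi_hat a(x_i) [q_i - a(x_i)' zeta_hat]
    resid = q_in - a_in @ zeta
    return A_in @ zeta - beta + (a_in @ Psi) * resid
```

Consistent with the discussion above, `a_in @ Psi` is the implicit estimate $\hat{\lambda}_{\ell}(x_{i})=\hat{\Psi}_{\ell}a(x_{i})$ of $\lambda_{0}(x_{i})$, so no explicit density estimator is needed.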
Chernozhukov, Newey, and Robins (2018) introduce machine learning methods for choosing the functions to include in the vector $A(x)$. This method can be combined with machine learning methods for estimating $E[q_{i}|x_{i}]$ to construct a double machine learning estimator of average surplus, as shown in Chernozhukov, Hausman, and Newey (2018).
In parametric models moment functions like those in equation (\[lr series\]) are used to “partial out” nuisance parameters $\zeta.$ For maximum likelihood these moment functions are the basis of Neyman’s (1959) C-alpha test. Wooldridge (1991) generalized such moment conditions to nonlinear least squares and Lee (2005), Bera et al. (2010), and Chernozhukov et al. (2015) to GMM. What is novel here is their use in the construction of semiparametric estimators and the interpretation of the estimated LR moment functions $\psi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$ as the sum of an original moment function $m(z_{i},\beta,\hat{\zeta}_{\ell})$ and an influence adjustment $\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$.
Estimating the Influence Adjustment with First Step Smoothing
-------------------------------------------------------------
The adjustment term can be estimated in a general way that allows for kernel density, locally linear regression, and other kernel smoothing estimators for the first step. The idea is to differentiate with respect to the effect of the $i^{th}$ observation on sample moments. Newey (1994b) used a special case of this approach to estimate the asymptotic variance of a functional of a kernel based semiparametric or nonparametric estimator. Here we extend this method to a wider class of first step estimators, such as locally linear regression, and apply it to estimate the adjustment term for construction of LR moments.
We will describe this estimator for the case where $\gamma$ is a vector of functions of a vector of variables $x.$ Let $h(z,x,\gamma)$ be a vector of functions of a data observation $z$, $x$, and a possible realized value of $\gamma$ (i.e. a vector of real numbers $\gamma$). Also let $\hat{h}_{\ell
}(x,\gamma)=\sum_{j\in\bar{I}_{\ell}}h(z_{j},x,\gamma)/\bar{n}_{\ell}$ be a sample average over a set of observations $\bar{I}_{\ell}$ not included in $I_{\ell},$ where $\bar{n}_{\ell}$ is the number of observations in $\bar{I}_{\ell}.$ We assume that the first step estimator $\hat{\gamma}_{\ell}(x)$ solves$$0=\hat{h}_{\ell}(x,\gamma).$$ We suppress the dependence of $h$ and $\hat{\gamma}$ on a bandwidth. For example, for a pdf $\kappa(u)$ a kernel density estimator would correspond to $h(z_{j},x,\gamma)=\kappa(x-x_{j})-\gamma$ and a locally linear regression would be $\hat{\gamma}_{1}(x)$ for$$h(z_{j},x,\gamma)=\kappa(x-x_{j})\left(
\begin{array}
[c]{c}1\\
x-x_{j}\end{array}
\right) [y_{j}-\gamma_{1}-(x-x_{j})^{\prime}\gamma_{2}].$$
To measure the effect of the $i^{th}$ observation on $\hat{\gamma}$ let $\hat{\gamma}_{\ell i}^{\xi}(x)$ be the solution to $$0=\hat{h}_{\ell}(x,\gamma)+\xi\cdot h(z_{i},x,\gamma).$$ This $\hat{\gamma}_{\ell i}^{\xi}(x)$ is the value of the function obtained from adding the contribution $\xi\cdot h(z_{i},x,\gamma)$ of the $i^{th}$ observation. An estimator of the adjustment term can be obtained by differentiating the average of the original moment function with respect to $\xi$ at $\xi=0.$ This procedure leads to an estimated locally robust moment function given by$$\psi(z_{i},\beta,\hat{\gamma}_{\ell})=m(z_{i},\beta,\hat{\gamma}_{\ell
})+\left. \frac{\partial}{\partial\xi}\frac{1}{\bar{n}_{\ell}}\sum_{j\in
\bar{I}_{\ell}}m(z_{j},\beta,\hat{\gamma}_{\ell i}^{\xi}(\cdot))\right\vert
_{\xi=0}.$$ This estimator is a generalization of the influence function estimator for kernels in Newey (1994b).
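For the kernel density case $h(z_{j},x,\gamma)=\kappa(x-x_{j})-\gamma$, the perturbed first-order condition can be solved in closed form and the $\xi$-derivative approximated by a finite difference. A minimal illustrative sketch (Gaussian kernel; `m_of_gamma` is a hypothetical map from the fitted density values on a grid to the averaged original moment):

```python
import numpy as np

def xi_derivative_adjustment(z_i, z_bar, x_eval, m_of_gamma, bw=0.5, xi=1e-5):
    """Numerical influence adjustment for a kernel-density first step:
    gamma_hat solves 0 = h_bar(x, g) + xi * h(z_i, x, g), and we difference
    the averaged original moment in xi at xi = 0."""
    kappa = lambda u: np.exp(-0.5 * (u / bw) ** 2) / (bw * np.sqrt(2 * np.pi))
    def gamma(xi_):
        # closed-form solution of the perturbed first-order condition
        base = kappa(x_eval[:, None] - z_bar[None, :]).mean(axis=1)
        return (base + xi_ * kappa(x_eval - z_i)) / (1.0 + xi_)
    return (m_of_gamma(gamma(xi)) - m_of_gamma(gamma(0.0))) / xi
```

At $\xi=0$ the analytical derivative of the fitted density at a point $x$ is $\kappa(x-z_{i})-\hat{\gamma}(x)$, and the finite difference recovers it up to $O(\xi)$ error.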
Double and Partial Robustness
=============================
The zero derivative condition in equation (\[lrdef\]) is an appealing robustness property in and of itself. A zero derivative means that the expected moment functions remain closer to zero than $\tau$ as $\tau$ varies away from zero. This property can be interpreted as local insensitivity of the moments to the value of $\gamma$ being plugged in, with the moments remaining close to zero as $\gamma$ varies away from its true value. Because it is difficult to get nonparametric functions exactly right, especially in high dimensional settings, this property is an appealing one.
Such robustness considerations, well explained in Robins and Rotnitzky (2001), have motivated the development of doubly robust (DR) moment conditions. DR moment conditions have expectation zero if one first stage component is incorrect. DR moment conditions allow two chances for the moment conditions to hold, an appealing robustness feature. Also, DR moment conditions have simpler conditions for asymptotic normality than general LR moment functions as discussed in Section 7. Because many interesting LR moment conditions are also DR we consider double robustness.
LR moments that are constructed by adding the adjustment term for first step estimation provide candidates for DR moment functions. The derivative of the expected moments with respect to each first step will be zero, a necessary condition for DR. The condition for moments constructed in this way to be DR is the following:
<span style="font-variant:small-caps;">Assumption 1:</span> *There are sets* $\Gamma$ *and* $\Lambda
$ *such that for all* $\gamma\in\Gamma$ *and* $\lambda\in
\Lambda$$$E[m(z_{i},\beta_{0},\gamma)]=-E[\phi(z_{i},\beta_{0},\gamma,\lambda
_{0})],E[\phi(z_{i},\beta_{0},\gamma_{0},\lambda)]=0.$$
This condition is just the definition of DR for the moment function $\psi(z,\beta,\gamma,\lambda)=m(z,\beta,\gamma)+\phi(z,\beta,\gamma,\lambda)$, pertaining to the specific sets $\Gamma$ and $\Lambda.$
The construction of adding the adjustment term to an identifying or original moment function leads to several novel classes of DR moment conditions. One such class has a first step that satisfies a conditional moment restriction$$E[y_{i}-\gamma_{0}(w_{i})|x_{i}]=0, \label{cmrlin}$$ where $w_{i}$ is potentially endogenous and $x_{i}$ is a vector of instrumental variables. This condition is the nonparametric instrumental variable (NPIV) restriction as in Newey and Powell (1989, 2003) and Newey (1991). A first step conditional expectation where $\gamma_{0}(x_{i})=E[y_{i}|x_{i}]$ is included as a special case with $w_{i}=x_{i}.$ Ichimura and Newey (2017) showed that the adjustment term for this step takes the form $\phi(z,\gamma,\lambda)=\lambda(x)[y-\gamma(w)]$ so $m(z,\beta,\gamma
)+\lambda(x)[y-\gamma(w)]$ is a candidate for a DR moment function. A sufficient condition for DR is:
<span style="font-variant:small-caps;">Assumption 2:</span> *i) Equation (\[cmrlin\]) is satisfied; ii)* $\Lambda=\{\lambda(x):E[\lambda(x_{i})^{2}]<\infty\}$ *and* $\Gamma=\{\gamma(w):E[\gamma(w_{i})^{2}]<\infty\};$ *iii) there is* $v(w)$ *with* $E[v(w_{i})^{2}]<\infty$ *such that* $E[m(z_{i},\beta_{0},\gamma)]=E[v(w_{i})\{\gamma(w_{i})-\gamma_{0}(w_{i})\}]$ *for all* $\gamma\in\Gamma$*; iv) there is* $\lambda
_{0}(x)$ *such that* $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}]$*; and v)* $E[y_{i}^{2}]<\infty.$
By the Riesz representation theorem condition iii) is necessary and sufficient for $E[m(z_{i},\beta_{0},\gamma)]$ to be a mean square continuous functional of $\gamma$ with representer $v(w).$ Condition iv) is an additional condition giving continuity in the reduced form difference $E[\gamma(w_{i})-\gamma
_{0}(w_{i})|x_{i}]$, as further discussed in Ichimura and Newey (2017). Under this condition$$\begin{aligned}
E[m(z_{i},\beta_{0},\gamma)] & =E[E[\lambda_{0}(x_{i})|w_{i}]\{\gamma
(w_{i})-\gamma_{0}(w_{i})\}]=E[\lambda_{0}(x_{i})\{\gamma(w_{i})-\gamma
_{0}(w_{i})\}]\\
& =-E[\phi(z_{i},\gamma,\lambda_{0})],\text{ \ }E[\phi(z_{i},\gamma
_{0},\lambda)]=E[\lambda(x_{i})\{y_{i}-\gamma_{0}(w_{i})\}]=0.\end{aligned}$$ Thus Assumption 2 implies Assumption 1 so that we have
<span style="font-variant:small-caps;">Theorem 3:</span> *If Assumption 2 is satisfied then* $m(z,\beta
,\gamma)+\lambda(x)\{y-\gamma(w)\}$ *is doubly robust.*
There are many interesting, novel examples of DR moment conditions that are special cases of Theorem 3. The average surplus bound is an example where $y_{i}=q_{i},$ $w_{i}=x_{i},$ $x_{i}$ is the observed vector of prices and income, $\Lambda=\Gamma$ is the set of all measurable functions of $x_{i}$ with finite second moment, and $\gamma_{0}(x)=E[y_{i}|x_{i}=x].$ Let $x_{1}$ denote $p_{1}$ and $x_{2}$ the vector of other prices and income, so that $x=(x_{1},x_{2}^{\prime})^{\prime}$. Also let $f_{0}(x_{1}|x_{2})$ denote the conditional pdf of $p_{1}$ given $x_{2}$ and $\ell(x)=\ell(p_{1},y)$ for income $y$, a component of $x_{2}$, so that $\ell(x)$ may also be written as $\ell(p_{1},x_{2})$. Let $m(z,\beta,\gamma)=\int\ell(p_{1},x_{2})\gamma(p_{1},x_{2})dp_{1}-\beta$ as before. Multiplying and dividing through by $f_{0}(p_{1}|x_{2})$ gives, for all $\gamma,\lambda\in\Gamma$ and $\lambda
_{0}(x)=f_{0}(x_{1}|x_{2})^{-1}\ell(x),$ $$E[m(z_{i},\beta_{0},\gamma)]=E[\int\ell(p_{1},x_{2i})\gamma(p_{1},x_{2i})dp_{1}]-\beta_{0}=E[E[\lambda_{0}(x_{i})\gamma(x_{i})|x_{2i}]]-\beta_{0}=E[\lambda_{0}(x_{i})\{\gamma(x_{i})-\gamma_{0}(x_{i})\}].$$ Theorem 3 then implies that the LR moment function for average surplus $m(z,\beta,\gamma)+\lambda(x)[q-\gamma(x)]$ is DR. A corresponding DR estimator $\hat{\beta}$ is given in equation (\[exlr\]).
The surplus bound is an example of a parameter where $\beta_{0}=E[g(z_{i},\gamma_{0})]$ for some linear functional $g(z,\gamma)$ of $\gamma$ and for $\gamma_{0}$ satisfying the conditional moment restriction of equation (\[cmrlin\])$.$ For the surplus bound $g(z,\gamma)=\int\ell(p_{1},x_{2})\gamma(p_{1},x_{2})dp_{1}.$ If Assumption 2 is satisfied then choosing $m(z,\beta,\gamma)=g(z,\gamma)-\beta$ a DR moment condition is $g(z,\gamma
)-\beta+\lambda(x)[y-\gamma(w)].$ A corresponding DR estimator is$$\hat{\beta}=\frac{1}{n}\sum_{i=1}^{n}\{g(z_{i},\hat{\gamma}_{i})+\hat{\lambda
}_{i}(x_{i})[y_{i}-\hat{\gamma}_{i}(w_{i})]\}, \label{drlin}$$ where $\hat{\gamma}_{i}(w)$ and $\hat{\lambda}_{i}(x)$ are estimators of $\gamma_{0}(w)$ and $\lambda_{0}(x)$ respectively. An estimator $\hat{\gamma
}_{i}$ can be constructed by nonparametric regression when $w_{i}=x_{i}$ or NPIV in general. A series estimator $\hat{\lambda}_{i}(x)$ can be constructed similarly to the surplus bound example in Section 3.2. For $w_{i}=x_{i}$ Newey and Robins (2017) give such series estimators of $\hat{\lambda}_{i}(x)$ and Chernozhukov, Newey, and Robins (2018) show how to choose the approximating functions for $\hat{\lambda}_{i}(x_{i})$ by machine learning. Simple and general conditions for root-n consistency and asymptotic normality of $\hat{\beta}$ that allow for machine learning are given in Section 7.
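To make the series construction concrete, here is a minimal numpy sketch (our own illustration, not code from the paper) of a DR estimator of the form in equation (\[drlin\]) for the exogenous average derivative $\beta_{0}=E[\partial\gamma_{0}(x_{i})/\partial x]$ with $w_{i}=x_{i}$: $\hat{\gamma}$ is a polynomial series regression, and $\hat{\lambda}$ is a series Riesz representer estimate whose coefficients solve the sample analogue of $E[\lambda_{0}(x_{i})b(x_{i})]=E[b^{\prime}(x_{i})]$, an identity that follows from integration by parts when boundary terms vanish. Cross-fitting is omitted for brevity; the cross-fit version would compute $\hat{\gamma}_{i}$ and $\hat{\lambda}_{i}$ from observations outside $i$'s fold.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)
y = np.sin(x) + 0.3 * rng.normal(size=n)
# Target: beta0 = E[gamma0'(x)] = E[cos(x)] = exp(-1/2) for x ~ N(0, 1).

deg = 5
b = np.vander(x, deg + 1, increasing=True)           # basis b(x) = (1, x, ..., x^5)
db = np.hstack([np.zeros((n, 1)),
                b[:, :-1] * np.arange(1, deg + 1)])  # derivative basis b'(x)

# First step gamma_hat: series regression of y on b(x).
coef_g, *_ = np.linalg.lstsq(b, y, rcond=None)
g_hat, dg_hat = b @ coef_g, db @ coef_g

# Riesz representer lambda_hat: coefficients solve the sample analogue
# of E[b b'] rho = E[db], i.e. the moment identity from integration by parts.
rho = np.linalg.solve(b.T @ b / n, db.mean(axis=0))
lam_hat = b @ rho

beta_dr = np.mean(dg_hat + lam_hat * (y - g_hat))
print(beta_dr)  # close to exp(-0.5) ≈ 0.6065
```

For standard normal $x$ the true representer is the score $\lambda_{0}(x)=x$, which lies in the span of the basis, so `lam_hat` is consistent here by construction.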
Novel examples of the DR estimator in equation (\[drlin\]) with $w_{i}=x_{i}$ are given by Newey and Robins (2017) and Chernozhukov, Newey, and Robins (2018). Also, Appendix C provides a generalization to $\gamma(w)$ and $\lambda(x)$ that satisfy orthogonality conditions more general than conditional moment restrictions, along with novel examples of those. A novel example with $w_{i}\neq
x_{i}$ is a weighted average derivative of $\gamma_{0}(w)$ satisfying equation (\[cmrlin\]). Here $g(z,\gamma)=\bar{v}(w)\partial\gamma(w)/\partial w$ for some weight function $\bar{v}(w)$. Let $f_{0}(w)$ be the pdf of $w_{i}$ and $v(w)=-f_{0}(w)^{-1}\partial\lbrack\bar{v}(w)f_{0}(w)]/\partial w,$ assuming that derivatives exist. Assume that $\bar{v}(w)\gamma(w)f_{0}(w)$ is zero on the boundary of the support of $w_{i}.$ Integration by parts then gives Assumption 2 iii). Assume also that there exists $\lambda_{0}\in\Lambda$ with $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}].$ Then for estimators $\hat{\gamma}_{i}$ and $\hat{\lambda}_{i}$ a DR estimator of the weighted average derivative is$$\hat{\beta}=\frac{1}{n}\sum_{i=1}^{n}\{\bar{v}(w_{i})\frac{\partial\hat
{\gamma}_{i}(w_{i})}{\partial w}+\hat{\lambda}_{i}(x_{i})[y_{i}-\hat{\gamma
}_{i}(w_{i})]\}.$$ This is a DR version of the weighted average derivative estimator of Ai and Chen (2007). A special case of this example is the DR moment condition for the weighted average derivative in the exogenous case where $w_{i}=x_{i}$ given in Firpo and Rothe (2017).
Theorem 3 includes existing DR moment functions as special cases where $w_{i}=x_{i}$, including the mean with randomly missing data given by Robins and Rotnitzky (1995), the class of DR estimators in Robins et al. (2008), and the DR estimators of Firpo and Rothe (2017). We illustrate for the mean with missing data. Let $w=x,$ $x=(a,u)$ for an observed data indicator $a\in\{0,1\}$ and covariates $u,$ $m(z,\beta,\gamma)=\gamma(1,u)-\beta,$ and $\lambda_{0}(x)=a/\Pr(a_{i}=1|u_{i}=u).$ Here it is well known that $$E[m(z_{i},\beta_{0},\gamma)]=E[\gamma(1,u_{i})]-\beta_{0}=E[\lambda_{0}(x_{i})\{\gamma(x_{i})-\gamma_{0}(x_{i})\}]=-E[\lambda_{0}(x_{i})\{y_{i}-\gamma(x_{i})\}].$$ Then DR of the moment function $\gamma(1,u)-\beta+\lambda(x)[y-\gamma(x)]$ of Robins and Rotnitzky (1995) follows by Theorem 3.
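For the missing-data mean this DR moment function is the familiar augmented inverse probability weighting (AIPW) form. A small simulation sketch (ours, not the paper's) illustrates the double robustness: with a correct propensity score, even a badly misspecified outcome regression still yields a consistent estimate, while the naive complete-case mean is biased.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40000
u = rng.normal(size=n)
pi = 1.0 / (1.0 + np.exp(-u))           # true propensity Pr(a = 1 | u)
a = rng.binomial(1, pi)
y_full = 1.0 + u + 0.5 * rng.normal(size=n)
y = np.where(a == 1, y_full, 0.0)       # y observed only when a = 1

# Deliberately misspecified outcome regression (a constant) paired with
# the correct propensity: double robustness still gives consistency.
gamma_bad = np.full(n, y[a == 1].mean())
lam = a / pi
beta_dr = np.mean(gamma_bad + lam * (y - gamma_bad))
print(beta_dr)  # close to E[y_full] = 1.0, unlike the complete-case mean
```

Symmetrically, a correct outcome regression paired with a misspecified propensity would also be consistent; only both being wrong breaks the moment condition.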
Another novel class of DR moment conditions consists of those where the first step $\gamma$ is the pdf of a variable $x$ that is a function of the data observation $z.$ By Proposition 5 of Newey (1994a), the adjustment term for such a first step is $\phi(z,\beta,\gamma,\lambda)=\lambda(x)-\int\lambda(u)\gamma(u)du$ for some possible $\lambda$. A sufficient condition for DR as in Assumption 1 is:
<span style="font-variant:small-caps;">Assumption 3:</span> $x_{i}$ *has pdf* $\gamma_{0}(x)$ *and for* $\Gamma=\{\gamma:\gamma(x)\geq0$, $\int\gamma(x)dx=1\}$ *there is* $\lambda_{0}(x)$ *such that for all* $\gamma\in\Gamma,$$$E[m(z_{i},\beta_{0},\gamma)]=\int\lambda_{0}(x)\{\gamma(x)-\gamma_{0}(x)\}dx.$$
Note that for $\phi(z,\gamma,\lambda)=\lambda(x)-\int\lambda(\tilde{x})\gamma(\tilde{x})d\tilde{x}$ it follows from Assumption 3 that $E[m(z_{i},\beta_{0},\gamma)]=-E[\phi(z_{i},\gamma,\lambda_{0})]$ for all $\gamma
\in\Gamma$. Also, $E[\phi(z_{i},\gamma_{0},\lambda)]=E[\lambda(x_{i})]-\int\lambda(\tilde{x})\gamma_{0}(\tilde{x})d\tilde{x}=0.$ Then Assumption 1 is satisfied so we have:
<span style="font-variant:small-caps;">Theorem 4:</span> *If Assumption 3 is satisfied then* $m(z,\beta
,\gamma)+\lambda(x)-\int\lambda(\tilde{x})\gamma(\tilde{x})d\tilde{x}$ *is DR.*
The integrated squared density $\beta_{0}=\int\gamma_{0}(x)^{2}dx$ is an example for $m(z,\beta,\gamma)=\gamma(x)-\beta,$ $\lambda_{0}=\gamma_{0},$ and $$\psi(z,\beta,\gamma,\lambda)=\gamma(x)-\beta+\lambda(x)-\int\lambda(\tilde{x})\gamma(\tilde{x})d\tilde{x}.$$ This DR moment function seems to be novel. Another example is the density weighted average derivative (DWAD) of Powell, Stock, and Stoker (1989), where $m(z,\beta,\gamma)=-2y\cdot\partial\gamma(x)/\partial x-\beta$. Let $\delta(x)=E[y_{i}|x_{i}=x]\gamma_{0}(x)$. Assuming that $\delta(x)\gamma(x)$ is zero on the boundary and differentiable, integration by parts gives$$E[m(z_{i},\beta_{0},\gamma)]=-2E[y_{i}\partial\gamma(x_{i})/\partial x]-\beta_{0}=\int[\partial\delta(\tilde{x})/\partial x]\{\gamma(\tilde{x})-\gamma_{0}(\tilde{x})\}d\tilde{x},$$ so that Assumption 3 is satisfied with $\lambda_{0}(x)=\partial\delta
(x)/\partial x.$ Then by Theorem 4$$\hat{\beta}=\frac{1}{n}\sum_{i=1}^{n}\{-2\frac{\partial\hat{\gamma}_{i}(x_{i})}{\partial x}+\frac{\partial\hat{\delta}_{i}(x_{i})}{\partial x}-\int\frac{\partial\hat{\delta}_{i}(\tilde{x})}{\partial x}\hat{\gamma}_{i}(\tilde{x})d\tilde{x}\}$$ is a DR estimator. It was shown in NHR (1998) that the Powell, Stock, and Stoker (1989) estimator with a twicing kernel is numerically equal to a leave one out version of this estimator for the original (before twicing) kernel. Thus the DR result for $\hat{\beta}$ gives an interpretation of the twicing kernel estimator as a DR estimator.
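The integrated squared density moment function is easy to implement with kernel density estimates. The sketch below (our illustration, with an ad hoc rule-of-thumb bandwidth) sets $\hat{\lambda}=\hat{\gamma}$, uses a leave-one-out Gaussian kernel density estimate for the sample-average terms, and uses the closed-form Gaussian convolution for $\int\hat{\lambda}(\tilde{x})\hat{\gamma}(\tilde{x})d\tilde{x}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
h = 1.06 * x.std() * n ** (-1 / 5)      # rule-of-thumb bandwidth

def normal_pdf(d, s):
    return np.exp(-0.5 * (d / s) ** 2) / (s * np.sqrt(2 * np.pi))

D = x[:, None] - x[None, :]
K = normal_pdf(D, h)
# Leave-one-out kernel density estimates gamma_hat_{-i}(x_i).
g_loo = (K.sum(axis=1) - K.diagonal()) / (n - 1)
# Closed form for the integral of a product of two Gaussian KDEs:
# int gamma_hat * lambda_hat = average over pairs of N(x_i - x_j; 0, 2 h^2).
cross = normal_pdf(D, np.sqrt(2) * h).mean()
# Mean of the DR moment set to zero with lambda_hat = gamma_hat:
# beta_hat = mean(gamma_hat) + mean(lambda_hat) - int lambda_hat * gamma_hat.
beta_hat = 2 * g_loo.mean() - cross
print(beta_hat)  # close to 1 / (2 sqrt(pi)) ≈ 0.2821 for N(0, 1) data
```

This is essentially the twicing-type construction discussed above: the plug-in bias of each term cancels to first order, leaving an error of higher order in the bandwidth.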
The expectations of the DR moment functions of both Theorems 3 and 4 are affine in $\gamma$ and $\lambda$ holding the other fixed at the truth. This property of DR moment functions is general, as we show by the following characterization of DR moment functions:
<span style="font-variant:small-caps;">Theorem 5:</span> *If* $\Gamma$ *and* $\Lambda$ *are linear then* $\psi(z,\beta,\gamma,\lambda)$ *is DR if and only if* $$\left. \partial E[\psi(z_{i},\beta_{0},(1-\tau)\gamma_{0}+\tau\gamma
,\lambda_{0})]\right\vert _{\tau=0}=0,\left. \partial E[\psi(z_{i},\beta
_{0},\gamma_{0},(1-\tau)\lambda_{0}+\tau\lambda)]\right\vert _{\tau=0}=0,$$ *and* $E[\psi(z_{i},\beta_{0},\gamma,\lambda_{0})]$ *and* $E[\psi(z_{i},\beta_{0},\gamma_{0},\lambda)]$ *are affine in* $\gamma
$ *and* $\lambda$ *respectively.*
The zero derivative condition of this result is a Gateaux derivative, componentwise version of LR. Thus, we can focus a search for DR moment conditions on those that are LR. Also, a DR moment function must have an expectation that is affine in each of $\gamma$ and $\lambda$ while the other is held fixed at the truth. It is sufficient for this condition that $\psi(z_{i},\beta_{0},\gamma,\lambda)$ be affine in each of $\gamma$ and $\lambda$ while the other is held fixed. This property can depend on how $\gamma$ and $\lambda$ are specified. For example the missing data DR moment function $\gamma(1,u)-\beta+\pi(u)^{-1}a[y-\gamma(x)]$ is not affine in the propensity score $\pi(u)=\Pr(a_{i}=1|u_{i}=u)$ but is in $\lambda
(x)=\pi(u)^{-1}a$.
In general Theorem 5 motivates the construction of DR moment functions by adding the adjustment term to obtain a LR moment function that will then be DR if it is affine in $\gamma$ and $\lambda$ separately. It is interesting to note that in the NPIV setting of Theorem 3 and the density setting of Theorem 4 that the adjustment term is always affine in $\gamma$ and $\lambda.$ It then follows from Theorem 5 that in those settings LR moment conditions are precisely those where $E[m(z_{i},\beta_{0},\gamma)]$ is affine in $\gamma.$ Robins and Rotnitzky (2001) gave conditions for the existence of DR moment conditions in semiparametric models. Theorem 5 is complementary to those results in giving a complete characterization of DR moments when $\Gamma$ and $\Lambda$ are linear.
Assumptions 2 and 3 both specify that $E[m(z_{i},\beta_{0},\gamma)]$ is continuous in an integrated squared deviation norm. These continuity conditions are linked to finiteness of the semiparametric variance bound for the functional $E[m(z_{i},\beta_{0},\gamma)],$ as discussed in Newey and McFadden (1994) for Assumption 2 with $w_{i}=x_{i}$ and for Assumption 3. For Assumption 2 with $w_{i}\neq x_{i}$ Severini and Tripathi (2012) showed for $m(z,\beta,\gamma)=v(w)\gamma(w)-\beta$ with known $v(w)$ that the existence of $\lambda_{0}(w)$ with $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}]$ is necessary for the existence of a root-n consistent estimator of $\beta$. Thus the conditions of Assumption 2 are also linked to necessary conditions for root-n consistent estimation when $w_{i}\neq x_{i}.$
Partial robustness refers to settings where $E[m(z_{i},\beta_{0},\bar{\gamma
})]=0$ for some $\bar{\gamma}\neq\gamma_{0}$. The novel DR moment conditions given here lead to novel partial robustness results as we now demonstrate in the conditional moment restriction setting of Assumption 2. When $\lambda
_{0}(x)$ in Assumption 2 is restricted in some way there may exist $\tilde{\gamma}\neq\gamma_{0}$ with $E[\lambda_{0}(x_{i})\{y_{i}-\tilde
{\gamma}(w_{i})\}]=0.$ Then$$E[m(z_{i},\beta_{0},\tilde{\gamma})]=-E[\lambda_{0}(x_{i})\{y_{i}-\tilde{\gamma}(w_{i})\}]=0.$$ Consider the average derivative $\beta_{0}=E[\partial\gamma_{0}(w_{i})/\partial w_{r}]$ where $m(z,\beta,\gamma)=\partial\gamma(w)/\partial
w_{r}-\beta$ for some $r.$ Let $\delta=(E[a(x_{i})p(w_{i})^{\prime}])^{-1}E[a(x_{i})y_{i}]$ be the limit of the linear IV estimator with right hand side variables $p(w)$ and the same number of instruments $a(x).$ The following is a partial robustness result that provides conditions for the average derivative of the linear IV estimator to equal the true average derivative:
<span style="font-variant:small-caps;">Theorem 6:</span> If $-\partial\ln f_{0}(w)/\partial w_{r}=c^{\prime}p(w)$ for a constant vector $c$, $E[p(w_{i})p(w_{i})^{\prime}]$ is nonsingular, and $E[a(x_{i})|w_{i}=w]=\Pi p(w)$ for a square nonsingular $\Pi$ then for $\delta=(E[a(x_{i})p(w_{i})^{\prime}])^{-1}E[a(x_{i})y_{i}],$$$E[\partial\{p(w_{i})^{\prime}\delta\}/\partial w_{r}]=E[\partial\gamma
_{0}(w_{i})/\partial w_{r}].$$
This result shows that if the density score is a linear combination of the right-hand side variables $p(w)$ used by linear IV, the conditional expectation of the instruments $a(x_{i})$ given $w_{i}$ is a nonsingular linear combination of $p(w)$, and $p(w)$ has a nonsingular second moment matrix then the average derivative of the linear IV estimator is the true average derivative. This is a generalization to NPIV of Stoker’s (1986) result that linear regression coefficients equal the average derivatives when the regressors are multivariate Gaussian.
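A quick Monte Carlo check of this result in the exogenous special case (our illustration): for standard normal $w$ we have $-\partial\ln f_{0}(w)/\partial w=w$, so taking $p(w)=a(x)=w$ satisfies the conditions of Theorem 6, and the least squares slope should match the true average derivative even though $\gamma_{0}$ is nonlinear.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200000
w = rng.normal(size=n)          # Gaussian regressor: the score is linear in w
y = w ** 3 + rng.normal(size=n) # gamma0(w) = w^3, so E[gamma0'(w)] = 3 E[w^2] = 3

# OLS slope of y on w (exogenous case, instruments a(x) = p(w) = w).
slope = (w * y).mean() / (w ** 2).mean()
print(slope)  # close to the true average derivative, 3
```

In population the slope is $E[w^{4}]/E[w^{2}]=3$, exactly the average derivative, which is Stoker's (1986) result for Gaussian regressors.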
DR moment conditions can be used to identify parameters of interest. Under Assumption 1 $\beta_{0}$ may be identified from$$E[m(z_{i},\beta_{0},\bar{\gamma})]=-E[\phi(z_{i},\beta_{0},\bar{\gamma
},\lambda_{0})]$$ for any fixed $\bar{\gamma}$ when the solution $\beta_{0}$ to this equation is unique.
<span style="font-variant:small-caps;">Theorem 7:</span> *If Assumption 1 is satisfied,* $\lambda_{0}$ *is identified, and for some* $\bar{\gamma}$ *the equation* $E[\psi(z_{i},\beta,\bar{\gamma},\lambda_{0})]=0$ *has a unique solution then* $\beta_{0}$ *is identified as that solution.*
Applying this result to the NPIV setting of Assumption 2 gives an explicit formula for certain functionals of $\gamma_{0}(w)$ without requiring that the completeness identification condition of Newey and Powell (1989, 2003) be satisfied, similarly to Santos (2011). Suppose that $v(w)$ is identified, e.g. as for the weighted average derivative. Since both $w$ and $x$ are observed it follows that a solution $\lambda_{0}(x)$ to $v(w)=E[\lambda_{0}(x)|w]$ will be identified if such a solution exists. Plugging $\bar{\gamma}=0$ into the equation $E[\psi(z_{i},\beta_{0},\bar{\gamma},\lambda_{0})]=0$ gives
<span style="font-variant:small-caps;">Corollary 8:</span> *If* $v(w_{i})$ *is identified and there exists* $\lambda_{0}(x_{i})$ *such that* $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}]$ *then* $\beta_{0}=E[v(w_{i})\gamma_{0}(w_{i})]$ *is identified as* $\beta_{0}=E[\lambda_{0}(x_{i})y_{i}]$*.*
Note that this result holds without the completeness condition. Identification of $\beta_{0}=E[v(w_{i})\gamma_{0}(w_{i})]$ for known $v(w_{i})$ with $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}]$ follows from Severini and Tripathi (2006). Corollary 8 extends that analysis to the case where $v(w_{i})$ is only identified but not necessarily known and links it to DR moment conditions. Santos (2011) gives a related formula for a parameter $\beta_{0}=\int\tilde
{v}(w)\gamma_{0}(w)dw$. The formula here differs from Santos (2011) in being an expectation rather than a Lebesgue integral. Santos (2011) also constructed an estimator; constructing an analogous estimator here is beyond the scope of this paper.
Conditional Moment Restrictions
===============================
Models of conditional moment restrictions that depend on unknown functions are important in econometrics. In such models the nonparametric components may be determined simultaneously with the parametric components. In this setting it is useful to work directly with the instrumental variables to obtain LR moment conditions rather than to make a first step influence adjustment. For that reason we focus in this Section on constructing LR moments by orthogonalizing the instrumental variables.
Our orthogonal instruments framework is based on conditional moment restrictions of the form$$E[\rho_{j}(z_{i},\beta_{0},\gamma_{0})|x_{ji}]=0,(j=1,...,J),
\label{cond mom restrict}$$ where each $\rho_{j}(z,\beta,\gamma)$ is a scalar residual and $x_{j}$ are instruments that may differ across $j$. This model was considered by Chamberlain (1992) and Ai and Chen (2003, 2007) when $x_{j}$ is the same for each $j$ and by Ai and Chen (2012) when the set of $x_{j}$ includes $x_{j-1}.$ We allow the residual vector $\rho(z,\beta,\gamma)$ to depend on the entire function $\gamma$ and not just on its value at some function of the observed data $z_{i}$.
In this framework we consider LR moment functions having the form$$\psi(z,\beta,\gamma,\lambda)=\lambda(x)\rho(z,\beta,\gamma), \label{gcm}$$ where $\lambda(x)=[\lambda_{1}(x_{1}),...,\lambda_{J}(x_{J})]$ is a matrix of instrumental variables with the $j^{th}$ column given by $\lambda_{j}(x_{j}).$ We will define orthogonal instruments to be those that make $\psi
(z,\beta,\gamma,\lambda)$ locally robust. To define orthogonal instrumental variables we assume that $\gamma$ is allowed to vary over a linear set $\Gamma$ as $F$ varies. For each $\Delta\in\Gamma$ let$$\bar{\rho}_{\gamma}(x,\Delta)=(\frac{\partial E[\rho_{1}(z_{i},\beta
_{0},\gamma_{0}+\tau\Delta)|x_{1}]}{\partial\tau},...,\frac{\partial
E[\rho_{J}(z_{i},\beta_{0},\gamma_{0}+\tau\Delta)|x_{J}]}{\partial\tau
})^{\prime}.$$ This $\bar{\rho}_{\gamma}(x,\Delta)$ is the Gateaux derivative with respect to $\gamma$ of the conditional expectation of the residuals in the direction $\Delta.$ We characterize $\lambda_{0}(x)$ as orthogonal if$$E[\lambda_{0}(x_{i})\bar{\rho}_{\gamma}(x_{i},\Delta)]=0\text{ for all }\Delta\in\Gamma.$$ We assume that $\bar{\rho}_{\gamma}(x,\Delta)$ is linear in $\Delta$ and consider the Hilbert space of vectors of random vectors $a(x)=$ $(a_{1}(x_{1}),...,a_{J}(x_{J}))$ with inner product $\left\langle a,b\right\rangle
=E[a(x_{i})^{\prime}b(x_{i})]$. Let $\bar{\Lambda}_{\gamma}$ denote the closure of the set $\{\bar{\rho}_{\gamma}(x,\Delta):\Delta\in\Gamma\}$ in that Hilbert space. Orthogonal instruments are those where each row of $\lambda
_{0}(x)$ is orthogonal to $\bar{\Lambda}_{\gamma}.$ They can be interpreted as instrumental variables where the effect of estimation of $\gamma$ has been partialed out. When $\lambda_{0}(x)$ is orthogonal then $\psi(z,\beta
,\gamma,\lambda)=\lambda(x)\rho(z,\beta,\gamma)$ is LR:
<span style="font-variant:small-caps;">Theorem 9:</span> *If each row of* $\lambda_{0}(x)$ *is orthogonal to* $\bar{\Lambda}_{\gamma}$ *then the moment functions in equation (\[gcm\]) are LR.*
We also have a DR result:
<span style="font-variant:small-caps;">Theorem 10:</span> *If each row of* $\lambda_{0}(x)$ *is orthogonal to* $\bar{\Lambda}_{\gamma}$ *and* $\rho(z,\beta,\gamma)$ *is affine in* $\gamma\in\Gamma$ *then the moment functions in equation (\[gcm\]) are DR for* $\Lambda=\{\lambda(x):E[\lambda(x_{i})^{\prime}\rho(z_{i},\beta_{0},\gamma_{0})^{\prime}\rho(z_{i},\beta_{0},\gamma_{0})\lambda(x_{i})]<\infty\}$.
There are many ways to construct orthogonal instruments. For instance, given an $r\times J$ matrix of instrumental variables $\lambda(x)$ one could construct corresponding orthogonal ones $\lambda_{0}(x_{i})$ as the matrix where each row of $\lambda(x)$ is replaced by the residual from the least squares projection of the corresponding row of $\lambda(x)$ on $\bar{\Lambda}_{\gamma}$. For local identification of $\beta$ we also require that $$rank(\left. \partial E[\psi(z_{i},\beta,\gamma_{0},\lambda_{0})]/\partial\beta\right\vert _{\beta=\beta_{0}})=\dim(\beta). \label{local id beta}$$
A model where $\beta_{0}$ is identified from semiparametric conditional moment restrictions with common instrumental variables is a special case where $x_{ji}$ is the same for each $j$. In this case there is a way to construct orthogonal instruments that leads to an efficient estimator of $\beta_{0}$. Let $\Sigma(x_{i})$ denote some positive definite matrix with its smallest eigenvalue bounded away from zero, so that $\Sigma(x_{i})^{-1}$ is bounded. Let $\left\langle a,b\right\rangle _{\Sigma}=E[a(x_{i})^{\prime}\Sigma
(x_{i})^{-1}b(x_{i})]$ denote an inner product and note that $\bar{\Lambda
}_{\gamma}$ is closed in this inner product by $\Sigma(x_{i})^{-1}$ bounded. Let $\tilde{\lambda}_{k}^{\Sigma}(x_{i},\lambda)$ denote the residual from the least squares projection of the $k^{th}$ row $\lambda\left( x\right)
^{\prime}e_{k}$ of $\lambda(x)$ on $\bar{\Lambda}_{\gamma}$ with the inner product $\left\langle a,b\right\rangle _{\Sigma}.$ Then for all $\Delta
\in\Gamma,$ $$E[\tilde{\lambda}_{k}^{\Sigma}(x_{i},\lambda)^{\prime}\Sigma(x_{i})^{-1}\bar{\rho}_{\gamma}(x_{i},\Delta)]=0,$$ so that for $\tilde{\lambda}^{\Sigma}(x_{i},\lambda)=[\tilde{\lambda}_{1}^{\Sigma}(x_{i},\lambda),...,\tilde{\lambda}_{r}^{\Sigma}(x_{i},\lambda)]$ the instrumental variables $\tilde{\lambda}^{\Sigma}(x_{i},\lambda
)\Sigma(x_{i})^{-1}$ are orthogonal. Also, $\tilde{\lambda}^{\Sigma}(x_{i},\lambda)$ can be interpreted as the solution to$$\min_{\{D(x):D(x)^{\prime}e_{k}\in\bar{\Lambda}_{\gamma},k=1,...,r\}}tr(E[\{\lambda(x_{i})-D(x_{i})\}\Sigma(x_{i})^{-1}\{\lambda(x_{i})-D(x_{i})\}^{\prime}])$$ where the minimization is in the positive semidefinite sense.
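In a sample, this partialing out is an ordinary weighted least squares projection. The sketch below uses hypothetical matrices: `B` stands in for a basis spanning an estimate of $\bar{\Lambda}_{\gamma}$, and $\Sigma(x)$ is scalar for simplicity. Each instrument column is residualized in the $\left\langle a,b\right\rangle _{\Sigma}$ inner product, and the resulting sample orthogonality is verified.

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 500, 2
lam = rng.normal(size=(n, r))   # candidate instruments, one column per row of lambda(x)
B = rng.normal(size=(n, 3))     # basis spanning (an estimate of) Lambda_bar_gamma
sigma = 1.0 + rng.uniform(size=n)  # Sigma(x_i), scalar here

# Weighted least squares projection of each instrument column on B under
# the <a, b>_Sigma inner product, then take residuals.
W = 1.0 / sigma
BtWB = B.T @ (B * W[:, None])
BtWlam = B.T @ (lam * W[:, None])
lam_orth = lam - B @ np.linalg.solve(BtWB, BtWlam)

# Orthogonality: the sample version of E[lam_orth' Sigma^{-1} Delta] = 0
# for every Delta in the span of B.
check = lam_orth.T @ (B * W[:, None])
print(np.abs(check).max())  # numerical zero
```

The same projection with $\Sigma(x_{i})$ a matrix works column by column after pre-multiplying by $\Sigma(x_{i})^{-1/2}$.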
The orthogonal instruments that minimize the asymptotic variance of GMM in the class of GMM estimators with orthogonal instruments are given by$$\lambda_{0}^{\ast}(x)=\tilde{\lambda}^{\Sigma^{\ast}}(x,\lambda_{\beta})\Sigma^{\ast}(x)^{-1},\lambda_{\beta}(x_{i})=\left. \frac{\partial
E[\rho(z_{i},\beta,\gamma_{0})|x_{i}]}{\partial\beta}\right\vert _{\beta
=\beta_{0}}^{\prime},\Sigma^{\ast}(x_{i})=Var(\rho_{i}|x_{i}),\rho_{i}=\rho(z_{i},\beta_{0},\gamma_{0}).$$
<span style="font-variant:small-caps;">Theorem 11:</span> *The instruments* $\lambda_{0}^{\ast}(x_{i})$ *give an efficient estimator in the class of IV estimators with orthogonal instruments.*
The asymptotic variance of the GMM estimator with optimal orthogonal instruments is $$(E[m_{i}^{\ast}m_{i}^{\ast\prime}])^{-1}=(E[\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})\Sigma^{\ast}(x_{i})^{-1}\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})^{\prime}])^{-1}.$$ This matrix coincides with the semiparametric variance bound of Ai and Chen (2003). Estimation of the optimal orthogonal instruments is beyond the scope of this paper, though the series estimator of Ai and Chen (2003) could be used for this purpose.
This framework includes moment restrictions with an NPIV first step $\gamma$ satisfying $E[\rho(z_{i},\gamma_{0})|x_{i}]=0$ where we can specify $\rho
_{1}(z,\beta,\gamma)=m(z,\beta,\gamma),$ $x_{1i}=1,$ $\rho_{2}(z,\beta
,\gamma)=\rho(z,\gamma),$ and $x_{2i}=x_{i}.$ It generalizes that setup by allowing for more residuals $\rho_{j}(z,\beta,\gamma)$, $(j\geq3)$ and allowing all residuals to depend on $\beta.$
Asymptotic Theory
=================
In this Section we give simple and general asymptotic theory for LR estimators that incorporates the cross-fitting of equation (\[cfit\]). Throughout we use the structure of LR moment functions that are the sum $\psi(z,\beta
,\gamma,\lambda)=m(z,\beta,\gamma)+\phi(z,\beta,\gamma,\lambda)$ of an identifying or original moment function $m(z,\beta,\gamma)$ depending on a first step function $\gamma$ and an influence adjustment term $\phi
(z,\beta,\gamma,\lambda)$ that can depend on an additional first step $\lambda.$ The asymptotic theory will apply to any moment function that can be decomposed into a function of a single nonparametric estimator and a function of two nonparametric estimators. This structure and LR lead to particularly simple and general conditions.
The conditions we give are composed of mean square consistency conditions for first steps and one, two, or three rate conditions for quadratic remainders. We will only use one quadratic remainder rate for DR moment conditions, involving faster than $1/\sqrt{n}$ convergence of products of estimation errors for $\hat{\gamma}$ and $\hat{\lambda}.$ When $E[m(z_{i},\beta
_{0},\gamma)+\phi(z_{i},\beta_{0},\gamma,\lambda_{0})]$ is not affine in $\gamma$ we will impose a second rate condition that involves faster than $n^{-1/4}$ convergence of $\hat{\gamma}.$ When $E[\phi(z_{i},\gamma
_{0},\lambda)]$ is also not affine in $\lambda$ we will impose a third rate condition that involves faster than $n^{-1/4}$ convergence of $\hat{\lambda}.$ Most adjustment terms $\phi(z,\beta,\gamma,\lambda)$ of which we are aware, including for first step conditional moment restrictions and densities, have $E[\phi(z_{i},\beta_{0},\gamma_{0},\lambda)]$ affine in $\lambda,$ so that faster than $n^{-1/4}$ convergence of $\hat{\lambda}$ will not be required under our conditions. It will suffice for most LR estimators we know of to have faster than $n^{-1/4}$ convergence of $\hat{\gamma}$ and faster than $1/\sqrt{n}$ convergence of the product of estimation errors for $\hat{\gamma
}$ and $\hat{\lambda},$ with only the latter condition imposed for DR moment functions. We also impose some additional conditions for convergence of the Jacobian of the moments and sample second moments that give asymptotic normality and consistent asymptotic variance estimation for $\hat{\beta}$.
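The cross-fitting scheme of equation (\[cfit\]) is simple to express in code. The following generic skeleton is our own sketch, with `fit_first_step` and `mu` as user-supplied callables: the first steps are estimated on each fold's complement and the moment function is averaged over the held-out observations. For moment functions of the form $\psi=\mu(z,\gamma,\lambda)-\beta$ the estimator is just this cross-fit mean; the toy check uses the LR moment for a mean, where $\mu$ collapses to $z_{i}$.

```python
import numpy as np

def cross_fit_estimate(z, fit_first_step, mu, L=5, seed=0):
    """Cross-fit estimator for moments psi = mu(z, gamma, lam) - beta:
    fit the first steps on each fold's complement, evaluate mu on the
    held-out fold, and average over all observations."""
    n = len(z)
    idx = np.random.default_rng(seed).permutation(n)
    vals = []
    for fold in np.array_split(idx, L):
        train = np.setdiff1d(idx, fold)     # complement of the fold
        gamma_hat, lam_hat = fit_first_step(z[train])
        vals.append(mu(z[fold], gamma_hat, lam_hat))
    return np.concatenate(vals).mean()

# Toy check: LR moment for the mean, mu = gamma + lam * (z - gamma) with
# lam = 1, so the cross-fit estimate is exactly the sample mean of z.
z = np.random.default_rng(6).normal(loc=2.0, size=1000)
fit = lambda zt: (zt.mean(), 1.0)
mu = lambda zf, g, l: g + l * (zf - g)
est = cross_fit_estimate(z, fit, mu, L=5)
print(est)  # equals z.mean(), close to 2.0
```

In a real application `fit_first_step` would run the machine learning or nonparametric first steps and `mu` would evaluate $m+\phi$ at the held-out data.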
An important intermediate result for asymptotic normality is$$\sqrt{n}\hat{\psi}(\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})+o_{p}(1), \label{no effec}$$ where $\hat{\psi}(\beta)$ is the cross-fit, sample, LR moments of equation (\[cfit\]). This result will mean that the presence of the first step estimators has no effect on the limiting distribution of the moments at the true $\beta_{0}$. To formulate conditions for this result we decompose the difference between the left and right-hand sides into several remainders. Let $\phi(z,\gamma,\lambda)=\phi(z,\beta_{0},\gamma,\lambda),$ $\bar{\phi}(\gamma,\lambda)=E[\phi(z_{i},\gamma,\lambda)],$ and $\bar{m}(\gamma
)=E[m(z_{i},\beta_{0},\gamma)],$ so that $\bar{\psi}(\gamma,\lambda)=\bar
{m}(\gamma)+\bar{\phi}(\gamma,\lambda).$ Then adding and subtracting terms gives $$\sqrt{n}[\hat{\psi}(\beta_{0})-\sum_{i=1}^{n}\psi(z_{i},\beta_{0},\gamma
_{0},\lambda_{0})/n]=\hat{R}_{1}+\hat{R}_{2}+\hat{R}_{3}+\hat{R}_{4},
\label{redecomp}$$ where$$\begin{aligned}
\hat{R}_{1} & =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[m(z_{i},\beta_{0},\hat{\gamma}_{i})-m(z_{i},\beta_{0},\gamma_{0})-\bar{m}(\hat{\gamma}_{i})]\label{remain}\\
& +\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[\phi(z_{i},\hat{\gamma}_{i},\lambda
_{0})-\phi(z_{i},\gamma_{0},\lambda_{0})-\bar{\phi}(\hat{\gamma}_{i},\lambda_{0})+\phi(z_{i},\gamma_{0},\hat{\lambda}_{i})-\phi(z_{i},\gamma
_{0},\lambda_{0})-\bar{\phi}(\gamma_{0},\hat{\lambda}_{i})],\nonumber\\
\hat{R}_{2} & =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[\phi(z_{i},\hat{\gamma}_{i},\hat{\lambda}_{i})-\phi(z_{i},\hat{\gamma}_{i},\lambda_{0})-\phi
(z_{i},\gamma_{0},\hat{\lambda}_{i})+\phi(z_{i},\gamma_{0},\lambda
_{0})],\nonumber\\
\hat{R}_{3} & =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bar{\psi}(\hat{\gamma}_{i},\lambda_{0}),\;\;\;\hat{R}_{4}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bar{\phi
}(\gamma_{0},\hat{\lambda}_{i}).\nonumber\end{aligned}$$
We specify regularity conditions sufficient for each of $\hat{R}_{1}$, $\hat{R}_{2}$, $\hat{R}_{3},$ and $\hat{R}_{4}$ to converge in probability to zero so that equation (\[no effec\]) will hold. The remainder term $\hat
{R}_{1}$ is a stochastic equicontinuity term as in Andrews (1994). We give mean square consistency conditions for $\hat{R}_{1}\overset{p}{\longrightarrow
}0$ in Assumption 4.
The remainder term $\hat{R}_{2}$ is a second order remainder that involves both $\hat{\gamma}$ and $\hat{\lambda}.$ When the influence adjustment is $\phi(z,\gamma,\lambda)=\lambda(x)[y-\gamma(w)],$ as for conditional moment restrictions, then$$\hat{R}_{2}=\frac{-1}{\sqrt{n}}\sum_{i=1}^{n}[\hat{\lambda}_{i}(x_{i})-\lambda_{0}(x_{i})][\hat{\gamma}_{i}(w_{i})-\gamma_{0}(w_{i})].$$ $\hat{R}_{2}$ will converge to zero when the product of convergence rates for $\hat{\lambda}_{i}(x_{i})$ and $\hat{\gamma}_{i}(w_{i})$ is faster than $1/\sqrt{n}.$ However, that is not the weakest possible condition. Weaker conditions for locally linear regression first steps are given by Firpo and Rothe (2017) and for series regression first steps by Newey and Robins (2017). These weaker conditions still require that the product of biases of $\hat{\lambda}_{i}(x_{i})$ and $\hat{\gamma}_{i}(w_{i})$ converge to zero faster than $1/\sqrt{n}$ but have weaker conditions for variance terms. We allow for these weaker conditions by allowing $\hat{R}_{2}\overset{p}{\longrightarrow}0$ as a regularity condition. Assumption 5 gives these conditions.
We will have $\hat{R}_{3}=\hat{R}_{4}=0$ in the DR case of Assumption 1, where $\hat{R}_{1}\overset{p}{\longrightarrow}0$ and $\hat{R}_{2}\overset{p}{\longrightarrow}0$ will suffice for equation (\[no effec\]). In non-DR cases LR leads to $\bar{\psi}(\gamma,\lambda_{0})=\bar{m}(\gamma)+\bar{\phi}(\gamma,\lambda_{0})$ having a zero functional derivative with respect to $\gamma$ at $\gamma_{0}$, so that $\hat{R}_{3}\overset{p}{\longrightarrow}0$ when $\hat{\gamma}_{i}$ converges to $\gamma_{0}$ at a rapid enough, feasible rate. For example, suppose $\bar{\psi}(\gamma,\lambda_{0})$ is twice continuously Frechet differentiable in a neighborhood of $\gamma_{0}$ for a norm $\left\Vert \cdot\right\Vert ,$ with zero Frechet derivative at $\gamma_{0}$. Then$$\left\vert \hat{R}_{3}\right\vert \leq C\sum_{\ell=1}^{L}\sqrt{n}\left\Vert \hat{\gamma}_{\ell}-\gamma_{0}\right\Vert ^{2}\overset{p}{\longrightarrow}0$$ when $\left\Vert \hat{\gamma}-\gamma_{0}\right\Vert =o_{p}(n^{-1/4})$. Here $\hat{R}_{3}\overset{p}{\longrightarrow}0$ when each $\hat{\gamma}_{\ell}$ converges to $\gamma_{0}$ more quickly than $n^{-1/4}$. It may be possible to weaken this condition by bias correcting $m(z,\beta,\hat{\gamma}),$ as by the bootstrap in Cattaneo and Jansson (2017), by the jackknife in Cattaneo, Ma, and Jansson (2017), and by cross-fitting in Newey and Robins (2017). Consideration of such bias corrections for $m(z,\beta,\hat{\gamma})$ is beyond the scope of this paper.
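The $n^{-1/4}$ threshold in the bound above can be checked with a deterministic scaling sketch; the constant $C$ and the rates chosen below are arbitrary illustrative assumptions.

```python
import numpy as np

# Hedged scaling check (illustrative only): if ||gamma_hat - gamma_0|| shrinks like n^{-a},
# the bound |R3_hat| <= C * sqrt(n) * ||gamma_hat - gamma_0||^2 behaves like C * n^{1/2 - 2a},
# which vanishes exactly when a > 1/4, matching the o_p(n^{-1/4}) requirement in the text.

def r3_bound(n, a, C=1.0):
    return C * np.sqrt(n) * n ** (-2.0 * a)

fast = r3_bound(10**8, 0.30)   # a > 1/4: bound shrinks toward zero as n grows
slow = r3_bound(10**8, 0.20)   # a < 1/4: bound diverges with n
```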
In many cases $\hat{R}_{4}=0$ even though the moment conditions are not DR. For example, that is true when $\hat{\gamma}$ is a pdf or when $\gamma_{0}$ estimates the solution to a conditional moment restriction. In such cases mean square consistency, $\hat{R}_{2}\overset{p}{\longrightarrow}0,$ and faster than $n^{-1/4}$ consistency of $\hat{\gamma}$ suffice for equation (\[no effec\]); no convergence rate for $\hat{\lambda}$ is needed. The simplification that $\hat{R}_{4}=0$ seems to be the result of $\lambda$ being a Riesz representer for the linear functional that is the derivative of $\bar{m}(\gamma)$ with respect to $\gamma.$ Such a Riesz representer will enter $\bar{\phi}(\gamma_{0},\lambda)$ linearly, leading to $\hat{R}_{4}=0.$ When $\hat{R}_{4}\neq0$ then $\hat{R}_{4}\overset{p}{\longrightarrow}0$ will follow from twice Frechet differentiability of $\bar{\phi}(\gamma_{0},\lambda)$ in $\lambda$ and faster than $n^{-1/4}$ convergence of $\hat{\lambda}.$
All of the conditions can be easily checked for a wide variety of machine learning and conventional nonparametric estimators. There are well known conditions for mean square consistency of many conventional and machine learning methods. Rates for products of estimation errors are also known for many first step estimators, as are conditions for $n^{-1/4}$ consistency. Thus, the simple conditions we give here are general enough to apply to a wide variety of first step estimators.
The first formal assumption of this section is sufficient for $\hat{R}_{1}\overset{p}{\longrightarrow}0.$
<span style="font-variant:small-caps;">Assumption 4:</span> *For each* $\ell=1,...,L$*, i) either* $m(z,\beta_{0},\gamma)$ *does not depend on* $z$ *or* $\int\{m(z,\beta_{0},\hat{\gamma}_{\ell})-m(z,\beta_{0},\gamma_{0})\}^{2}F_{0}(dz)\overset{p}{\longrightarrow}0,$ *ii)* $\int\{\phi(z,\hat{\gamma}_{\ell},\lambda_{0})-\phi(z,\gamma_{0},\lambda_{0})\}^{2}F_{0}(dz)\overset{p}{\longrightarrow}0,$ *and* $\int\{\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\gamma_{0},\lambda_{0})\}^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.$
The cross-fitting used in the construction of $\hat{\psi}(\beta_{0})$ is what makes the mean-square consistency conditions of Assumption 4 sufficient for $\hat{R}_{1}\overset{p}{\longrightarrow}0$. The next condition is sufficient for $\hat{R}_{2}\overset{p}{\longrightarrow}0.$
<span style="font-variant:small-caps;">Assumption 5:</span> *For each* $\ell=1,...,L$*, either i)*$$\sqrt{n}\int\max_{j}|\phi_{j}(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi_{j}(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi_{j}(z,\hat{\gamma}_{\ell
},\lambda_{0})+\phi_{j}(z,\gamma_{0},\lambda_{0})|F_{0}(dz)\overset{p}{\longrightarrow}0$$ *or ii)* $\hat{R}_{2}\overset{p}{\longrightarrow}0.$
As previously discussed, this condition allows for just $\hat{R}_{2}\overset{p}{\longrightarrow}0$ in order to accommodate the weak regularity conditions of Firpo and Rothe (2017) and Newey and Robins (2017). The first result of this section shows that Assumptions 4 and 5 are sufficient for equation (\[no effec\]) when the moment functions are DR.
<span style="font-variant:small-caps;">Lemma 12:</span> *If Assumption 1 is satisfied,* $\hat{\gamma}\in\Gamma$ *and* $\hat{\lambda}\in\Lambda$ *with probability approaching one, and Assumptions 4 and 5 are satisfied, then equation (\[no effec\]) is satisfied.*
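The cross-fit construction of $\hat{\psi}(\beta_{0})$ that Lemma 12 relies on can be sketched in code. This is a hedged illustration, not the paper's implementation; `psi`, `fit_gamma`, and `fit_lambda` are hypothetical user-supplied callables.

```python
import numpy as np

# Hedged sketch of the L-fold cross-fit moment average
#   psi_hat(beta) = (1/n) * sum_l sum_{i in I_l} psi(z_i, beta, gamma_hat_l, lambda_hat_l),
# where each first step is trained only on observations OUTSIDE fold I_l, so that the
# psi values in fold l are independent of the fitted gamma_hat_l and lambda_hat_l.

def cross_fit_psi(z, beta, psi, fit_gamma, fit_lambda, L=5, seed=0):
    n = len(z)
    folds = np.random.default_rng(seed).integers(0, L, size=n)
    total = 0.0
    for l in range(L):
        in_fold = folds == l
        gamma_l = fit_gamma(z[~in_fold])    # identifying first step, fit off-fold
        lambda_l = fit_lambda(z[~in_fold])  # influence-adjustment step, fit off-fold
        total += float(np.sum(psi(z[in_fold], beta, gamma_l, lambda_l)))
    return total / n
```

As a sanity check, with $\psi(z,\beta,\gamma,\lambda)=z-\beta$ and trivial first steps this reduces to the sample mean of $z$ minus $\beta$.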
An important class of DR estimators are those from equation (\[drlin\]). The following result gives conditions for asymptotic linearity of these estimators:
<span style="font-variant:small-caps;">Theorem 13:</span> *If a) Assumptions 2 and 4 i) are satisfied with* $\hat{\gamma}\in\Gamma$ *and* $\hat{\lambda}\in\Lambda$ *with probability approaching one; b)* $\lambda_{0}(x_{i})$ *and* $E[\{y_{i}-\gamma_{0}(w_{i})\}^{2}|x_{i}]$ *are bounded; c) for each* $\ell=1,...,L$*,* $\int[\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]^{2}F_{0}(dz)\overset{p}{\longrightarrow}0,$ $\int[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)]^{2}F_{0}(dz)\overset{p}{\longrightarrow}0$*, and either*$$\sqrt{n}\left\{ \int[\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]^{2}F_{0}(dw)\right\} ^{1/2}\left\{ \int[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)]^{2}F_{0}(dx)\right\} ^{1/2}\overset{p}{\longrightarrow}0$$ *or*$$\frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}\{\hat{\gamma}_{\ell}(w_{i})-\gamma_{0}(w_{i})\}\{\hat{\lambda}_{\ell}(x_{i})-\lambda_{0}(x_{i})\}\overset{p}{\longrightarrow}0;$$ *then*$$\sqrt{n}(\hat{\beta}-\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[g(z_{i},\gamma_{0})-\beta_{0}+\lambda_{0}(x_{i})\{y_{i}-\gamma_{0}(w_{i})\}]+o_{p}(1).$$
The conditions of this result are simple, general, and allow for machine learning first steps. Conditions a) and b) simply require mean square consistency of the first step estimators $\hat{\gamma}$ and $\hat{\lambda}.$ The only convergence rate condition is c), which requires a product of estimation errors for the two first steps to go to zero faster than $1/\sqrt{n}$. This condition allows for a trade-off in convergence rates between the two first steps, and can be satisfied even when one of the two rates is not very fast. This trade-off can be important when $\lambda_{0}(x)$ is not continuous in one of the components of $x$, as in the surplus bound example. Discontinuity in $x$ can limit the rate at which $\lambda_{0}(x)$ can be estimated. This result extends the results of Chernozhukov et al. (2018) and Farrell (2015) for DR estimators of treatment effects to the whole novel class of DR estimators from equation (\[drlin\]) with machine learning first steps. In interesting related work, Athey et al. (2016) show that root-n consistent estimation of an average treatment effect is possible under very weak conditions on the propensity score when the regression function is strongly sparse. Thus, for machine learning the conditions here and in Athey et al. (2016) are complementary, and one may prefer either depending on whether or not the regression function can be estimated extremely well by a sparse method. The results here apply to many more DR moment conditions.
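To make the theorem concrete, the following hedged sketch specializes the doubly robust moment of equation (\[drlin\]) to an average treatment effect with a known propensity score of one half. The known propensity, the linear first step, and all coefficient values are simplifying assumptions made only for illustration, not the paper's general setting.

```python
import numpy as np

# Hedged sketch: DR moment g(z, gamma) + lambda_0(x) * {y - gamma(w)} for the ATE, with
# gamma_hat an off-fold OLS fit of E[y|d,x] and lambda_0(d) = d/(1/2) - (1-d)/(1/2)
# the Riesz representer for the ATE functional when treatment is randomized with pi = 1/2.

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=n)
d = rng.integers(0, 2, size=n).astype(float)   # randomized treatment, pi(x) = 1/2
y = 1.0 * d + x + rng.normal(size=n)           # true ATE is 1.0

folds = rng.integers(0, 2, size=n)             # L = 2 cross-fitting folds
psi_sum = 0.0
for l in range(2):
    hold = folds == l
    # First step on the complement of the fold: OLS of y on (1, d, x).
    X = np.column_stack([np.ones(int((~hold).sum())), d[~hold], x[~hold]])
    coef, *_ = np.linalg.lstsq(X, y[~hold], rcond=None)
    gamma = lambda dd, xx, c=coef: c[0] + c[1] * dd + c[2] * xx
    lam = 2.0 * d[hold] - 2.0 * (1.0 - d[hold])         # d/pi - (1-d)/(1-pi), pi = 1/2
    g = gamma(1.0, x[hold]) - gamma(0.0, x[hold])       # identifying part g(z, gamma)
    phi = lam * (y[hold] - gamma(d[hold], x[hold]))     # influence adjustment
    psi_sum += float(np.sum(g + phi))

beta_hat = psi_sum / n
```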
DR moment conditions have the special feature that $\hat{R}_{3}$ and $\hat
{R}_{4}$ in Proposition 4 are equal to zero. For estimators that are not DR we impose that $\hat{R}_{3}$ and $\hat{R}_{4}$ converge to zero.
<span style="font-variant:small-caps;">Assumption 6:</span> *For each* $\ell=1,...,L$*, i)* $\sqrt
{n}\bar{\psi}(\hat{\gamma}_{\ell},\lambda_{0})\overset{p}{\longrightarrow}0$ *and ii)* $\sqrt{n}\bar{\phi}(\gamma_{0},\hat{\lambda}_{\ell
})\overset{p}{\longrightarrow}0.$
Assumption 6 requires that $\hat{\gamma}$ converge to $\gamma_{0}$ rapidly enough but places no restrictions on the convergence rate of $\hat{\lambda}$ when $\bar{\phi}(\gamma_{0},\hat{\lambda}_{\ell})=0.$
<span style="font-variant:small-caps;">Lemma 14:</span> *If Assumptions 4-6 are satisfied then equation (\[no effec\]) is satisfied.*
Assumptions 4-6 are based on the decomposition of LR moment functions into an identifying part and an influence function adjustment. These conditions differ from those in previous work on semiparametric estimation, such as Andrews (1994), Newey (1994), Newey and McFadden (1994), Chen, Linton, and van Keilegom (2003), Ichimura and Lee (2010), Escanciano et al. (2016), and Chernozhukov et al. (2018), which are not based on this decomposition. The conditions extend Chernozhukov et al. (2018) to many more DR estimators and to estimators that are nonlinear in $\hat{\gamma}$, while requiring a convergence rate only for $\hat{\gamma}$ and not for $\hat{\lambda}$.
This framework helps explain the potential problems with “plugging in” a first step machine learning estimator into a moment function that is not LR. Lemma 14 implies that if Assumptions 4-6 are satisfied for some $\hat{\lambda}$ then $\sqrt{n}\hat{m}(\beta_{0})-\sum_{i=1}^{n}\psi(z_{i},\beta_{0},\gamma
_{0})/\sqrt{n}\overset{p}{\longrightarrow}0$ if and only if$$\hat{R}_{5}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\phi(z_{i},\hat{\gamma},\hat{\lambda})\overset{p}{\longrightarrow}0. \label{plugin}$$ The plug-in method will fail when this equation does not hold. For example, suppose $\gamma_{0}=E[y|x]$ so that by Proposition 4 of Newey (1994),$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\phi(z_{i},\hat{\gamma},\hat{\lambda})=\frac{-1}{\sqrt{n}}\sum_{i=1}^{n}\hat{\lambda}_{i}(x_{i})[y_{i}-\hat{\gamma
}_{i}(x_{i})].$$ Here $\hat{R}_{5}\overset{p}{\longrightarrow}0$ is an approximate orthogonality condition between the approximation $\hat{\lambda}_{i}(x_{i})$ to $\lambda_{0}(x_{i})$ and the nonparametric first stage residuals $y_{i}-\hat{\gamma}_{i}(x_{i}).$ Machine learning uses model selection in the construction of $\hat{\gamma}_{i}(x_{i}).$ If the model selected by $\hat{\gamma}_{i}(x_{i})$ to approximate $\gamma_{0}(x_{i})$ is not rich (or dense) enough to also approximate $\lambda_{0}(x_{i})$ then $\hat{\lambda}_{i}(x_{i})$ need not be approximately orthogonal to $y_{i}-\hat{\gamma}_{i}(x_{i})$ and $\hat{R}_{5}$ need not converge to zero. In particular, if the variables selected to be used to approximate $\gamma_{0}(x_{i})$ cannot be used to also approximate $\lambda_{0}(x_{i})$ then the approximate orthogonality condition can fail. This phenomenon helps explain the poor performance of the plug-in estimator shown in Belloni, Chernozhukov, and Hansen (2014) and Chernozhukov et al. (2017, 2018). The plug-in estimator can be root-n consistent if the only thing being selected is an overall order of approximation, as in the series estimation results of Newey (1994). General conditions for root-n consistency of the plug-in estimator can be formulated using Assumptions 4-6 and $\hat{R}_{2}\overset{p}{\longrightarrow}0,$ which we do in Appendix D.
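The orthogonality failure just described can be illustrated numerically. The coefficient sizes, the "selected" model, and the data generating process below are stylized assumptions chosen for illustration only.

```python
import numpy as np

# Hedged illustration of eq. (plugin): if model selection for gamma_hat drops a regressor
# that lambda_0 depends on, the plug-in remainder R5_hat need not converge to zero.

rng = np.random.default_rng(2)
n = 10_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = x1 + 0.1 * x2 + rng.normal(size=n)      # gamma_0(x) = x1 + 0.1 * x2
lam = x2                                    # lambda_0(x) = x2

# "Selected" first step regresses y on x1 only, mimicking a sparse selector that
# discards x2; the omitted-term bias is then correlated with lambda_0.
b1 = np.sum(x1 * y) / np.sum(x1 * x1)
R5 = -np.sum(lam * (y - b1 * x1)) / np.sqrt(n)   # roughly -0.1 * sqrt(n), far from zero

# Including x2 in the first step restores in-sample orthogonality, and R5 vanishes.
X = np.column_stack([x1, x2])
bfull, *_ = np.linalg.lstsq(X, y, rcond=None)
R5_full = -np.sum(lam * (y - X @ bfull)) / np.sqrt(n)
```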
Another component of an asymptotic normality result is convergence of the Jacobian term $\partial\hat{\psi}(\beta)/\partial\beta$ to $M=E[\left. \partial\psi(z_{i},\beta,\gamma_{0},\lambda_{0})/\partial\beta\right\vert _{\beta=\beta_{0}}]$. We impose the following condition for this purpose.
<span style="font-variant:small-caps;">Assumption 7:</span> $M\,$*exists and there is a neighborhood* $\mathcal{N}$ *of* $\beta_{0}$ *and* $\left\Vert \cdot\right\Vert $ *such that i) for each* $\ell,$ $\left\Vert \hat{\gamma}_{\ell}-\gamma_{0}\right\Vert \overset{p}{\longrightarrow}0,$ $\left\Vert \hat{\lambda}_{\ell}-\lambda_{0}\right\Vert \overset{p}{\longrightarrow}0;$ *ii) for all* $\gamma$ *and* $\lambda$ *with* $\left\Vert \gamma-\gamma_{0}\right\Vert $ *and* $\left\Vert \lambda-\lambda_{0}\right\Vert $ *small enough* $\psi(z_{i},\beta,\gamma,\lambda)$ *is differentiable in* $\beta$ *on* $\mathcal{N}$ *with probability approaching* $1$*; iii) there is* $\zeta^{\prime}>0$ *and* $d(z_{i})$ *with* $E[d(z_{i})]<\infty$ *such that for* $\beta\in\mathcal{N}$ *and* $\left\Vert \gamma-\gamma_{0}\right\Vert $ *small enough* $$\left\Vert \frac{\partial\psi(z_{i},\beta,\gamma,\lambda)}{\partial\beta}-\frac{\partial\psi(z_{i},\beta_{0},\gamma,\lambda)}{\partial\beta}\right\Vert \leq d(z_{i})\left\Vert \beta-\beta_{0}\right\Vert ^{\zeta^{\prime}};$$ *iv) for each* $\ell=1,...,L,$ $j,$ *and* $k$, $\int\left\vert \partial\psi_{j}(z,\beta_{0},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})/\partial\beta_{k}-\partial\psi_{j}(z,\beta_{0},\gamma_{0},\lambda_{0})/\partial\beta_{k}\right\vert F_{0}(dz)\overset{p}{\longrightarrow}0.$
The following intermediate result gives Jacobian convergence.
<span style="font-variant:small-caps;">Lemma 15:</span> *If Assumption 7 is satisfied then for any* $\bar{\beta}\overset{p}{\longrightarrow}\beta_{0},$ $\hat{\psi}(\beta)$ *is differentiable at* $\bar{\beta}$ *with probability approaching one and* $\partial\hat{\psi}(\bar{\beta})/\partial\beta\overset{p}{\longrightarrow}M.$
With these results in place the asymptotic normality of semiparametric GMM follows in a standard way.
<span style="font-variant:small-caps;">Theorem 16:</span> *If Assumptions 4-7 are satisfied,* $\hat{\beta}\overset{p}{\longrightarrow}\beta_{0},$ $\hat{W}\overset{p}{\longrightarrow}W$*,* $M^{\prime}WM$ *is nonsingular, and* $E[\left\Vert \psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\right\Vert ^{2}]<\infty$ *then for* $\Omega=E[\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})^{\prime}],$$$\sqrt{n}(\hat{\beta}-\beta_{0})\overset{d}{\longrightarrow}N(0,V),\qquad V=(M^{\prime}WM)^{-1}M^{\prime}W\Omega WM(M^{\prime}WM)^{-1}.$$
It is also useful to have a consistent estimator of the asymptotic variance of $\hat{\beta}$. As usual such an estimator can be constructed as$$\begin{aligned}
\hat{V} & =(\hat{M}^{\prime}\hat{W}\hat{M})^{-1}\hat{M}^{\prime}\hat{W}\hat{\Omega}\hat{W}\hat{M}(\hat{M}^{\prime}\hat{W}\hat{M})^{-1},\\
\hat{M} & =\frac{\partial\hat{\psi}(\hat{\beta})}{\partial\beta},\hat
{\Omega}=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in\mathcal{I}_{\ell}}\psi
(z_{i},\hat{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})\psi(z_{i},\hat{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})^{\prime}.\end{aligned}$$ Note that this variance estimator ignores the estimation of $\gamma$ and $\lambda$ which works here because the moment conditions are LR. The following result gives conditions for consistency of $\hat{V}.$
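The formulas for $\hat{V}$, $\hat{M}$, and $\hat{\Omega}$ translate directly into code. The following is a hedged sketch under the assumption that the cross-fit moment values have already been stacked into a matrix; the function and argument names are our own.

```python
import numpy as np

# Hedged sketch of the sandwich variance estimator above: `psi` is the n x m matrix of
# cross-fit moment values psi(z_i, beta_hat, gamma_hat_l, lambda_hat_l), M_hat is the
# m x p Jacobian estimate, and W the weighting matrix. Estimation of gamma and lambda
# is ignored, which is valid here precisely because the moment conditions are LR.

def sandwich_variance(psi, M_hat, W):
    n = psi.shape[0]
    Omega_hat = psi.T @ psi / n                 # outer-product estimate of Omega
    H = np.linalg.inv(M_hat.T @ W @ M_hat)      # (M'WM)^{-1}
    return H @ M_hat.T @ W @ Omega_hat @ W @ M_hat @ H
```

Asymptotic standard errors for $\hat{\beta}$ are then the square roots of the diagonal of the returned matrix divided by $n$.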
<span style="font-variant:small-caps;">Theorem 17:</span> *If Assumptions 4 and 7 are satisfied with* $E[b(z_{i})^{2}]<\infty,$ $M^{\prime}WM$ *is nonsingular, and* $$\int\left\Vert \phi(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\hat{\gamma}_{\ell},\lambda_{0})+\phi(z,\gamma_{0},\lambda_{0})\right\Vert ^{2}F_{0}(dz)\overset{p}{\longrightarrow}0$$ *then* $\hat{\Omega}\overset{p}{\longrightarrow}\Omega$ *and* $\hat{V}\overset{p}{\longrightarrow}V.$
In this section we have used cross-fitting and a decomposition of moment conditions into identifying and influence adjustment components to formulate simple and general conditions for asymptotic normality of LR GMM estimators. For reducing higher order bias and variance it may be desirable to let the number of groups grow with the sample size. That case is beyond the scope of this paper.
Appendix A: Proofs of Theorems
==============================
**Proof of Theorem 1:** By ii) and iii), $$0=(1-\tau)\int\phi(z,F_{\tau})F_{0}(dz)+\tau\int\phi(z,F_{\tau})G(dz).$$ Dividing by $\tau$ and solving gives$$\frac{1}{\tau}\int\phi(z,F_{\tau})F_{0}(dz)=-\int\phi(z,F_{\tau})G(dz)+\int\phi(z,F_{\tau})F_{0}(dz).$$ Taking limits as $\tau\longrightarrow0$, $\tau>0$, and using i) gives$$\frac{d}{d\tau}\int\phi(z,F_{\tau})F_{0}(dz)=-\int\phi(z,F_{0})G(dz)+0=-\frac{d\mu(F_{\tau})}{d\tau}.\text{ }Q.E.D.$$
**Proof of Theorem 2**: We begin by deriving $\phi_{1},$ the adjustment term for the first step CCP estimation. We use the definitions given in the body of the paper. We also let$$\begin{aligned}
P_{\tilde{v}j}(\tilde{v}) & =\partial P(\tilde{v})/\partial\tilde{v}_{j},\text{ }\pi_{1}=\Pr(y_{t1}=1),\text{ }\lambda_{10}(x)=E[y_{t1}|x_{t+1}=x],\\
\lambda_{j0}(x) & =E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}|x_{t+1}=x],(j=2,...,J).\end{aligned}$$ Consider a parametric submodel as described in Section 4 and let $\gamma
_{1}(x,\tau)$ denote the conditional expectation of $y_{t}$ given $x_{t}$ under the parametric submodel. Note that for $\tilde{v}_{t}=\tilde{v}(x_{t}),$$$\begin{aligned}
& E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{\partial E[H(\gamma
_{1}(x_{t+1},\tau))|x_{t},y_{tj}=1]}{\partial\tau}]\\
& =\frac{\partial}{\partial\tau}E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}H(\gamma_{1}(x_{t+1},\tau))]\\
& =\frac{\partial}{\partial\tau}E[E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}|x_{t+1}]H(\gamma_{1}(x_{t+1},\tau))]\\
& =\frac{\partial}{\partial\tau}E[\lambda_{j0}(x_{t+1})H(\gamma_{1}(x_{t+1},\tau))]=\frac{\partial}{\partial\tau}E[\lambda_{j0}(x_{t})H(\gamma_{1}(x_{t},\tau))]\\
& =E[\lambda_{j0}(x_{t})\frac{\partial H(\gamma_{10}(x_{t}))}{\partial
P}^{\prime}\frac{\partial\gamma_{1}(x_{t},\tau)}{\partial\tau}]=E[\lambda
_{j0}(x_{t})\frac{\partial H(\gamma_{10}(x_{t}))}{\partial P}^{\prime}\{y_{t}-\gamma_{10}(x_{t})\}S(z_{t})].\end{aligned}$$ where the last (sixth) equality follows as in Proposition 4 of Newey (1994a), and the fourth equality follows by equality of the marginal distributions of $x_{t}$ and $x_{t+1}$. Similarly, for $\pi_{1}=\Pr(y_{t1}=1)$ and $\lambda_{10}(x)=E[y_{t1}|x_{t+1}=x]$ we have$$\begin{aligned}
\frac{\partial E[H(\gamma_{1}(x_{t+1},\tau))|y_{t1}=1]}{\partial\tau} &
=\frac{\partial E[\pi_{1}^{-1}y_{t1}H(\gamma_{1}(x_{t+1},\tau))]}{\partial
\tau}=\frac{\partial E[\pi_{1}^{-1}\lambda_{10}(x_{t+1})H(\gamma_{1}(x_{t+1},\tau))]}{\partial\tau}\\
& =\frac{\partial E[\pi_{1}^{-1}\lambda_{10}(x_{t})H(\gamma_{1}(x_{t},\tau))]}{\partial\tau}\\
& =E[\pi_{1}^{-1}\lambda_{10}(x_{t})\frac{\partial H(\gamma_{10}(x_{t}))}{\partial P}^{\prime}\{y_{t}-\gamma_{10}(x_{t})\}S(z_{t})]\end{aligned}$$ Then combining terms gives$$\begin{aligned}
& \frac{\partial E[m(z_{t},\beta_{0},\gamma_{1}(\tau),\gamma_{-10})]}{\partial\tau}\\
& =-\delta\sum_{j=2}^{J}\{E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{\partial E[H(\gamma_{1}(x_{t+1},\tau))|x_{t},y_{tj}=1]}{\partial\tau}]\\
& -E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})]\frac{\partial E[H(\gamma_{1}(x_{t+1},\tau))|y_{t1}=1]}{\partial\tau}\}\\
& =-\delta\sum_{j=2}^{J}E[\{\lambda_{j0}(x_{t})-E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})]\pi_{1}^{-1}\lambda_{10}(x_{t})\}\frac{\partial
H(\gamma_{10}(x_{t}))}{\partial P}^{\prime}\{y_{t}-\gamma_{10}(x_{t})\}S(z_{t})]\\
& =E[\phi_{1}(z_{t},\beta_{0},\gamma_{0},\lambda_{0})S(z_{t})].\end{aligned}$$
Next, we show the result for $\phi_{j}(z,\beta,\gamma,\lambda)$ for $2\leq
j\leq J.$ As in the proof of Proposition 4 of Newey (1994a), for any $w_{t}$ we have$$\frac{\partial}{\partial\tau}E[w_{t}|x_{t},y_{tj}=1,\tau]=E[\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}\{w_{t}-E[w_{t}|x_{t},y_{tj}=1]\}S(z_{t})|x_{t}].$$ It follows that$$\begin{aligned}
\frac{\partial E[m(z_{t},\beta_{0},\gamma_{j}(\tau),\gamma_{-j,0})]}{\partial\tau} & =-\delta E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{\partial E[u_{1,t+1}+H_{t+1}|x_{t},y_{tj}=1,\tau]}{\partial\tau}]\\
& =-\delta\frac{\partial}{\partial\tau}E[E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\{u_{1,t+1}+H_{t+1}\}|x_{t},y_{tj}=1,\tau]]\\
& =-\delta E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}\{u_{1,t+1}+H_{t+1}-\gamma_{j0}(x_{t},\beta_{0},\gamma_{1})\}S(z_{t})]\\
& =E[\phi_{j}(z_{t},\beta_{0},\gamma_{0},\lambda_{0})S(z_{t})],\end{aligned}$$ showing that the formula for $\phi_{j}$ is correct. The proof for $\phi_{J+1}$ follows similarly. *Q.E.D.*
**Proof of Theorem 3:** Given in text.
**Proof of Theorem 4:** Given in text.
**Proof of Theorem 5:** Let $\bar{\psi}(\gamma,\lambda)=E[\psi
(z_{i},\beta_{0},\gamma,\lambda)]$. Suppose that $\psi(z,\beta,\gamma
,\lambda)$ is DR. Then for any $\gamma\neq\gamma_{0},\gamma\in\Gamma$ we have$$0=\bar{\psi}(\gamma,\lambda_{0})=\bar{\psi}(\gamma_{0},\lambda_{0})=\bar{\psi
}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0}),$$ for any $\tau.$ Therefore for any $\tau$,$$\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})=0=(1-\tau)\bar{\psi
}(\gamma_{0},\lambda_{0})+\tau\bar{\psi}(\gamma,\lambda_{0}),$$ so that $\bar{\psi}(\gamma,\lambda_{0})$ is affine in $\gamma.$ Also by the previous equation $\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})=0$ identically in $\tau$ so that $$\frac{\partial}{\partial\tau}\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma
,\lambda_{0})=0,$$ where the derivative with respect to $\tau$ is evaluated at $\tau=0.$ Applying the same argument with the roles of $\lambda$ and $\gamma$ switched, we find that $\bar{\psi}(\gamma_{0},\lambda)$ is affine in $\lambda$ and $\partial\bar{\psi}(\gamma_{0},(1-\tau)\lambda_{0}+\tau\lambda)/\partial\tau=0.$
Next suppose that $\bar{\psi}(\gamma,\lambda_{0})$ is affine in $\gamma$ and $\partial\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})/\partial
\tau=0.$ Then by $\bar{\psi}(\gamma_{0},\lambda_{0})=0$, for any $\gamma
\in\Gamma,$ $$\begin{aligned}
\bar{\psi}(\gamma,\lambda_{0}) & =\partial\lbrack\tau\bar{\psi}(\gamma,\lambda_{0})]/\partial\tau=\partial\lbrack(1-\tau)\bar{\psi}(\gamma_{0},\lambda_{0})+\tau\bar{\psi}(\gamma,\lambda_{0})]/\partial\tau\\
& =\partial\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})/\partial
\tau=0.\end{aligned}$$ Switching the roles of $\gamma$ and $\lambda$ it follows analogously that $\bar{\psi}(\gamma_{0},\lambda)=0$ for all $\lambda\in\Lambda,$ so $\bar{\psi
}(\gamma,\lambda)$ is doubly robust. *Q.E.D.*
**Proof of Theorem 6:** Let $\lambda_{0}(x)=-c^{\prime}\Pi^{-1}a(x)$ so that $E[\lambda_{0}(x_{i})|w_{i}]=-c^{\prime}\Pi^{-1}\Pi p(w_{i})=-c^{\prime}p(w_{i}).$ Then by iterated expectations,$$\begin{aligned}
E[m(z_{i},\beta_{0},\tilde{\gamma})] & =E[c^{\prime}p(w_{i})\{\tilde{\gamma}(w_{i})-\gamma_{0}(w_{i})\}]=-E[\lambda_{0}(x_{i})\{\tilde{\gamma}(w_{i})-\gamma_{0}(w_{i})\}]\\
& =E[\lambda_{0}(x_{i})\{y_{i}-\tilde{\gamma}(w_{i})\}]=-c^{\prime}\Pi^{-1}E[a(x_{i})\{y_{i}-\tilde{\gamma}(w_{i})\}]=0.\text{ }Q.E.D.\end{aligned}$$
**Proof of Theorem 7:** If $\lambda_{0}$ is identified then $m(z,\beta,\bar{\gamma},\lambda_{0})$ is identified for every $\beta$. By DR$$E[m(z_{i},\beta,\bar{\gamma},\lambda_{0})]=0$$ at $\beta=\beta_{0}$ and by assumption this is the only $\beta$ where this equation is satisfied. *Q.E.D.*
**Proof of Corollary 8:** Given in text.
**Proof of Theorem 9:** Note that for $\rho_{i}=\rho(z_{i},\beta_{0},\gamma_{0}),$$$\bar{\psi}(\gamma_{0},(1-\tau)\lambda_{0}+\tau\lambda)=(1-\tau)E[\lambda_{0}(x_{i})\rho_{i}]+\tau E[\lambda(x_{i})\rho_{i}]=0. \label{th9proof}$$ Differentiating gives the second equality in eq. (\[lrdef2\]). Also, for $\Delta=\gamma-\gamma_{0},$$$\frac{\partial\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})}{\partial\tau}=E[\lambda_{0}(x_{i})\bar{\rho}(x_{i},\Delta)]=0,$$ giving the first equality in eq. (\[lrdef2\]). *Q.E.D.*
**Proof of Theorem 10:** The first equality in eq. (\[th9proof\]) of the proof of Theorem 9 shows that $\bar{\psi}(\gamma_{0},\lambda)$ is affine in $\lambda$. Also,$$\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})=E[\lambda_{0}(x_{i})\{(1-\tau)\rho(z_{i},\beta_{0},\gamma_{0})+\tau\rho(z_{i},\beta
_{0},\gamma)\}]=(1-\tau)\bar{\psi}(\gamma_{0},\lambda_{0})+\tau\bar{\psi
}(\gamma,\lambda_{0}),$$ so that $\bar{\psi}(\gamma,\lambda_{0})$ is affine in $\gamma.$ The conclusion then follows by Theorem 5. *Q.E.D.*
**Proof of Theorem 11:** To see that $\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda^{\ast})\Sigma^{\ast}(x_{i})^{-1}$ minimizes the asymptotic variance note that for any orthogonal instrumental variable matrix $\lambda_{0}(x),$ by the rows of $\lambda_{\beta}(x_{i})-\tilde{\lambda
}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})$ being in $\bar{\Lambda}_{\gamma},$ $$M=E[\lambda_{0}(x_{i})\lambda_{\beta}(x_{i})^{\prime}]=E[\lambda_{0}(x_{i})\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})^{\prime
}]=E[\lambda_{0}(x_{i})\rho_{i}\rho_{i}^{\prime}\Sigma^{\ast}(x_{i})^{-1}\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})^{\prime}].$$ Since the instruments are orthogonal the asymptotic variance matrix of the GMM estimator with $\hat{W}\overset{p}{\longrightarrow}W$ is the same as if $\hat{\gamma}=\gamma_{0}.$ Define $m_{i}=M^{\prime}W\lambda_{0}(x_{i})\rho
_{i}$ and $m_{i}^{\ast}=\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta
})\Sigma^{\ast}(x_{i})^{-1}\rho_{i}.$ The asymptotic variance of the GMM estimator for orthogonal instruments $\lambda_{0}(x)$ is$$(M^{\prime}WM)^{-1}M^{\prime}WE[\lambda_{0}(x_{i})\rho_{i}\rho_{i}^{\prime
}\lambda_{0}(x_{i})^{\prime}]WM(M^{\prime}WM)^{-1}=(E[m_{i}m_{i}^{\ast\prime
}])^{-1}E[m_{i}m_{i}^{\prime}](E[m_{i}m_{i}^{\ast\prime}])^{-1\prime}.$$ The fact that this matrix is minimized in the positive semidefinite sense at $m_{i}=m_{i}^{\ast}$ is well known; see, e.g., Newey and McFadden (1994). *Q.E.D.*
The following result is useful for the results of Section 7:
<span style="font-variant:small-caps;">Lemma A1:</span> *If Assumption 4 is satisfied then* $\hat{R}_{1}\overset{p}{\longrightarrow}0.$ *If Assumption 5 is satisfied then* $\hat{R}_{2}\overset{p}{\longrightarrow}0.$
**Proof:** Define $\hat{\Delta}_{i\ell}=m(z_{i},\hat{\gamma}_{\ell})-m(z_{i},\gamma_{0})-\bar{m}(\hat{\gamma}_{\ell})$ for $i\in I_{\ell}$ and let $Z_{\ell}^{c}$ denote the observations $z_{i}$ for $i\notin I_{\ell}$. Note that $\hat{\gamma}_{\ell}$ depends only on $Z_{\ell}^{c}$. By construction and independence of $Z_{\ell}^{c}$ and $z_{i},i\in I_{\ell}$ we have $E[\hat{\Delta}_{i\ell}|Z_{\ell}^{c}]=0.$ Also by independence of the observations, $E[\hat{\Delta}_{i\ell}\hat{\Delta}_{j\ell}|Z_{\ell}^{c}]=0$ for $i,j\in I_{\ell},$ $i\neq j.$ Furthermore, for $i\in I_{\ell}$, $E[\hat{\Delta}_{i\ell
}^{2}|Z_{\ell}^{c}]\leq\int[m(z,\hat{\gamma}_{\ell})-m(z,\gamma_{0})]^{2}F_{0}(dz)$. Then we have $$\begin{aligned}
E[\left( \frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}\hat{\Delta}_{i\ell}\right)
^{2}|Z_{\ell}^{c}] & =\frac{1}{n}E[\left( \sum_{i\in I_{\ell}}\hat{\Delta
}_{i\ell}\right) ^{2}|Z_{\ell}^{c}]=\frac{1}{n}\sum_{i\in I_{\ell}}E[\hat{\Delta}_{i\ell}^{2}|Z_{\ell}^{c}]\\
& \leq\int[m(z,\hat{\gamma}_{\ell})-m(z,\gamma_{0})]^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.\end{aligned}$$ The conditional Markov inequality then implies that $\sum_{i\in I_{\ell}}\hat{\Delta}_{i\ell}/\sqrt{n}\overset{p}{\longrightarrow}0.$ The analogous results also hold for $\hat{\Delta}_{i\ell}=\phi(z_{i},\hat{\gamma}_{\ell
},\lambda_{0})-\phi(z_{i},\gamma_{0},\lambda_{0})-\bar{\phi}(\hat{\gamma
}_{\ell},\lambda_{0})$ and $\hat{\Delta}_{i\ell}=\phi(z_{i},\gamma_{0},\hat{\lambda}_{\ell})-\phi(z_{i},\gamma_{0},\lambda_{0})-\bar{\phi}(\gamma_{0},\hat{\lambda}_{\ell})$. Summing across these three terms and across $\ell=1,...,L$ gives the first conclusion.
For the second conclusion, note that under the first hypothesis of Assumption 5,$$\begin{aligned}
& E[\left\vert \frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}[\phi_{j}(z_{i},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi_{j}(z_{i},\gamma_{0},\hat{\lambda}_{\ell})-\phi_{j}(z_{i},\hat{\gamma}_{\ell},\lambda_{0})+\phi_{j}(z_{i},\gamma_{0},\lambda_{0})]\right\vert |Z_{\ell}^{c}]\\
& \leq\frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}E[\left\vert \phi_{j}(z_{i},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi_{j}(z_{i},\gamma_{0},\hat{\lambda}_{\ell})-\phi_{j}(z_{i},\hat{\gamma}_{\ell},\lambda_{0})+\phi_{j}(z_{i},\gamma_{0},\lambda_{0})\right\vert |Z_{\ell}^{c}]\\
& \leq\sqrt{n}\int\left\vert \phi_{j}(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi_{j}(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi_{j}(z,\hat{\gamma}_{\ell},\lambda_{0})+\phi_{j}(z,\gamma_{0},\lambda_{0})\right\vert F_{0}(dz)\overset{p}{\longrightarrow}0,\end{aligned}$$ so $\hat{R}_{2}\overset{p}{\longrightarrow}0$ follows by the conditional Markov and triangle inequalities. The second hypothesis of Assumption 5 is just $\hat{R}_{2}\overset{p}{\longrightarrow}0.$ $Q.E.D.$
**Proof of Lemma 12**: By Assumption 1 and the hypotheses that $\hat{\gamma}_{\ell}\in\Gamma$ and $\hat{\lambda}_{\ell}\in\Lambda$ we have $\hat{R}_{3}=\hat{R}_{4}=0.$ By Lemma A1 we have $\hat{R}_{1}\overset{p}{\longrightarrow}0$ and $\hat{R}_{2}\overset{p}{\longrightarrow}0.$ The conclusion then follows by the triangle inequality. $Q.E.D.$
**Proof of Theorem 13:** Note that for $\varepsilon=y-\gamma_{0}(w)$ $$\begin{aligned}
\phi(z,\hat{\gamma},\lambda_{0})-\phi(z,\gamma_{0},\lambda_{0}) &
=\lambda_{0}(x)[\hat{\gamma}(w)-\gamma_{0}(w)],\\
\phi(z,\gamma_{0},\hat{\lambda})-\phi(z,\gamma_{0},\lambda_{0}) &
=[\hat{\lambda}(x)-\lambda_{0}(x)]\varepsilon,\\
\phi(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\hat{\gamma}_{\ell},\lambda_{0})+\phi(z,\gamma_{0},\lambda_{0}) & =-[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)][\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)].\end{aligned}$$ The first part of Assumption 4 ii) then follows by$$\begin{aligned}
\int[\phi(z,\hat{\gamma}_{\ell},\lambda_{0})-\phi(z,\gamma_{0},\lambda
_{0})]^{2}F_{0}(dz) & =\int\lambda_{0}(x)^{2}[\hat{\gamma}(w)-\gamma
_{0}(w)]^{2}F_{0}(dz)\\
& \leq C\int[\hat{\gamma}(w)-\gamma_{0}(w)]^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.\end{aligned}$$ The second part of Assumption 4 ii) follows by$$\begin{aligned}
\int[\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\gamma_{0},\lambda
_{0})]^{2}F_{0}(dz) & =\int[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)]^{2}\varepsilon^{2}F_{0}(dz)\\
& =\int\left[ \hat{\lambda}_{\ell}(x)-\lambda_{0}(x)\right] ^{2}E[\varepsilon^{2}|x]F_{0}(dz)\\
& \leq C\int\left[ \hat{\lambda}_{\ell}(x)-\lambda_{0}(x)\right] ^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.\end{aligned}$$ Next, note that by the Cauchy-Schwarz inequality, $$\begin{aligned}
& \sqrt{n}\int|\phi(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi
(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\hat{\gamma}_{\ell},\lambda
_{0})+\phi(z,\gamma_{0},\lambda_{0})|F_{0}(dz)\\
& =\sqrt{n}\int\left\vert [\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)][\hat
{\gamma}_{\ell}(w)-\gamma_{0}(w)]\right\vert F_{0}(dx)\\
& \leq\sqrt{n}\{\int[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)]^{2}F_{0}(dx)\}^{1/2}\{\int[\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]^{2}F_{0}(dw)\}^{1/2}.\end{aligned}$$ Then the first rate condition of Assumption 5 holds under the first rate condition of Theorem 13 while the second condition of Assumption 5 holds under the last hypothesis of Theorem 13. Then eq. (\[no effec\]) holds by Lemma 12, and the conclusion by rearranging the terms in eq. (\[no effec\]). *Q.E.D.*
**Proof of Lemma 14:** Follows by Lemma A1 and the triangle inequality. *Q.E.D.*
**Proof of Lemma 15:** Let $\hat{M}(\beta)=\partial\hat{\psi}(\beta)/\partial\beta$ when it exists, $\tilde{M}_{\ell}=n^{-1}\sum_{i\in
I_{\ell}}\partial\psi(z_{i},\beta_{0},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell
})/\partial\beta,$ and $\bar{M}_{\ell}=n^{-1}\sum_{i\in I_{\ell}}\partial
\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})/\partial\beta.$ By the law of large numbers and Assumption 7, $\sum_{\ell=1}^{L}\bar{M}_{\ell}\overset{p}{\longrightarrow}M.$ Also, by the last condition of Assumption 7, for each $j$ and $k,$ $$E[|\tilde{M}_{\ell jk}-\bar{M}_{\ell jk}||Z_{\ell}^{c}]\leq\int\left\vert
\partial\psi_{j}(z,\beta_{0},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell
})/\partial\beta_{k}-\partial\psi_{j}(z,\beta_{0},\gamma_{0},\lambda
_{0})/\partial\beta_{k}\right\vert F_{0}(dz)\overset{p}{\longrightarrow}0.$$ Then by the conditional Markov inequality, for each $\ell,$ $$\tilde{M}_{\ell}-\bar{M}_{\ell}\overset{p}{\longrightarrow}0.$$ It follows by the triangle inequality that $\sum_{\ell=1}^{L}\tilde{M}_{\ell
}\overset{p}{\longrightarrow}M.$ Also, with probability approaching one we have for any $\bar{\beta}\overset{p}{\longrightarrow}\beta_{0}$$$\left\Vert \hat{M}(\bar{\beta})-\sum_{\ell=1}^{L}\tilde{M}_{\ell}\right\Vert
\leq\left( \frac{1}{n}\sum_{i=1}^{n}d(z_{i})\right) \left\Vert \bar{\beta
}-\beta_{0}\right\Vert ^{\zeta^{\prime}}=O_{p}(1)o_{p}(1)\overset{p}{\longrightarrow}0.$$ The conclusion then follows by the triangle inequality. *Q.E.D.*
**Proof of Theorem 16:** The conclusion follows in a standard manner from the conclusions of Lemmas 14 and 15. *Q.E.D.*
**Proof of Theorem 17:** Let $\hat{\psi}_{i}=\psi(z_{i},\hat{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})$ and $\psi_{i}=\psi(z_{i},\beta
_{0},\gamma_{0},\lambda_{0}).$ By standard arguments (e.g. Newey, 1994), it suffices to show that $\sum_{i=1}^{n}\left\Vert \hat{\psi}_{i}-\psi
_{i}\right\Vert ^{2}/n\overset{p}{\longrightarrow}0.$ Note that$$\begin{aligned}
\hat{\psi}_{i}-\psi_{i} & =\sum_{j=1}^{5}\hat{\Delta}_{ji},\hat{\Delta}_{1i}=\psi(z_{i},\hat{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\psi(z_{i},\beta_{0},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell}),\hat{\Delta
}_{2i}=m(z_{i},\beta_{0},\hat{\gamma}_{\ell})-m(z_{i},\beta_{0},\gamma_{0}),\\
\hat{\Delta}_{3i} & =\phi(z_{i},\hat{\gamma}_{\ell},\lambda_{0})-\phi
(z_{i},\gamma_{0},\lambda_{0}),\hat{\Delta}_{4i}=\phi(z_{i},\gamma_{0},\hat{\lambda}_{\ell})-\phi(z_{i},\gamma_{0},\lambda_{0}),\\
\hat{\Delta}_{5i} & =\phi(z_{i},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell
})-\phi(z_{i},\hat{\gamma}_{\ell},\lambda_{0})-\phi(z_{i},\gamma_{0},\hat{\lambda}_{\ell})+\phi(z_{i},\gamma_{0},\lambda_{0}).\end{aligned}$$ By standard arguments it suffices to show that for each $j$ and $\ell,$ $$\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \hat{\Delta}_{ji}\right\Vert
^{2}\overset{p}{\longrightarrow}0. \label{var conv}$$ For $j=1$ it follows by a mean value expansion and Assumption 7 with $E[b(z_{i})^{2}]<\infty$ that$$\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \hat{\Delta}_{1i}\right\Vert
^{2}=\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \frac{\partial}{\partial\beta
}\psi(z_{i},\bar{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})(\hat{\beta}-\beta_{0})\right\Vert ^{2}\leq\frac{1}{n}\left( \sum_{i\in I_{\ell}}b(z_{i})^{2}\right) \left\Vert \hat{\beta}-\beta_{0}\right\Vert ^{2}\overset{p}{\longrightarrow}0,$$ where $\bar{\beta}\,$ is a mean value that actually differs from row to row of $\partial\psi(z_{i},\bar{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell
})/\partial\beta$. For $j=2$ note that by Assumption 4,$$E[\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \hat{\Delta}_{2i}\right\Vert
^{2}|Z^{\ell}]\leq\int\left\Vert m(z,\beta_{0},\hat{\gamma}_{\ell})-m(z,\beta_{0},\gamma_{0})\right\Vert ^{2}F_{0}(dz)\overset{p}{\longrightarrow}0,$$ so eq. (\[var conv\]) holds by the conditional Markov inequality. For $j=3$ and $j=4$ eq. (\[var conv\]) follows similarly. For $j=5$, it follows from the hypotheses of Theorem 17 that$$E[\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \hat{\Delta}_{5i}\right\Vert
^{2}|Z^{\ell}]\leq\int\left\Vert \phi(z,\hat{\gamma}_{\ell},\hat{\lambda
}_{\ell})-\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\hat{\gamma}_{\ell
},\lambda_{0})+\phi(z,\gamma_{0},\lambda_{0})\right\Vert ^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.$$ Then eq. (\[var conv\]) holds for $j=5$ by the conditional Markov inequality. *Q.E.D.*
Appendix B: Local Robustness and Derivatives of Expected Moments.
=================================================================
In this Appendix we give conditions sufficient for the LR property of equation (\[lrdef\]) to imply the properties in equations (\[lrdef2\]) and (\[nlremainder\]). As discussed following equation (\[nlremainder\]), it may be convenient when specifying regularity conditions for specific moment functions to work directly with (\[lrdef2\]) and/or (\[nlremainder\]).
<span style="font-variant:small-caps;">Assumption B1:</span> *There are linear sets* $\Gamma$ *and* $\Lambda$ *and a set* $\mathcal{G}$ *such that i)* $\bar{\psi}(\gamma,\lambda)$ *is Frechet differentiable at* $(\gamma_{0},\lambda_{0});$ *ii) for all* $G\in\mathcal{G}$ *the vector* $(\gamma(F_{\tau}),\lambda(F_{\tau}))$ *is Frechet differentiable at* $\tau=0;$ *iii) the closure of* $\{\partial(\gamma(F_{\tau}),\lambda(F_{\tau}))/\partial\tau:G\in\mathcal{G}\}$ *is* $\Gamma\times\Lambda$*.*
<span style="font-variant:small-caps;">Theorem B1:</span> *If Assumption B1 is satisfied and equation (\[lrdef\]) is satisfied for all* $G\in\mathcal{G}$ *then equation (\[lrdef2\]) is satisfied.*
Proof: Let $\bar{\psi}^{\prime}(\gamma,\lambda)$ denote the Frechet derivative of $\bar{\psi}(\gamma,\lambda)$ at $(\gamma_{0},\lambda_{0})$ in the direction $(\gamma,\lambda),$ which exists by i). By ii), the chain rule for Frechet derivatives (e.g. Proposition 7.3.1 of Luenberger, 1969), and by eq. *(\[lrdef\])* it follows that for $(\Delta_{\gamma}^{G},\Delta_{\lambda}^{G})=\partial(\gamma(F_{\tau}),\lambda(F_{\tau}))/\partial\tau,$$$\bar{\psi}^{\prime}(\Delta_{\gamma}^{G},\Delta_{\lambda}^{G})=\frac
{\partial\bar{\psi}(\gamma(F_{\tau}),\lambda(F_{\tau}))}{\partial\tau}=0.$$ By $\bar{\psi}^{\prime}(\gamma,\lambda)$ being a continuous linear function and iii) it follows that $\bar{\psi}^{\prime}(\gamma,\lambda)=0$ for all $(\gamma,\lambda)\in\Gamma\times\Lambda.$ Therefore, for any $\gamma\in\Gamma$ and $\lambda\in\Lambda,$$$\bar{\psi}^{\prime}(\gamma-\gamma_{0},0)=0,\bar{\psi}^{\prime}(0,\lambda
-\lambda_{0})=0.$$ Equation *(\[lrdef2\])* then follows by i). *Q.E.D.*
<span style="font-variant:small-caps;">Theorem B2:</span> *If equation (\[lrdef2\]) is satisfied and in addition* $\bar{\psi}(\gamma,\lambda_{0})$ *and* $\bar{\psi}(\gamma
_{0},\lambda)$ *are twice Frechet differentiable in open sets containing* $\gamma_{0}$ *and* $\lambda_{0}$ *respectively with bounded second derivative then equation* (\[nlremainder\]) *is satisfied.*
Proof: Follows by Proposition 7.3.3 of Luenberger (1969). *Q.E.D.*
Appendix C: Doubly Robust Moment Functions for Orthogonality Conditions
=======================================================================
In this Appendix we generalize the DR estimators for conditional moment restrictions to orthogonality conditions for a general residual $\rho
(z,\gamma)$ that is affine in $\gamma$ but need not have the form $y-\gamma(w).$
<span style="font-variant:small-caps;">Assumption C1:</span> *There are linear sets* $\Gamma$ and $\Lambda$ *of functions* $\lambda(x)$ *and* $\gamma(w)$ *that are closed in mean square such that i) For any* $\gamma,\tilde{\gamma}\in\Gamma$ and scalar $\tau,$ $E[\rho(z_{i},\gamma)^{2}]<\infty$ and $\rho(z,(1-\tau
)\gamma+\tau\tilde{\gamma})=(1-\tau)\rho(z,\gamma)+\tau\rho(z,\tilde{\gamma})$ ; *ii)* $E[\lambda(x_{i})\rho(z_{i},\gamma_{0})]=0$ for all $\lambda
\in\Lambda;$ *iii) there exists* $\lambda_{0}\in\Lambda$ *such that* $E[m(z_{i},\beta_{0},\gamma)]=-E[\lambda_{0}(x_{i})\rho(z_{i},\gamma
)]$ *for all* $\gamma\in\Gamma.$
Assumption C1 ii) could be thought of as an identification condition for $\gamma_{0}$. For example, if $\Lambda$ is all functions of $x_{i}$ with finite mean square then ii) is $E[\rho(z_{i},\gamma_{0})|x_{i}]=0,$ the nonparametric conditional moment restriction of Newey and Powell (2003) and Newey (1991). Assumption C1 iii) also has an interesting interpretation. Let $\Pi(a)(x_{i})$ denote the orthogonal mean-square projection of a random variable $a(z_{i})$ with finite second moment on $\Gamma.$ Then by ii) and iii) we have$$\begin{aligned}
E[m(z_{i},\beta_{0},\gamma)] & =-E[\lambda_{0}(x_{i})\rho(z_{i},\gamma)]=E[\lambda_{0}(x_{i})\Pi(\rho(\gamma))(x_{i})]\\
& =E[\lambda_{0}(x_{i})\{\Pi(\rho(\gamma))(x_{i})-\Pi(\rho(\gamma_{0}))(x_{i})\}]\\
& =E[\lambda_{0}(x_{i})\{\Pi(\rho(\gamma)-\rho(\gamma_{0}))(x_{i})\}].\end{aligned}$$ Here we see that $E[m(z_{i},\beta_{0},\gamma)]$ is a linear, mean-square continuous function of $\Pi(\rho(\gamma)-\rho(\gamma_{0}))(x_{i}).$ The Riesz representation theorem will also imply that if $E[m(z_{i},\beta_{0},\gamma)]$ is a linear, mean-square continuous function of $\Pi(\rho(\gamma)-\rho
(\gamma_{0}))(x_{i})$ then $\lambda_{0}(x)$ exists satisfying Assumption C1 ii). For the case where $w_{i}=x_{i}$ this mean-square continuity condition is necessary for existence of a root-n consistent estimator, as in Newey (1994) and Newey and McFadden (1994). We conjecture that when $w_{i}$ need not equal $x_{i}$ this condition generalizes Severini and Tripathi’s (2012) necessary condition for existence of a root-n consistent estimator of $\beta_{0}$.
Noting that Assumption C1 ii) and iii) are the conditions for double robustness we have
<span style="font-variant:small-caps;">Theorem C1:</span> *If Assumption C1 is satisfied then* $\psi
(z,\beta,\gamma,\lambda)=m(z,\beta,\gamma)+\lambda(x)\rho(z,\gamma)$ *is doubly robust.*
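The double robustness asserted by Theorem C1 can be checked numerically in the leading special case of a mean with data missing at random, where $\rho(z,\gamma)=d\{y-\gamma(x)\}$ and $m(z,\beta,\gamma)=\gamma(x)-\beta$. The sketch below is our own illustration (the simulation design, variable names, and tolerances are ours, not from the paper): the moment function stays centered at the truth when either nuisance function, but not both, is misspecified.

```python
import numpy as np

# Double-robustness check for the missing-data mean:
#   psi(z) = gamma(x) - beta + lambda(x) * d * (y - gamma(x)),
# a standard special case of Theorem C1. Simulation design is illustrative.
rng = np.random.default_rng(0)
n = 200_000
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)   # E[y] = 2.0
p = 0.3 + 0.4 * x                              # true observation probability
d = rng.binomial(1, p)                         # d = 1 iff y is observed

def moment_mean(gamma, lam):
    """Sample average of gamma(x) + lambda(x) * d * (y - gamma(x))."""
    return np.mean(gamma + lam * d * (y - gamma))

gamma_true = 1.0 + 2.0 * x   # correct regression E[y | x]
lam_true = 1.0 / p           # correct inverse propensity
gamma_bad = np.zeros(n)      # badly misspecified regression
lam_bad = np.ones(n)         # badly misspecified inverse propensity

est_good_gamma = moment_mean(gamma_true, lam_bad)  # only gamma correct
est_good_lam = moment_mean(gamma_bad, lam_true)    # only lambda correct
# Both are close to E[y] = 2.0, while the naive complete-case mean
# y[d == 1].mean() is biased because observation depends on x.
```

Either estimate recovers $E[y]=2$ up to sampling noise, illustrating that consistency survives misspecification of one nuisance function at a time.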
It is interesting to note that $\lambda_{0}(x)$ satisfying Assumption C1 iii) need not be unique. When the closure of $\{\Pi(\rho(\gamma))(x_{i}):\gamma
\in\Gamma\}$ is not all of $\Lambda$ then there will exist $\tilde{\lambda}\in\Lambda$ such that $\tilde{\lambda}\neq0$ and $$E[\tilde{\lambda}(x_{i})\rho(z_{i},\gamma)]=E[\tilde{\lambda}(x_{i})\Pi
(\rho(\gamma))(x_{i})]=0\text{ for all }\gamma\in\Gamma.$$ In that case Assumption C1 iii) will also be satisfied for $\lambda_{0}(x_{i})+\tilde{\lambda}(x_{i}).$ We can think of this case as one where $\gamma_{0}$ is overidentified, similarly to Chen and Santos (2015). As discussed in Ichimura and Newey (2017), the different $\lambda_{0}(x_{i})$ would correspond to different first step estimators.
The partial robustness results of the last Section can be extended to the orthogonality condition setting of Assumption C1. Let $\Lambda^{\ast}$ be a closed linear subset of $\Lambda,$ such as a finite dimensional linear set, and let $\gamma^{\ast}$ be such that $E[\lambda(x_{i})\rho(z_{i},\gamma^{\ast
})]=0$ for all $\lambda\in\Lambda^{\ast}$. Note that if $\lambda_{0}\in
\Lambda^{\ast}$ it follows by Theorem C1 that$$E[m(z_{i},\beta_{0},\gamma^{\ast})]=-E[\lambda_{0}(x_{i})\rho(z_{i},\gamma^{\ast})]=0.$$
<span style="font-variant:small-caps;">Theorem C2:</span> *If* $\Lambda^{\ast}$ *is a closed linear subset of* $\Lambda$*,* $E[\lambda(x_{i})\rho(z_{i},\gamma^{\ast})]=0$ *for all* $\lambda\in\Lambda^{\ast}$*, and Assumption C1 iii) is satisfied with* $\lambda_{0}\in\Lambda^{\ast}$ *then*$$E[m(z_{i},\beta_{0},\gamma^{\ast})]=0.$$
Appendix D: Regularity Conditions for Plug-in Estimators
========================================================
In this Appendix we formulate regularity conditions for root-n consistency and asymptotic normality of the plug-in estimator $\tilde{\beta}$ as described in Section 2, where $m(z,\beta,\gamma)$ need not be LR. These conditions are based on Assumptions 4-6 applied to the influence adjustment $\phi
(z,\gamma,\lambda)$ corresponding to $m(z,\beta,\gamma)$ and $\hat{\gamma}.$ For this purpose we treat $\hat{\lambda}$ as any object that can approximate $\lambda_{0}(x),$ not just as an estimator of $\lambda_{0}.$
<span style="font-variant:small-caps;">Theorem D1:</span> *If Assumptions 4-6 are satisfied, Assumption 7 is satisfied with* $m(z,\beta,\gamma)$ *replacing* $\psi(z,\beta,\gamma,\lambda),$ $\tilde{\beta}\overset{p}{\longrightarrow}\beta_{0},$ $\hat{W}\overset{p}{\longrightarrow}W$*,* $M^{\prime}WM$ *is nonsingular,* $E[\left\Vert \psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\right\Vert ^{2}]<\infty,$ *and*$$\hat{R}_{5}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\phi(z_{i},\hat{\gamma}_{i},\hat{\lambda}_{i})\overset{p}{\longrightarrow}0,$$ *then for* $\Omega=E[\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})^{\prime}],$$$\sqrt{n}(\tilde{\beta}-\beta_{0})\overset{d}{\longrightarrow}N(0,V),V=(M^{\prime}WM)^{-1}M^{\prime}W\Omega WM(M^{\prime}WM)^{-1}.$$
The condition $\hat{R}_{5}\overset{p}{\longrightarrow}0$ was discussed in Section 7. It is interesting to note that $\hat{R}_{5}\overset{p}{\longrightarrow}0$ appears to be a complicated condition that depends on details of the estimator $\hat{\gamma}_{i}$ in a way that Assumptions 4-7 do not. In this way the regularity conditions for the LR estimator seem simpler and more general than those for the plug-in estimator.
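The cross-fitting device used throughout the proofs (nuisance functions estimated from observations outside a fold $I_{\ell}$, with the moment then averaged within the fold) can be sketched concretely. The following is our own toy illustration for the partially linear model with the orthogonal partialling-out score; the cubic polynomial fits are arbitrary stand-ins for a generic first-step learner, not the paper's estimator.

```python
import numpy as np

# Cross-fitting sketch for the partially linear model
#   y = beta0 * d + g(x) + eps,   d = m(x) + v.
# Design choices (g, m, cubic polynomial nuisance fits) are illustrative only.
rng = np.random.default_rng(1)
n, L, beta0 = 20_000, 5, 1.5
x = rng.uniform(-1.0, 1.0, n)
d = np.sin(2.0 * x) + rng.normal(0.0, 1.0, n)
y = beta0 * d + x**2 + rng.normal(0.0, 1.0, n)

folds = rng.integers(0, L, n)          # fold labels I_1, ..., I_L
num = den = 0.0
for l in range(L):
    tr, te = folds != l, folds == l    # nuisances fit on the complement of I_l
    m_hat = np.polyval(np.polyfit(x[tr], d[tr], 3), x[te])  # estimate of E[d | x]
    h_hat = np.polyval(np.polyfit(x[tr], y[tr], 3), x[te])  # estimate of E[y | x]
    v, u = d[te] - m_hat, y[te] - h_hat
    num += np.sum(v * u)               # orthogonal partialling-out score
    den += np.sum(v * v)
beta_hat = num / den                   # close to beta0 = 1.5
```

Because the score is orthogonal, the first-order bias from the nuisance estimation errors drops out, and sample splitting removes the own-observation bias without Donsker-type conditions.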
Acknowledgements
Whitney Newey gratefully acknowledges support by the NSF. Helpful comments were provided by M. Cattaneo, B. Deaner, J. Hahn, M. Jansson, Z. Liao, A. Pakes, R. Moon, A. de Paula, V. Semenova, and participants in seminars at Cambridge, Columbia, Cornell, Harvard-MIT, UCL, USC, Yale, and Xiamen. B. Deaner provided capable research assistance.
**REFERENCES**
<span style="font-variant:small-caps;">Ackerberg, D., X. Chen, and J. Hahn</span> (2012): “A Practical Asymptotic Variance Estimator for Two-step Semiparametric Estimators,” *The Review of Economics and Statistics* 94: 481–498.
<span style="font-variant:small-caps;">Ackerberg, D., X. Chen, J. Hahn, and Z. Liao</span> (2014): “Asymptotic Efficiency of Semiparametric Two-Step GMM,” *The Review of Economic Studies* 81: 919–943.
<span style="font-variant:small-caps;">Ai, C. [and]{} X. Chen</span> (2003): Efficient Estimation of Models with Conditional Moment Restrictions Containing Unknown Functions, *Econometrica* 71, 1795-1843.
<span style="font-variant:small-caps;">Ai, C. [and]{} X. Chen</span> (2007): “Estimation of Possibly Misspecified Semiparametric Conditional Moment Restriction Models with Different Conditioning Variables,” *Journal of Econometrics* 141, 5–43.
<span style="font-variant:small-caps;">Ai, C. [and]{} X. Chen</span> (2012): “The Semiparametric Efficiency Bound for Models of Sequential Moment Restrictions Containing Unknown Functions,” *Journal of Econometrics* 170, 442–457.
<span style="font-variant:small-caps;">Andrews, D.W.K.</span> (1994): Asymptotics for Semiparametric Models via Stochastic Equicontinuity, *Econometrica* 62, 43-72.
<span style="font-variant:small-caps;">Athey, S., G. Imbens, and S. Wager</span> (2017): “Efficient Inference of Average Treatment Effects in High Dimensions via Approximate Residual Balancing,” *Journal of the Royal Statistical Society, Series B,* forthcoming.
<span style="font-variant:small-caps;">Bajari, P., V. Chernozhukov, H. Hong, and D. Nekipelov</span> (2009): “Nonparametric and Semiparametric Analysis of a Dynamic Discrete Game,” working paper, Stanford.
<span style="font-variant:small-caps;">Bajari, P., H. Hong, J. Krainer, and D. Nekipelov</span> (2010): “Estimating Static Models of Strategic Interactions,” *Journal of Business and Economic Statistics* 28, 469-482.
<span style="font-variant:small-caps;">Bang, H., and J.M. Robins</span> (2005): “Doubly Robust Estimation in Missing Data and Causal Inference Models,” *Biometrics* 61, 962–972.
<span style="font-variant:small-caps;">Belloni, A., D. Chen, V. Chernozhukov, and C. Hansen</span> (2012): Sparse Models and Methods for Optimal Instruments with an Application to Eminent Domain, *Econometrica* 80, 2369–2429.
<span style="font-variant:small-caps;">Belloni, A., V. Chernozhukov, and Y. Wei</span> (2013): Honest Confidence Regions for Logistic Regression with a Large Number of Controls, arXiv preprint arXiv:1304.3969.
<span style="font-variant:small-caps;">Belloni, A., V. Chernozhukov, and C. Hansen</span> (2014): “Inference on Treatment Effects after Selection among High-Dimensional Controls,” *The Review of Economic Studies* 81, 608–650.
<span style="font-variant:small-caps;">Belloni, A., V. Chernozhukov, I. Fernandez-Val, and C. Hansen</span> (2016): “Program Evaluation and Causal Inference with High-Dimensional Data,” *Econometrica* 85, 233-298.
<span style="font-variant:small-caps;">Bera, A.K., G. Montes-Rojas, and W. Sosa-Escudero</span> (2010): “General Specification Testing with Locally Misspecified Models,” *Econometric Theory* 26, 1838–1845.
<span style="font-variant:small-caps;">Bickel, P.J.</span> (1982): “On Adaptive Estimation,” *Annals of Statistics* 10, 647-671.
<span style="font-variant:small-caps;">Bickel, P.J. and Y. Ritov</span> (1988): “Estimating Integrated Squared Density Derivatives: Sharp Best Order of Convergence Estimates,” *Sankhyā: The Indian Journal of Statistics, Series A* 238, 381-393.
<span style="font-variant:small-caps;">Bickel, P.J., C.A.J. Klaassen, Y. Ritov, [and]{} J.A. Wellner</span> (1993): *Efficient and Adaptive Estimation for Semiparametric Models*, Springer-Verlag, New York.
<span style="font-variant:small-caps;">Bickel, P.J. and Y. Ritov</span> (2003): “Nonparametric Estimators Which Can Be ‘Plugged-in’,” *Annals of Statistics* 31, 1033-1053.
<span style="font-variant:small-caps;">Bonhomme, S., and M. Weidner</span> (2018): “Minimizing Sensitivity to Misspecification,” working paper.
<span style="font-variant:small-caps;">Cattaneo, M.D., and M. Jansson</span> (2017): “Kernel-Based Semiparametric Estimators: Small Bandwidth Asymptotics and Bootstrap Consistency,” *Econometrica*, forthcoming.
<span style="font-variant:small-caps;">Cattaneo, M.D., M. Jansson, and X. Ma</span> (2017): “Two-step Estimation and Inference with Possibly Many Included Covariates,” working paper.
<span style="font-variant:small-caps;">Chamberlain, G.</span> (1987): Asymptotic Efficiency in Estimation with Conditional Moment Restrictions, *Journal of Econometrics* 34, 305–334.
<span style="font-variant:small-caps;">Chamberlain, G.</span> (1992): Efficiency Bounds for Semiparametric Regression, *Econometrica* 60, 567–596.
<span style="font-variant:small-caps;">Chen, X. and X. Shen</span> (1997): Sieve Extremum Estimates for Weakly Dependent Data, *Econometrica* 66, 289-314.
<span style="font-variant:small-caps;">Chen, X., O.B. Linton, [and]{} I. [van Keilegom]{}</span> (2003): Estimation of Semiparametric Models when the Criterion Function Is Not Smooth, *Econometrica* 71, 1591-1608.
<span style="font-variant:small-caps;">Chen, X., and Z. Liao</span> (2015): “Sieve Semiparametric Two-Step GMM Under Weak Dependence”, *Journal of Econometrics* 189, 163–186.
<span style="font-variant:small-caps;">Chen, X., and A. Santos</span> (2015): Overidentification in Regular Models, working paper.
<span style="font-variant:small-caps;">Chernozhukov, V., C. Hansen, and M. Spindler</span> (2015): “Valid Post-Selection and Post-Regularization Inference: An Elementary, General Approach,” *Annual Review of Economics* 7: 649–688.
<span style="font-variant:small-caps;">Chernozhukov, V., G.W. Imbens and W.K. Newey</span> (2007): “Instrumental Variable Identification and Estimation of Nonseparable Models,” *Journal of Econometrics* 139, 4-14.
<span style="font-variant:small-caps;">Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey</span> (2017): “Double/Debiased/Neyman Machine Learning of Treatment Effects,” *American Economic Review Papers and Proceedings* 107, 261-65.
<span style="font-variant:small-caps;">Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey, J. Robins</span> (2018): “Debiased/Double Machine Learning for Treatment and Structural Parameters,” *Econometrics Journal* 21, C1-C68.
<span style="font-variant:small-caps;">Chernozhukov, V., J.A. Hausman, and W.K. Newey</span> (2018): “Demand Analysis with Many Prices,” working paper, MIT.
<span style="font-variant:small-caps;">Chernozhukov, V., W.K. Newey, J. Robins</span> (2018): “Double/De-Biased Machine Learning Using Regularized Riesz Representers,” arxiv.
<span style="font-variant:small-caps;">Escanciano, J-C., D. Jacho-Chávez, and A. Lewbel</span> (2016): “Identification and Estimation of Semiparametric Two Step Models,” *Quantitative Economics* 7, 561-589.
<span style="font-variant:small-caps;">Farrell, M.</span> (2015): “Robust Inference on Average Treatment Effects with Possibly More Covariates than Observations,” *Journal of Econometrics* 189, 1–23.
<span style="font-variant:small-caps;">Firpo, S. and C. Rothe</span> (2017): “Semiparametric Two-Step Estimation Using Doubly Robust Moment Conditions,” working paper.
<span style="font-variant:small-caps;">Graham, B.W.</span> (2011): “Efficiency Bounds for Missing Data Models with Semiparametric Restrictions,” *Econometrica* 79, 437–452.
<span style="font-variant:small-caps;">Hahn, J. (1998):</span> “On the Role of the Propensity Score in Efficient Semiparametric Estimation of Average Treatment Effects,” *Econometrica* 66, 315-331.
<span style="font-variant:small-caps;">Hahn, J. and G. Ridder</span> (2013): “Asymptotic Variance of Semiparametric Estimators With Generated Regressors,” *Econometrica* 81, 315-340.
<span style="font-variant:small-caps;">Hahn, J. and G. Ridder</span> (2016): “Three-stage Semi-Parametric Inference: Control Variables and Differentiability,” working paper.
<span style="font-variant:small-caps;">Hahn, J., Z. Liao, and G. Ridder</span> (2016): “Nonparametric Two-Step Sieve M Estimation and Inference,” working paper, UCLA.
<span style="font-variant:small-caps;">Hasminskii, R.Z. and I.A. Ibragimov</span> (1978): “On the Nonparametric Estimation of Functionals,” *Proceedings of the 2nd Prague Symposium on Asymptotic Statistics*, 41-51.
<span style="font-variant:small-caps;">Hausman, J.A., and W.K. Newey</span> (2016): “Individual Heterogeneity and Average Welfare,” *Econometrica* 84, 1225-1248.
<span style="font-variant:small-caps;">Hausman, J.A., and W.K. Newey</span> (2017): “Nonparametric Welfare Analysis,” *Annual Review of Economics* 9, 521–546.
<span style="font-variant:small-caps;">Hirano, K., G. Imbens, and G. Ridder</span> (2003): “Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score,” *Econometrica* 71: 1161–1189.
<span style="font-variant:small-caps;">Hotz, V.J. and R.A. Miller</span> (1993): “Conditional Choice Probabilities and the Estimation of Dynamic Models,” *Review of Economic Studies* 60, 497-529.
<span style="font-variant:small-caps;">Huber, P. (1981):</span> *Robust Statistics,* New York: Wiley.
<span style="font-variant:small-caps;">Ichimura, H.</span> (1993): “Estimation of Single Index Models,” *Journal of Econometrics* 58, 71-120.
<span style="font-variant:small-caps;">Ichimura, H., [and]{} S. Lee</span> (2010): Characterization of the Asymptotic Distribution of Semiparametric M-Estimators, *Journal of Econometrics* 159, 252–266.
<span style="font-variant:small-caps;">Ichimura, H. and W.K. Newey</span> (2017): “The Influence Function of Semiparametric Estimators,” CEMMAP Working Paper, CWP06/17.
<span style="font-variant:small-caps;">Kandasamy, K., A. Krishnamurthy, B. Póczos, L. Wasserman, J.M. Robins</span> (2015): “Influence Functions for Machine Learning: Nonparametric Estimators for Entropies, Divergences and Mutual Informations,” arxiv.
<span style="font-variant:small-caps;">Lee, Lung-fei</span> (2005): A $C(\alpha)$-type Gradient Test in the GMM Approach, working paper.
<span style="font-variant:small-caps;">Luenberger, D.G.</span> (1969): *Optimization by Vector Space Methods*, New York: Wiley.
<span style="font-variant:small-caps;">Murphy, K.M. and R.H. Topel</span> (1985): “Estimation and Inference in Two-Step Econometric Models,” *Journal of Business and Economic Statistics* 3, 370-379.
<span style="font-variant:small-caps;">Newey, W.K.</span> (1984): “A Method of Moments Interpretation of Sequential Estimators,” *Economics Letters* 14, 201-206.
<span style="font-variant:small-caps;">Newey, W.K.</span> (1990): “Semiparametric Efficiency Bounds,” *Journal of Applied Econometrics* 5, 99-135.
<span style="font-variant:small-caps;">Newey, W.K.</span> (1991): Uniform Convergence in Probability and Stochastic Equicontinuity, *Econometrica* 59, 1161-1167.
<span style="font-variant:small-caps;">Newey, W.K.</span> (1994a): “The Asymptotic Variance of Semiparametric Estimators,” *Econometrica* 62, 1349-1382.
<span style="font-variant:small-caps;">Newey, W.K.</span> (1994b): Kernel Estimation of Partial Means and a General Variance Estimator, *Econometric Theory* 10, 233-253.
<span style="font-variant:small-caps;">Newey, W.K.</span> (1997): Convergence Rates and Asymptotic Normality for Series Estimators, *Journal of Econometrics* 79, 147-168.
<span style="font-variant:small-caps;">Newey, W.K. (</span>1999): Consistency of Two-Step Sample Selection Estimators Despite Misspecification of Distribution, *Economics Letters* 63, 129-132.
<span style="font-variant:small-caps;">Newey, W.K., [and]{} D. McFadden</span> (1994): “Large Sample Estimation and Hypothesis Testing,” in *Handbook of Econometrics*, Vol. 4, ed. by R. Engle, and D. McFadden, pp. 2113-2241. North Holland.
<span style="font-variant:small-caps;">Newey, W.K., [and]{} J.L. Powell</span> (1989): “Instrumental Variable Estimation of Nonparametric Models,” presented at Econometric Society winter meetings, 1988.
<span style="font-variant:small-caps;">Newey, W.K., [and]{} J.L. Powell</span> (2003): “Instrumental Variable Estimation of Nonparametric Models,” *Econometrica* 71, 1565-1578.
<span style="font-variant:small-caps;">Newey, W.K., F. Hsieh, [and]{} J.M. Robins</span> (1998): “Undersmoothing and Bias Corrected Functional Estimation,” MIT Dept. of Economics working paper.
<span style="font-variant:small-caps;">Newey, W.K., F. Hsieh, [and]{} J.M. Robins</span> (2004): Twicing Kernels and a Small Bias Property of Semiparametric Estimators, *Econometrica* 72, 947-962.
<span style="font-variant:small-caps;">Newey, W.K., and J. Robins</span> (2017): “Cross Fitting and Fast Remainder Rates for Semiparametric Estimation,” arxiv.
<span style="font-variant:small-caps;">Neyman, J.</span> (1959): Optimal Asymptotic Tests of Composite Statistical Hypotheses, *Probability and Statistics, the Harald Cramer Volume*, ed., U. Grenander, New York, Wiley.
<span style="font-variant:small-caps;">Pfanzagl, J., and W. Wefelmeyer</span> (1982): *Contributions to a General Asymptotic Statistical Theory*, Springer Lecture Notes in Statistics, New York: Springer.
<span style="font-variant:small-caps;">Pakes, A. and G.S. Olley</span> (1995): “A Limit Theorem for a Smooth Class of Semiparametric Estimators,” *Journal of Econometrics* 65, 295-332.
<span style="font-variant:small-caps;">Powell, J.L., J.H. Stock, and T.M. Stoker</span> (1989): “Semiparametric Estimation of Index Coefficients,” *Econometrica* 57, 1403-1430.
<span style="font-variant:small-caps;">Robins, J.M., A. Rotnitzky, and L.P. Zhao</span> (1994): “Estimation of Regression Coefficients When Some Regressors Are Not Always Observed,” *Journal of the American Statistical Association* 89: 846–866.
<span style="font-variant:small-caps;">Robins, J.M. and A. Rotnitzky</span> (1995): “Semiparametric Efficiency in Multivariate Regression Models with Missing Data,” *Journal of the American Statistical Association* 90:122–129.
<span style="font-variant:small-caps;">Robins, J.M., A. Rotnitzky, and L.P. Zhao</span> (1995): “Analysis of Semiparametric Regression Models for Repeated Outcomes in the Presence of Missing Data,” *Journal of the American Statistical Association* 90,106–121.
<span style="font-variant:small-caps;">Robins, J.M., and A. Rotnitzky</span> (2001): Comment on “Inference for Semiparametric Models: Some Questions and an Answer” by P.J. Bickel and J. Kwon, *Statistica Sinica* 11, 863-960.
<span style="font-variant:small-caps;">Robins, J.M., A. Rotnitzky, and M. van der Laan</span> (2000): “Comment on ‘On Profile Likelihood’ by S.A. Murphy and A.W. van der Vaart,” *Journal of the American Statistical Association* 95, 431-435.
<span style="font-variant:small-caps;">Robins, J., M. Sued, Q. Lei-Gomez, and A. Rotnitzky</span> (2007): “Comment: Performance of Double-Robust Estimators When ‘Inverse Probability’ Weights Are Highly Variable,” *Statistical Science* 22, 544–559.
<span style="font-variant:small-caps;">Robins, J.M., L. Li, E. Tchetgen, and A. van der Vaart</span> (2008): “Higher Order Influence Functions and Minimax Estimation of Nonlinear Functionals,” *IMS Collections Probability and Statistics: Essays in Honor of David A. Freedman, Vol 2,* 335-421.
<span style="font-variant:small-caps;">Robins, J.M., L. Li, R. Mukherjee, E. Tchetgen, and A. van der Vaart</span> (2017): “Higher Order Estimating Equations for High-Dimensional Models,” *Annals of Statistics,* forthcoming.
<span style="font-variant:small-caps;">Robinson, P.M.</span> (1988): “Root-N-Consistent Semiparametric Regression,” *Econometrica* 56, 931-954.
<span style="font-variant:small-caps;">Rust, J.</span> (1987): “Optimal Replacement of GMC Bus Engines: An Empirical Model of Harold Zurcher,” *Econometrica* 55, 999-1033.
<span style="font-variant:small-caps;">Santos, A.</span> (2011): “Instrumental Variable Methods for Recovering Continuous Linear Functionals,” *Journal of Econometrics*, 161, 129-146.
<span style="font-variant:small-caps;">Scharfstein D.O., A. Rotnitzky, and J.M. Robins (1999):</span> Rejoinder to Adjusting For Nonignorable Drop-out Using Semiparametric Non-response Models, *Journal of the American Statistical Association* 94, 1135-1146.
<span style="font-variant:small-caps;">Severini, T. and G. Tripathi</span> (2006): “Some Identification Issues in Nonparametric Linear Models with Endogenous Regressors,” *Econometric Theory* 22, 258-278.
<span style="font-variant:small-caps;">Severini, T. and G. Tripathi (2012):</span> “Efficiency Bounds for Estimating Linear Functionals of Nonparametric Regression Models with Endogenous Regressors,” *Journal of Econometrics* 170, 491-498.
<span style="font-variant:small-caps;">Schick, A.</span> (1986): “On Asymptotically Efficient Estimation in Semiparametric Models,” *Annals of Statistics* 14, 1139-1151.
<span style="font-variant:small-caps;">Stoker, T.</span> (1986): “Consistent Estimation of Scaled Coefficients,” *Econometrica* 54, 1461-1482.
<span style="font-variant:small-caps;">Tamer, E.</span> (2003): “Incomplete Simultaneous Discrete Response Model with Multiple Equilibria,” *Review of Economic Studies* 70, 147-165.
<span style="font-variant:small-caps;">van der Laan, M. and D. Rubin</span> (2006): “Targeted Maximum Likelihood Learning,” U.C. Berkeley Division of Biostatistics Working Paper Series, Working Paper 213.
<span style="font-variant:small-caps;">[van der Vaart]{}, A.W.</span> (1991): On Differentiable Functionals, *The Annals of Statistics,* 19, 178-204.
<span style="font-variant:small-caps;">[van der Vaart]{}, A.W.</span> (1998): *Asymptotic Statistics,* Cambridge University Press, Cambridge, England.
<span style="font-variant:small-caps;">[van der Vaart]{}, A.W.</span> (2014): “Higher Order Tangent Spaces and Influence Functions,” *Statistical Science* 29, 679–686.
<span style="font-variant:small-caps;">Wooldridge, J.M.</span> (1991): On the Application of Robust, Regression-Based Diagnostics to Models of Conditional Means and Conditional Variances, *Journal of Econometrics* 47, 5-46.
Tân Phước District
Tân Phước is a rural district (huyện) of Tien Giang province in the Mekong Delta region of Vietnam. As of 2003 the district had a population of 53,125. The district covers an area of 343 km². The district capital lies at Mỹ Phước.
References
Category:Districts of Tiền Giang Province
---
abstract: 'We use Stein’s method to bound the Wasserstein distance of order $2$ between a measure $\nu$ and the Gaussian measure using a stochastic process $(X_t)_{t \geq 0}$ such that $X_t$ is drawn from $\nu$ for any $t > 0$. If the stochastic process $(X_t)_{t \geq 0}$ satisfies an additional exchangeability assumption, we show it can also be used to obtain bounds on Wasserstein distances of any order $p \geq 1$. Using our results, we provide optimal convergence rates for the multi-dimensional Central Limit Theorem in terms of Wasserstein distances of any order $p \geq 2$ under simple moment assumptions.'
author:
- |
Thomas Bonis\
DataShape team, Inria Saclay, Université Paris-Saclay, Paris, France\
thomas.bonis@inria.fr
bibliography:
- 'Bibliography.bib'
title:
- 'Stein’s method for normal approximation in Wasserstein distances with application to the multivariate Central Limit Theorem [^1] '
---
Acknowledgements {#acknowledgements .unnumbered}
================
The author would like to thank Michel Ledoux for his many comments and advice regarding the writing of this paper, as well as Jérôme Dedecker, Yvik Swan, Chi Tran and Frédéric Chazal for their multiple remarks.
[^1]: The author was supported by the French Délégation Générale de l’Armement (DGA) and by ANR project TopData ANR-13-BS01-0008.
Preprint hep-ph/0006089
[Improved Conformal Mapping of the Borel Plane]{}
U. D. Jentschura and G. Soff
[*Institut für Theoretische Physik, TU Dresden, 01062 Dresden, Germany*]{}\
[**Email:**]{} jentschura@physik.tu-dresden.de, soff@physik.tu-dresden.de
The conformal mapping of the Borel plane can be utilized for the analytic continuation of the Borel transform to the entire positive real semi-axis and is thus helpful in the resummation of divergent perturbation series in quantum field theory. We observe that the convergence can be accelerated by the application of Padé approximants to the Borel transform expressed as a function of the conformal variable, i.e. by a combination of the analytic continuation via conformal mapping and a subsequent numerical approximation by rational approximants. The method is primarily useful in those cases where the leading (but not sub-leading) large-order asymptotics of the perturbative coefficients are known.
11.15.Bt, 11.10.Jj General properties of perturbation theory;\
Asymptotic problems and properties
The problem of the resummation of quantum field theoretic series is of obvious importance in view of the divergent, asymptotic character of the perturbative expansions [@LGZJ1990; @ZJ1996; @Fi1997]. The convergence can be accelerated when additional information is available about large-order asymptotics of the perturbative coefficients [@JeWeSo2000]. In the example cases discussed in [@JeWeSo2000], the location of several poles in the Borel plane, known from the leading and next-to-leading large-order asymptotics of the perturbative coefficients, is utilized in order to construct specialized resummation prescriptions. Here, we consider a particular perturbation series, investigated in [@BrKr1999], where only the [*leading*]{} large-order asymptotics of the perturbative coefficients are known to sufficient accuracy, and the subleading asymptotics have – not yet – been determined. Therefore, the location of only a single pole – the one closest to the origin – in the Borel plane is available. In this case, as discussed in [@CaFi1999; @CaFi2000], the (asymptotically optimal) conformal mapping of the Borel plane is an attractive method for the analytic continuation of the Borel transform beyond its circle of convergence and, to a certain extent, for accelerating the convergence of the Borel transforms. Here, we argue that the convergence of the transformation can be accelerated further when the Borel transforms, expressed as a function of the conformal variable which mediates the analytic continuation, are additionally convergence-accelerated by the application of Padé approximants.
First we discuss, in general terms, the construction of the improved conformal mapping of the Borel plane which is used for the resummation of the perturbation series defined in Eqs. (\[gammaPhi4\]) and (\[gammaYukawa\]) below. The method uses as input data the numerical values of a finite number of perturbative coefficients and the leading large-order asymptotics of the perturbative coefficients, which can, under appropriate circumstances, be derived from an empirical investigation of a finite number of coefficients, as it has been done in [@BrKr1999]. We start from an asymptotic, divergent perturbative expansion of a physical observable $f(g)$ in powers of a coupling parameter $g$, $$\label{power}
f(g) \sim \sum_{n=0}^{\infty} c_n\,g^n\,,$$ and we consider the generalized Borel transform of the $(1,\lambda)$-type (see Eq. (4) in [@JeWeSo2000]), $$\label{BorelTrans}
f^{(\lambda)}_{\rm B}(u) \; \equiv \;
f^{(1,\lambda)}_{\rm B}(u) \; = \;
\sum_{n=0}^{\infty} \frac{c_n}{\Gamma(n+\lambda)}\,u^n\,.$$ The full physical solution can be reconstructed from the divergent series (\[power\]) by evaluating the Laplace-Borel integral, which is defined as $$\label{BorelIntegral}
f(g) = \frac{1}{g^\lambda} \,
\int_0^\infty {\rm d}u \,u^{\lambda - 1} \,
\exp\bigl(-u/g\bigr)\,
f^{(\lambda)}_{\rm B}(u)\,.$$ The integration variable $u$ is referred to as the Borel variable. The integration is carried out either along the real axis or infinitesimally above or below it (if Padé approximants are used for the analytic continuation, modified integration contours have been proposed [@Je2000]). The most prominent issue in the theory of the Borel resummation is the construction of an analytic continuation for the Borel transform (\[BorelTrans\]) from a finite-order partial sum of the perturbation series (\[power\]), which we denote by $$\label{PartialSum}
f^{(\lambda),m}_{\rm B}(u) =
\sum_{n=0}^{m} \frac{c_n}{\Gamma(n+\lambda)}\,u^n\,.$$ The analytic continuation can be accomplished using the direct application of Padé approximants to the partial sums of the Borel transform $f^{(\lambda),m}_{\rm B}(u)$ [@BrKr1999; @Je2000; @Raczka1991; @Pi1999] or by a conformal mapping [@SeZJ1979; @LGZJ1983; @GuKoSu1995; @CaFi1999; @CaFi2000]. We now assume that the [*leading*]{} large-order asymptotics of the perturbative coefficients $c_n$ defined in Eq. (\[power\]) is factorial, and that the coefficients display an alternating sign pattern. This indicates the existence of a singularity (branch point) along the negative real axis corresponding to the leading large-order growth of the perturbative coefficients, which we assume to be at $u=-1$. For Borel transforms which have only a single cut in the complex plane which extends from $u=-1$ to $u=-\infty$, the following conformal mapping has been recommended as optimal [@CaFi1999], $$\label{DefZ}
z = z(u) = \frac{\sqrt{1+u}-1}{\sqrt{1+u}+1}\,.$$ Here, $z$ is referred to as the conformal variable. The cut Borel plane is mapped onto the unit circle by the conformal mapping (\[DefZ\]). We briefly mention that a large variety of similar conformal mappings have been discussed in the literature.
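As a quick numerical sanity check (our sketch, not part of the original letter; the function names are ours), the mapping (\[DefZ\]) and its inverse $u(z) = 4z/(1-z)^2$ (Eq. (\[UasFuncOfZ\]) below) can be verified to send the positive real $u$-axis into $[0,1)$ and the leading Borel singularity $u=-1$ to $z=-1$:

```python
import numpy as np

def z_of_u(u):
    """Conformal map of Eq. (DefZ): cut Borel plane -> unit disk."""
    s = np.sqrt(1.0 + u + 0j)
    return (s - 1.0) / (s + 1.0)

def u_of_z(z):
    """Inverse map, Eq. (UasFuncOfZ)."""
    return 4.0 * z / (z - 1.0) ** 2

u = np.linspace(0.0, 50.0, 200)
z = z_of_u(u)
# the positive real u-axis (integration path of the Borel integral)
# is mapped into the real interval [0, 1)
assert np.all((z.real >= 0.0) & (z.real < 1.0)) and np.allclose(z.imag, 0.0)
# round trip u -> z -> u
assert np.allclose(u_of_z(z).real, u)
# the leading singularity u = -1 lands on the unit circle at z = -1
assert np.isclose(z_of_u(-1.0).real, -1.0)
```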
It is worth noting that conformal mappings which are adopted for doubly-cut Borel planes have been discussed in [@CaFi1999; @CaFi2000]. We do not claim here that it would be impossible to construct conformal mappings which reflect the position of more than two renormalon poles or branch points in the complex plane. However, we stress that such a conformal mapping is likely to have a more complicated mathematical structure than, for example, the mapping defined in Eq. (27) in [@CaFi1999]. Using the alternative methods described in [@JeWeSo2000], poles (branch points) in the Borel plane corresponding to the subleading asymptotics can be incorporated easily provided their position in the Borel plane is known. In a concrete example (see Table 1 in [@JeWeSo2000]), 14 poles in the Borel plane have been fixed in the denominator of the Padé approximant constructed according to Eqs. (53)–(55) in [@JeWeSo2000], and accelerated convergence of the transforms is observed. In contrast to the investigation [@JeWeSo2000], we assume here that only the [*leading*]{} large-order factorial asymptotics of the perturbative coefficients are known.
We continue with the discussion of the conformal mapping (\[DefZ\]). It should be noted that for series whose leading singularity in the Borel plane is at $u = -u_0$ with $u_0 > 0$, an appropriate rescaling of the Borel variable $u \to |u_0|\, u$ is necessary on the right-hand side of Eq. (\[BorelIntegral\]). Then, $f^{(\lambda)}_{\rm B}(|u_0|\,u)$ as a function of $u$ has its leading singularity at $u = -1$ (see also Eq. (41.57) in [@ZJ1996]). The Borel integration variable $u$ can be expressed as a function of $z$ as follows, $$\label{UasFuncOfZ}
u(z) = \frac{4 \, z}{(z-1)^2}\,.$$ The $m$th partial sum of the Borel transform (\[PartialSum\]) can be rewritten, upon expansion of the $u$ in powers of $z$, as $$\label{PartialSumConformal}
f^{(\lambda),m}_{\rm B}(u) =
f^{(\lambda),m}_{\rm B}\bigl(u(z)\bigr) =
\sum_{n=0}^{m} C_n\,z^n + {\cal O}(z^{m+1})\,,$$ where the coefficients $C_n$ as a function of the $c_n$ are uniquely determined (see, e.g., Eqs. (36) and (37) of [@CaFi1999]). We define partial sum of the Borel transform, expressed as a function of the conformal variable $z$, as $$f'^{(\lambda),m}_{\rm B}(z) = \sum_{n=0}^{m} C_n\,z^n\,.$$ In a previous investigation [@CaFi1999], Caprini and Fischer evaluate the following transforms, $$\label{CaFiTrans}
{\cal T}'_m f(g) = \frac{1}{g^\lambda}\,
\int_0^\infty {\rm d}u \,u^{\lambda - 1} \,\exp\bigl(-u/g\bigr)\,
f'^{(\lambda),m}_{\rm B}(z(u))\,.$$ Caprini and Fischer [@CaFi1999] observe the apparent numerical convergence with increasing $m$. The limit as $m\to\infty$, provided it exists, is then assumed to represent the complete, physically relevant solution, $$f(g) = \lim_{m\to\infty} {\cal T}'_m f(g)\,.$$ We do not consider the question of the existence of this limit here (for an outline of questions related to these issues we refer to [@CaFi2000]).
In the absence of further information on the analyticity domain of the Borel transform (\[BorelTrans\]), we cannot necessarily conclude that $f^{(\lambda)}_{\rm B}{\mathbf (}u(z){\mathbf )}$ as a function of $z$ is analytic inside the unit circle of the complex $z$-plane, or that, for example, the conditions of Theorem 5.2.1 of [@BaGr1996] are fulfilled. Therefore, we propose a modification of the transforms (\[CaFiTrans\]). In particular, we advocate the evaluation of (lower-diagonal) Padé approximants [@BaGr1996; @BeOr1978] to the function $f'^{(\lambda),m}_{\rm B}(z)$, expressed as a function of $z$, $$\label{ConformalPade}
f''^{(\lambda),m}_{\rm B}(z) =
\bigg[ [\mkern - 2.5 mu [m/2] \mkern - 2.5 mu ] \bigg/
[\mkern - 2.5 mu [(m+1)/2] \mkern - 2.5 mu ]
\bigg]_{f'^{(\lambda),m}_{\rm B}}\!\!\!\left(z\right)\,.$$ We define the following transforms, $$\label{AccelTrans}
{\cal T}''_m f(g) = \frac{1}{g^\lambda}\,
\int_{C_j} {\rm d}u \,u^{\lambda - 1} \,\exp\bigl(-u/g\bigr)\,
f''^{(\lambda),m}_{\rm B}\bigl(z(u)\bigr)$$ where the integration contours $C_j$ ($j=-1,0,1$) have been defined in [@Je2000]. These integration contours have been shown to provide the physically correct analytic continuation of resummed perturbation series for those cases where the evaluation of the standard Laplace-Borel integral (\[BorelIntegral\]) is impossible due to an insufficient analyticity domain of the integrand (possibly due to multiple branch cuts) or due to spurious singularities in view of the finite order of the Padé approximations defined in (\[ConformalPade\]). We should mention potential complications due to multi-instanton contributions, as discussed for example in Ch. 43 of [@ZJ1996] (these are not encountered in the current investigation). In this letter, we use exclusively the contour $C_0$, which is defined as the half sum of the contours $C_{-1}$ and $C_{+1}$ displayed in Fig. 1 in [@Je2000]. The limit as $m\to\infty$, provided it exists, is then again assumed to represent the complete, physically relevant solution, $$f(g) = \lim_{m\to\infty} {\cal T}''_m f(g)\,.$$ Because we take advantage of the special integration contours $C_j$, analyticity of the Borel transform $f^{(\lambda)}_{\rm B}{\mathbf (}u(z){\mathbf )}$ inside the unit circle of the complex $z$-plane is not required, and additional acceleration of the convergence is mediated by employing Padé approximants in the conformal variable $z$.
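To make the procedure of Eqs. (\[PartialSum\])–(\[AccelTrans\]) concrete, the following sketch (ours, not part of the original analysis) resums a toy series $c_n = (-1)^n\,(1/3)_n$, whose $\lambda=1$ Borel transform is $(1+u)^{-1/3}$ with a single branch point at $u=-1$, so that the exact sum $f(g)=\int_0^\infty e^{-t}(1+gt)^{-1/3}\,{\rm d}t$ is available for comparison. Since this toy series alternates, the Laplace-Borel integral can be taken along the positive real axis and the contours $C_j$ are not needed here:

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import pade
from scipy.special import poch, factorial

M, g = 16, 1.0
n = np.arange(M + 1)
# lambda = 1 Borel transform of c_n = (-1)^n (1/3)_n:  b_n = c_n / n!
b = (-1.0) ** n * poch(1.0 / 3.0, n) / factorial(n)

# re-expand f_B(u(z)) in powers of z, using u(z) = 4z/(1-z)^2 = 4*sum_j j z^j
u_ser = np.zeros(M + 1)
u_ser[1:] = 4.0 * np.arange(1, M + 1)

C = np.zeros(M + 1)          # coefficients C_n of Eq. (PartialSumConformal)
upow = np.zeros(M + 1)
upow[0] = 1.0                # u(z)^0 = 1
for k in range(M + 1):
    C += b[k] * upow
    upow = np.convolve(upow, u_ser)[: M + 1]   # truncated series product

# Pade approximant [8/8] in the conformal variable z, as in Eq. (ConformalPade)
p, q = pade(C, M // 2)

def z_of_u(u):
    s = np.sqrt(1.0 + u)
    return (s - 1.0) / (s + 1.0)

resummed = quad(lambda u: np.exp(-u / g) * p(z_of_u(u)) / q(z_of_u(u)),
                0.0, np.inf)[0] / g
exact = quad(lambda t: np.exp(-t) * (1.0 + g * t) ** (-1.0 / 3.0),
             0.0, np.inf)[0]
assert abs(resummed - exact) < 1e-5
```

With only 17 coefficients of a factorially divergent series, the transform reproduces the exact value to several digits at $g=1$, illustrating the combined conformal-mapping/Padé acceleration.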
  $m$        $g=5.0$            $g=5.5$            $g=6.0$            $g=10.0$
  ----- ------------------ ------------------ ------------------ ------------------
   28    $-0.501~565~232$   $-0.538~352~234$   $-0.573~969~740$   $-0.827~506~173$
   29    $-0.501~565~232$   $-0.538~352~233$   $-0.573~969~738$   $-0.827~506~143$
   30    $-0.501~565~231$   $-0.538~352~233$   $-0.573~969~738$   $-0.827~506~136$

  : Transforms ${\cal T}''_m \gamma_{\rm hopf}(g)$, evaluated according to Eq. (\[AccelTrans\]), for $m=28,~29,~30$.[]{data-label="table1"}

  $m$        $g=5.0$            $g=5.5$            $g=6.0$          $g=5.69932\dots$
  ----- ------------------ ------------------ ------------------ ------------------
   28    $-1.669~071~213$   $-1.800~550~588$   $-1.928~740~624$   $-1.852~027~809$
   29    $-1.669~071~214$   $-1.800~550~589$   $-1.928~740~626$   $-1.852~027~810$
   30    $-1.669~071~214$   $-1.800~550~589$   $-1.928~740~625$   $-1.852~027~810$

  : Transforms ${\cal T}''_m {\tilde \gamma}_{\rm hopf}(g)$, evaluated according to Eq. (\[AccelTrans\]), for $m=28,~29,~30$.[]{data-label="table2"}
We consider the resummation of two particular perturbation series discussed in [@BrKr1999] for the anomalous dimension $\gamma$ function of the $\phi^3$ theory in 6 dimensions and the Yukawa coupling in 4 dimensions. The perturbation series for the $\phi^3$ theory is given in Eq. (16) in [@BrKr1999], $$\label{gammaPhi4}
\gamma_{\rm hopf}(g) \sim
\sum_{n=1}^{\infty} (-1)^n \, \frac{G_n}{6^{2 n - 1}} \, g^n\,,$$ where the coefficients $G_n$ are given in Table 1 in [@BrKr1999] for $n=1,\dots,30$ (the $G_n$ are real and positive). We denote the coupling parameter $a$ used in [@BrKr1999] as $g$; this is done in order to ensure compatibility with the general power series given in Eq. (\[power\]). Empirically, Broadhurst and Kreimer derive the large-order asymptotics $$G_n \sim {\rm const.} \; \times \;
12^{n-1} \, \Gamma(n+2)\,, \qquad n\to\infty\,,$$ by investigating the explicit numerical values of the coefficients $G_1,\dots,G_{30}$. The leading asymptotics of the perturbative coefficients $c_n$ are therefore (up to a constant prefactor) $$\label{LeadingPhi4}
c_n \sim (-1)^n \frac{\Gamma(n+2)}{3^n}\,, \qquad n\to\infty\,.$$ This implies that the $\lambda$-parameter in the Borel transform (\[BorelTrans\]) should be set to $\lambda=2$ (see also the notion of an asymptotically optimized Borel transform discussed in [@JeWeSo2000]). In view of Eq. (\[LeadingPhi4\]), the pole closest to the origin of the Borel transform (\[BorelTrans\]) is expected at $$u = u^{\rm hopf}_0 = -3\,,$$ and a rescaling of the Borel variable $u \to 3\,u$ in Eq. (\[BorelIntegral\]) then leads to an expression to which the method defined in Eqs. (\[power\])–(\[AccelTrans\]) can be applied directly. For the Yukawa coupling, the $\gamma$-function reads $$\label{gammaYukawa}
{\tilde \gamma}_{\rm hopf}(g) \sim
\sum_{n=1}^{\infty} (-1)^n \,
\frac{{\tilde G}_n}{2^{2 n - 1}} \, g^n\,,$$ where the ${\tilde G}_n$ are given in Table 2 in [@BrKr1999] for $n=1,\dots,30$. Empirically, i.e. from an investigation of the numerical values of ${\tilde G}_1,\dots,{\tilde G}_{30}$, the following factorial growth in large order is derived [@BrKr1999], $${\tilde G}_n \sim {\rm const.'} \; \times \;
2^{n-1} \, \Gamma(n+1/2)\,, \qquad n\to\infty\,.$$ This leads to the following asymptotics for the perturbative coefficients (up to a constant prefactor), $$c_n \sim (-1)^n \frac{\Gamma(n+1/2)}{2^n} \,, \qquad n\to\infty\,.$$ This implies that an asymptotically optimal choice [@JeWeSo2000] for the $\lambda$-parameter in (\[BorelTrans\]) is $\lambda=1/2$. The first pole of the Borel transform (\[BorelTrans\]) is therefore expected at $$u = {\tilde u}^{\rm hopf}_0 = -2\,.$$ A rescaling of the Borel variable according to $u \to 2\,u$ in (\[BorelIntegral\]) enables the application of the resummation method defined in Eqs. (\[power\])–(\[AccelTrans\]).
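The empirical step above — reading off the leading factorial growth, and hence the position of the leading Borel singularity, from a finite number of coefficients — can be illustrated on synthetic data with the asymptotics (\[LeadingPhi4\]): after dividing out $\Gamma(n+2)$, the ratio of successive Borel-transform coefficients tends to $1/u_0$. A sketch (the subleading correction below is invented, standing in for real perturbative data):

```python
import math

N = 30
# synthetic coefficients with the leading asymptotics of Eq. (LeadingPhi4),
# c_n ~ (-1)^n Gamma(n+2)/3^n, times an arbitrary subleading correction
c = [(-1) ** n * math.gamma(n + 2) / 3 ** n * (1 + 1 / (n + 1))
     for n in range(N + 1)]

# asymptotically optimal Borel transform (lambda = 2): b_n = c_n / Gamma(n+2)
b = [c[n] / math.gamma(n + 2) for n in range(N + 1)]

# for a single leading singularity at u = u0, b_{n+1}/b_n -> 1/u0;
# here the 30-coefficient estimate approaches u0 = -3
u0_est = 1.0 / (b[N] / b[N - 1])
assert abs(u0_est + 3.0) < 0.05
```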
In Table \[table1\], numerical values for the transforms ${\cal
T}''_m \gamma_{\rm hopf}(g)$ are given, which have been evaluated according to Eq. (\[AccelTrans\]). The transformation order is in the range $m=28,~29,~30$, and we consider coupling parameters $g=5.0,~5.5,~6.0$ and $g=10.0$. The numerical values of the transforms display apparent convergence to about 9 significant figures for $g \leq 6.0$ and to about 7 figures for $g=10.0$. In Table \[table2\], numerical values for the transforms ${\cal T}''_m
{\tilde \gamma}_{\rm hopf}(g)$ calculated according to Eq. (\[AccelTrans\]) are shown in the range $m=28,~29,~30$ for (large) coupling strengths $g=5.0,~5.5,~6.0$. Additionally, the value $g = 30^2/(4\,\pi)^2 = 5.69932\dots$ is considered as a special case (as it has been done in [@BrKr1999]). Again, the numerical values of the transforms display apparent convergence to about 9 significant figures. At large coupling $g = 12.0$, the apparent convergence of the transforms suggests the following values: $\gamma_{\rm hopf}(12.0) =
-0.939\,114\,3(2)$ and ${\tilde \gamma}_{\rm hopf}(12.0) =
-3.287\,176\,9(2)$. The numerical results for the Yukawa case, i.e. for the function ${\tilde
\gamma}_{\rm hopf}$, have recently been confirmed by an improved analytic, nonperturbative investigation [@BrKr2000prep] which extends the perturbative calculation [@BrKr1999].
We note that the transforms ${\cal T}'_m \gamma_{\rm hopf}(g)$ and ${\cal T}'_m {\tilde \gamma}_{\rm hopf}(g)$ calculated according to Eq. (\[CaFiTrans\]), i.e. by the unmodified conformal mapping, typically exhibit apparent convergence to 5–6 significant figures in the transformation order $m=28,~29,~30$ and at large coupling $g \geq 5$. Specifically, the numerical values for $g=5.0$ are $$\begin{aligned}
{\cal T}'_{28} \gamma_{\rm hopf}(g = 5.0) \; &=& \;
-0.501~567~294\,, \nonumber\\[2ex]
{\cal T}'_{29} \gamma_{\rm hopf}(g = 5.0) \; &=& \;
-0.501~564~509\,, \nonumber\\[2ex]
{\cal T}'_{30} \gamma_{\rm hopf}(g = 5.0) \; &=& \;
-0.501~563~626\,. \nonumber\end{aligned}$$ These results, when compared to the data in Table \[table1\], exemplify the acceleration of the convergence by the additional Padé approximation of the Borel transform [*expressed as a function of the conformal variable*]{} \[see Eq. (\[ConformalPade\])\].
It is not claimed here that the resummation method defined in Eqs. (\[power\])–(\[AccelTrans\]) necessarily provides the fastest possible rate of convergence for the perturbation series defined in Eq. (\[gammaPhi4\]) and (\[gammaYukawa\]). Further improvements should be feasible, especially if particular properties of the input series are known and exploited (see in part the methods described in [@JeWeSo2000]). We also note possible improvements based on a large-coupling expansion [@We1996d], in particular for excessively large values of the coupling parameter $g$, or methods based on order-dependent mappings (see [@SeZJ1979; @LGZJ1983] or the discussion following Eq. (41.67) in [@ZJ1996]).
The conformal mapping [@CaFi1999; @CaFi2000] is capable of accomplishing the analytic continuation of the Borel transform (\[BorelTrans\]) beyond the circle of convergence. Padé approximants, applied directly to the partial sums of the Borel transform (\[PartialSum\]), provide an alternative to this method [@Raczka1991; @Pi1999; @BrKr1999; @Je2000; @JeWeSo2000]. Improved rates of convergence can be achieved when the convergence of the transforms obtained by conformal mapping in Eq. (\[PartialSumConformal\]) is accelerated by evaluating Padé approximants as in Eq. (\[ConformalPade\]), and conditions on analyticity domains can be relaxed in a favorable way when these methods are combined with the integration contours from Ref. [@Je2000]. Numerical results for the resummed values of the perturbation series (\[gammaPhi4\]) and (\[gammaYukawa\]) are provided in the Tables \[table1\] and \[table2\]. By the improved conformal mapping and other optimized resummation techniques (see, e.g., the methods introduced in Ref. [@JeWeSo2000]) the applicability of perturbative (small-coupling) expansions can be generalized to the regime of large coupling and still lead to results of relatively high accuracy.\
U.J. acknowledges helpful conversations with E. J. Weniger, I. Nándori, S. Roether and P. J. Mohr. G.S. acknowledges continued support from BMBF, DFG and GSI.
[10]{}
J. C. LeGuillou and J. Zinn-Justin, [*Large-Order Behaviour of Perturbation Theory*]{} (North-Holland, Amsterdam, 1990).
J. Zinn-Justin, [*Quantum Field Theory and Critical Phenomena*]{}, 3rd ed. (Clarendon Press, Oxford, 1996).
J. Fischer, Int. J. Mod. Phys. A [**12**]{}, 3625 (1997).
U. D. Jentschura, E. J. Weniger, and G. Soff, Asymptotic Improvement of Resummation and Perturbative Predictions, Los Alamos preprint hep-ph/0005198, submitted.
D. Broadhurst and D. Kreimer, Phys. Lett. B [**475**]{}, 63 (2000).
I. Caprini and J. Fischer, Phys. Rev. D [**60**]{}, 054014 (1999).
I. Caprini and J. Fischer, Convergence of the expansion of the Laplace-Borel integral in perturbative QCD improved by conformal mapping, Los Alamos preprint hep-ph/0002016.
U. D. Jentschura, Resummation of Nonalternating Divergent Perturbative Expansions, Los Alamos preprint hep-ph/0001135, Phys. Rev. D (in press).
P. A. Raczka, Phys. Rev. D [**43**]{}, R9 (1991).
M. Pindor, Padé Approximants and Borel Summation for QCD Perturbation Series, Los Alamos preprint hep-th/9903151.
R. Seznec and J. Zinn-Justin, J. Math. Phys. [**20**]{}, 1398 (1979).
J. C. Le Guillou and J. Zinn-Justin, Ann. Phys. (N. Y.) [**147**]{}, 57 (1983).
R. Guida, K. Konishi, and H. Suzuki, Ann. Phys. (N. Y.) [**241**]{}, 152 (1995).
D. J. Broadhurst, P. A. Baikov, V. A. Ilyin, J. Fleischer, O. V. Tarasov, and V. A. Smirnov, Phys. Lett. B [**329**]{}, 103 (1994).
G. Altarelli, P. Nason, and G. Ridolfi, Z. Phys. C [**68**]{}, 257 (1995).
D. E. Soper and L. R. Surguladze, Phys. Rev. D [**54**]{}, 4566 (1996).
K. G. Chetyrkin, J. H. Kühn, and M. Steinhauser, Phys. Lett. B [**371**]{}, 93 (1996).
K. G. Chetyrkin, J. H. Kühn, and M. Steinhauser, Nucl. Phys. B [**482**]{}, 213 (1996).
K. G. Chetyrkin, R. Harlander, and M. Steinhauser, Phys. Rev. D [**58**]{}, 014012 (1998).
G. A. Baker and P. Graves-Morris, [*Padé approximants*]{}, 2nd ed. (Cambridge University Press, Cambridge, 1996).
C. M. Bender and S. A. Orszag, [*Advanced Mathematical Methods for Scientists and Engineers*]{} (McGraw-Hill, New York, NY, 1978).
D. Broadhurst and D. Kreimer, in preparation (2000).
E. J. Weniger, Phys. Rev. Lett. [**77**]{}, 2859 (1996).
---
abstract: 'Planets interact with their host stars through gravity, radiation and magnetic fields, and for those giant planets that orbit their stars within $\sim$10 stellar radii ($\sim$0.1 AU for a sun-like star), star-planet interactions (SPI) are observable with a wide variety of photometric, spectroscopic and spectropolarimetric studies. At such close distances, the planet orbits within the sub-Alfvénic radius of the star, within which the transfer of energy and angular momentum between the two bodies is particularly efficient. The magnetic interactions appear as enhanced stellar activity modulated by the planet as it orbits the star rather than only by stellar rotation. These SPI effects are informative for the study of the internal dynamics and atmospheric evolution of exoplanets. The nature of magnetic SPI is modeled to be strongly affected by both the stellar and planetary magnetic fields, possibly influencing the magnetic activity of both, as well as affecting the irradiation and even the migration of the planet and rotational evolution of the star. As phase-resolved observational techniques are applied to a large statistical sample of hot Jupiter systems, extensions to other tightly orbiting stellar systems, such as smaller planets close to M dwarfs, become possible. In these systems, star-planet separations of tens of stellar radii begin to coincide with the radiative habitable zone where planetary magnetic fields are likely a necessary condition for surface habitability.'
author:
- 'Evgenya L. Shkolnik'
- Joe Llama
bibliography:
- 'chapter\_revision\_refs.bib'
title: 'Signatures of star-planet interactions'
---
Introduction
=============
Giant planets located $<0.1$ AU from their parent star comprise $\sim$7% of the confirmed exoplanets, primarily around FGK stars[^1]. At such small orbital separations, these hot Jupiters (HJ) provide a laboratory to study the tidal and magnetic interactions between the planet and the star that does not exist in our own solar system. These interactions can be observed because they scale as $a^{-3}$ and $a^{-2}$, respectively, where $a$ is the separation between the two bodies. Although HJs are rare around M dwarfs (only five known), statistics from the *Kepler* survey have revealed that M stars host on average 0.24 Earth-sized planets in the habitable zone [@dres15]. We can apply the techniques trained on HJs around FGK stars on to small planet + M dwarf systems.
[@cuntz2000] first suggested that close-in planets may increase and modulate their host star’s activity levels through tidal and magnetic interactions, as such effects are readily observed in the comparable cases of tightly orbiting RS CVn binary systems (e.g. @piskunov1996 [@shkolnik2005a]). Variable excess stellar activity modulated with the period of the planet’s orbit, rather than with the star’s rotation period, indicates a magnetic interaction with the planet, while modulation at half the orbital period indicates a tidal interaction. This suggestion has spurred the search for such interactions as a means of studying the angular momentum evolution of HJ systems and of detecting the magnetic fields of exoplanets (e.g. @cuntz2004 [@saar2004; @lanza2009; @shkolnik2003; @shkolnik2005; @shkolnik2008; @cohen2009]).
Exoplanetary magnetic fields provide a probe into a planet’s internal dynamics and constraints on its atmospheric mass loss. This fundamental physical property of exoplanets would most directly be detected through the radio emission produced by electron cyclotron maser instability (see review by @treumann2006 and Chapter 9.6 of this book). Such emission has been detected from all of the solar system’s gas giants and the Earth resulting from an interaction between the planetary magnetosphere and the solar wind. There are no detections to date of radio emission from exoplanets although searches have been typically less sensitive at higher emission frequencies than predicted for exoplanets (e.g. @farrell1999 [@bastian2000; @lanza2009; @lazio2009; @jardine2008; @vidotto2012] and see review by @lazio2016). Even though a radio detection of a planet’s magnetic field ($B_p$) remains elusive, there have been reported detections through magnetic star-planet interactions (SPI). Nearly twenty studies of HJ systems, varying in wavelengths and observing strategy, have independently come to the conclusion that a giant exoplanet in a short-period orbit can induce activity on the photosphere and upper atmosphere of its parent star. This makes the host star’s magnetic activity a probe of the planet’s magnetic field.
Magnetic SPI can be detected in HJ systems because these exoplanets typically lie within the Alfvén radius of their parent star ($\lesssim 10R_\star$, or $\lesssim 0.1$ AU for a sun-like star). At these small separations, the Alfvén speed is larger than the stellar wind speed, allowing for direct magnetic interaction with the stellar surface. If the giant planet is magnetized, then the magnetosphere of the planet may interact with the stellar corona throughout its orbit, potentially through magnetic reconnection, the propagation of Alfvén waves within the stellar wind, and the generation of electron beams that may strike the base of the stellar corona.
In the case of characterizing habitable zone planets, the current favored targets are low-mass stars where the habitable zone is located much closer to the parent star compared to the Earth-Sun separation, making the planet easier to detect and study. Low-mass stars are typically much more magnetically active than solar type stars. It is therefore vital that we understand how this increase in magnetic activity impacts the potential habitability of a planet orbiting close to a low-mass star and what defenses the planet has against it. In order to sustain its atmosphere, a planet around a low-mass star must be able to withstand enhanced atmospheric erosion from extreme stellar wind and also from the impact of coronal mass ejections. Both of these reasons necessitate the push towards the detection and characterization of magnetic SPI in M dwarf planetary systems.
The need to understand magnetic SPI is also driving the modeling effort forward. There have been considerable efforts towards modeling the space weather environments surrounding close-in giant exoplanets and star-planet interactions. The magnetized stellar winds may interact with the close-in exoplanet through the stars’ outflows and magnetospheres, and potentially lead to observable SPI. Observing the stellar winds of stars other than the Sun is difficult and there are very few observational constraints on the winds of low-mass stars [@wood2005]. Star-planet interactions can be simulated by using hydrodynamical (HD) or magnetohydrodynamical (MHD) numerical models. The modeling efforts have not only focused on studying individual systems, but have also been extended to more general scenarios to help aid the interpretation of statistical studies.
MHD models for star-planet interactions require a dynamic model for the stellar corona and wind, and also a model for the planet, which acts as an additional, time-dependent boundary condition in the simulation [@cohen2011]. The standard approach to modeling SPI involves adapting 3D MHD models originally developed for the solar corona and wind. The BATS-R-US (@powell1999 [@toth2012]) global MHD model forms part of the Space Weather Modeling Framework (@toth2005) and is capable of accurately reproducing the large-scale density and temperature structure of the solar corona and has been adapted to model the winds of other stars. This MHD model uses a stellar magnetogram (or solar synoptic map) as input along with other properties of the host star, including the stellar coronal base density ($\rho$), surface temperature ($T$), mass ($M_\star$), radius ($R_\star$) and rotation period ($P_\star$). The model then self-consistently solves the ideal MHD equations for the stellar corona and wind, which in turn allows the conditions experienced by an exoplanet to be studied (e.g., @cohen2009 [@cohen2011; @cohen2014; @vidotto2009; @vidotto2013; @vidotto2014; @doNascimento]).
In this chapter, we discuss the observational evidence of magnetic SPI in FGK and M stars plus the array of models produced to explain and characterize this diagnostic, albeit complex, physical phenomenon.
Planet-induced and orbit-phased stellar emission
================================================
Although no tidally induced variability has yet been reported, magnetic SPI has seen a blossoming of data and modeling over the past 15 years. The strongest evidence for magnetic SPI is excess stellar activity modulated in phase with a planet as it orbits a star with a rotation period significantly different from the planet’s orbital period. Such signatures were first reported by [@shkolnik2003] who observed periodic chromospheric activity through Ca II H & K variability of HJ host HD 179949 modulated on the planet’s orbital period of 3.092 d [@butler2006] rather than the stellar rotation period of 7 days [@fares2012]. Those data consisted of nightly high-resolution ($\lambda/\Delta\lambda\approx$110,000), high signal-to-noise (a few hundred per pixel) spectroscopy acquired over several epochs (Figure \[fig:hd179949\_spi\], @shkolnik2005 [@shkolnik2008; @gurdemir2012]).
![Integrated Ca II K residual flux of HJ host HD 179949 as a function of orbital phase where $\phi$=0 is the sub-planetary point (or inferior conjunction). The symbols are results from six individual epochs of observation collected from 2001 to 2006 by [@shkolnik2003; @shkolnik2005a; @shkolnik2008] and [@gurdemir2012]. The spot model shown is fit to the 2001-2005 data and shows a persistent region of excess chromospheric activity peaking at the planet’s orbital phase of $\sim$0.75. []{data-label="fig:hd179949_spi"}](hd179949_combined-01.png){width="\textwidth"}
In addition to HD 179949, several other stars with HJs exhibit this kind of Ca II H & K modulation, including $\upsilon$ And, $\tau$ Boo, and HD 189733. [@shkolnik2008] reported that this signature is present roughly $\sim$75% of the time. During other epochs only rotationally modulated spotting for these stars is observed. This is interpreted as variations in the stellar magnetic field configuration leading to weaker (or no) magnetic SPI with the planet’s field. Simulations of magnetic SPI using magnetogram data of the varying solar magnetic fields confirm this to be a likely explanation of the intermittent effect [@cran07]. As another example, the large scale magnetic field of the planet-host star HD 189733 has been observed over multiple years using Zeeman-Doppler Imaging (ZDI) and the field shows structural evolution between observations [@moutou2007; @fares2010; @fares2013]. In this case, the SPI diagnostics in the HD 189733 system must vary with the stellar magnetic field.
Scaling law to measure planetary magnetic field strengths
=========================================================
In the solar system, there is a strong correlation between the magnetic moment of a body and the ratio of its mass to its rotation period (Figure \[fig:magmom\_ss\]). A similar relationship has emerged for exoplanets. Figure \[fig:magmom\_exo\] shows $M_p\sin i/P_{orb}$ against the stellar magnetic activity measure, $<$MADK$>$, the average of the Mean Absolute Deviation of Ca II K line variability per observing run. Note that the planet is assumed to be tidally locked such that $P_{\rm orb}$ equals the rotation period of the planet.
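This magnetic moment correlation can be fit as a power law in log-log space. The sketch below illustrates the procedure with approximate, order-of-magnitude literature values for three planets (illustrative placeholders, not the full dataset behind Figure \[fig:magmom\_ss\]), so the fitted exponent will differ somewhat from the quoted 1.21:

```python
import numpy as np

# Approximate magnetic moments (A m^2), masses (kg) and rotation periods (d);
# order-of-magnitude illustrative values only, not the plotted dataset.
bodies = {
    "Earth":   (8.0e22, 5.97e24, 1.00),
    "Saturn":  (4.6e25, 5.68e26, 0.44),
    "Jupiter": (1.6e27, 1.90e27, 0.41),
}

moment = np.array([b[0] for b in bodies.values()])
m_over_p = np.array([b[1] / b[2] for b in bodies.values()])

# Power law mu = C * (M/P)^alpha  ->  linear fit in log-log space
alpha, logC = np.polyfit(np.log10(m_over_p), np.log10(moment), 1)
print(f"fitted exponent alpha = {alpha:.2f}")
```

With only three bodies the exponent is steeper than the full-sample fit, but the same log-log regression applied to all seven magnetized bodies yields the $y\propto x^{1.21}$ relation of Figure \[fig:magmom\_ss\].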
[@lanza2009] and [@lanza2012] provided a straightforward formalism with which to scale the expected power emitted from magnetic SPI (P$_{SPI}$):
$$P_{\rm SPI} \propto B_\star^{4/3}\, B_p^{2/3}\, R_p^2\, v$$
where B$_\star$ and B$_p$ are the magnetic field strengths of the star and planet, respectively, $R_p$ is the planet’s radius, and $v$ is the relative velocity between the two bodies. This implies that systems with stars that are tidally locked to their HJs, i.e. stellar rotation period equals the orbital period as is the case for HJ host $\tau$ Boo, should produce weak P$_{SPI}$ (Figure \[fig:magmom\_exo\]; @shkolnik2008 [@walker2008; @fares2013]).
The strength of the planetary magnetic field for tidally locked planets has been a subject of debate, but scaling laws presented by [@chri09] and others reviewed in [@chri10] predict that the planet’s field strength depends primarily on the internal heat flux, and not on electrical conductivity or rotation speed. This same energy scaling can simultaneously explain the observed field strengths of Jupiter, Earth, and rapidly rotating low-mass stars.
Using the formalism of Lanza (2009) above with the measured stellar magnetic fields from spectropolarimetric observations of these targets (e.g., @donati2008 [@fares2009; @fares2013; @jeffers2014; @hussain2016; @mengel2017]), it is possible to use this correlation to estimate the *relative* magnetic field strengths of the planets, revealing a range of planetary field strengths among the HJs in these systems. For example, the HJ around HD 179949 appears to have a field strength seven times that of the HJ around HD 189733.
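Solving the Lanza relation for the planetary field gives $B_p \propto (P_{\rm SPI}/(B_\star^{4/3} R_p^2 v))^{3/2}$, so relative field strengths follow from ratios of observables. A minimal sketch; the 3.7 ratio below is an illustrative placeholder chosen to reproduce a factor of seven, not a measured value:

```python
def relative_bp(p_spi_ratio, b_star_ratio, r_p_ratio, v_ratio):
    """Ratio B_p1/B_p2 implied by P_SPI ∝ B*^(4/3) B_p^(2/3) R_p^2 v,
    with all arguments given as system-1/system-2 ratios."""
    return (p_spi_ratio / (b_star_ratio ** (4 / 3) * r_p_ratio ** 2 * v_ratio)) ** 1.5

# With equal stellar fields, planet radii and velocities, a ~3.7x stronger
# SPI power implies a ~7x stronger planetary field (3.7^1.5 ≈ 7.1).
print(relative_bp(3.7, 1.0, 1.0, 1.0))
```

In practice the stellar field, planet radius, and orbital velocity differ between systems, so each ratio must be supplied from the spectropolarimetric and orbital measurements cited above.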
![The magnetic moment for the six magnetized solar system planets, plus Ganymede, plotted against the ratio of body mass to rotation period. The power law fit is $y\propto x^{1.21}$. Data are from Tholen et al. (2000) and Kivelson et al. (2002).[]{data-label="fig:magmom_ss"}](magmom_ss.pdf){width="80.00000%"}
![$M_p\sin i/P_{\rm orb}$, which is proportional to the planet’s magnetic moment (Figure \[fig:magmom\_ss\]), plotted against the mean night-to-night Ca II K chromospheric activity (assuming the planet is tidally locked, $P_{\rm orb} = P_{{\rm rot},p}$). The green squares show systems where SPI has been detected. Note that $\tau$ Boo, for which $P_{\rm orb} = P_{{\rm rot},\star}$ does not follow the trend. This is evidence in support of a model (@lanza2009) where near-zero relative motion of the planet through the stellar magnetosphere produces minimal magnetic SPI effects [@shkolnik2008].[]{data-label="fig:magmom_exo"}](magmom_exo.pdf){width="80.00000%"}
There are stars for which no planet phased activity is reported, e.g. HD 209458 [@shkolnik2008] and WASP-18 [@miller2015; @pillitteri2014b]. In these cases, the central stars are particularly inactive with very weak fields, and the absence of measurable SPI is not unexpected according to Lanza’s formalism, as both the star and the planet require strong enough magnetic fields for an observable interaction. In addition, in many cases, the data collected were of too low S/N to detect any induced modulations caused by the planet and/or lacked phase coverage of the planetary orbit, making it difficult to disentangle planet induced activity from stellar rotational modulation. The star may also have a highly variable magnetic field. If the stellar magnetic field is highly complex in structure, then it may be that the magnetic field lines simply do not reach the orbit of the planet [@lanza2009]. Finally, the planet itself may have a weak magnetic field or no field at all.
Models and observations of planet induced variability at many wavelengths
=========================================================================
In addition to Ca II H & K observations, planet phased modulation has been reported in broadband optical photometry from space for $\tau$ Boo [@walker2008] and CoRoT-2 [@pagano2009] and in X-ray for HD 17156 [@maggio2015]. Tentative evidence of planet phased X-ray modulation of HD 179949 was reported by [@scandariato2013]. They find an activity modulation period of $\sim4$ days (with a false alarm probability of 2%), which is longer than the orbital period of 3.1 days, but may be tracing the synodic period of the planet with respect to the star ($P_{syn}$=4.7–5.6 days for $P_{rot}$=7–9 days). Clearer planet phased X-ray and far-UV modulation has also been reported for HD 189733 [@pillitteri2011; @pillitteri2015].
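The synodic period quoted above follows directly from $1/P_{syn} = 1/P_{orb} - 1/P_{rot}$; a quick check reproduces the 4.7–5.6 day range:

```python
def synodic_period(p_orb, p_rot):
    """Synodic period (d) of a planet with respect to the rotating star."""
    return 1.0 / (1.0 / p_orb - 1.0 / p_rot)

# HD 179949: P_orb = 3.1 d, P_rot = 7-9 d
print(synodic_period(3.1, 9.0))  # ≈ 4.73 d
print(synodic_period(3.1, 7.0))  # ≈ 5.56 d
```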
From a modeling perspective, the combined flux at all wavelengths is needed to assess the total power emitted from such an interaction, adding value to these higher energy observations. Ideally, simultaneous observations across optical, UV and X-ray activity indicators would be scheduled, but this has proven challenging to accomplish. From this and other perspectives discussed below, statistical studies of a large sample of stars monitored for planet phased stellar activity are the necessary path forward.
The HD 189733 system is one of the most studied as it is a bright K2V dwarf at a distance of 19.3 pc, hosts a transiting hot Jupiter at a distance of only 0.03 AU [@bouchy2005], and exhibits planet induced Ca II H & K variations [@shkolnik2005; @shkolnik2008]. It has been the subject of multiple searches for X-ray flares that coincide with the orbit of the planet. Transit observations of HD 189733b, with phase coverage from $\phi=0.52 - 0.65$ have shown that the X-ray spectrum softened in strict correspondence with the transit event, followed by a flaring event when the planet was at $\phi=0.54$ [@pillitteri2010; @pillitteri2011; @Pillitteri2014]. This phase offset for the beginning of the flare event corresponds to a location of $77^\circ$ forward of the sub-planetary point, as is also the case for the HD 179949 system. This phased emission is best interpreted as the observational signature of an active spot on the surface of the star that is connected to, and co-moving with, the planet [@Pillitteri2014]. Such a hot spot has been analytically derived by modeling the link between an exoplanet and the star [@lanza2012]. These authors calculated that if the planet is sufficiently close to the star (as is the case for hot Jupiters) the magnetic field lines that connect the star to the planet would produce such a phase offset owing to the relative orbital motion of the planet.
Simulations are also helping to understand SPI and planet phased emission through modeling studies aimed not at reproducing individual systems but rather at the general conditions that favor SPI. The first generation of SPI models focused primarily on recovering the phase offset between the sub-planetary point and the chromospheric hot spot rather than explaining the spot’s energy dissipation [@mcivor2006; @preusse2006; @lanza2008]. The next generation of models explicitly included the planet and were able to show that the power generated in a reconnection event between the stellar corona and the planet can reproduce the observed hot spots [@lanza2009; @cohen2011; @lanza2013].
An investigation by @cohen2011 using the MHD code BATS-R-US showed that HD 189733b orbited in-and-out of the variable Alfvén radius and that when the planet was within the Alfvén radius its magnetosphere would reconnect with the stellar coronal field resulting in enhanced flaring from the host star. In their simulations the planet was implemented as an additional boundary condition representing HD 189733b’s density, temperature, and magnetic field strength. They found that SPI varies during the planetary orbit and is highly dependent on the relative orientation of the stellar and planetary magnetic fields.
A recent study by @matsakos2015 was aimed at categorizing various types of SPI using the 3D MHD PLUTO code [@pluto2007; @pluto2012]. They ran 12 models in total, detailed in Table 2 of [@matsakos2015]. Since they were seeking to explore the parameter regime over which the observational signature of SPI changes, they chose to explore various parameters for the planet and star, rather than adopting the parameters for a known system. They classify star-planet interactions into four types illustrated in Figure \[fig:matsakos\]. Types III and IV describe scenarios where an accretion stream forms between the planet and the star. For these interactions, the authors find that the ram pressure from the stellar wind must be greater than the magnetic and tidal pressures from the planet. The accretion stream arises through Kelvin-Helmholtz and Rayleigh-Taylor instabilities and is triggered by the interaction between the stellar wind and the denser planetary material. These simulations showed that the location where the accretion stream typically impacts the stellar surface is dependent on the parameters of the system but is typically $\sim45-90^\circ$ in front of the planet. This finding is in good agreement with the observed SPI phase offsets discussed above.
![The four types of star-planet interaction as described in @matsakos2015. In Type I, the ram and magnetic pressure of the stellar wind is greater than that of the planetary outflow, confining the material and leading to the formation of a bow shock (e.g., @vidotto2010 [@llama2011]). In Type II, the planetary outflow is stronger than in Type I, resulting in material being swept back into a tail. In Types III and IV, the ram pressure of the stellar wind is greater than the tidal pressure of the planet, resulting in the formation of a tail behind the planet and an accretion stream onto the star. The accretion stream typically impacts the stellar surface $\sim90^\circ$ ahead of the sub-planetary point, in agreement with observations of magnetic SPI.[]{data-label="fig:matsakos"}](matsakos.png){width="\textwidth"}
Statistical studies of magnetic SPI
===================================
As the number of known exoplanets continuously rises, statistical studies are becoming an effective way to study the properties of exoplanetary systems. An efficient strategy with which to study planet induced stellar emission is by analyzing single-epoch observations of a statistical sample in search of a significant difference in emission properties of stars with and without close-in giant planets.
From a sample of stars with Ca II H & K observations, [@hart10] showed a correlation between planet surface gravities and the stellar log R$^{\prime}_{HK}$ activity parameter for 23 systems with planets of M$_p$ $>$ 0.1 M$_J$ , $a$ $<$ 0.1 AU orbiting stars with 4200 K $<$ T$_{\rm eff} < $6200 K, with a weaker correlation with planet mass. In another study of 210 systems, [@krej12] found statistically significant evidence that the equivalent width of the Ca II K line emission and log R$^{\prime}_{HK}$ of the host star correlate with smaller semi-major axis and larger mass of the exoplanet, as would be expected for magnetic and tidal SPI.
The efficiency of extracting data from large photometric catalogs has made studying the stellar activity of many more planet hosts possible in both the ultraviolet (UV) and X-ray. A study of 72 exoplanet systems by @poppenhaeger2010 showed no significant correlation between the fractional luminosity $(L_X/L_{\rm bol})$ and planet properties. They did, however, report a correlation of stellar X-ray luminosity with the ratio of planet mass to semi-major axis $(M_p\sin i/a)$, suggesting that massive, close-in planets tend to orbit more X-ray luminous stars. They attributed this correlation to biases of the radial velocity (RV) planet detection method, which favors the detection of smaller and further-out planets around less active, and thus X-ray faint, stars. A study by [@shko13] of the far-UV (FUV) emission, as observed by the Galaxy Evolution Explorer (GALEX), of $\sim$300 FGK hosts of both RV and transit detected planets also searched for evidence of increased stellar activity due to SPI. This investigation found no clear correlations with $a$ or $M_p$, yet reported tentative evidence for close-in massive planets (i.e. higher $M_p$/$a$) orbiting more FUV-active stars than those with far-out and/or smaller planets, in agreement with past X-ray and Ca II results (Figure \[fig:shko13\]). There may be less potential for detection bias in this case, as transit-detected planets orbit stars with a more normal distribution of stellar activity than those with planets discovered with the RV method. To confirm this, a sample of transiting small and distant planets still needs to be identified.
![The residual fractional FUV luminosity (i.e. photospheric flux removed leaving only stellar upper-atmospheric emission) as a function of the ratio of the planet mass to semi-major axis, a measure of star-planet interaction strength [@shko13].[]{data-label="fig:shko13"}](shkolnik2013.pdf){width="80.00000%"}
The first statistical SPI test for lower mass (K and M) systems was reported by [@fran16], in which they measured a weak positive correlation between the fractional N V luminosity, a transition region FUV emission line, and $M_p/a$ for the most massive planet in the system. They found tentative evidence that the presence of short-period planets (ranging in M$_p$sin$i$ from 3.5 to 615 M$_{Earth}$) enhances the transition region activity on low-mass stars, possibly through the interaction of their magnetospheres (Figure \[fig:fran16\]).
![Fractional N V (at 1240Å) luminosity from a sample of 11 K and M dwarf planet hosts is weakly correlated with a measure of the star-planet interaction strength $M_p/a$, where $M_p$ is the mass of the most massive planet in the system (in Earth masses) and $a$ is the semi-major axis (in AU). The Pearson coefficient and statistical likelihood of a null correlation is shown at the top. This provides tentative evidence that the presence of short-period planets enhances the transition region activity on low-mass stars, possibly through the interaction of their magnetospheres (@fran16). []{data-label="fig:fran16"}](france_2016_updated.pdf){width="80.00000%"}
@cohen2015 modeled the interaction between an M dwarf and a non-magnetized planet like Venus. Their work shows that the planet’s localized space-weather environment differs markedly between sub- and super-Alfvénic stellar wind conditions. The authors postulate that these dynamic differences would lead to additional heating and additional energy being deposited into the atmosphere of the planet. In all their simulations they find that the stellar wind penetrates much deeper into the atmosphere than for the magnetized planets simulated in @cohen2014, suggesting that for planets orbiting M dwarfs a magnetosphere may be necessary to shield the planet’s atmosphere.
@vidotto2014 modeled the stellar wind of six M stars ranging from spectral type M0 to M2.5 to study the angular momentum of the host star and the rotational evolution of the star. They found the stellar wind to be highly structured at the orbital separation of the planet, and found that the planetary magnetospheric radii could vary by up to 20% in a single orbit. This will result in high variability in the strength of SPI signatures as the planet orbits through regions of closed and open magnetic field, implying that a larger, statistical study may be the most efficient path forward, especially for M dwarfs.
Planetary Effects on Stellar Angular Momentum Evolution
=======================================================
As the evidence continues to mount that star-planet interactions measurably increase stellar activity, and now for a wider range of planetary systems, there remains an ambiguity in the larger statistical, single-epoch studies as to whether this effect is caused by magnetic SPI, tidal SPI, or planet search selection biases. Although no tidal SPI has been observed as stellar activity modulated on half the planet’s orbital period (@cuntz2000), there may be other effects of the presence of the planets, or of the planet formation process, on the angular momentum evolution of the stars, which might increase the stellar rotation through tidal spin-up or decrease the efficiency of stellar magnetic braking [@lanza2010b; @cohen2011]. In both cases, the star would be more active than expected for its mass and age. For main sequence FGK stars, the magnetized stellar wind acts as a brake on the stellar rotation, decreasing the global stellar activity rate as the star ages. This well observed process has given rise to the so-called “age-rotation-activity” relationship. However, the presence of a short-period giant planet may affect the star’s angular momentum. Under this scenario, the age-activity relation will systematically underestimate the star’s age, potentially making “gyrochronology” inapplicable to these systems. This poses an issue for evolutionary studies of exoplanets and their host stars, including planet migration models and planet atmospheric evolution.
Several studies have found that stars hosting giant planets rotate faster than the evolutionary models predict. This increase in rotation rate is thought to be the direct consequence of tidal spin-up of the star by the planet. Additional evidence for the tidal spin-up of stars by giant planets has been found using two hot Jupiter systems by @schro2011 and @pillitteri2011. These studies searched for X-ray emission from M dwarf companions to the active planet hosts CoRoT-2 and HD 189733. Both systems showed no X-ray emission indicating the age of the systems to be $> 2$Gyr; however, the rotation-age relation places these systems between 100-300 Myr for CoRoT-2 and 600 Myr for HD 189733. A study by @lanza2010b showed that tides alone cannot spin-up the star to the levels seen in CoRoT-2 and HD 189733. Rather, his study postulated that the excess rotation is a consequence of interactions between the planetary magnetic field and the stellar coronal field. He proposed that these interactions would result in a magnetic field topology where the majority of the field lines are closed. This configuration would therefore limit the efficiency of the stellar wind to spin-down the star through angular momentum loss. By computing a simple linear force-free model, Lanza (2010) was able to compute the radial extension of the stellar corona and its angular momentum loss. He found that stars that host hot Jupiters show a much slower angular momentum loss rate than similar stars without a short-period giant planet, similar to [@cohen2011].
In order to disentangle the possible causes of the increased stellar activity of HJ hosts seen in single-epoch observations, it is necessary to monitor the activity throughout the planet’s orbit and over the stellar rotation period. Such studies can better characterize the star’s variability, generate firmer statistical results on any planet induced activity, and assess the underlying physical processes involved. The first and only attempt to date was reported by [@shkolnik2008], in which they monitored 13 HJ systems (all FGK stars) in search of orbit phased variability and found a correlation between the median activity levels modulated by the planet and $M_p\sin i/P_{orb}$ (Figures \[fig:magmom\_exo\] and \[fig:magmom\_ss\]). In the case of multi-planet systems, the planet with the largest $M_p\sin i/P_{orb}$ should have the strongest SPI effects.
Summary
=======
Detecting exoplanetary magnetic fields enables us to probe the internal structures of the planets and to place better constraints on their atmospheric mass loss through erosion from the stellar wind. Searching for the observational signatures of magnetic SPI in the form of planet induced stellar activity has proved to be the most successful method to date for detecting magnetic fields of hot Jupiters.
Single-epoch statistical studies in search of SPI signatures show that there are indeed significant differences in the activity levels of stars with close-in giant planets compared to those without. However, the cause of this remains ambiguous, with four possible explanations.
- Induced stellar activity in the form of interactions between the stellar and planetary magnetic fields.
- The inhibition of magnetic braking, and thus faster than expected stellar rotation and increased stellar activity.
- Tidal spin-up of the star due to the presence of the close-in planet.
- Lastly, the selection biases of planet hunting techniques.
These potential underlying causes of such a result highlight the need for further monitoring campaigns across planetary orbit and stellar rotation periods to clearly identify planet-induced excess stellar activity.
The vast majority of SPI studies, both individual monitoring as well as larger single-epoch statistical studies, have concentrated on main sequence FGK stars as they are the dominant hosts of hot Jupiters. These stars have the advantage of being relatively quiescent compared to M dwarfs, and thus teasing out signals produced by magnetic SPI from intrinsic stellar activity is simpler. But they also have the disadvantage of lower stellar magnetic field strengths compared to M dwarfs, lowering the power produced by the interaction.
The modeling of magnetic SPI, especially with realistic stellar magnetic maps from ZDI surveys, continues to advance and aid in the interpretation of observed planet phased enhanced activity across the main sequence. Additional models enable quantitative predictions of the radio flux density for stars displaying signatures of SPI. Radio detections of at least a few of these systems will help calibrate the relative field strengths, and provide for the first time, true magnetic field strengths for hot Jupiters.
Ongoing and future studies of magnetic SPI in a large sample of systems are necessary for improved statistics and distributions of magnetic fields of exoplanets. Extensions of these techniques to other tightly orbiting stellar systems, such as smaller planets close to M dwarfs, are challenging but possible. In these systems, star-planet separations of tens of stellar radii begin to coincide with the radiative habitable zone where planetary magnetic fields are likely a necessary condition for surface habitability. As more close-in planets around relatively bright M dwarfs are discovered by missions such as TESS, the search for magnetic star-planet interactions will be extended to these low-mass stars.
[^1]: <http://www.exoplanets.org>, accessed 2/15/2017
---
abstract: 'Haptic devices have been employed to immerse users in VR environments, and hand and finger haptic devices in particular have been extensively developed. However, this type of device either occludes hand detection by some tracking systems or, in tracker-based systems, forces users to uncomfortably wear two devices on the hand (haptic and tracking device). We introduce RecyGlide, a novel wearable multimodal display worn on the forearm. RecyGlide is composed of inverted five-bar linkages and vibration motors. The device provides multimodal tactile feedback such as slippage, a force vector, pressure, and vibration. We tested the discrimination ability of monomodal and multimodal stimuli patterns on the forearm, and confirmed that multimodal stimuli patterns are more recognizable. The haptic device was used in VR applications, and we show that it enhances the VR experience and makes it more interactive.'
author:
- Juan Heredia
- Jonathan Tirado
- Vladislav Panov
- Miguel Altamirano Cabrera
- 'Kamal Youcef-Toumi'
- Dzmitry Tsetserukou
bibliography:
- 'sample-base.bib'
title: 'RecyGlide: A Forearm-worn Multi-modal Haptic Display aimed to Improve User VR Immersion'
---
<ccs2012> <concept> <concept\_id>10003120.10003121.10003125.10011752</concept\_id> <concept\_desc>Human-centered computing Haptic devices</concept\_desc> <concept\_significance>500</concept\_significance> </concept> </ccs2012>
{width="\textwidth"}
\[fig:teaser\]
Introduction
============
Recently, several VR applications have been proposed in many fields: medicine, design, marketing, etc., and new methods or instruments are needed to improve user immersion. Haptics provides a solution, enhancing the user experience through stimuli [@Han:2018:HAM:3281505.3281507]. Most haptic devices are located on the palm or fingers. However, in some cases the haptic device position is a problem, because VR hand tracking systems need either a free hand (Leapmotion) or a hand-held instrument (HTC) to recognize the position. Therefore, we propose a novel multimodal haptic display located on the forearm.
Various haptic devices with monomodal stimuli have been developed for the forearm [@Dobbelstein:2018:MSM:3267242.3267249; @Moriyama:2018:DWH:3267782.3267795]. However, they face a persistent problem: users have more difficulty perceiving a stimulus on the forearm. The forearm is a disadvantageous zone since it does not have as many nerve endings as the palm or fingertips. Consequently, our device produces multiple stimuli to improve user perception. Furthermore, multimodal stimuli experiments have been performed on users’ hands, where the results show improvements in pattern recognition [@10.1007/978-981-13-3194-7_33].
RecyGlide is a novel forearm haptic display that provides multimodal stimuli. It consists of one inverted five-bar linkage 2-DoF system installed parallel to the radius, which produces a sliding force along the user’s forearm. The second stimulus is vibration; two vibration motors are placed at the device’s edges (see Fig. 2 (b)). The schematic and 3D representation are shown in Fig. 2.
The user study aims to identify the advantages of multimodal stimuli in comparison with monomodal sensations. The experiment consists of multimodal and monomodal pattern recognition by users. Our hypothesis is that two tactile channels could improve the perception of patterns.
Device Development
==================
$RecyGlide$ provides the sensations of vibration, contact at one point, and sliding on the user’s forearm. The location of the haptic contact point is determined using the kinematic model of inverted five-bar linkages, inspired by $LinkTouch$ technology [@tsetserukou2014]. The second stimulus is generated by two vibration motors located at the extreme sides of the device. The two types of stimuli allow creating different patterns on the forearm. They can also be used to interact with the VR environment, e.g., the sensation of submerging in liquids, the feeling of an animal moving on the forearm, or information delivery about relative location.
The device has 2 DOF defined by the action of two servomotors. The slippage stimulus is produced by moving the motors in the same direction. Conversely, moving the motors in opposite directions generates the force stimulus. Combining these basic stimuli creates other sensations such as temperature or pressure.
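This differential mapping can be sketched as follows: the mean of the two servo angles moves the contact point along the forearm, while their difference presses it into the skin. This is an illustrative simplification of the inverted five-bar kinematics, with placeholder gains, not the actual device firmware:

```python
def contact_state(theta1, theta2, k_pos=1.0, k_force=0.5):
    """Map two servo angles (deg) to sliding position and normal force.

    Same-direction motion (theta1 ≈ theta2) slides the contact point;
    opposite-direction motion (theta1 ≈ -theta2) presses it into the skin.
    Gains k_pos and k_force are placeholder calibration constants.
    """
    position = k_pos * (theta1 + theta2) / 2.0   # along the forearm
    force = k_force * (theta1 - theta2)          # into the skin
    return position, force

print(contact_state(30, 30))   # pure slide: (30.0, 0.0)
print(contact_state(10, -10))  # pure press: (0.0, 10.0)
```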
$RecyGlide$’s location is convenient for hand tracking systems. Commercial tracking systems use different principles to achieve their objective, but a device worn on the hand decreases their performance. In the case of visual tracking, a haptic device located on the hand changes the typical shape of the hand, producing poor tracking performance. For systems with trackers, like HTC, it is uncomfortable for the user to wear two devices on the hand. The device has been designed to adapt ergonomically to the user’s forearm, allowing free movement of the hand when working in the virtual reality environment.
The technical characteristics are listed in Table \[char\], including the servomotor type, weight, and materials of the device.
---------------------- -----------------
Motors Hitec HS-40
Weight $\ 95 g$
Material PLA and TPU 95A
Max. normal force at
contact point $2\ N$
---------------------- -----------------
: Technical specification of RecyGlide.[]{data-label="char"}
RecyGlide’s electronics comprise an Arduino MKR1000, servomotors, and vibration motors. This Arduino model is well suited to IoT applications because of its WiFi module. The Arduino is in charge of signal generation for the motors and of WiFi TCP/IP communication with the computer. The device maintains constant WiFi communication with the computer through a Python script, and a virtual socket transmits the data from Python to the VR application in Unity.
For the experiment, a TCP/IP server was implemented on the Arduino. The server provides direct access to the device, avoiding the use of a computer, which also makes it possible to build cellphone apps.
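On the computer side, the link can be a short Python TCP client that sends pattern commands to the Arduino server. A minimal sketch, where the IP address, port, and newline-terminated command format are assumptions rather than the actual protocol:

```python
import socket

ARDUINO_ADDR = ("192.168.1.50", 5000)  # placeholder IP/port of the MKR1000 server

def send_pattern(pattern_id: str) -> None:
    """Send a single pattern command (e.g. 'SDV') and close the connection."""
    with socket.create_connection(ARDUINO_ADDR, timeout=2.0) as sock:
        sock.sendall((pattern_id + "\n").encode("ascii"))

# send_pattern("LDV")  # would trigger the Large Distance with Vibration pattern
```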
User Study
==========
The objective of the following experiment is to analyze the user’s perception and recognition of patterns when monomodal and multimodal stimuli are rendered on the forearm, and to determine whether multimodal stimuli increase the user’s perception of the contact point position. The first user experience is a contact stimulus over the cutaneous area produced by the sliding action of the contact point generated by the inverted five-bar linkage device. The second user experience implements stimuli through the combination of vibration motors and the sliding of the contact point generated by the inverted five-bar linkage device. The results of the experiment will help to understand whether multimodal stimuli improve the perception of the position generated by the *RecyGlide* device.
Experimental Design
-------------------
For this experiment, a pattern bank of 6 different combinations of displacement and vibratory signals was designed, shown in Fig. \[fig:patterns\]. The first group of patterns, A (Small Distance (SD): progress of 25%), B (Medium Distance (MD): progress of 50%) and C (Large Distance (LD): progress of 75%), delivers monomodal stimuli; the second group, D (Small Distance with vibration (SDV): progress of 25%), E (Medium Distance with vibration (MDV): progress of 50%) and F (Large Distance with vibration (LDV): progress of 75%), includes multimodal stimuli. The vibration is delivered progressively according to the position of the contact point: the nearer the contact point is to the edge where a vibration motor is located, the higher that motor’s vibration frequency; when the contact point is at the middle, the vibration is the same in both motors. The highest frequency used in the vibration motors is equal to $500 Hz$. The sliding speed of the contact point over the skin is constant at $23 mm/s$.
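The progressive vibration can be modeled as a linear crossfade between the two edge motors, capped at the 500 Hz maximum. This is an illustrative model of the described behavior, not the firmware implementation:

```python
F_MAX = 500.0  # Hz, maximum vibration frequency

def motor_frequencies(progress):
    """Vibration frequencies (Hz) of the two edge motors for a contact-point
    position given as progress in [0, 1] along the forearm.

    The motor nearer the contact point vibrates faster; at mid-travel
    (progress = 0.5) both motors vibrate equally.
    """
    assert 0.0 <= progress <= 1.0
    return F_MAX * (1.0 - progress), F_MAX * progress

print(motor_frequencies(0.25))  # (375.0, 125.0): Small Distance patterns
print(motor_frequencies(0.5))   # (250.0, 250.0): equal at mid-travel
```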
Experimental Setup
------------------
The user was asked to sit in front of a desk and to wear the $RecyGlide$ device on the right forearm. The device was connected to one Arduino MKR1000. From the python console, the six patterns were sent to the micro-controller through TCP/IP communication. To reduce external stimuli, the users wore headphones playing white noise. A physical barrier blocked the users’ view of their right arm. Before each section of the experiment, a training session was conducted, where all the patterns were delivered to the users five times. Six volunteers completed the experiment, two women and four men, with an average age of 27 years. Each pattern was delivered on their forearm five times in random order.
Results
-------
To summarize the data obtained in the experiment, a confusion matrix is tabulated in Table \[confussion\]. The diagonal terms of the confusion matrix indicate the percentage of correct responses of the subjects.
-- --------------------------------- -------- -------- -------- -------- -------- ---------
SD MD LD SDV MDV LDV
Small Distance (SD) **77** 20 3 0 0 0
Medium Distance (MD) 0 **80** 20 0 0 0
Large Distance (LD) 3 3 **93** 0 0 0
Small Distance Vibration (SDV) 0 0 0 **97** 0 3
Medium Distance Vibration (MDV) 0 0 0 3 **93** 3
Large Distance Vibration (LDV) 0 0 0 0 0 **100**
-- --------------------------------- -------- -------- -------- -------- -------- ---------
: Confusion Matrix for Patterns Recognition.[]{data-label="confussion"}
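The per-pattern recognition rates discussed below can be read directly off the diagonal of Table \[confussion\]; the short check below reproduces them. Note that the 90.6 percent group mean reported in the text is averaged per subject, so the row-averaged diagonal computed here (90.0) differs slightly.

```python
# Rows of the confusion matrix from Table 1 (values in percent);
# row/column order: SD, MD, LD, SDV, MDV, LDV.
confusion = [
    [77, 20,  3,  0,  0,   0],
    [ 0, 80, 20,  0,  0,   0],
    [ 3,  3, 93,  0,  0,   0],
    [ 0,  0,  0, 97,  0,   3],
    [ 0,  0,  0,  3, 93,   3],
    [ 0,  0,  0,  0,  0, 100],
]

# Per-pattern recognition rate = diagonal entry of each row.
rates = [row[i] for i, row in enumerate(confusion)]

# Average recognition rate over the six patterns.
mean_rate = sum(rates) / len(rates)
```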
The results of the experiment revealed that the mean percent correct scores for each subject, averaged over all six patterns, ranged from 80 to 96.7 percent, with an overall group mean of 90.6 percent correct answers. Table 1 shows that the multimodal patterns LDV and SDV have the highest recognition rates, 100 and 97 percent, respectively. On the other hand, patterns MD and SD have the lowest recognition rates, 80 and 77 percent, respectively. For most participants it was difficult to recognize pattern SD, which was usually confused with pattern MD. This demonstrates the need for a more distinctive tactile stimulus (vibration) to improve the recognition rate.
The ANOVA results showed a statistically significant difference in the recognition of the different patterns (F(5, 30) = 3.2432, p = 0.0185 $<$ 0.05). Paired t-tests showed statistically significant differences between SD and SDV (p = 0.041 $<$ 0.05) and between MD and MDV (p = 0.025 $<$ 0.05). These results confirm our hypothesis that multimodal stimuli improve human perception on the forearm for short distances. However, paired t-tests between the long distances with and without vibration did not reveal significant differences; thus, for the perception of long distances, multimodal stimuli are not required.
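The paired t-statistic used above can be sketched with the standard library alone. The per-subject scores below are hypothetical, purely illustrative numbers, not the study’s data; only the form of the test matches the analysis reported.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic for two matched samples of equal length:
    t = mean(d) / (stdev(d) / sqrt(n)), where d are the pairwise differences."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Hypothetical per-subject recognition scores (percent) for the SD and
# SDV patterns -- illustrative only.
sd  = [70, 80, 75, 85, 70, 82]
sdv = [95, 100, 90, 100, 95, 100]
t = paired_t(sd, sdv)  # negative: SDV scores are consistently higher
```

In practice the p-value would come from the t distribution with $n-1$ degrees of freedom (e.g. via `scipy.stats.ttest_rel`).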
Applications
============
For the demonstration of RecyGlide, several applications were developed using the game engine Unity 3D. The SubmergingHand application clearly illustrates how the device improves the user’s immersion in a virtual reality environment. The device conveys the liquid level through the position of the contact point on the forearm. The viscosity of the liquid is represented by the normal force applied at the contact point: when the liquid is viscous, the applied force is higher and the vibration motors work at a higher frequency. The application is shown in Fig. \[fig:sumerg\].
The application “Boundaries Recognition” is an example of how the device helps users perceive the VR environment. VR users habitually pass through object boundaries, ignoring the physical limits of static objects in the scene, such as walls and tables. RecyGlide signals a boundary collision of the hand tracker by activating the vibration motors and moving the contact point to one of the two sides, called the collision side.
These two applications are basic examples that can be extended into more complex ones. Additionally, the patterns can represent the current state of a variable or environment characteristic; for example, they can communicate the tracker battery status, the selected environment, or the distance to an objective.
Conclusions
-----------
We proposed a new haptic device for the forearm and conducted experiments with it. Based on the results of the experiments, we demonstrated that multimodal stimulus patterns are easily recognizable. Therefore, we consider the device suitable for communicating VR messages through the use of patterns.
Though the forearm is not an advantageous area, the device performs well and improves VR realism and user immersion. The feeling of submersion in liquids can be used in numerous applications such as swimming simulators, medical operations, and games. Boundary collision detection is useful for all kinds of VR applications because collisions are a constant problem in VR environments.
Xsan
Xsan is Apple Inc.'s storage area network (SAN) or clustered file system for macOS. Xsan enables multiple Mac desktop and Xserve systems to access shared block storage over a Fibre Channel network. With the Xsan file system installed, these computers can read and write to the same storage volume at the same time. Xsan is a complete SAN solution that includes the metadata controller software, the file system client software, and integrated setup, management and monitoring tools.
Xsan has all the normal features to be expected in an enterprise shared disk file system, including support for large files and file systems, multiple mounted file systems, metadata controller failover for fault tolerance, and support for multiple operating systems.
Interoperability
Xsan is based on the StorNext File System made by Quantum Corporation. The StorNext File System and the Xsan file system share the same file system layout and the same protocol when talking to the metadata server. They also appear to share a common code base, or at least closely coordinated development, judging by the new features introduced to both file systems.
The Xsan website claims complete interoperability with the StorNext File System: "And because Xsan is completely interoperable with Quantum’s StorNext File System, you can even provide clients on Windows, Linux, and other UNIX platforms with direct Fibre Channel block-level access to the data in your Xsan-managed storage pool."
Quantum Corporation claims: "Complete interoperability with Apple’s Xsan and Promise RAID and Allows Xsan and Xserve RAID to support AIX, HP-UX, IRIX, Red Hat Linux, SuSE Linux, Mac OS X, Solaris, and Windows clients, including support for 64 Bit Windows and Windows Vista."
Some of the command line tools for Xsan begin with the letters cv, which stand for CentraVision – the original name for the file system. Xsan clients use TCP ports 49152–65535, with TCP/63146 frequently showing in log files.
Data representation
Xsan file system uses several logical storages to distribute information. The two main classes of information appear on Xsan: the user data (such as files) and the file system metadata (such as folders, file names, file allocation information and so on). Most configurations use different storages for data and metadata.
The file system supports dynamic expansion and distribution of both data and metadata areas.
History
On January 4, 2005, Apple announced shipping of Xsan.
In May 2006, Apple released Xsan 1.2 with support for volume sizes of nearly 2 petabytes.
On August 7, 2006, Apple announced Xsan 1.4, which is available for Intel-based Macintosh computers as a Universal binary and supports file system access control lists.
On December 5, 2006, Apple released Xsan 1.4.1.
On October 18, 2007, Apple released Xsan 1.4.2, which resolves several reliability and compatibility issues.
On February 19, 2008, Apple released Xsan 2, the first major update, which introduces MultiSAN, and completely redesigned administration tools. 2.1 was introduced on June 10, 2008. 2.1.1 was introduced on October 15, 2008. 2.2 was released September 14, 2009.
On July 20, 2011, Apple released Xsan 2.3, included in Mac OS X Lion. This was the first version of Xsan included with macOS.
On August 25, 2011, Apple released Xsan 2.2.2, which brought along several reliability fixes.
On July 25, 2012, Apple released Xsan 3, included in OS X Mountain Lion.
On October 17, 2014, Apple released Xsan 4 with OS X Yosemite.
On September 20, 2016, Apple released Xsan 5 with macOS Sierra and macOS Server 5.2.
References
Krypted.com Xsan Tutorials and Documentation
External links
Apple's Xsan page
Category:Shared disk file systems
Category:Apple Inc. file systems
Category:Apple Inc. software
Sandusky Sent Down River
After a Commonwealth of Pennsylvania Corrections Department review, convicted child molester and former Penn State assistant coach Jerry Sandusky has been sent to the Greene State Prison, where he will serve out his sentence (and probably, his life) in protective custody. He’s still pursuing appeals, which no one expects to go anywhere.
Greene is a maximum security prison, classified as a “Supermax”, which contains a Death Row. Lifers there include Philadelphia serial killer Juan Covington, and three convicts await execution. This is one tough place.
“We make individual decisions based on facts,” Corrections Secretary John Wetzel said in a written statement. “Given the high-profile nature of this individual, coupled with the nature of his crimes, this makes him very vulnerable in a prison setting.”
Noooo kidding, John!
(We all know from watching TV crime dramas what happens to guys like Jer in da Big House.)
Just how effective will the security measures be? Better be Biohazard Level 5 containment for Ol’ Jer.
He will not have a cellmate and will be subjected to heightened supervision and an escort when not in his cell. He will get an hour of individual exercise five days a week and three showers a week — alone, save for the escort. He will eat meals in his cell. All other services, including religion, medications, and treatment programming will be conducted in his cell.
All visits will be non-contact. No touching of or by the Tickle Monster.
Sandusky’s legal representatives did not return phone calls.
This is close to home for “Jer”. His home town of “Little Washington” is a half-hour north on I-79. A further half-hour north lies the thriving, post-ferrous metropolis of Pittsburgh.
The State Correctional Institution at Greene, as it is formally known, is a maximum-security prison that houses a total of 1,800 inmates and employs 700 people.
Friends' Blogs
Whodat Turkey?
The Nittany Turkey is a retired techno-geek who thinks he knows something about Penn State football and everything else in the world. If there's a topic, we have an opinion on it, and you know what "they" say about opinions! Most of what is posted here involves a heavy dose of hip-shooting conjecture, but unlike some other blogs, we don't represent it as fact. Read More…
---
abstract: 'We give a detailed analysis of the proportion of elements in the symmetric group on $n$ points whose order divides $m$, for $n$ sufficiently large and $m\geq n$ with $m=O(n)$.'
address: |
School of Mathematics and Statistics,\
University of Western Australia,\
Nedlands, WA 6907\
Australia.
author:
- 'Alice C. Niemeyer'
- 'Cheryl E. Praeger'
date: '31 March 2006.'
title: On Permutations of Order Dividing a Given Integer
---
Introduction
============
The study of orders of elements in finite symmetric groups goes back at least to the work of Landau [@Landau09 p. 222] who proved that the maximum order of an element of the symmetric group $S_n$ on $n$ points is $e^{(1+o(1))(n\log n)^{1/2}}$. Erdős and Turán took a probabilistic approach in their seminal work in the area, proving in [@ErdosTuran65; @ErdosTuran67] that, for a uniformly distributed random element $g\in S_n$, the random variable $\log|g|$ is normally distributed with mean $(1/2) \log^2n$ and standard deviation $\frac{1}{\sqrt{3}} \log^{3/2}(n)$. Thus most permutations in $S_n$ have order considerably larger than $O(n)$. Nevertheless, permutations of order $O(n)$, that is, of order at most $cn$ for some constant $c$, have received some attention in the literature. Let $P(n,m)$ denote the proportion of permutations $g\in S_n$ which satisfy $g^m = 1$, that is to say, $|g|$ divides $m$. In 1952 Chowla, Herstein and Scott [@Chowlaetal52] found a generating function and some recurrence relations for $P(n,m)$ for $m$ fixed, and asked for its asymptotic behaviour for large $n$. Several years later, Moser and Wyman [@MoserWyman55; @MoserWyman56] derived an asymptotic for $P(n,m)$, for a fixed prime number $m$, expressing it as a contour integral. Then in 1986, Wilf [@Wilf86] obtained explicitly the limiting value of $P(n,m)$ for an arbitrary fixed value of $m$ as $n\rightarrow\infty$, see also the paper [@Volynets] of Volynets. Other authors have considered equations $g^m=h$, for a fixed integer $m$ and $h\in S_n$, see [@BouwerChernoff85; @GaoZha; @MineevPavlov76a; @MineevPavlov76b].
However in many applications, for example in [@Bealsetal03], the parameters $n$ and $m$ are linearly related, so that $m$ is unbounded as $n$ increases. For the special case where $m=n$, Warlimont [@Warlimont78] showed in 1978 that most elements $g\in S_n$ satisfying $g^n=1$ are $n$-cycles, namely he proved that $P(n,n)$, for $n$ sufficiently large, satisfies $$\frac{1}{n} + \frac{2c}{n^2} \le P(n,n) \le \frac{1}{n} + \frac{2c}{n^2} +
O\left(\frac{1}{n^{3-o(1)}}\right)$$ where $c =1$ if $n$ is even and $c=0$ if $n$ is odd. Note that the proportion of $n$-cycles in $S_n$ is $1/n$ and, if $n$ is even, the proportion of elements that are a product of two cycles of length $n/2$ is $2/n^2$. Warlimont’s result proves in particular that most permutations satisfying $g^n=1$ are $n$-cycles. More precisely it implies that the conditional probability that a random element $g\in S_n$ is an $n$-cycle, given that $g^n
=1$, lies between $1-2c n^{-1} - O(n^{-2+o(1)})$ and $1-2c n^{-1} +
O(n^{-2})$.
The main results of this paper, Theorems \[leadingterms\] and \[bounds\], generalise Warlimont’s result, giving a detailed analysis of $P(n,m)$ for large $n$, where $m=O(n)$ and $m\geq n$. For this range of values of $n$ and $m$, we have $rn\leq m<(r+1)n$ for some positive integer $r$, and we analyse $P(n,m)$ for $m$ in this range, for a fixed value of $r$ and $n\rightarrow\infty$. It turns out that the kinds of elements that make the largest contribution to $P(n,m)$ depend heavily on the arithmetic nature of $m$, for example, on whether $m$ is divisible by $n$ or by $r+1$. We separate out several cases in the statement of our results. Theorem \[leadingterms\] deals with two cases for which we give asymptotic expressions for $P(n,m)$. The first of these reduces in the case $m=n$ to Warlimont’s theorem [@Warlimont78] (modulo a small discrepancy in the error term). For other values of $m$ lying strictly between $rn$ and $(r+1)n$ we obtain in Theorem \[bounds\] only an upper bound for $P(n,m)$, since the exact value depends on both the arithmetic nature and the size of $m$ (see also Remark \[remark:leadinterms\]).
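For small $n$, $P(n,m)$ can be computed exactly by exhaustive enumeration, which is useful for sanity-checking the asymptotics above. This is an illustrative sketch, not part of the paper, and is feasible only for $n$ up to about 8; it uses the characterisation that $g^m=1$ if and only if every cycle length of $g$ divides $m$.

```python
from itertools import permutations
from math import factorial

def cycle_lengths(perm):
    """Cycle lengths of perm, given as a tuple with perm[i] = image of i."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start not in seen:
            length, i = 0, start
            while i not in seen:
                seen.add(i)
                i = perm[i]
                length += 1
            lengths.append(length)
    return lengths

def P(n, m):
    """Exact proportion of g in S_n with g^m = 1,
    i.e. with every cycle length dividing m."""
    hits = sum(
        all(m % l == 0 for l in cycle_lengths(p))
        for p in permutations(range(n))
    )
    return hits / factorial(n)
```

For example, $P(5,5)=25/120$: the identity plus the $4!=24$ five-cycles.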
\[leadingterms\] Let $n$ and $r$ be positive integers. Then for a fixed value of $r$ and sufficiently large $n$, the following hold.
1. $\displaystyle{
P(n,rn)=\frac{1}{n}+\frac{c(r)}{n^2}
+O\left(\frac{1}{n^{2.5-o(1)}}\right)
}$ where $c(r)=\sum
(1+\frac{i+j}{2r})$ and the sum is over all pairs $(i,j)$ such that $1\leq i,j\leq r^2,
ij =r^2,$ and both $r+i, r+j$ divide $rn$. In particular $c(1)=0$ if $n$ is odd, and $2$ if $n$ is even.
2. If $r=t!-1$ and $m=t!(n-t)=(r+1)n-t\cdot t!$, then $$P(n,m)=\frac{1}{n}+\frac{t+c'(r)}{n^2}+O\left(\frac{1}{n^{2.5-o(1)}}
\right)$$ where $c'(r)=\sum(1+\frac{i+j-2}{2(r+1)})$ and the sum is over all pairs $(i,j)$ such that $1< i,j\leq (r+1)^2,
(i-1)(j-1) =(r+1)^2,$ and both $r+i, r+j$ divide $m$.
\[bounds\] Let $n,m,r$ be positive integers such that $rn< m<(r+1)n$, and ${{\delta}}$ a real number such that $0<{{\delta}}\leq 1/4$. Then for a fixed value of $r$ and sufficiently large $n$, $$P(n,m)\leq \frac{\alpha.(r+1)}{m}+\frac{k(r)}
{n^2}+ O\left(\frac{1}{n^{2.5-2{{\delta}}}}\right)$$where $k(r) = \frac{4(r+3)^4}{r^2}$ and $$\alpha=\left\{\begin{array}{ll}
1&\mbox{if $r+1$ divides $m$ and $n-\frac{m}{r+1}
< \frac{m}{2(r+1)(r+2)-1}$}\\
0&\mbox{otherwise.}
\end{array}\right.$$
\[remark:leadinterms\]
\(a) In Theorem \[leadingterms\](a), the leading term $1/n$ is the proportion of $n$-cycles, while the proportion of permutations containing an $(n-t)$-cycle is $\frac{1}{n-t} = \frac{1}{n} +
\frac{t}{n^2} + O(\frac{1}{n^3})$, which contributes to the first two terms in Theorem \[leadingterms\](b). The terms $\frac{c(r)}{n^2}$ and $\frac{c'(r)}{n^2}$ correspond to permutations in $S_n$ that have two long cycles, and these have lengths $\frac{m}
{r+i}$ and $\frac{m}{r+j}$, for some $(i,j)$ satisfying the conditions in Theorem \[leadingterms\] (a) or (b) respectively, (where $m=rn$ in part (a)).
\(b) In Theorem \[bounds\], if $r+1$ divides $m$ and $n-m/(r+1)<\frac{m}{2(r+1)(r+2)-1}$, then the term $(r+1)/m$ comes from elements containing a cycle of length $m/(r+1)$. The term $\frac{k(r)}{n^2}$ corresponds to permutations with exactly two ‘large’ cycles. More details are given in Remark \[rem:general\].
Our interest in $P(n,m)$ arose from algorithmic applications concerning finite symmetric groups. For example, $n$-cycles in $S_n$ satisfy the equation $g^n=1$, while elements whose cycle structure consists of a 2-cycle and a single additional cycle of odd length $n-t$, where $t = 2$ or $3$, satisfy the equation $g^{2(n-t)} =1$. For an element $g$ of the latter type we can construct a transposition by forming the power $g^{n-t}$. In many cases the group $S_n$ is not given as a permutation group in its natural representation, and, while it is possible to test whether an element $g$ satisfies one of these equations, it is often impossible to determine its cycle structure with certainty. It is therefore important to have lower bounds on the conditional probability that a random element $g$ has a desired cycle structure, given that it satisfies an appropriate equation. Using Theorem \[leadingterms\], we obtained the following estimates of various conditional probabilities.
\[cdnlprobs1\] Let $r, n$ be positive integers and let $g$ be a uniformly distributed random element of $S_n$. Then for a fixed value of $r$ and sufficiently large $n$, the following hold, where $c(r)$ and $c'(r)$ are as in Theorem $\ref{leadingterms}$.
1. The conditional probability $P$ that $g$ is an $n$-cycle, given that $|g|$ divides $rn$, satisfies $$\begin{aligned}
1-\frac{c(r)}{n}-O\left(\frac{1}
{n^{1.5-o(1)}}\right)&\leq& P
\leq 1-\frac{c(r)}{n}+O\left(\frac{1}
{n^{2}}\right).\\\end{aligned}$$
2. If $r=t!-1$, then the conditional probability $P$ that $g$ contains an $(n-t)$-cycle, given that $|g|$ divides $t!(n-t)$, satisfies $$\begin{aligned}
1-\frac{c'(r)}{n}-O\left(\frac{1}
{n^{1.5-o(1)}}\right)&\leq& P
\leq 1-\frac{c'(r)}{n}+O\left(\frac{1}
{n^{2}}\right).\\\end{aligned}$$
We note that Theorem \[leadingterms\] improves the upper bound of $(1+o(1))/n$ obtained in [@Bealsetal03 Theorem 3.7], while Corollary \[cdnlprobs1\] improves the corresponding lower bound of $1-o(1)$ of [@Bealsetal03 Theorem 1.3(a)]. These results have been developed and refined further in [@NiemeyerPraeger05b] to derive explicit ‘non-asymptotic’ bounds that hold for all $n$ and can be applied directly to improve the recognition algorithms for $S_n$ and $A_n$ in [@Bealsetal03].
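The conditional probabilities of Corollary \[cdnlprobs1\] can likewise be checked exhaustively for small $n$ (an illustrative sketch; the asymptotic estimates themselves only take hold for large $n$):

```python
from itertools import permutations

def cycle_lengths(perm):
    """Cycle lengths of perm, given as a tuple with perm[i] = image of i."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start not in seen:
            length, i = 0, start
            while i not in seen:
                seen.add(i)
                i = perm[i]
                length += 1
            lengths.append(length)
    return lengths

def conditional_ncycle(n, m):
    """P(g is an n-cycle | |g| divides m), by enumeration over S_n."""
    satisfy = ncycles = 0
    for p in permutations(range(n)):
        ls = cycle_lengths(p)
        if all(m % l == 0 for l in ls):
            satisfy += 1
            ncycles += ls == [n]
    return ncycles / satisfy
```

For $n=m=5$ this gives $24/25$: of the 25 elements with $g^5=1$, all but the identity are 5-cycles.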
[**Commentary on our approach**]{}
Warlimont’s proof in [@Warlimont78] of an upper bound for $P(n,n)$ and the proof of [@Bealsetal03 Theorem 3.7] by Beals and Seress of an upper bound for $P(n,m)$ for certain values of $m$, rely on dividing the elements of $S_n$ into disjoint unions of smaller sets. Warlimont divides the elements according to how many ‘large’ cycles a permutation contains. Fix a real number $s$ such that $1/2 < s <
1$. We say that a cycle of a permutation in $S_n$ is *$s$-small* if its length is strictly less than $n^s$, and is *$s$-large* otherwise. Beals and Seress divide the elements according to the number of cycles in which three specified points lie. Both strategies are sufficient to prove Warlimont’s result or the slightly more general results of [@Bealsetal03 Theorem 3.7]. However, neither is sufficient to prove the general results in this paper. In particular, Warlimont’s approach breaks down when trying to estimate the proportion of elements with no or only one large cycle, which is perhaps why no progress has been made since his paper [@Warlimont78] towards answering Chowla, Herstein and Scott’s original question about the asymptotic behaviour of $P(n,m)$ for large $n$. One of the key ideas that allowed us to generalise Warlimont’s work is the insight that the number of permutations which contain no $s$-large cycles can be estimated by considering their behaviour on three specified points. Another important strategy is our careful analysis of elements containing only one large cycle by separating out divisors of $m$ which are very close to $n$.
We regard Theorem \[lem:props\] below as the main outcome of the first stage of our analysis. It is used in the proof of Theorem \[leadingterms\]. The statement of Theorem \[lem:props\] involves the number $d(m)$ of positive divisors of $m$, and the fact that $d(m)=m^{o(1)}$, see Notation \[notation\] (c). It estimates the proportion $P_0(n,m)$ of elements of $S_n$ of order dividing $m$ and having no $s$-large cycles.
\[lem:props\] Let $n,m$ be positive integers such that $m\geq n$, and let $s$ be a positive real number such that $1/2<s<1$. Then, with $P_0(n,m)$ as defined above, there is a constant $c$ such that $$P_0(n,m)<\frac{c d(m)m^{2s}}{n^3}=O\left(\frac{m^{2s+o(1)}}{n^3}\right).$$
Theorem \[lem:props\] is proved in Section \[sec:proportions\] and the other results are proved in Section \[sec:stheo\].
Proof of Theorem \[lem:props\] {#sec:proportions}
==============================
In this section we introduce some notation that will be used throughout the paper, and we prove Theorem \[lem:props\]. Note that the order $|g|$ of a permutation $g \in S_n$ divides $m$ if and only if the length of each cycle of $g$ divides $m$. Thus $P(n,m)$ is the proportion of elements in $S_n$ all of whose cycle lengths divide $m$. As indicated in the introduction, we estimate $P(n,m)$ by partitioning this proportion in various ways. Sometimes the partition is according to the number of large cycle lengths, and at other times it is defined in terms of the cycles containing certain points. We specify these partitions, and give some other notation, below.
\[notation\]
The numbers $n,m$ are positive integers, and the symmetric group $S_n$ acts naturally on the set $\Omega=\{1,2,\dots,n\}$.
1. $s$ is a real number such that $1/2 < s < 1$. A divisor $d$ of $m$ is said to be $s$-*large* or $s$-*small* if $d \geq m^{s}$ or $d < m^s$, respectively; $D_\ell$ and $D_s$ denote the sets of all $s$-large and $s$-small divisors $d$ of $m$, respectively, such that $d \le n$.
2. For $g\in S_n$ with order dividing $m$, a $g$-cycle of length $d$ is called $s$-*large* or $s$-*small* according as $d$ is an $s$-large or $s$-small divisor of $m$.
3. $d(m)$ denotes the number of positive divisors of $m$ and $\delta$ and $c_\delta$ are positive real numbers such that $\delta < s$ and $d(m) \le c_\delta m^{\delta}$ for all $m \in {\bf{N}}$.
4. The following functions of $n$ and $m$ denote the proportions of elements $g\in S_n$ of order dividing $m$ and satisfying the additional properties given in the last column of the table below.
--------------------- ---------------------------------------------
$P_0(n,m)$ all $g$-cycles are $s$-small
${P_0^{(1)}}(n,m)$ all $g$-cycles are $s$-small and
$1,2,3$ lie in the same $g$-cycle,
${P_0^{(2)}}(n,m)$ all $g$-cycles are $s$-small and
$1,2,3$ lie in exactly two $g$-cycles
${P_0^{(3)}}(n,m)$ all $g$-cycles are $s$-small and
$1,2,3$ lie in three different $g$-cycles
$P_1(n,m)$ $g$ contains exactly one $s$-large cycle
$P_2(n,m)$ $g$ contains exactly two $s$-large cycles
$P_3(n,m)$ $g$ contains exactly three $s$-large cycles
${P_{\geq 4}}(n,m)$ $g$ contains at least four $s$-large cycles
--------------------- ---------------------------------------------
With respect to part (c) we note, see [@NivenZuckermanetal91 pp. 395-396], that for each $\delta >
0$ there exists a constant $c_\delta > 0$ such that $d(m) \le c_\delta
m^\delta$ for all $m \in {\bf{N}}.$ This means that the parameter $\delta$ can be any positive real number and in particular that $d(m) = m^{o(1)}.$
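The divisor function $d(m)$ appearing in these bounds is straightforward to compute, which makes the growth claim $d(m)=m^{o(1)}$ easy to explore numerically (a small stdlib sketch, not from the paper):

```python
def d(m):
    """Number of positive divisors of m, by trial division up to sqrt(m)."""
    count, i = 0, 1
    while i * i <= m:
        if m % i == 0:
            # i and m // i are a divisor pair; a perfect-square root
            # counts only once.
            count += 2 if i * i < m else 1
        i += 1
    return count
```

For instance $d(12)=6$ (divisors 1, 2, 3, 4, 6, 12) and $d(36)=9$, with 6 counted once.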
Note that $$\label{eq-pi}
P_0(n,m) = {P_0^{(1)}}(n,m) + {P_0^{(2)}}(n,m) + {P_0^{(3)}}(n,m)$$ and $$\label{eq-qi}
P(n,m) = P_0(n,m) + P_1(n,m) + P_2(n,m) + P_3(n,m)+{P_{\geq 4}}(n,m).$$ We begin by deriving recursive expressions for the $P_0^{(i)}(n,m)$.
\[lem:theps\] Using Notation $\ref{notation}$, the following hold, where we take $P_0(0,m) = 1.$
1. $\displaystyle{{P_0^{(1)}}(n,m) = \frac{(n-3)!}{n!}
\sum_{d \in D_s,\ d\ge 3}{(d-1)(d-2)}P_0(n-d,m),}$
2. $\displaystyle{
{P_0^{(2)}}(n,m) = \frac{3(n-3)!}{n!}\sum_{\stackrel{d_1, d_2 \in D_s }{2\le
d_2,\ d_1+d_2\le n}} (d_2-1)P_0(n-d_1-d_2,m)}$,
3. $\displaystyle{
{P_0^{(3)}}(n,m) = \frac{(n-3)!}{n!} \sum_{\stackrel{d_1,d_2,d_3\in D_s
}{d_1+d_2+d_3 \le n}}
P_0(n-d_1-d_2 -d_3,m)}$.
We first compute ${P_0^{(1)}}(n,m)$, the proportion of those permutations $g\in S_n$ of order dividing $m$ with all cycles $s$-small, for which the points $1, 2, 3$ are contained in one $g$-cycle, $C$ say, of length $d$ with $d \in D_s$ and $d\geq 3.$ We can choose the remainder of the support set of $C$ in $\binom{n-3}{d-3}$ ways and then the cycle $C$ in $(d-1)!$ ways. The rest of the permutation $g$ can be chosen in $P_0(n-d,m)(n-d)!$ ways. Thus, for a given $d$, the number of such elements is $(n-3)!(d-1)(d-2)P_0(n-d,m)$. We obtain the proportion ${P_0^{(1)}}(n,m)$ by summing over all $d\in D_s$ with $d\geq3$, and then dividing by $n!$, so part (a) is proved.
Next we determine the proportion ${P_0^{(2)}}(n,m)$ of those permutations $g\in S_n$ of order dividing $m$ with all cycles $s$-small, for which one of the points $1, 2, 3$ is contained in a $g$-cycle $C_1$, and the other two of these points are contained in a different $g$-cycle $C_2$. Let $d_1$ and $d_2$ denote the lengths of the cycles $C_1$ and $C_2$, respectively, so $d_1, d_2\in D_s$ and $d_2 \ge 2.$ Firstly we choose the support set of $C_1$ in $\binom{n-3}{d_1-1}$ ways and the cycle $C_1$ in $(d_1-1)!$ ways. Secondly we choose the support set of $C_2$ in $\binom{n-d_1 -2}{d_2-2}$ ways and the cycle $C_2$ in $(d_2-1)!$ ways. Finally, the rest of the permutation $g$ is chosen in $P_0(n-d_1
-d_2,m)(n-d_1-d_2)!$ ways. Thus, for a given pair $d_1, d_2$, the number of these elements is $(n-3)!(d_2-1)P_0(n-d_1-d_2,m)$. Since there are three choices for $C_1\cap\{ 1, 2, 3\}$, we have $$\begin{aligned}
{P_0^{(2)}}(n,m) & = & \frac{3(n-3)!}{n!}\sum_{\stackrel{d_1, d_2 \in D_s}{2\le
d_2,\ d_1+d_2 \le n}} (d_2-1) P_0(n-d_1-d_2,m). \\ \end{aligned}$$ Finally we consider the proportion ${P_0^{(3)}}(n,m)$ of those permutations $g\in S_n$ of order dividing $m$ with all cycles $s$-small, for which each one of the points $1, 2, 3$ is contained in a separate $g$-cycle, say $C_i$ contains $i$ and $C_i$ has length $d_i \in D_s$. We can choose, in order, the support set of $C_1$ in $\binom{n-3}{d_1-1}$ ways and the cycle $C_1$ in $(d_1-1)!$ ways, the support set of $C_2$ in $\binom{n-d_1 -2}{d_2-1}$ ways and the cycle $C_2$ in $(d_2-1)!$ ways, the support set of $C_3$ in $\binom{n-d_1 -d_2 -1}{d_3-1}$ ways and the cycle $C_3$ in $(d_3-1)!$ ways, and the rest of the permutation in $P_0(n-d_1-d_2-d_3,m)(n-d_1-d_2-d_3)!$ ways. The expression for ${P_0^{(3)}}(n,m)$ in part (c) now follows.
Next we derive expressions for the $P_i(n,m)$ and ${P_{\geq 4}}(n,m)$.
\[lem:qi\] Using Notation $\ref{notation}$, and writing $P_0(0,m)=1$,
1. ${\displaystyle P_0(n,m) = \frac{1}{n}\sum_{d\in D_s}
P_0(n-d, m),}$
2. ${\displaystyle P_1(n,m) = \sum_{d\in D_\ell }
\frac{1}{d} P_0(n-d, m)},$
3. ${\displaystyle P_{2}(n,m) = \frac{1}{2} \sum_{d_1, d_2\in D_\ell }
\frac{1}{d_1d_2} P_0(n-d_1-d_2, m)},$ where the sum is over all ordered pairs $(d_1, d_2)$ with $d_1 + d_2
\le n$.
4. ${\displaystyle P_3(n,m) = \frac{1}{6}\sum_{d_1, d_2, d_3
\in D_\ell}
\frac{1}{d_1d_2d_3} P_0(n-d_1-d_2 - d_3, m)}$, where the sum is over all ordered triples $(d_1,d_2,d_3)$ with $d_1 + d_2 + d_3 \le n$.
5. ${\displaystyle {P_{\geq 4}}(n,m) \leq
\frac{1}{24}\sum_{d_1, d_2, d_3,d_4 \in D_\ell}
\frac{1}{d_1d_2d_3d_4} P(n-d_1-d_2 - d_3-d_4, m)}$, where the sum is over all ordered $4$-tuples $(d_1,d_2,d_3,d_4)$ with $d_1 + d_2 + d_3+d_4 \le n$.
For each permutation in $S_n$ of order dividing $m$ and all cycles $s$-small, the point 1 lies in a cycle of length $d$, for some $d\in D_s$. For this value of $d$ there are $\binom{n-1}
{d-1}(d-1)!$ choices of $d$-cycles containing 1, and $P_0(n-d,m)(n-d)!$ choices for the rest of the permutation. Summing over all $d\in D_s$ yields part (a).
The proportion of permutations in $S_n$ of order dividing $m$ and having exactly one $s$-large cycle of length $d$ is $\binom{n}{d}(d-1)! P_0(n-d,m)
(n-d)!/n!$. Summing over all $d\in D_\ell$ yields part (b).
In order to find the proportion of elements in $S_n$ of order dividing $m$ and having exactly two $s$-large cycles we count triples $(C_1, C_2, g)$, where $C_1$ and $C_2$ are cycles of lengths $d_1$ and $d_2$ respectively, $d_1, d_2\in D_\ell$, $g\in S_n$ has order dividing $m$, $g$ contains $C_1$ and $C_2$ in its disjoint cycle representation, and all other $g$-cycles are $s$-small. For a given $d_1, d_2$, we have $\binom{n}{d_1}(d_1-1)!$ choices for $C_1$, then $\binom{n-d_1}{d_2}(d_2-1)!$ choices for $C_2$, and then the rest of the element $g$ containing $C_1$ and $C_2$ can be chosen in $P_0(n-d_1-d_2,m)(n-d_1-d_2)!$ ways. Thus the ordered pair $(d_1,d_2)$ contributes $\frac{n!}{d_1d_2}P_0(n-d_1-d_2,m)(n-d_1-d_2)!$ triples, and each element $g$ with the properties required for part (c) contributes exactly two of these triples. Hence, summing over ordered pairs $d_1, d_2\in D_\ell$ yields (c).
Similar counts are used for parts (d) and (e). For $P_3(n,m), {P_{\geq 4}}(n,m)$ we count 4-tuples $(C_1, C_2,C_3, g)$ and $5$-tuples $(C_1,C_2,C_3,C_4,g)$ respectively, such that, for each $i$, $C_i$ is a cycle of length $d_i$ for some $d_i\in D_\ell$, $g\in S_n$ has order dividing $m$, and $g$ contains all the cycles $C_i$ in its disjoint cycle representation. The reason we have an inequality for ${P_{\geq 4}}(n,m)$ is that in this case each $g$ occurring has at least four $s$-large cycles and hence occurs in at least 24 of the 5-tuples, but possibly more.
We complete this section by giving a proof of Theorem \[lem:props\]. The ideas for its proof were developed from arguments in Warlimont’s paper [@Warlimont78].
\[newPs\] Let $m\geq n\geq3$, and let $s, {{\delta}}$ be as in Notation [\[notation\]]{}. Then $$P_0(n,m) < \frac{(1 + 3c_\delta + c_\delta^2)d(m)m^{2s}}{n(n-1)(n-2)}<
\frac{c'd(m)m^{2s}}{n^3}= O\left(\frac{m^{2s+\delta}}{n^3}\right)$$ where, if $n\geq6$, we may take $$c'=\left\{\begin{array}{ll}
2(1 + 3c_\delta + c_\delta^2)&\mbox{for any $m\geq n$}\\
10&\mbox{if $m\geq c_\delta^{1/(s-\delta)}$.}
\end{array}\right.$$ In particular Theorem [\[lem:props\]]{} is true. Moreover, if in addition $n\geq m^s+cn^a$ for some positive constants $a,c$ with $a\leq 1$, then $P_0(n,m)=O\left(\frac{m^{2s+2{{\delta}}}}{n^{1+3a}}\right)$.
First assume only that $m\geq n\geq3$. Let $D_s$, and $P_0^{(i)}(n,m)$, for $i = 1, 2, 3$, be as in Notation \[notation\]. By (\[eq-pi\]), $P_0(n,m)$ is the sum of the $P_0^{(i)}(n,m)$. We first estimate ${P_0^{(1)}}(n,m).$ By Lemma \[lem:theps\] (a), and using the fact that $d<m^s$ for all $d\in D_s$, $${P_0^{(1)}}(n,m) \le\frac{(n-3)!}{n!}
\sum_{\stackrel{d \in D_s}{d\ge 3}}{(d-1)(d-2)}<
\frac{d(m) m^{2s}}{n(n-1)(n-2)}.$$ Similarly, by Lemma \[lem:theps\] (b), $$\begin{aligned}
{P_0^{(2)}}(n,m) & < & \frac{3(n-3)!}{n!}\sum_{d_1, d_2 \in D_s} (d_2-1)
\le \frac{3d(m)^2m^{s}}{n(n-1)(n-2)}\end{aligned}$$ and by Lemma \[lem:theps\] (c), $$\begin{aligned}
{P_0^{(3)}}(n,m) &<& \frac{(n-3)!}{n!} \sum_{d_1,d_2,d_3\in D_s} 1
\le \frac{d(m)^3}{n(n-1)(n-2)}.\\\end{aligned}$$
Thus, using the fact noted in Notation \[notation\] that $d(m) \le c_\delta m^\delta$, $$\begin{aligned}
P_0(n,m) & \le &
\frac{d(m) \left( m^{2s} +3d(m)m^{s} + d(m)^2\right)
}{n(n-1)(n-2)} \\
&\le&\frac{d(m)m^{2s}\left( 1 +3c_\delta m^{\delta-s} + (c_\delta m^{\delta-s})^2\right)}{
n(n-1)(n-2)}< \frac{c'd(m) m^{2s}}{n^3}.\end{aligned}$$ To estimate $c'$ note first that, for $n\geq6$, $n(n-1)(n-2)> n^3/2$. Thus if $n\geq6$ then, for any $m\geq n$ we may take $c'= 2(1 + 3c_\delta + c_\delta^2).$ If $m\geq c_\delta^{1/(s-\delta)}$, then $c_\delta m^{\delta-s}\leq 1$ and so we may take $c'=10$. Theorem \[lem:props\] now follows since $d(m)=m^{o(1)}$. Now assume that $n\geq m^s+cn^a$ for some positive constants $c$ and $a$. By Lemma \[lem:qi\], $$P_0(n,m)= \frac{1}{n}\sum_{d\in D_s}P_0(n-d, m).$$ For each $d\in D_s$ we have $m>n-d\geq n-m^s\geq cn^a$, and hence applying Theorem \[lem:props\] (which we have just proved), $$P_0(n-d,m) < \frac{c'd(m)m^{2s}}{(n-d)^3}
\leq \frac{c'd(m) m^{2s}}{c^3 n^{3a}}.$$ Thus, $P_0(n,m) \leq \frac{d(m)}{n} \left(\frac{c'd(m)m^{2s}}{c^3n^{3a}}
\right)\le \frac{c'c_\delta^2m^{2s + 2\delta}}{c^3n^{1+3a}}$.
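The recursion for $P_0$ quoted in the proof above is easy to evaluate exactly with rational arithmetic. The following sketch is our own illustration (the function names are ours, and we take $D_s$ to be the set of divisors of $m$ smaller than $m^s$, as in the Notation); for small parameters it can be checked against a direct count. For instance, with $m=6$ and $s=0.6$ the $s$-small divisors are $\{1,2\}$, so $P_0(n,6)$ is just the proportion of involutions (including the identity) in $S_n$.

```python
from fractions import Fraction
from functools import lru_cache

def small_divisors(m, s):
    """Divisors d of m with d < m**s (the set D_s of s-small divisors)."""
    return [d for d in range(1, m + 1) if m % d == 0 and d < m ** s]

def P0(n, m, s=0.6):
    """Proportion of g in S_n of order dividing m with no s-large cycle,
    via the recursion P_0(n,m) = (1/n) * sum_{d in D_s, d <= n} P_0(n-d,m)."""
    Ds = small_divisors(m, s)

    @lru_cache(maxsize=None)
    def rec(k):
        if k == 0:
            return Fraction(1)
        return Fraction(1, k) * sum(rec(k - d) for d in Ds if d <= k)

    return rec(n)

# Sanity check: for m = 6, s = 0.6 we have D_s = {1, 2}, so P_0(n,6) is the
# proportion of involutions (including the identity) in S_n.
print(P0(4, 6))  # 5/12, i.e. 10 of the 24 elements of S_4
```

Using `Fraction` keeps the recursion exact, so small cases can be compared term by term with hand counts.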
Proof of Theorem \[leadingterms\] {#sec:stheo}
=================================
First we determine the ‘very large’ divisors of $m$ that are at most $n$.
\[lem:divat\] Let $r, m$ and $n$ be positive integers such that $rn\le m < (r+1)n$.
1. If $d$ is a divisor of $m$ such that $d \le n$, then one of the following holds:
1. $d=n = \frac{m}{r}$,
2. $d = \frac{m}{r+1}$ so that $\frac{r}{r+1}n \le d < n$,
3. $d \le \frac{m}{r+2}<\frac{r+1}{r+2}n$.
2. Moreover, if $d_1, d_2$ are divisors of $m$ for which $$d_1\le d_2 \le \frac{m}{r+1}\quad \mbox{and}\quad
n \ge d_1 + d_2 > \frac{m(2r+3)}{2(r+1)(r+2)},$$ then $d_1=\frac{m}{c_1}, d_2=
\frac{m}{c_2}$, where $c_1, c_2$ divide $m$, and satisfy $c_2 \le 2r+3$, and either $r+2\leq c_2 \le c_1 < 2(r+1)(r+2)$, or $c_2=r+1$, $c_1\geq r(r+1)$.
As $d$ is a divisor of $m$ there is a positive integer $t$ such that $d = \frac{m}{t}$. Now $\frac{m}{t} \le n \le \frac{m}{r}$ and therefore $r \le t.$ If $r = t$ then $r$ divides $m$ and $d = \frac{m}{r} \le n$, and since also $rn \le m$ it follows that $d = \frac{m}{r}=n$ and (i) holds. If $t \ge r+2$ then (iii) holds. Finally, if $t=r+1$, then $d = \frac{m}{r+1}$ and $\frac{r}{r+1}n \le \frac{m}{r+1} < n$ and hence (ii) holds.
Now we prove the last assertion. Suppose that $d_1, d_2$ are divisors of $m$ which are at most $ \frac{m}{r+1}$, and such that $d_1\leq d_2$ and $n\geq d_1 + d_2 > \frac{m(2r+3)}{2(r+1)(r+2)}$. Then, as $d_1,
d_2$ divide $m$, there are integers $c_1, c_2$ such that $d_1 = m/c_1$ and $d_2 = m/c_2.$ Since $d_i \le m/(r+1)$ we have $c_i \ge r+1$ for $i = 1,2$, and since $d_1\le d_2$ we have $c_1\ge c_2$. Now $m/r \ge n \ge d_1 + d_2 > \frac{m(2r+3)}{2(r+1)(r+2)}$, and hence $1/r \ge 1/c_1 + 1/c_2 > \frac{2r+3}{2(r+1)(r+2)}$. If $c_2 \ge 2(r+2)$ then, as $c_1\ge c_2$, we would have $1/c_1 + 1/c_2 \le 1/(r+2)$, which is not the case. Thus $r+1 \le c_2 \le 2r+3.$ If $c_2\geq r+2$, then $$\frac{1}{c_1}> \frac{2r+3}{2(r+1)(r+2)} - \frac{1}{c_2} \ge
\frac{2r+3}{2(r+1)(r+2)} - \frac{1}{r+2} =
\frac{1}{2(r+1)(r+2)}$$ and hence $c_1 < 2(r+1)(r+2)$ as in the statement. On the other hand, if $c_2=r+1$, then $$\frac{1}{c_1}\leq \frac{n}{m}-\frac{1}{c_2}\leq \frac{1}{r}-\frac{1}{r+1}=\frac{1}{r(r+1)}$$ so $c_1\geq r(r+1)$.
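Part (i) of Lemma \[lem:divat\] is easy to verify exhaustively for small parameters. The sketch below (our own illustration, not part of the proof) classifies every divisor $d\leq n$ of $m$ into the three cases, for all $m$ in the window $[rn,(r+1)n)$.

```python
def classify(r, m, n):
    """For each divisor d <= n of m, record which case of Lemma lem:divat(i)
    holds: (i) d = n = m/r, (ii) d = m/(r+1), or (iii) d <= m/(r+2)."""
    assert r * n <= m < (r + 1) * n
    out = {}
    for d in range(1, n + 1):
        if m % d:
            continue
        if m % r == 0 and d == n == m // r:
            out[d] = "i"
        elif m % (r + 1) == 0 and d * (r + 1) == m:
            out[d] = "ii"
        else:
            assert d * (r + 2) <= m  # case (iii) must hold
            out[d] = "iii"
    return out

# Every divisor d <= 10 of every m in [20, 30) falls into one of the three cases.
for m in range(20, 30):
    classify(2, m, 10)

print(classify(2, 24, 10))  # {1: 'iii', 2: 'iii', 3: 'iii', 4: 'iii', 6: 'iii', 8: 'ii'}
```

The internal `assert` is exactly the trichotomy of the lemma: if it never fires over the whole window, the classification is exhaustive for those parameters.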
The next result gives our first estimate of an upper bound for the proportion $P(n,m)$ of elements in $S_n$ of order dividing $m$. Recall our observation that the parameter $\delta$ in Notation \[notation\](c) can be any positive real number; in Proposition \[prop:general\] we will restrict to $\delta \le s-\frac{1}{2}.$ Note that the requirement $rn\leq m<(r+1)n$ implies that $\frac{n}{r+1}\leq n-\frac{m}{r+1}\leq \frac{m}{r(r+1)}$; the first case of Definition \[def:kr\] (b) below requires an upper bound of approximately half this quantity.
\[def:kr\] Let $r,\, m,\, n$ be positive integers such that $rn\le m < (r+1)n$. Let $1/2<s\leq 3/4$ and $0<{{\delta}}\leq s-\frac{1}{2}$.
- Let $\alpha = \begin{cases} 1 & \mbox{if\ } m=rn,\\
0 & \mbox{otherwise.}
\end{cases}$
- Let $\alpha' = \begin{cases} 1 & \mbox{if\ } (r+1) \mbox{\
divides\ } m \
\mbox{and\ }n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)-1}, \\
0 & \mbox{otherwise.}
\end{cases}$
- Let $t(r,m,n)$ denote the number of divisors $d$ of $m$ with $\frac{m}{2r+3} \leq d\leq\frac{m}{r+1}$ such that there exists a divisor $d_0$ of $m$ satisfying
- $d+d_0\leq n$ and
- $\frac{m}{2(r+1)(r+2)}< d_0\leq d$.
- Let $k(r,m,n)=t(r,m,n)\frac{2(r+1)(r+2)(2r+3)}{r^2}.$
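The quantities in Definition \[def:kr\] are all elementary to compute. The sketch below (ours; an illustration of the definition, with `Fraction` used to keep the thresholds exact) evaluates $t(r,m,n)$ and $k(r,m,n)$, and checks the bound $t(r,m,n)\leq r+3$ asserted in the next proposition.

```python
from fractions import Fraction

def divisors(m):
    return [d for d in range(1, m + 1) if m % d == 0]

def t_value(r, m, n):
    """t(r,m,n): divisors d of m with m/(2r+3) <= d <= m/(r+1) admitting a
    divisor d0 of m with d + d0 <= n and m/(2(r+1)(r+2)) < d0 <= d."""
    lo0 = Fraction(m, 2 * (r + 1) * (r + 2))
    count = 0
    for d in divisors(m):
        if not (Fraction(m, 2 * r + 3) <= d <= Fraction(m, r + 1)):
            continue
        if any(d + d0 <= n and lo0 < d0 <= d for d0 in divisors(m)):
            count += 1
    return count

def k_value(r, m, n):
    return t_value(r, m, n) * Fraction(2 * (r + 1) * (r + 2) * (2 * r + 3), r * r)

# Example: r = 1 and m = n = 12 (the case m = rn).
print(t_value(1, 12, 12))           # 3  (d = 3, 4 and 6 all qualify)
print(k_value(1, 12, 12))           # 180
assert t_value(1, 12, 12) <= 1 + 3  # the bound t(r,m,n) <= r+3
```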
\[prop:general\] Let $r,\, m,\, n, s$ and $\delta$ be as in Definition [\[def:kr\]]{}. Then, for a fixed value of $r$ and sufficiently large $n$, $$P(n,m) \le \frac{\alpha}{n}+\frac{\alpha'(r+1)}{m}+\frac{k(r,m,n)}{n^2}+
O\left(\frac{1}{n^{1+2s-2{{\delta}}}}
\right),$$ where $\alpha, \alpha', t(r, m, n)$ and $k(r, m, n)$ are as in Definition $\ref{def:kr}.$ Moreover, $t(r,m,n) \le r+3$ and $k(r,m,n) \le
\frac{4(r+3)^4}{r^2} $.
\[rem:general\]
\(a) The term $\frac{1}{n}$, which occurs if and only if $m=rn$, corresponds to the $n$-cycles in $S_n$, and is the exact proportion of these elements. We refine the estimate for $P(n,rn)$ in Theorem \[rn\] below.
\(b) The term $\frac{r+1}{m}$, which occurs only if $r+1$ divides $m$ and $n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)-1}$, corresponds to permutations with order dividing $m$ and having either one or two $s$-large cycles, with one (the larger in the case of two cycles) of length $\frac{m}{r+1}$. The proportion of elements of $S_n$ containing a cycle of length $\frac{m}{r+1}$ is $\frac{r+1}{m}$, and if there exists a positive integer $d\leq n-\frac{m}{r+1}$ such that $d$ does not divide $m$, then some of these elements have a $d$-cycle and hence do not have order dividing $m$. Thus $\frac{r+1}{m}$ may be an over-estimate for the proportion of elements in $S_n$ (where $n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)-1}$) having order dividing $m$, having exactly one $s$-large cycle of length $\frac{m}{r+1}$, and possibly one additional $s$-large cycle of length dividing $m$. However it is difficult to make a more precise estimate for this term that holds for all sufficiently large $m,n$. In Theorem \[rn\] we treat some special cases where this term either does not arise, or can be determined precisely.
\(c) The term $\frac{k(r,m,n)}{n^2}$ arises as follows from permutations that have exactly two $s$-large cycles of lengths dividing $m$. For each of the $t(r,m,n)$ divisors $d$ of $m$ as in Definition \[def:kr\](c), let $d_0(d)$ be the largest of the divisors $d_0$ satisfying Definition \[def:kr\](c)(i),(ii). Note that $d_0(d)$ depends on $d$. Then $k(r,m,n)/n^2$ is an upper bound for the proportion of permutations of order dividing $m$ and having two $s$-large cycles of lengths $d$ and $d_0(d)$, for some $d$ satisfying $\frac{m}{2r+3} \leq d\leq\frac{m}{r+1}$. As in (b) this term may be an over-estimate, not only for the reason given there, but also because lower bounds for the cycle lengths $d, d_0(d)$ were used to define $k(r,m,n)$. Indeed in the case $m=rn$ we are able to obtain the exact value of the coefficient of the $\frac{1}{n^2}$ summand.
We divide the estimation of $P(n,m)$ into five subcases. Recall that, by (\[eq-qi\]), $P(n,m)$ is the sum of ${P_{\geq 4}}(n,m)$ and the $P_i(n,m)$, for $i=0,1,2,3$, where these are as defined in Notation \[notation\]. We will use the recursive formulae for ${P_{\geq 4}}(n,m)$ and the $P_i(n,m)$ in Lemma \[lem:qi\], together with the expressions for $P_0(n,m)$ in Theorem \[lem:props\] and Lemma \[newPs\], to estimate these five quantities. Summing these estimates will give, by (\[eq-qi\]), our estimate for $P(n,m)$. We also use the information about divisors of $m$ in Lemma \[lem:divat\].
First we deal with $P_0(n,m)$. Since $r$ is fixed, it follows that, for sufficiently large $n$ (and hence sufficiently large $m$), we have $m^s
\leq \frac{m}{r+2}$, which is less than $\frac{(r+1)n}{r+2}=n-\frac{n}{r+2}$. Thus $n>m^s+\frac{n}{r+2}$, and applying Lemma \[newPs\] with $a=1, c=\frac{1}{r+2}$, it follows that $$P_0(n,m)=O\left(\frac{m^{2s+2{{\delta}}}}{n^4}\right)=O\left(\frac{1}{n^{4-2s-
2{{\delta}}}}\right)\leq O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$$ since $4-2s-2{{\delta}}\geq 1+2s-2{{\delta}}$ when $s\leq 3/4$.
Next we estimate $P_3(n,m)$ and ${P_{\geq 4}}(n,m)$. By Lemma \[lem:qi\], the latter satisfies ${P_{\geq 4}}(n,m)\leq \frac{1}{24}\sum\frac{1}{d_1d_2d_3d_4}$, where the summation is over all ordered 4-tuples of $s$-large divisors of $m$ whose sum is at most $n$. Thus ${P_{\geq 4}}(n,m)\leq \frac{1}{24}\,\frac{d(m)^4}{m^{4s}}=
O\left(\frac{1}{n^{4s-4{{\delta}}}}\right)$. Also $$P_3(n,m)= \frac{1}{6}\sum
\frac{1}{d_1d_2d_3}P_0(n-d_1-d_2-d_3,m),$$ where the summation is over all ordered triples of $s$-large divisors of $m$ whose sum is at most $n$. For such a triple $(d_1,d_2,d_3)$, if each $d_i\leq\frac{m}
{4(r+1)}$, then $n-\sum d_i\geq n-\frac{3m}{4(r+1)}>\frac{n}{4}$, and so by Lemma \[newPs\], $P_0(n-\sum d_i,m)=O\left(\frac{m^{2s+{{\delta}}}}{n^{3}}
\right)$. Thus the contribution of triples of this type to $P_3(n,m)$ is at most $O\left(\frac{d(m)^3m^{2s+{{\delta}}}}{m^{3s}n^3}
\right)=O\left(\frac{1}{n^{3+s-4{{\delta}}}}\right)$. For each of the remaining triples, the maximum $d_i$ is greater than $\frac{m}{4(r+1)}$ and in particular there is a bounded number of choices for the maximum $d_i$. Thus the contribution of the remaining triples to $P_3(n,m)$ is at most $O\left(\frac{d(m)^2}{m^{1+2s}}
\right)=O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. It follows that $$P_3(n,m)+{P_{\geq 4}}(n,m)=O\left(\frac{1}{n^{x_3}}\right),$$ where $x_3=\min\{4s-4{{\delta}},3+s-4{{\delta}},1+2s-2{{\delta}}\}=1+2s-2{{\delta}}$ (using the fact that ${{\delta}}\leq s-\frac{1}{2}\leq \frac{1}{4}$).
Now we estimate $P_2(n,m)$. By Lemma \[lem:qi\], $$P_{2}(n,m)= \frac{1}{2}\sum
\frac{1}{d_1d_2}P_0(n-d_1-d_2,m),$$ where the summation is over all ordered pairs of $s$-large divisors of $m$ whose sum is at most $n$. We divide these pairs $(d_1,d_2)$ into two subsets. The first subset consists of those for which $n- d_1-d_2\geq n^\nu$, where $\nu=(1+2s+{{\delta}})/3$. Note that $\nu<1$ since $\nu\leq s -\frac{1}{6}<1$ (because ${{\delta}}\leq s-\frac{1}{2}$ and $s\leq \frac{3}{4}$). For a pair $(d_1,d_2)$ such that $n- d_1-d_2\geq n^\nu$, by Lemma \[newPs\], $P_0(n-d_1-d_2,m)=O\left(\frac{m^{2s+{{\delta}}}}{n^{3\nu}}
\right)$. Thus the total contribution to $P_{2}(n,m)$ from pairs of this type is at most $O\left(\frac{d(m)^2m^{2s+{{\delta}}}}{m^{2s}n^{3\nu}}
\right)=O\left(\frac{1}{n^{3\nu-3{{\delta}}}}\right)=O\left(\frac{1}{n^{1+2s-2{{\delta}}}}
\right)$.
Now consider pairs $(d_1,d_2)$ such that $n- d_1-d_2< n^\nu$. Since each $d_i<n\leq m/r$, it follows that each $d_i\leq m/(r+1)$. Since $\nu<1$, for sufficiently large $n$ (and hence sufficiently large $m$) we have $n^\nu\leq \left(\frac{m}{r}
\right)^\nu<\frac{m}{2(r+1)(r+2)}$. Thus, for each of the pairs $(d_1,d_2)$ such that $n- d_1-d_2< n^\nu$, we have $d_1+d_2>n-n^\nu>\frac{m}{r+1}-
\frac{m}{2(r+1)(r+2)}=\frac{m(2r+3)}{2(r+1)(r+2)}$, and hence one of $(d_1,d_2)$, $(d_2,d_1)$ (or both if $d_1=d_2$) satisfies the conditions of Lemma \[lem:divat\] (b). Thus, by Lemma \[lem:divat\] (b), it follows that if $d_1 \le d_2$, then either $(d_0,d):=(d_1, d_2)$ satisfies the conditions of Definition \[def:kr\](c), or $d_2=\frac{m}{r+1}$ and $d_1\leq
\frac{m}{2(r+1)(r+2)}$. Let $P_2'(n,m)$ denote the contribution to $P_2(n,m)$ from all the pairs $(d_1,d_2)$ where $\{d_1,d_2\}=\{
\frac{m}{r+1},d_0\}$ and $d_0 \leq \frac{m}{2(r+1)(r+2)}$.
For the other pairs, we note that there are $t(r,m,n) \le r+3$ choices for the larger divisor $d$. Consider a fixed $d\leq \frac{m}{r+1}$, say $d = \frac{m}{c}.$ Then each divisor $d_0$ of $m$, such that $\frac{m}{2(r+1)(r+2)} < d_0 \le d$ and $d + d_0 \le
n$, is equal to $\frac{m}{c_0}$ for some $c_0$ such that $c \le c_0 < 2(r+1)(r+2)$. Let $d_0(d) = \frac{m}{c_0}$ be the largest of these divisors $d_0.$ By Lemma \[lem:divat\](b), the combined contribution to $P_2(n,m)$ from the ordered pairs $(d,d_0(d))$ and $(d_0(d),d)$ is (since $d$ and $d_0(d)$ may be equal) at most $$\frac{1}{dd_0(d)} < \frac{2r+3}{m} \cdot \frac{2(r+1)(r+2)}{m} =
\frac{2(r+1)(r+2)(2r+3)}{m^2}.$$ (Note that $\frac{1}{dd_0(d)} \ge \frac{(r+1)^2}{m^2} > \frac{1}{n^2}$.) If $d_0=\frac{m}{c'}$ is any other divisor of this type and $d_0 < d_0(d)$, then $c_0+1 \le c' < 2(r+1)(r+2)$, and so $n-d-d_0=(n-d-d_0(d))+d_0(d)-d_0$ is at least $$d_0(d)-d_0=\frac{m}{c_0} - \frac{m}{c'} \ge\frac{m}{c_0} - \frac{m}{c_0+1}=
\frac{m}{c_0(c_0+1)} > \frac{m}{4(r+1)^2(r+2)^2}.$$ By Lemma \[newPs\], the contribution to $P_2(n,m)$ from the pairs $(d,d_0)$ and $(d_0,d)$ is $O( \frac{1}{m^2}\cdot
\frac{m^{2s+\delta}}{m^3}) = O(\frac{1}{n^{5-2s-\delta}})$. Since there are $t(r,m,n) \le r+3$ choices for $d$, and a bounded number of divisors $d_0$ for a given $d$, the contribution to $P_2(n,m)$ from all the pairs $(d_1,d_2)$ such that $n- d_1-d_2< n^\nu$ is at most $$P_2'(n,m) + t(r,m,n) \frac{2(r+1)(r+2)(2r+3)}{n^2r^2}+ O\left(\frac{1}{n^{5-2s-{{\delta}}}}
\right).$$ Thus $$\begin{aligned}
P_2(n,m)&\le& P_2'(n,m) + \frac{2t(r,m,n)(r+1)(r+2)(2r+3)}{n^2r^2}+
O\left(\frac{1}{n^{x_2}}\right) \\
&=& P_2'(n,m) +\frac{k(r,m,n)}{n^2} + O\left(\frac{1}{n^{x_2}}\right)\end{aligned}$$ with $x_2=\min\{1+2s-2{{\delta}},5-2s-{{\delta}}\}=1+2s-2{{\delta}}$. Note that $$k(r,m,n)\leq (r+3)
\frac{2(r+1)(r+2)(2r+3)}{r^2}=4r^2+30r+80+\frac{90}{r}+\frac{36}{r^2}$$ which is less than $\frac{4(r+3)^4}{r^2}$.
Finally we estimate $P_1(n,m)+P'_2(n,m)$. By Lemma \[lem:qi\], $P_1(n,m)=
\sum \frac{1}{d}P_0(n-d,m)$, where the summation is over all $s$-large divisors $d$ of $m$ such that $d\leq n$, and we take $P_0(0,m)=1$. Note that $d\leq n\leq \frac{m}{r}$, so each divisor $d=\frac{m}{c}$ for some $c\geq r$. In the case where $m=rn$, that is, the case where $n$ divides $m$ (and only in this case), we have a contribution to $P_1(n,m)$ of $\frac{1}{n}$ due to $n$-cycles. If $d<n$ then $d=\frac{m}{c}$ with $c\geq r+1$.
Next we consider all divisors $d$ of $m$ such that $d\leq \frac{m}{r+2}$. For each of these divisors, $n-d\geq n - \frac{m}{r+2}\ge n-\frac{(r+1)n}{r+2}
=\frac{n}{r+2}$. Thus by Lemma \[newPs\], $P_0(n-d,m)
= O\left(\frac{m^{2s + \delta}}{n^{3}}\right)
= O\left(\frac{1}{n^{3-2s-\delta}}\right)$. The number of $d$ satisfying $d\geq \frac{m}{2(r+1)}$ is bounded in terms of $r$ (which is fixed), and hence the contribution to $P_1(n,m)$ from all the divisors $d$ satisfying $\frac{m}{2(r+1)}\leq d\leq \frac{m}{r+2}$ is at most $O\left(\frac{1}{m}\,\frac{1}{n^{3-2s-\delta}}\right)=O\left(
\frac{1}{n^{4-2s-\delta}}\right)$. On the other hand, if $m^s\leq d
<\frac{m}{2(r+1)}$, then $n-d>n - \frac{(r+1)n}{2(r+1)} =\frac{n}{2}$. Now since $r$ is fixed and $s<1$, for sufficiently large $n$, we have $m^s<\frac{n}
{4}$, and so $n-d> m^s +\frac{n}{4}$. Then, by Lemma \[newPs\] (applied with $a=1$ and $c=\frac{1}{4}$), $P_0(n-d,m)= O\left(\frac{m^{2s + 2\delta}}{(n-d)^{4}}\right)
= O\left(\frac{1}{n^{4-2s-2\delta}}\right)$, and the contribution to $P_1(n,m)$ from all $s$-large divisors $d< \frac{m}{2(r+1)}$ is at most $\frac{d(m)}{m^s}O\left(\frac{1}{n^{4-2s-2\delta}}\right)=
O\left(\frac{1}{n^{4-s-3\delta}}\right)$. Thus, noting that $\min\{4-2s-{{\delta}},
4-s-3{{\delta}}\}\geq 1+2s-2{{\delta}}$, the contribution to $P_1(n,m)$ from all $s$-large divisors $d$ of $m$ such that $d\leq\frac{m}{r+2}$ is $O\left(\frac{1}{n^{1+2s-2\delta}}\right)$.
By Lemma \[lem:divat\], the only divisor not yet considered is $d=\frac{m} {r+1}$ and this case of course arises only when $r+1$ divides $m$. Suppose then that $r+1$ divides $m$. We must estimate the contribution to $P_1(n,m)+P'_2(n,m)$ from elements containing a cycle of length $d=\frac{m}{r+1}$. The contribution to $P_1(n,m)+P'_2(n,m)$ due to the divisor $d=\frac{m}{r+1}$ is $\frac{r+1}{m}P_0(n-\frac{m}{r+1},m)+\frac{r+1}{m}\sum_{d_0}\frac{1}{d_0}
P_0(n-\frac{m}{r+1}-d_0,m)$, where the summation is over all $s$-large $d_0\leq
\frac{m}{2(r+1)(r+2)}$. Suppose first that $n-\frac{m}{r+1}\geq \frac{m}{2(r+1)(r+2)-1}$, so that for each $d_0$, $n-\frac{m}{r+1}-d_0>\frac{m}{4(r+1)^2(r+2)^2}$. Then, by Lemma \[newPs\], the contribution to $P_1(n,m)+P'_2(n,m)$ is at most $$O\left(\frac{1}{m}\cdot\frac{m^{2s+{{\delta}}}}{m^{3}}\right)
+d(m) O\left(\frac{1}{m^{1+s}}\cdot\frac{m^{2s+{{\delta}}}}{m^{3}}\right)
=O\left(\frac{1}{n^{4-2s-{{\delta}}}}\right)$$ and this is $ O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$ since $4-2s-{{\delta}}\geq 1+2s-2{{\delta}}$. Finally suppose that $n-\frac{m}{r+1} < \frac{m}{2(r+1)(r+2)-1}$. In this case we estimate the contribution to $P_1(n,m)+P'_2(n,m)$ from $d=\frac{m}{r+1}$ by the proportion $\frac{1}{d}=\frac{r+1}{m}$ of elements of $S_n$ containing a $d$-cycle (recognising that this is usually an over-estimate). Putting these estimates together we have $$P_1(n,m)+P'_2(n,m)\leq\frac{\alpha}{n}+\frac{\alpha'(r+1)}{m}+
O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right),$$ where $\alpha=1$ if $m=rn$ and is $0$ otherwise, and $\alpha'=1$ if $r+1$ divides $m$ and $n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)-1}$, and is 0 otherwise. The result now follows using (\[eq-qi\]) and the estimates we have obtained for each of the summands.
It is sometimes useful to separate out the results of Proposition \[prop:general\] according to the values of $m,n$. We do this in the theorem below, and also obtain in parts (a) and (b) exact asymptotic expressions for $P(n,rn)$ and $P(n,t!(n-t))$ where $r, t$ are bounded and $n$ is sufficiently large. For this it is convenient to define two sets of integer pairs.
\[T\][For positive integers $r$ and $m$, define the following sets of integer pairs: $$\mathcal{T}(r)=\{(i,j)\,|\, 1\leq i,j\leq r^2, ij
=r^2,\ \mbox{and both}\ r+i, r+j\ \mbox{divide}\ m\}$$ and $\mathcal{T}'(r)=\{(i,j)\,|\, 1< i,j\leq (r+1)^2,
(i-1)(j-1) =(r+1)^2,$ and both $r+i, r+j\ \mbox{divide}\ m\}.
$ ]{}
\[rn\] Let $n,m,r$ be positive integers such that $rn\leq m<(r+1)n$. Let $1/2<s\leq 3/4$ and $0<{{\delta}}\leq s-1/2$. Then, the following hold for $r$ fixed and sufficiently large $n$ (where the sets $\mathcal{T}(r)$ and $\mathcal{T}'(r)$ are as in Definition [\[T\]]{}).
1. If $m=rn$, then ${\displaystyle P(n,m)=\frac{1}{n}+\frac{c(r)}{n^2}
+O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)}$, where\
${\displaystyle
c(r)=\sum_{(i,j)\in\mathcal{T}(r)}(1+\frac{i+j}{2r}).}
$ In particular $c(1)=0$ if $n$ is odd, and $2$ if $n$ is even.
2. If $r=t!-1$ and $m=t!(n-t)=(r+1)n-t\cdot t!$, then\
${\displaystyle
P(n,m)=\frac{1}{n-t}+\frac{c'(r)}{(n-t)^2}+O\left(\frac{1}{n^{1+2s-2{{\delta}}}}
\right)},$ where\
${\displaystyle c'(r)=\sum_{(i,j)\in\mathcal{T}'(r)}(1+\frac{i+j-2}{2(r+1)})}$.
3. If $rn<m$, then ${\displaystyle P(n,m)\leq \frac{\alpha'(r+1)}{m}+\frac{k(r,m,n)}
{n^2}+ O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)}$, where $\alpha'$ and $k(r,m,n)$ are as in Definition [\[def:kr\]]{}.
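The coefficients $c(r)$ and $c'(r)$ involve only the finite sets of Definition \[T\] and are easy to enumerate. The sketch below (our own illustration; the function names are ours) computes both with exact rationals, and reproduces the statement in part (a) that, taking $m=n$, $c(1)=0$ for odd $n$ and $2$ for even $n$.

```python
from fractions import Fraction

def c_coeff(r, m):
    """c(r) = sum over (i,j) in T(r) of 1 + (i+j)/(2r), where T(r) collects
    pairs with ij = r^2, 1 <= i,j <= r^2, and r+i, r+j both dividing m."""
    total = Fraction(0)
    for i in range(1, r * r + 1):
        for j in range(1, r * r + 1):
            if i * j == r * r and m % (r + i) == 0 and m % (r + j) == 0:
                total += 1 + Fraction(i + j, 2 * r)
    return total

def c_prime_coeff(r, m):
    """c'(r) over T'(r): pairs with (i-1)(j-1) = (r+1)^2, 1 < i,j <= (r+1)^2,
    and r+i, r+j both dividing m."""
    total = Fraction(0)
    for i in range(2, (r + 1) ** 2 + 1):
        for j in range(2, (r + 1) ** 2 + 1):
            if (i - 1) * (j - 1) == (r + 1) ** 2 \
                    and m % (r + i) == 0 and m % (r + j) == 0:
                total += 1 + Fraction(i + j - 2, 2 * (r + 1))
    return total

# Part (a) with r = 1, m = n: c(1) = 0 for odd n and 2 for even n.
print(c_coeff(1, 11), c_coeff(1, 12))  # 0 2
```

For $r=1$ the only candidate pair in $\mathcal{T}'(1)$ is $(3,3)$, so `c_prime_coeff(1, m)` is $2$ when $4$ divides $m$ and $0$ otherwise.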
Part (c) follows immediately from Proposition \[prop:general\]. Next we prove part (a). Suppose that $m=rn$. If $r+1$ divides $m$ then we have $n-\frac{m}{r+1}=
\frac{m}{r(r+1)}>\frac{m}{2(r+1)(r+2)-1}$. It follows from Proposition \[prop:general\] that $P(n,m)\leq\frac{1}{n}+\frac{k(r,m,n)}
{n^2}+O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. To complete the proof we refine the argument given in the proof of Proposition \[prop:general\] for $P_2(n,m)$ which gave rise to the term $\frac{k(r,m,n)}{n^2}$. The elements contributing to this term were those with exactly two $s$-large cycles, where one of these cycles had length $d=\frac{m}{r+i}$ for some $i$ such that $1\leq
i\leq r+3$, and the other had length $d_0(d)=\frac{m}{r+j}$ for some $j$ such that $r+i\leq r+j <
2(r+1)(r+2)$ and $d + d_0(d) \le n.$ Moreover, for a given value of $d$, the value of $d_0(d)$ was the largest integer with these properties. Since we now assume that $m=rn$ we have $$d+d_0(d)=\frac{m(2r+i+j)}{(r+i)(r+j)}\leq n=\frac{m}{r}$$ that is, $r(2r+i+j)\leq(r+i)(r+j)$, which is equivalent to $r^2\leq ij$. If $d+d_0(d)$ is strictly less than $n$, that is to say, if $r^2<ij$, and thus $ij-r^2\geq1$, then $$n-d-d_0(d)=n-\frac{rn(2r+i+j)}{(r+i)(r+j)}=\frac{n(ij-r^2)}{(r+i)(r+j)}\geq
\frac{n}{(r+i)(r+j)},$$ and since $i\leq r+3$ and $r+j<2(r+1)(r+2)$ we have $\frac{n}{(r+i)(r+j)}
\geq \frac{n}{2(r+1)(r+2)(2r+3)}$. It now follows from Lemma \[newPs\] that the contribution to $P_2(n,m)$ from all ordered pairs $(d,d_0(d))$ and $(d_0(d),d)$ with $d,d_0(d)$ as above and $n>d+d_0(d)$ is $O\left(
\frac{1}{n^2}\,\frac{m^{2s+{{\delta}}}}{n^3}\right)=O\left(\frac{1}{n^{5-2s-{{\delta}}}}
\right)\leq O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. Thus when $m=rn$, the only contributions to the $O\left(\frac{1}{n^2}\right)$ term come from pairs $(\frac{m}{r+i},\frac{m}{r+j})$ such that $r^2=ij$ and $1\leq i,j\leq
r^2$. (Note that we no longer assume $i\leq j$.) These are precisely the pairs $(i,j)\in\mathcal{T}(r)$. For such a pair $(\frac{m}{r+i},\frac{m}{r+j})$, the contribution to $P_2(n,m)$ is $$\frac{1}{2}\cdot\frac{r+i}{m}\cdot\frac{r+j}{m}=
\frac{r^2+r(i+j)+ij}{2n^2r^2}=\frac{1}{n^2}(1+\frac{i+j}{2r})$$ (since $ij=r^2$). Thus $P(n,m)\leq\frac{1}{n}+\frac{c(r)}{n^2}
+O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. Moreover, for each $(i,j)\in\mathcal{T}(r)$, each permutation in $S_n$ having exactly two cycles of lengths $\frac{m}{r+i}$ and $\frac{m}{r+j}$ is a permutation of order dividing $m$. Thus $P(n,rn)\geq \frac{1}{n}+\frac{c(r)}{n^2}$, and the main assertion of part (a) is proved. Finally we note that, if $r=1$ then the only possible pair in $\mathcal{T}(1)$ is $(1,1)$, and for this pair to lie in the set we require that $r+1=2$ divides $m=n$. Thus $c(1)$ is 0 if $n$ is odd, and is 2 if $n$ is even.
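The leading terms in part (a) can also be probed numerically. The classical recursion $P(n,m)=\frac{1}{n}\sum_{d\mid m,\, d\leq n}P(n-d,m)$ with $P(0,m)=1$ (of the same kind as the recursions of Lemma \[lem:qi\]; cf. the Chowla–Herstein–Scott reference below) gives exact values of $P(n,m)$, which can then be compared with $\frac{1}{n}+\frac{c(r)}{n^2}$. The sketch below is our own illustration, not part of the proof.

```python
from fractions import Fraction
from functools import lru_cache

def P(n, m):
    """Exact proportion of g in S_n with g^m = 1, via the recursion
    P(n,m) = (1/n) * sum over divisors d of m with d <= n of P(n-d,m)."""
    divs = [d for d in range(1, m + 1) if m % d == 0]

    @lru_cache(maxsize=None)
    def rec(k):
        if k == 0:
            return Fraction(1)
        return Fraction(1, k) * sum(rec(k - d) for d in divs if d <= k)

    return rec(n)

# Brute-force check in S_5 for m = 6: the 66 elements of order dividing 6
# give P(5,6) = 66/120 = 11/20.
assert P(5, 6) == Fraction(11, 20)

# m = rn with r = 2, n = 20: here c(2) = 2 (only the pair (2,2) lies in
# T(2), since 4 divides 40), so part (a) predicts P(20,40) is close to
# 1/20 + 2/400 = 0.055 for large n.
approx = Fraction(1, 20) + Fraction(2, 400)
print(float(P(20, 40)), float(approx))
```

At $n=20$ the error term of part (a) is not negligible, so the comparison is printed rather than asserted.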
Finally we prove part (b) where we have $r=t!-1$ and $m=t!(n-t)$. Then $rn=(t!-1)n=m+t\cdot t!-n$ which is less than $m$ if $n>t\cdot t!$. Also $(r+1)n=t!\,n>m$. Thus, for sufficiently large $n$, we have $rn<m<(r+1)n$. Moreover, $r+1$ divides $m$ and $n-\frac{m}{r+1}=n-(n-t)=t$, which for sufficiently large $n$ is less than $\frac{n-t}{3t!}<\frac{m}{2(r+1)(r+2)-1}$. It now follows from part (c) that $P(n,t!(n-t))\leq \frac{1}{n-t}+\frac{k(r,m,n)}{n^2}+O\left(\frac{1}
{n^{1+2s-2{{\delta}}}}\right)$. Our next task is to improve the coefficient of the $O(\frac{1}{n^2})$ term using a similar argument to the proof of part (a). The elements contributing to this term have exactly two $s$-large cycles of lengths $d=\frac{m}{r+i}$ and $d_0(d)=\frac{m}{r+j}$, with $r+i,r+j\leq (r+1)(r+2)$ and $$d+d_0(d)=\frac{m(2r+i+j)}{(r+i)(r+j)}\leq n=\frac{m}{r+1}+t.$$ This is equivalent to $(r+1)(2r+i+j)\leq(r+i)(r+j)+\frac{t(r+1)(r+i)(r+j)}{m}$, and hence, for sufficiently large $n$ (and hence sufficiently large $m$), $(r+1)(2r+i+j)\leq (r+i)(r+j)$. This is equivalent to $(i-1)(j-1)\geq (r+1)^2$. If $(i-1)(j-1)> (r+1)^2$, then $$\begin{aligned}
n-d-d_0(d)&=&(t+\frac{m}{r+1}) - \frac{m(2r+i+j)}{(r+i)(r+j)}\\
&=&t+\frac{m((i-1)(j-1)-(r+1)^2)}{(r+1)(r+i)(r+j)}\\
&>&\frac{rn}{(r+1)^3(r+2)^2}.\end{aligned}$$ As for part (a), the contribution to $P_2(n,m)$ from all pairs $(\frac{m}{r+i},\frac{m}{r+j})$ with $(i-1)(j-1)> (r+1)^2$ is $O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. Thus the only contributions to the $O\left(\frac{1}{n^2}\right)$ term come from pairs $(d,d_0(d))=(\frac{m}{r+i},\frac{m}{r+j})$ such that $(r+1)^2=(i-1)(j-1)$ and $1\leq i,j\leq (r+1)^2$. These are precisely the pairs $(i,j)\in\mathcal{T}'(r)$. For each of these pairs we have $r^2+2r=ij-i-j$ and the contribution to $P_2(n,m)$ is $$\begin{aligned}
\frac{1}{2dd_0(d)}&=&\frac{(r+i)(r+j)}{2m^2}=
\frac{r^2+r(i+j)+ij}{2(r+1)^2(n-t)^2}\\
&=&\frac{(r+1)(2r+i+j)}{2(r+1)^2(n-t)^2}=
\frac{1}{(n-t)^2}\left(1+\frac{i+j-2}{2(r+1)}\right).\end{aligned}$$ Thus $P(n,m)\leq\frac{1}{n-t}+\frac{c'(r)}{(n-t)^2}
+O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. On the other hand, each permutation in $S_n$ that contains an $(n-t)$-cycle has order dividing $t!(n-t)=m$, and the proportion of these elements is $\frac{1}{n-t}$. Also, for each $(i,j)\in\mathcal{T}'(r)$, each permutation in $S_n$ having exactly two cycles of lengths $\frac{m}{r+i}$ and $\frac{m}{r+j}$, and inducing any permutation on the remaining $n-\frac{m}{r+i}-\frac{m}{r+j}=t$ points, is a permutation of order dividing $m=t!(n-t)$, and the proportion of all such elements is $\frac{c'(r)}{(n-t)^2}$. Thus $P(n,m)\geq \frac{1}{n-t}+\frac{c'(r)}{(n-t)^2}$, and the assertion of part (b) is proved.
It is a simple matter now to prove Theorems \[leadingterms\] and \[bounds\].
The first theorem follows from Theorem \[rn\] (a) and (b) on setting $s=3/4$ and allowing $\delta \rightarrow 0$. Note that $\frac{1}{n-t} = \frac{1}{n} + \frac{t}{n^2} + O(\frac{1}{n^3})$ and $\frac{1}{(n-t)^2} = \frac{1}{n^2} + O(\frac{1}{n^3})$. For the second theorem, again we set $s=3/4$ in Theorem \[rn\](c). By Proposition \[prop:general\] we have $k(r,m,n) \le \frac{4(r+3)^4}{r^2}$. If we define $k(r) = \frac{4(r+3)^4}{r^2}$ the result follows.
Finally we derive the conditional probabilities in Corollary \[cdnlprobs1\].
Let $r,\, n$ be positive integers with $r$ fixed and $n$ ‘sufficiently large’, and let $g$ be a uniformly distributed random element of $S_n$. First set $m = rn.$ Let $A$ denote the event that $g$ is an $n$-cycle, and let $B$ denote the event that $g$ has order dividing $m$, so that the probability ${{\rm{Prob}}}(B)$ is $P(n,m)$. Then, by elementary probability theory, we have $$\begin{aligned}
{{\rm{Prob}}}( A \mid B) &= &\frac{{{\rm{Prob}}}( A \cap B)} {{{\rm{Prob}}}(B)} = \frac{{{\rm{Prob}}}( A )}
{{{\rm{Prob}}}(B)}
= \frac{\frac{1}{n}}{P(n,m)}. \\\end{aligned}$$ By Theorem \[leadingterms\], $\frac{1}{n}+\frac{c(r)}{n^2}<P(n,m)=\frac{1}{n}+\frac{c(r)}{n^2}+O\left(\frac{1}
{n^{2.5-o(1)}}\right)$, and hence $$\begin{aligned}
1-\frac{c(r)}{n}-O\left(\frac{1}
{n^{1.5-o(1)}}\right)&\leq& {{\rm{Prob}}}(A \mid B)
\leq 1-\frac{c(r)}{n}+O\left(\frac{1}
{n^{2}}\right).\\\end{aligned}$$
Now suppose that $r=t!-1$ for some integer $t\geq2$, and let $A$ denote the event that $g$ contains an $(n-t)$-cycle, so that ${{\rm{Prob}}}(A)=\frac{1}{n-t}$. Then, with $B$ as above for the integer $m:=t!(n-t)$, we have $$\begin{aligned}
{{\rm{Prob}}}( A \mid B) &= &\frac{{{\rm{Prob}}}( A \cap B)} {{{\rm{Prob}}}(B)} = \frac{{{\rm{Prob}}}( A )}
{{{\rm{Prob}}}(B)}
= \frac{\frac{1}{n-t}}{P(n,m)}. \\\end{aligned}$$ By Theorem \[rn\](b), $\frac{1}{n-t}+\frac{c'(r)}{(n-t)^2}<P(n,m)=\frac{1}{n-t}+
\frac{c'(r)}{(n-t)^2}+O\left(\frac{1} {n^{2.5-o(1)}}\right)$, and hence $$\begin{aligned}
1-\frac{c'(r)}{n}-O\left(\frac{1}
{n^{1.5-o(1)}}\right)&\leq& {{\rm{Prob}}}(A \mid B)
\leq 1-\frac{c'(r)}{n}+O\left(\frac{1}
{n^{2}}\right).\end{aligned}$$
Acknowledgements {#acknowledgements .unnumbered}
================

This research was supported by ARC Discovery Grants DP0209706 and DP0557587. The authors thank the referee for carefully reading the submitted version and for advice on the paper.
Robert Beals, Charles R. Leedham-Green, Alice C. Niemeyer, Cheryl E. Praeger, and Ákos Seress, A black-box group algorithm for recognizing finite symmetric and alternating groups, [*Trans. Amer. Math. Soc.*]{} [**355**]{} (2003), 2097–2113.

I.Z. Bouwer and W.W. Chernoff, Solutions to $x^r=\alpha$ in the symmetric group, Tenth British combinatorial conference (Glasgow, 1985), [*Ars Combin.*]{} [**20**]{}(A) (1985), 83–88.

S. Chowla, I. N. Herstein and W. R. Scott, The solutions of $x^d=1$ in symmetric groups, [*Norske Vid. Selsk.*]{} [**25**]{} (1952), 29–31.

P. Erdős and P. Turán, On some problems of a statistical group-theory, I, [*Z. Wahrscheinlichkeitstheorie und Verw. Gebiete*]{} [**4**]{} (1965), 175–186.

P. Erdős and P. Turán, On some problems of a statistical group-theory, III, [*Acta Math. Acad. Sci. Hungar.*]{} [**18**]{} (1967), 309–320.

Lu Gao and Jian Guo Zha, Solving the equation $x^n=\sigma$ in the symmetric group $S_m$, [*J. Math. (Wuhan)*]{} [**7**]{}(2) (1987), 173–176.

E. Landau, [*Handbuch der Lehre von der Verteilung der Primzahlen*]{}, Teubner, Leipzig, 1909.

An equation in permutations, [*Trudy Mat. Inst. Steklov.*]{} [**142**]{} (1976), 182–194, 270.

The number of permutations of a special form, [*Mat. Sb. (N.S.)*]{} [**99(141)**]{}, no. 3 (1976), 468–476, 480.

Leo Moser and Max Wyman, On solutions of $x^d=1$ in symmetric groups, [*Canad. J. Math.*]{} [**7**]{} (1955), 159–168.

Leo Moser and Max Wyman, [*Canad. J. Math.*]{} [**8**]{} (1956), 225–233.

Alice C. Niemeyer and Cheryl E. Praeger, On the proportion of permutations of order a multiple of the degree, preprint, 2005.

Alice C. Niemeyer and Cheryl E. Praeger, On the frequency of permutations containing a long cycle, [*J. Algebra*]{} [**300**]{} (2006), 289–304.

Ivan Niven, Herbert S. Zuckerman, and Hugh L. Montgomery, [*An Introduction to the Theory of Numbers*]{}, 5th edition, John Wiley & Sons, New York, 1991.

L.M. Volynets, [*Mat. Zametki*]{} [**40**]{} (1986), 155–160, 286.

Richard Warlimont, Über die Anzahl der Lösungen von $x^{n}=1$ in der symmetrischen Gruppe $S_{n}$, [*Arch. Math. (Basel)*]{} [**30**]{}(6) (1978), 591–594.

Herbert S. Wilf, The asymptotics of $e^{P(z)}$ and the number of elements of each order of $S_n$, [*Bull. Amer. Math. Soc. (N.S.)*]{} [**15**]{}(2) (1986), 228–232.
---
abstract: 'We present a general scheme for the construction of noiseless networks detecting entanglement with the help of linear, hermiticity-preserving maps. We show how to apply the method to detect the entanglement of an unknown state without its prior reconstruction. In particular, we prove that there always exists a noiseless network detecting entanglement with the help of positive, but not completely positive, maps. We then generalize the method to entanglement detection with arbitrary, not necessarily hermiticity-preserving, linear contractions on product states.'
author:
- Paweł Horodecki
- Remigiusz Augusiak
- Maciej Demianowicz
title: |
General construction of noiseless networks detecting entanglement\
with help of linear maps
---
Introduction
============
It has been known for some time that entanglement can be detected with the help of a special class of maps called positive maps [@sep; @Peres; @book]. In particular there is an important criterion [@sep] saying that $\varrho$ acting on a given product Hilbert space $\mathcal{H}_{A}{\otimes}\mathcal{H}_{B}$ is separable if and only if for all positive (but not completely positive) maps $\Lambda :
\mathcal{B}(\mathcal{H}_{B})\rightarrow
\mathcal{B}(\mathcal{H}_{A})$ [@B] the following operator $$X_{\Lambda}(\varrho)=[I \otimes \Lambda](\varrho)$$ has all non-negative eigenvalues which usually is written as $$[I \otimes \Lambda](\varrho) \geq 0 \label{PositiveMaps1}.$$ Here by $I$ we denote the identity map acting on $\mathcal{B}(\mathcal{H}_{A})$. Since any positivity-preserving map is also hermiticity-preserving, it makes sense to speak about eigenvalues of $X_{\Lambda}(\varrho)$. However, it should be emphasized that there are many $\Lambda$s (and equivalently the corresponding criteria) and to characterize them is a hard and still unsolved problem (see, e.g., Ref. [@Kossakowski] and references therein).
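For the single positive map $\Lambda=T$ (matrix transposition), the criterion (\[PositiveMaps1\]) is the familiar Peres partial-transpose test. As a concrete illustration (ours, not taken from the paper), the following NumPy sketch shows that the maximally entangled two-qubit state acquires a negative eigenvalue under $I\otimes T$, while a separable state does not.

```python
import numpy as np

def partial_transpose_B(rho, dA, dB):
    """Apply I (x) T, i.e. transpose the second tensor factor of rho."""
    return (rho.reshape(dA, dB, dA, dB)
               .transpose(0, 3, 2, 1)
               .reshape(dA * dB, dA * dB))

# Maximally entangled state |phi+> = (|00> + |11>)/sqrt(2).
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_ent = np.outer(phi, phi)

# Separable reference state: the maximally mixed state I/4.
rho_sep = np.eye(4) / 4

min_ent = np.linalg.eigvalsh(partial_transpose_B(rho_ent, 2, 2)).min()
min_sep = np.linalg.eigvalsh(partial_transpose_B(rho_sep, 2, 2)).min()
print(min_ent)  # -0.5: a negative eigenvalue, so entanglement is detected
print(min_sep)  # 0.25: no negative eigenvalue, as required for separable states
```

Transposition is of course only one positive map; other maps $\Lambda$ yield further, inequivalent criteria of the form (\[PositiveMaps1\]).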
For a long time the above criterion was treated as purely mathematical. One would take the matrix $\varrho$ (obtained in some [*prior*]{} state estimation procedure), put it into the formula (\[PositiveMaps1\]), calculate the resulting spectrum, and draw the conclusion. However, it can be seen that for, say, states acting on $\mathcal{H}_{A}{\otimes}\mathcal{H}_{B}\sim
\mathbb{C}^{d}{\otimes}\mathbb{C}^{d}$ and maps $\Lambda :
\mathcal{B}(\mathbb{C}^{d}) \rightarrow
\mathcal{B}(\mathbb{C}^{d})$, the spectrum of the operator $X_{\Lambda}(\varrho)$ consists of $n_{\mathrm{spec}}=d^{2}$ elements, while full [*prior*]{} estimation of such states corresponds to $n_{\mathrm{est}}=d^{4}-1$ parameters.
The question was raised [@PHAE] as to whether one can perform the test (\[PositiveMaps1\]) physically, without the necessity of [*prior*]{} tomography of the state $\varrho$, despite the fact that the map $I{\otimes}\Lambda$ is not physically realizable. The answer [@PHAE] was that one can use the structural physical approximation (SPA) $\widetilde{I \otimes
\Lambda}$ of the unphysical map $I \otimes \Lambda$, which is physically realizable, while at the same time the spectrum of the state $$\tilde{X}_{\Lambda}(\varrho)=[\widetilde{I \otimes
\Lambda}](\varrho)$$ is just an affine transformation of that of the (unphysical) operator $X_{\Lambda}(\varrho)$. The spectrum of $\tilde{X}_{\Lambda}(\varrho)$ can be measured with the help of the spectrum estimator [@Estimator], which requires the estimation of only $d^{2}$ parameters that (because of affinity) are in one-to-one correspondence with the needed spectrum of (\[PositiveMaps1\]). Note that for $2{\otimes}2$ systems (the composite system of two qubits), similar approaches lead to methods for detecting entanglement measures (concurrence [@Concurrence] and entanglement of formation [@EoF]) without state reconstruction [@PHPRL].
The disadvantage of the above method is [@Carteret] that the realization of the SPA requires adding noise to the system (we have to add some controlled ancillas, couple them to the system, and then trace them out). In Ref. [@Carteret] the question was raised about the existence of noiseless quantum networks, i.e., those whose only input data are: (i) unknown quantum information represented by $\varrho^{{\otimes}m}$, and (ii) the controlled measured qubit which provides the spectrum moments (see Ref. [@Estimator]). It was shown that for at least one positive map (the transposition $T$) a noiseless network exists [@Carteret]. Such networks for the two-qubit concurrence and the three-qubit tangle have also been designed [@Carteret2].
In the present paper we ask a general question: do noiseless networks work only for special maps (functions), or do they exist for any positive map test? In the case of a positive answer to the latter: is it possible to design a general method for constructing them? Can it be adapted to criteria other than the one defined in (\[PositiveMaps1\])?
For this purpose we first show how to measure a spectrum of the matrix $\Theta(\varrho)$, where $\Theta :
\mathcal{B}(\mathbb{C}^{m})\rightarrow\mathcal{B}(\mathbb{C}^{m})$ is an arbitrary linear, hermiticity-preserving map and $\varrho$ is a given density operator acting on $\mathbb{C}^{m}$, with the help of only $m$ estimated parameters instead of $m^{2}-1$. For bipartite $\varrho$ where $m=d^{2}$ this gives $d^{2}$ instead of $d^{4}-1$. This approach is consistent with previous results [@Grassl; @Leifer; @Brun] where arbitrary polynomials of the elements of a given state $\varrho$ have been considered. In these works it was shown that any polynomial of degree at most $k$ in a density matrix $\varrho$ can be measured with the help of two collective observables on $k$ copies of $\varrho$. In fact, one can treat the moments of $\Theta(\varrho)$ which we analyze below as polynomials belonging to such a class. We derive the explicit form of the observables for the sake of possible future applications. Moreover, the approach presented in this paper allows for a quite natural identification of the observable that detects an arbitrary polynomial of the state $\varrho$ subjected to some transformation $\Theta$. Then we provide an immediate application in entanglement detection, showing that for a suitable $\Theta$ the scheme constitutes just the right method for detecting entanglement without prior state reconstruction, with the help of either the positive map criteria (\[PositiveMaps1\]) or the linear contraction methods discussed later.
General scheme for construction of noiseless network detecting spectrum of $\Theta(\varrho)$ {#general}
============================================================================================
Construction of an observable
-----------------------------
Since the $m \times m$ matrix $\Theta(\varrho)$ is hermitian, its spectrum may be calculated using only $m$ numbers $$\label{alfy}
\alpha_{k}\equiv {\mathrm{Tr}}[\Theta(\varrho)]^{k}=\sum_{i=1}^{m}\lambda_{i}^{k}\qquad
(k=1,\ldots,m),$$ where $\lambda_{i}$ are eigenvalues of $\Theta(\varrho)$. We shall show that all these spectrum moments can be represented by mean values of special observables. To this aim let us consider the permutation operator $V^{(k)}$ defined by the formula $$\label{VKa}
V^{(k)} |e_{1}\rangle|e_{2}\rangle \otimes ... \otimes |e_{k}
\rangle= |e_{k}\rangle|e_{1}\rangle \otimes ... \otimes |e_{k-1}
\rangle,
$$ where $(k=1,\ldots,m)$ and ${|e_{i}\rangle}$ are vectors from $\mathbb{C}^{m}$. One can see that $V^{(1)}$ is just an identity operator $\mathbbm{1}_{m}$ acting on $\mathbb{C}^{m}$. Combining Eqs. (\[alfy\]) and (\[VKa\]) we infer that $\alpha_{k}$ may be expressed by relation $$\alpha_{k}={\mathrm{Tr}}\left\{V^{(k)}[\Theta(\varrho)]^{\otimes k}\right\}
\label{alfa}$$ which is a generalization of the formula from Refs. [@Estimator; @PHAE], where $\Theta$ was (unlike here) required to be a physical operation. At this stage, a careful analysis of the right–hand side of Eq. (\[alfa\]) shows that $\alpha_{k}$ is a polynomial of degree at most $k$ in the matrix elements of $\varrho$. This, together with the observation of Refs. [@Brun; @Grassl; @Leifer], already allows us to construct a single collective observable that detects $\alpha_{k}$. However, for the sake of possible future applications we derive the observable explicitly below. To this aim we first notice that $\alpha_{k}$ may also be obtained using the hermitian conjugate of $V^{(k)}$, which again is a permutation operator but permutes the states ${|e_{i}\rangle}$ in the reversed order. Therefore all the numbers $\alpha_{k}$ may be expressed as $$\label{alfa2}
\alpha_{k}=\frac{1}{2}{\mathrm{Tr}}\left[\left(V^{(k)}+V^{(k)\dagger}\right)\Theta(\varrho)^{\otimes
k}\right].$$ Let us focus for a while on the map $\Theta$. Due to its hermiticity-preserving property it may be expressed as $$\Theta(\cdot)=\sum_{j=0}^{m^{2}-1}\eta_{j}K_{j}(\cdot)K_{j}^{\dagger}$$ with $\eta_{j}\in\mathbb{R}$ and $K_{j}$ being linearly independent $m$-by-$m$ matrices. By virtue of this fact and some well-known properties of the trace, after rather straightforward algebra, we may rewrite Eq. (\[alfa2\]) as $$\label{alfa3}
\alpha_{k}=\frac{1}{2}{\mathrm{Tr}}\left[\left(\Theta^{\dagger}\right)^{{\otimes}k}\left(V^{(k)}+V^{(k)\dagger}\right)\varrho^{{\otimes}k}\right],$$ where $\Theta^{\dagger}$ is the dual map to $\Theta$, given by $\Theta^{\dagger}(\cdot)=\sum_{i}\eta_{i}K_{i}^{\dagger}(\cdot)K_{i}$. Here we have applied the map $(\Theta^{\dagger})^{{\otimes}k}$ to the operator $V^{(k)}+V^{(k)\dagger}$ instead of applying $\Theta^{{\otimes}k}$ to $\varrho^{\otimes k}$. This apparently purely mathematical trick, together with the fact that the square brackets above contain a hermitian operator, allows us to express the numbers $\alpha_{k}$ as mean values of certain observables in the state $\varrho^{{\otimes}k}$. Indeed, introducing $$\label{obs}
\mathcal{O}^{(k)}_{\Theta}=
\frac{1}{2}\left(\Theta^{\dagger}\right)^{{\otimes}k}\left[V^{(k)}+V^{(k)\dagger}\right]$$ we arrive at $$\alpha_{k}={\mathrm{Tr}}\left[\mathcal{O}^{(k)}_{\Theta} \varrho^{\otimes
k}\right]. \label{MeanValues}$$ In general, a naive measurement of all the mean values would require the estimation of many more parameters than $m$. However, it is possible to build a unitary network that requires the estimation of exactly $m$ parameters, using the idea that we recall and refine below.
Finally, let us notice that the above approach generalizes measurements of polynomials of the elements of $\varrho$ in the sense that it shows explicitly how to measure polynomials of the elements of $\Theta(\varrho)$. Of course, this is only of rather conceptual importance, since both issues are mathematically equivalent and have their origin in Refs. [@Grassl; @Leifer; @Brun].
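As a purely numerical illustration of the identity behind Eqs. (\[alfy\])–(\[alfa\]) (our sketch, not part of the measurement scheme itself), one can build the cyclic shift $V^{(k)}$ explicitly and verify that $\mathrm{Tr}[V^{(k)}A^{\otimes k}]=\mathrm{Tr}[A^{k}]$ for a random hermitian $A$:

```python
import numpy as np
from functools import reduce

def cyclic_shift(m, k):
    """Permutation operator V^(k): |e1,...,ek> -> |ek,e1,...,e_{k-1}>."""
    dim = m ** k
    V = np.zeros((dim, dim))
    for col in range(dim):
        digits = np.unravel_index(col, (m,) * k)              # (i1,...,ik)
        row = np.ravel_multi_index((digits[-1],) + digits[:-1], (m,) * k)
        V[row, col] = 1.0
    return V

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = M + M.conj().T                                            # hermitian test matrix
k = 3
A_k = reduce(np.kron, [A] * k)                                # A^{(x) k}
alpha_k = np.trace(cyclic_shift(3, k) @ A_k)                  # Tr[V^(k) A^{(x) k}]
# alpha_k agrees with Tr[A^k] = sum_i lambda_i^k
```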
Detecting mean of an observable by measurement on a single qubit revised
------------------------------------------------------------------------
Let $\mathcal{A}$ be an arbitrary observable (it may even be infinite dimensional) whose spectrum lies between the finite numbers $a^{\min}_{\mathcal{A}}$ and $a^{\max}_{\mathcal{A}}$, and let $\sigma$ be a state acting on $\mathcal{H}$. In Ref. [@binpovm] it was pointed out that the mean value $\langle \mathcal{A}
\rangle_{\sigma}= {\mathrm{Tr}}\mathcal{A}\sigma$ may be estimated in a process involving the measurement of only one qubit. This fact is in good agreement with the later proof that single qubits may serve as interfaces connecting quantum devices [@Lloyd]. Below we recall the mathematical details of the measurement proposed in Ref. [@binpovm]. At the beginning one defines the following numbers $$a^{(-)}_{\mathcal{A}}\equiv
\max\{0,-a^{\min}_{\mathcal{A}}\},\qquad a^{(+)}_{\mathcal{A}}
\equiv a^{(-)}_{\mathcal{A}}+a^{\max}_{\mathcal{A}},$$ and observe that the hermitian operators $$\begin{aligned}
V_{0}=\sqrt{\left(a^{(-)}_{\mathcal{A}}\mathbbm{1}_{\mathcal{H}}+\mathcal{A}\right)\Big/a^{(+)}_{\mathcal{A}}}\end{aligned}$$ and $$V_{1}=\sqrt{\mathbbm{1}_{\mathcal{H}} - V_{0}^\dagger V_{0}}$$ satisfy $\sum_{i=0}^{1}V_{i}^{\dagger}V_{i}=\mathbbm{1}_{\mathcal{H}}$ [@Identity] and as such define a generalized quantum measurement which can easily be extended to a unitary evolution (see Appendix A of Ref. [@APS] for a detailed description). Consider a partial isometry on the Hilbert space $\mathbb{C}^{2}
\otimes \mathcal{H}$ defined by the formula $$\tilde{U}_{\mathcal{A}}=\sum_{i=0}^{1} |i\rangle \langle 0|
\otimes V_{i}=\left(
\begin{array}{cc}
V_{0} & 0\\
V_{1} & 0
\end{array} \right).$$ The first Hilbert space $\mathbb{C}^{2}$ represents the qubit which shall be measured in order to estimate the mean value $\langle \mathcal{A} \rangle_{\sigma}$. The partial isometry can always be extended to a unitary $U_{\mathcal{A}}$ such that if it acts on $|0 \rangle \langle 0| \otimes \sigma$, then the final measurement of the observable $\sigma_{z}$ [@Pauli] on the first (qubit) system gives the probabilities of “spin-up” (finding it in the state $|0\rangle$) and “spin-down” (finding it in the state $|1\rangle$), respectively, of the form $$p_{0}={\mathrm{Tr}}\left(V_{0}^\dagger V_{0}\sigma\right), \qquad
p_{1}={\mathrm{Tr}}\left(V_{1}^\dagger V_{1}\sigma\right)=1-p_{0}.$$ One of the possible extensions of $\tilde{U}_{\mathcal{A}}$ to a unitary on $\mathbb{C}^{2}{\otimes}\mathcal{H}$ is the following $$\label{isometry}
U_{\mathcal{A}}=\left(
\begin{array}{cc}
V_{0} & -V_{1}\\
V_{1} & V_{0}
\end{array}
\right)=\mathbbm{1}_{2}{\otimes}V_{0}-i\sigma_{y}{\otimes}V_{1}.$$ The unitarity of $U_{\mathcal{A}}$ follows from the fact that the operators $V_{0}$ and $V_{1}$ commute. For practical reasons, instead of the unitary operation representing the POVM $\{V_{0},V_{1}\}$, we shall consider $$\label{Udet}
U^{\mathrm{det}}(\mathcal{A},U'_{\mathcal{H}})=\left(\mathbbm{1}_{2}
{\otimes}U_{\mathcal{H}}'\right)^{\dagger},$$ where $\mathbbm{1}_{2}$ is the identity operator on the one-qubit Hilbert space $\mathbb{C}^{2}$ and $U_{\mathcal{H}}'$ is an arbitrary unitary operation that acts on ${\cal H}$ and simplifies the decomposition of $U_{\mathcal{A}}$ into elementary gates. Now if we define the mean value of the measurement of $\sigma_{z}$ on the first qubit after the action of the network (sometimes called the visibility): $$v_{\mathcal{A}}={\mathrm{Tr}}\left[ \left(\sigma_{z}\otimes
\mathbbm{1}_{\mathcal{H}}\right) \left(\mathbbm{1}_{2} \otimes
U_{\mathcal{H}}'\right) U_{\mathcal{A}} \mathcal{P}_{0}{\otimes}\sigma
U^{\dagger}_{\mathcal{A}}\left(\mathbbm{1}_{2} \otimes
U_{\mathcal{H}}'\right)^{\dagger}\right], \label{vis}$$ where $\mathcal{P}_{0}$ is a projector onto state ${|0\rangle}$, i.e., $\mathcal{P}_{0}={{|0\rangle}{\langle0|}}$, then we have an easy formula for the mean value of the initial observable $\mathcal{A}$: $$\label{meanA}
\langle \mathcal{A}\rangle_{\sigma}=a^{(+)}_{\mathcal{A}}p_{0}-
a^{(-)}_{\mathcal{A}}=a^{(+)}_{\mathcal{A}}\frac{v_{\mathcal{A}}+1}{2}-a^{(-)}_{\mathcal{A}}.$$
A general scheme of a network estimating the mean value (\[meanA\]) is provided in Fig. \[Fig1\].
![General scheme of a network for estimating the mean value of an observable $\mathcal{A}$, with a bounded spectrum, in a given state $\sigma$. Both $U_{\mathcal{H}'}$ and its conjugate $U_{\mathcal{H}}'^{\dagger}$ standing before $U_{\mathcal{A}}$ could obviously be removed, as they compose to the identity; the last unitary on the bottom wire could be removed as well, as it does not affect the measurement statistics on the top qubit. However, they have been included to simplify the subsequent network structure.[]{data-label="Fig1"}](network1.eps){width="8cm"}
We have put an additional unitary operation on the bottom wire after the unitary $U_{\mathcal{A}}$ (which does not change the statistics of the measurement on the control qubit) and divided the identity operator into two unitaries acting on that wire, which explicitly shows how the simplification introduced in Eq. (\[Udet\]) works in practice.
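The construction of the POVM $\{V_{0},V_{1}\}$ and the reconstruction of the mean value from $p_{0}$ are easy to check numerically. The sketch below is ours (it computes the operator square roots through an eigen-decomposition) and verifies both the completeness relation and Eq. (\[meanA\]):

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a positive semidefinite hermitian matrix."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T

A = np.array([[0.3, 1.0], [1.0, -0.3]])          # a bounded observable
a_min, a_max = np.linalg.eigvalsh(A)[[0, -1]]
a_minus = max(0.0, -a_min)                        # a^(-)
a_plus = a_minus + a_max                          # a^(+)

I = np.eye(2)
V0 = psd_sqrt((a_minus * I + A) / a_plus)
V1 = psd_sqrt(I - V0.conj().T @ V0)
completeness_err = np.abs(V0.conj().T @ V0 + V1.conj().T @ V1 - I).max()

sigma = np.array([[0.7, 0.2], [0.2, 0.3]])        # a test state
p0 = np.trace(V0.conj().T @ V0 @ sigma).real      # "spin-up" probability
mean_from_p0 = a_plus * p0 - a_minus              # Eq. (meanA)
# mean_from_p0 coincides with Tr(A sigma)
```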
Now one may ask if the mean value $\langle\mathcal{A}\rangle_{\sigma}$ belongs to some fixed interval, i.e., $$\label{int}
c_{1}\le\langle\mathcal{A}\rangle_{\sigma}\le c_{2},$$ where $c_{1}$ and $c_{2}$ are real numbers from the interval containing the spectrum of $\mathcal{A}$, i.e., $[a_{\mathcal{A}}^{\mathrm{min}},a_{\mathcal{A}}^{\mathrm{max}}]$ (e.g. if $\mathcal{A}$ is an entanglement witness and we want to check the entanglement of a state $\sigma$ then we can put $c_{1}=0$ and $c_{2}=a_{\mathcal{A}}^{\mathrm{max}}$, and condition (\[int\]) reduces to $\langle\mathcal{A}\rangle_{\sigma}\ge 0$). Then one easily infers that the condition (\[int\]) rewritten in terms of the visibility is $$2\frac{c_{1}+a_{\mathcal{A}}^{(-)}}{a_{\mathcal{A}}^{(+)}}-1\le
v_{\mathcal{A}}\le
2\frac{c_{2}+a_{\mathcal{A}}^{(-)}}{a_{\mathcal{A}}^{(+)}}-1.$$ Having the general network estimating $v_{\mathcal{A}}$, one needs to decompose the isometry $U_{\mathcal{A}}$ into elementary gates. One possible way to achieve this goal is, as we shall see below, to diagonalize the operator $V_{0}$. Hence we may choose $U_{\mathcal{H}}'$ (see Eq. (\[Udet\])) to be $$U_{\mathcal{H}}'=\sum_{\bold{k}}
{|{\bold{k}}\rangle}{\langle\phi_{\bold{k}}|}$$ with ${|\phi _{\bold{k}}\rangle}$ being the normalized eigenvectors of $V_{0}$, indexed by a binary number of length $2^k$. Since $V_{0}$ and $V_{1}$ commute, this operation diagonalizes $V_{1}$ as well. By virtue of these facts, Eq. (\[Udet\]) reduces to $$U^{\mathrm{det}}(\mathcal{A},U_{\mathcal{H}'})=\sum_{\bold{k}}
U_{\bold{k}}\otimes{{|\bold{k}\rangle}{\langle\bold{k}|}},$$ with unitaries (as previously, indexed by a binary number) $$U_{\bold{k}}=\sqrt{\lambda_{\bold{k}}}\mathbbm{1}_{2}-i\sqrt{1-\lambda_{\bold{k}}}\sigma_y,$$ where $\lambda_{\bold{k}}$ are the eigenvalues of $V_{0}^{\dagger}V_{0}$. So, in fact, we have a combination of operations on the first qubit controlled by $2^k$ wires. All this combined gives us the network shown in Fig. \[Fig2\].
![Noiseless network for estimating moments of $\Theta(\varrho)$ with $\varrho$ being a bipartite mixed state, i.e., density matrix acting on $\mathbb{C}^{2}{\otimes}\mathbb{C}^{2}$.[]{data-label="Fig2"}](figure2.eps){width="8.5cm"}
Now we are in a position to combine all the elements presented so far and show how, put together, they provide the general scheme for constructing a noiseless network for the spectrum of $\Theta(\varrho)$ for a given quantum state $\varrho$. For the sake of clarity, below we itemize all the steps necessary to obtain the spectrum of $\Theta(\varrho)$:
(i)
:   Take all the observables $\mathcal{O}^{(k)}_{\Theta}$ $(k=1,\ldots,m)$ defined by Eq. (\[obs\]).
(ii)
:   Construct the unitary operations $U_{\mathcal{O}^{(k)}}$ according to the given prescription. Consider the unitary operation $U^{\mathrm{det}}(\mathcal{A},U'_{\mathcal{H}})$ ($U'_{\mathcal{H}}$ arbitrary). Find a decomposition of the operation into elementary quantum gates and minimize the number of gates in the decomposition with respect to $U_{\mathcal{H}}'$. Build the (optimal) network found in this way.
(iii)
:   Act with the network on the initial state $\mathcal{P}_{0} \otimes \varrho^{\otimes k}$.
(iv)
:   Measure the “visibilities” $v_{\mathcal{O}^{(k)}_{\Theta}}$ $(k=1,\ldots,m)$ according to (\[vis\]).
(v)
: Using Eq. (\[meanA\]) calculate the values of $\alpha_{k}$ $(k=1,\ldots,m)$ representing the moments of $\Theta(\varrho)$.
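Step (v) implicitly requires inverting the map from the moments $\alpha_{k}$ back to the eigenvalues. One standard way to do this (our illustrative choice; the scheme above does not prescribe a particular method) is via Newton's identities, which convert the power sums into the characteristic polynomial of $\Theta(\varrho)$:

```python
import numpy as np

def spectrum_from_moments(alphas):
    """Eigenvalues of an m x m hermitian matrix from its power sums
    alpha_k = sum_i lambda_i^k, k = 1..m, via Newton's identities."""
    m = len(alphas)
    e = [1.0]                                   # elementary symmetric polynomials
    for k in range(1, m + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * alphas[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    # characteristic polynomial: x^m - e1 x^{m-1} + e2 x^{m-2} - ... + (-1)^m em
    coeffs = [(-1) ** k * e[k] for k in range(m + 1)]
    return np.sort(np.roots(coeffs).real)

# Example: the spectrum {-1/2, 1/2, 1/2, 1/2} of a partially transposed Bell state
lam = np.array([-0.5, 0.5, 0.5, 0.5])
alphas = [np.sum(lam ** k) for k in range(1, 5)]
recovered = spectrum_from_moments(alphas)
```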
Detecting entanglement with networks: example
---------------------------------------------
The first obvious application of the presented scheme is entanglement detection [*via*]{} positive but not completely positive maps. In fact for any bipartite state $\varrho\in\mathcal{B}(\mathcal{H}_{A}{\otimes}\mathcal{H}_{B})$ we only need to substitute $\Theta$ with $\mathbbm{1}_{A} \otimes
\Lambda_{B}$ with $\Lambda_{B}$ being some positive map. Then the application of the above scheme immediately reproduces all the results of the schemes from Ref. [@PHAE], but without additional noise (the presence of which required more precision in the measurement of the visibility).
As an illustrative example, consider $\Lambda_{B}=T$, i.e., $\Theta$ is the partial transposition on the second subsystem (usually denoted by $T_B$ or by $\Gamma$), in $2\otimes 2$ systems. Due to the fact that partial transposition is trace–preserving, we need only three numbers $\alpha _{k}$ $(k=2,3,4)$, measurable [*via*]{} the observables $$\mathcal{O}^{(2)}_{T}=V_1^{(2)}\otimes V_2^{(2)}$$ and $$\mathcal{O}^{(3,4)}_{T}=\frac{1}{2}\left( V_1^{(3,4)}\otimes
V_2^{(3,4)\dagger}+V_1^{(3,4)\dagger}\otimes V_2^{(3,4)}\right),$$ where the subscripts indicate that the permutations act on the first and second subsystems, respectively. The hermitian conjugation in the above may be replaced by transposition, since the permutation operators have real entries. For simplicity we show only the network measuring the second moment of $\varrho^{T_{B}}$. The general scheme from Fig. \[Fig2\] then reduces to the scheme in Fig. \[Fig3\].
![Network estimating the second moment of a partially transposed two-qubit density matrix $\varrho$. $U_{\mathcal{H}'}$ is decomposed into single-qubit gates; here $\displaystyle
U=(1/\sqrt{2})(\mathbbm{1}_{2}+i\sigma_{y})$. []{data-label="Fig3"}](figure3.eps){width="8.5cm"}
Note that the network can also be regarded as one measuring the purity of a state, since ${\mathrm{Tr}}(\varrho ^{T_{B}})^2={\mathrm{Tr}}\varrho ^2$. Note also that this network is not optimal, since an alternative network [@Estimator] measuring ${\mathrm{Tr}}\varrho^{2}$ requires two controlled swaps.
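Both remarks are easy to confirm numerically. The following sketch (ours, using direct matrix algebra in place of the network) checks the purity identity and shows, on the Werner family $\varrho_{p}=p\,|\psi^{-}\rangle\langle\psi^{-}|+(1-p)\mathbbm{1}/4$, that the spectrum of $\varrho^{T_{B}}$ becomes negative exactly for $p>1/3$:

```python
import numpy as np

def pt(rho):
    """Partial transposition on the second qubit of a 4x4 matrix."""
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # singlet

def werner(p):
    return p * np.outer(psi_minus, psi_minus) + (1.0 - p) * np.eye(4) / 4.0

rho = werner(0.7)
# Second moments carry no information about entanglement:
purity_identity = np.isclose(np.trace(pt(rho) @ pt(rho)), np.trace(rho @ rho))
# ...but the full spectrum does: the eigenvalue (1 - 3p)/4 is negative for p > 1/3
min_ent = np.linalg.eigvalsh(pt(werner(0.7))).min()
min_sep = np.linalg.eigvalsh(pt(werner(0.2))).min()
```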
Extension to linear contractions criteria
=========================================
The above approach may be generalized to the so-called [*linear contractions criteria*]{}. To see this, let us recall that the powerful criterion called the computable cross norm (CCN) or matrix realignment criterion was recently introduced [@CCN; @CCN1]. This criterion is easy to apply (it involves a simple permutation of matrix elements) and has been shown [@CCN; @CCN1] to be independent of the positive partial transposition (PPT) test [@Peres]. It has been further generalized to the [*linear contractions criterion*]{} [@PartialCCN], which we shall recall below. If by $\varrho_{A_{i}}\;(i=1,\ldots,n)$ we denote density matrices acting on Hilbert spaces $\mathcal{H}_{A_{i}}$ and by $\tilde{\mathcal{H}}$ a certain Hilbert space, then for some linear map $\mathcal{R} :
\mathcal{B}(\mathcal{H}_{A_{1}}{\otimes}\ldots{\otimes}\mathcal{H}_{A_{n}})\rightarrow
\mathcal{B}(\tilde{\mathcal{H}})$ we have the following
[**Theorem**]{} [@PartialCCN]. [*If some ${\cal R}$ satisfies $$\label{Theorem}
\left|\left|{\mathcal {R}}\left(\varrho_{A_1} {\otimes}\varrho_{A_2}
{\otimes}\ldots {\otimes}\varrho_{A_n}\right)\right|\right|_{{\mathrm{Tr}}}\leq 1,$$ then for any separable state $\varrho_{A_1A_2 \ldots
A_n}\in\mathcal{B}(\mathcal{H}_{A_{1}}{\otimes}\ldots{\otimes}\mathcal{H}_{A_{n}})$ one has*]{} $$\label{Theorem2}
||\mathcal {R}(\varrho_{A_1A_2 \ldots A_n})||_{{\mathrm{Tr}}}\leq 1.$$ The maps $\mathcal{R}$ satisfying (\[Theorem\]) are linear contractions on product states, and hereafter they shall be called, in brief, linear contractions. In particular, the separability condition (\[Theorem2\]) contains the generalization of the realignment test to the permutation criteria [@PartialCCN; @Chen] (see also Ref. [@Fan]).
A noisy network for entanglement detection with the help of the latter has been proposed in Ref. [@PHPLA2003]. Here we improve this result in two ways, namely, by taking into account all maps $\mathcal{R}$ of type (\[Theorem\]) (not only permutation maps) and by introducing the corresponding noiseless networks instead of noisy ones. For these purposes we need to generalize the lemma from Ref. [@PHPLA2003], previously formulated only for real maps $\mathcal{S} :
\mathcal{B}(\mathcal{H})\rightarrow\mathcal{B}(\mathcal{H})$. We represent action of $\mathcal{S}$ on any $\varrho\in
\mathcal{B}(\mathcal{H})$ as $${\mathcal{S}}(\varrho)=\sum_{ij,kl}{\mathcal{S}}_{ij,kl}{\mathrm{Tr}}(\varrho
P_{ij}) P_{kl},$$ where in Dirac notation $P_{xy}=|x\rangle \langle y|$. Let us define complex conjugate of the map $\mathcal{S}$ [*via*]{} complex conjugation of its elements, i.e., $${\mathcal{S}}^{*}(\varrho)=\sum_{ij,kl}{\mathcal{S}}_{ij,kl}^{*}{\mathrm{Tr}}(\varrho
P_{ij}) P_{kl},$$ where the asterisk stands for complex conjugation. Then we have the following lemma, which is easy to prove by inspection: [**Lemma.**]{} [*Let ${\mathcal{S}}$ be an arbitrary linear map on $\mathcal{B}(\mathcal{H})$. Then the map ${\mathcal{S}}' \equiv
[T \circ {\mathcal{S}}^{*} \circ T]$ satisfies ${\mathcal{S}}'
(\varrho)=[{\mathcal{S}}(\varrho) ]^{\dagger}$.*]{}
Now let us come to the initial problem of this section. Suppose then that we have $\mathcal{R}$ satisfying Eq. (\[Theorem\]) and a given physical source producing copies of a system in the state $\varrho$, for which we would like to check Eq. (\[Theorem2\]). Let us observe that $$\label{24}
||\mathcal{R}(\varrho)||_{\mathrm{Tr}}=\sum_{i}\sqrt{\gamma_{i}},$$ where $\{\gamma_{i}\}$ are the eigenvalues of the operator $X_{\mathcal{R}}(\varrho)=\mathcal{R}(\varrho)\mathcal{R}(\varrho)^{\dagger}.$ Below we show how to find the spectrum $\{\gamma_{i}\}$. We need to apply our previous scheme from Sec. \[general\] to this special case. Let us define the map $L_{\mathcal{R}}=\mathcal{R}
\otimes {\cal R}'$, where ${\cal R}$ is our linear contraction and $\mathcal{R}'$ is defined according to the prescription given in the Lemma above, i.e., $\mathcal{R}'=[T\circ \mathcal{R}^{*}\circ
T]$. Let us also put $\varrho'=\varrho^{{\otimes}2}$ and apply the scheme presented above to detect the spectrum of $L_{\cal
R}(\varrho')$. It is easy to see that the moments detected in that way are $${\mathrm{Tr}}[L_{\mathcal{R}}(\varrho')]^{k}=
{\mathrm{Tr}}\left[\mathcal{R}(\varrho)\mathcal{R}(\varrho)^{\dagger}\right]^{k}=\sum_{i}\gamma_{i}^{k}.$$ From the moments one easily reconstructs $\{\gamma_{i}\}$ and may check the violation of Eq. (\[Theorem2\]).
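For the paradigmatic contraction, the realignment map $\mathcal{R}(\varrho)_{(i,j),(k,l)}=\varrho_{(i,k),(j,l)}$, the above quantities can be checked directly. The sketch below (ours) verifies that $\sum_{i}\sqrt{\gamma_{i}}$, computed from the spectrum of $\mathcal{R}(\varrho)\mathcal{R}(\varrho)^{\dagger}$, reproduces the trace norm and exceeds $1$ for a maximally entangled two-qubit state:

```python
import numpy as np

def realign(rho, d=2):
    """R(rho)_{(i,j),(k,l)} = rho_{(i,k),(j,l)} (matrix realignment)."""
    return rho.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # |phi+>
rho_ent = np.outer(phi, phi)
rho_sep = np.eye(4) / 4.0                           # separable (maximally mixed)

def trace_norm_from_gammas(rho):
    R = realign(rho)
    gammas = np.linalg.eigvalsh(R @ R.conj().T)     # spectrum of R(rho) R(rho)^+
    return np.sum(np.sqrt(np.clip(gammas, 0.0, None)))

tn_ent = trace_norm_from_gammas(rho_ent)            # 2 > 1: entanglement detected
tn_sep = trace_norm_from_gammas(rho_sep)            # 1/2 <= 1: no violation
```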
Summary {#Summary}
=======
We have shown how to detect the spectrum of the operator $\Theta(\varrho)$ for an arbitrary linear hermiticity-preserving map $\Theta$, given a source producing copies of the system in the state $\varrho$. The network involved in the measurement is noiseless in the sense of Ref. [@Carteret], and the measurement is required only on the controlled qubit. Further, we have shown how to apply the method to provide a general noiseless network scheme for detecting entanglement with the help of criteria belonging to one of two classes, namely, those involving positive maps and those applying linear contractions on product states.
The structure of the proposed networks is not optimal and needs further investigation. Here, however, we have been interested in quite a fundamental question, which is interesting in itself: [*Is it possible to get noiseless network schemes for any criterion from one of the above classes?*]{} Up to now their existence was known [*only*]{} for the special case of the positive partial transpose (cf. [@Carteret2]). Here we have provided a positive answer to the question.
Finally, let us note that the above approach can be viewed as an application of collective observables \[see Eq. (\[MeanValues\])\]. The general paradigm initiated in Refs. [@PHPRL; @PHPRA2003] has recently been fruitfully applied in the context of general concurrence estimates [@AolitaMintert; @MintertBuchleitner], which have even been illustrated in preliminary experiments. Moreover, a universal collective observable detecting any two-qubit entanglement has recently been constructed [@My]. It seems that the present approach needs further analysis from the point of view of collective observables, especially including collective entanglement witnesses (see [@PHPRA2003; @MintertBuchleitner]).
P. H. thanks Artur Ekert for valuable discussions. The work is supported by the Polish Ministry of Science and Education under grant No. 1 P03B 095 29, the EU project QPRODIS (IST-2001-38877), and the IP project SCALA. Figures were prepared with the help of the QCircuit package.
[0]{}
M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A [**223**]{}, 1 (1996).
A. Peres, Phys. Rev. Lett. [**77**]{}, 1413 (1996).
Alber G. [*et al.*]{}, [*Quantum Information: An Introduction To Basic Theoretical Concepts And Experiments, Springer Tracts in Modern Physics*]{} [**173**]{}, Springer, Berlin (2003).
Here $\mathcal{B}(\mathcal{H}_{i})$ $(i=A,B)$ denotes bounded operators acting on $\mathcal{H}_{i}$.
A. Kossakowski, Open Sys. Inf. Dyn. [**10**]{}, 221 (2003); G. Kimura and A. Kossakowski, [*ibid.*]{} [**11**]{}, 343 (2004).
P. Horodecki and A. K. Ekert, Phys. Rev. Lett. [**89**]{}, 127902 (2002).
A. K. Ekert, C. M. Alves, D. K. L. Oi, M. Horodecki, P. Horodecki, and L. C. Kwek, Phys. Rev. Lett. [**88**]{}, 217901 (2002).
S. Hill and W. K. Wootters, Phys. Rev. Lett. [**78**]{}, 5022 (1997); W. K. Wootters, [*ibid.*]{} [**80**]{}, 2245 (1998).
C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Phys. Rev. A [**54**]{}, 3824 (1996).
P. Horodecki, Phys. Rev. Lett. [**90**]{}, 167901 (2003).
H. A. Carteret, Phys. Rev. Lett. [**94**]{}, 040502 (2005).
H. A. Carteret, quant-ph/0309212.
T. A. Brun, Quant. Inf. Comp. [**4**]{}, 401 (2004).
M. S. Leifer, N. Linden, and A. Winter, Phys. Rev. A [**69**]{}, 052304 (2004).
M. Grassl, M. Roetteler, and T. Beth, Phys. Rev. A [**58**]{}, 1833-1839 (1998).
P. Horodecki, Phys. Rev. A [**67**]{}, 060101(R) (2003).
S. Lloyd, A. J. Landahl, and J.-J. E. Slotine, Phys. Rev. A [**69**]{}, 012305 (2004).
By $\mathbbm{1}_{\mathcal{H}}$ we denote an identity operator on $\mathcal{H}$.
We use standard Pauli matrices, i.e., $\sigma_{x}={|0\rangle}{\langle1|}+{|1\rangle}{\langle0|},\sigma_{y}=
-i{|0\rangle}{\langle1|}+i{|1\rangle}{\langle0|}, \sigma_{z}={{|0\rangle}{\langle0|}}-{{|1\rangle}{\langle1|}}$.
P. Horodecki, Acta Phys. Pol. A [**101**]{}, 399 (2002).
O. Rudolph, quant-ph/0202121.
K. Chen and L. A. Wu, Quant. Inf. Comp. [**3**]{}, 193 (2003).
M. Horodecki, P. Horodecki, and R. Horodecki, Open Syst. Inf. Dyn. [**13**]{}, 103 (2006); quant-ph/0206008.
K. Chen and L. A. Wu, Phys. Lett. A [**306**]{}, 14 (2002).
H. Fan, quant-ph/0210168; P. Wocjan and M. Horodecki, Open Sys. Inf. Dyn. [**12**]{}, 331 (2005).
P. Horodecki, Phys. Lett. A [**319**]{}, 1 (2003).
P. Horodecki, Phys. Rev. A [**68**]{}, 052101 (2003).
L. Aolita and F. Mintert, Phys. Rev. Lett. [**97**]{}, 050501 (2006).
F. Mintert and A. Buchleitner, quant-ph/0605250.
R. Augusiak, P. Horodecki, and M. Demianowicz, quant-ph/0604109.
/*#######################################################
* Copyright (c) 2014 Jeff Martin
* Copyright (c) 2015 Pedro Lafuente
* Copyright (c) 2017-2019 Gregor Santner
*
* Licensed under the MIT license.
* You can get a copy of the license text here:
* https://opensource.org/licenses/MIT
###########################################################*/
package other.writeily.ui;
import android.app.Dialog;
import android.os.Bundle;
import android.support.annotation.NonNull;
import android.support.v4.app.DialogFragment;
import android.support.v7.app.AlertDialog;
import android.text.TextUtils;
import net.gsantner.markor.R;
import net.gsantner.markor.util.AppSettings;
import java.io.Serializable;
public class WrConfirmDialog extends DialogFragment {
    public static final String FRAGMENT_TAG = "WrConfirmDialog";

    private static final String EXTRA_TITLE = "EXTRA_TITLE";
    private static final String EXTRA_MESSAGE = "EXTRA_MESSAGE";
    public static final String EXTRA_DATA = "EXTRA_DATA";

    private Serializable _data;
    private ConfirmDialogCallback[] _callbacks;
    private String _summary;

    public static WrConfirmDialog newInstance(String title, String message,
                                              Serializable data, ConfirmDialogCallback... callbacks) {
        WrConfirmDialog confirmDialog = new WrConfirmDialog();
        Bundle args = new Bundle();
        args.putSerializable(EXTRA_DATA, data);
        args.putString(EXTRA_TITLE, title);
        args.putString(EXTRA_MESSAGE, message);
        confirmDialog.setArguments(args);
        confirmDialog.setCallbacks(callbacks);
        return confirmDialog;
    }

    public void setCallbacks(ConfirmDialogCallback[] callbacks) {
        _callbacks = callbacks;
    }

    @Override
    @NonNull
    public Dialog onCreateDialog(Bundle savedInstanceState) {
        String title = getArguments().getString(EXTRA_TITLE);
        String message = getArguments().getString(EXTRA_MESSAGE);
        _data = getArguments().getSerializable(EXTRA_DATA);

        AlertDialog.Builder dialogBuilder;
        boolean darkTheme = AppSettings.get().isDarkThemeEnabled();
        dialogBuilder = new AlertDialog.Builder(getActivity(), darkTheme ?
                R.style.Theme_AppCompat_Dialog : R.style.Theme_AppCompat_Light_Dialog);

        dialogBuilder.setTitle(title);
        if (!TextUtils.isEmpty(message)) {
            dialogBuilder.setMessage(message);
        }

        dialogBuilder.setPositiveButton(getString(android.R.string.ok), (dialog, which) -> {
            if (_callbacks != null) {
                for (ConfirmDialogCallback cdc : _callbacks) {
                    if (cdc != null) {
                        cdc.onConfirmDialogAnswer(true, _data);
                    }
                }
            }
        });

        // Guard against null callbacks here as well, mirroring the positive button
        dialogBuilder.setNegativeButton(getString(R.string.cancel), (dialog, which) -> {
            dialog.dismiss();
            if (_callbacks != null) {
                for (ConfirmDialogCallback cdc : _callbacks) {
                    if (cdc != null) {
                        cdc.onConfirmDialogAnswer(false, _data);
                    }
                }
            }
        });

        return dialogBuilder.show();
    }

    public interface ConfirmDialogCallback {
        void onConfirmDialogAnswer(boolean confirmed, Serializable data);
    }
}
---
abstract: 'This paper studies vector quantile regression (VQR), which is a way to model the dependence of a random vector of interest with respect to a vector of explanatory variables so to capture the whole conditional distribution, and not only the conditional mean. The problem of vector quantile regression is formulated as an optimal transport problem subject to an additional mean-independence condition. This paper provides a new set of results on VQR beyond the case with correct specification which had been the focus of previous work. First, we show that even under misspecification, the VQR problem still has a solution which provides a general representation of the conditional dependence between random vectors. Second, we provide a detailed comparison with the classical approach of Koenker and Bassett in the case when the dependent variable is univariate and we show that in that case, VQR is equivalent to classical quantile regression with an additional monotonicity constraint.'
author:
- 'G. Carlier[^1], V. Chernozhukov [^2], A. Galichon [^3]'
title: Vector quantile regression beyond correct specification
---
**Keywords:** vector quantile regression, optimal transport, duality.
Introduction {#intro}
============
Vector quantile regression was recently introduced in [@ccg] in order to generalize the technique of quantile regression when the dependent random variable is multivariate. Quantile regression, pioneered by Koenker and Bassett [@kb], provides a powerful way to study dependence between random variables assuming a linear form for the quantile of the endogenous variable $Y$ given the explanatory variables $X$. It has therefore become a very popular tool in many areas of economics, program evaluation, biometrics, etc. However, a well-known limitation of the approach is that $Y$ should be scalar so that its quantile map is defined. When $Y$ is multivariate, there is no canonical notion of quantile, and the picture is less clear than in the univariate case[^4]. The approach proposed in [@ccg] is based on optimal transport ideas and can be described as follows. For a random vector $Y$ taking values in ${{\bf}{R}}^{d}$, we look for a random vector $U$, uniformly distributed on the unit cube $[0,1]^{d}$, which is maximally correlated with $Y$; finding such a $U$ is an optimal transport problem. A celebrated result of Brenier [@brenier] implies that such an optimal $U$ is characterized by the existence of a convex function $\varphi $ such that $Y=\nabla \varphi (U)$. When $d=1$, of course, the optimal transport map of Brenier $\nabla \varphi
=Q$ is the quantile of $Y$ and in higher dimensions, it still has one of the main properties of univariate quantiles, namely monotonicity. Thus Brenier’s map $\nabla \varphi $ is a natural candidate to be considered as the vector quantile of $Y$, and one advantage of such an approach is the pointwise relation $Y=\nabla \varphi (U)$ where $U$ is a uniformly distributed random vector which best approximates $Y$ in $L^{2}$.
If, in addition, we are given another random vector $X$ capturing a set of observable explanatory variables, we wish to have a tractable method to estimate the conditional quantile of $Y$ given $X=x$, that is the map $u\in
\lbrack 0,1]^{d}\mapsto Q(x,u)\in {{\bf}{R}}^{d}$. In the univariate case $d=1$, and if the conditional quantile is affine in $x$ i.e. $Q(x,u)=\alpha
(u)+\beta (u)x$, the quantile regression method of Koenker and Bassett gives a constructive and powerful linear programming approach to compute the coefficients $\alpha (t)$ and $\beta (t)$ for any fixed $t\in \lbrack 0,1]$, which is dual to the linear programming problem: $$\sup_{(U_{t})}\{{{\bf}{E}}(U_{t}Y)\;:\;U_{t}\in \lbrack 0,1],{{\bf}{E}}(U_{t})=(1-t),\;{{\bf}{E}}(XU_{t})={{\bf}{E}}(X)\}. \label{kobas}$$Under correct specification, i.e. when the true conditional quantile is affine in $x$, this variational approach estimates the true coefficients $\alpha (t)$ and $\beta (t)$. In [@ccg], we have shown that in the multivariate case as well, when the true vector quantile is affine in $x$, one may estimate it by a variational problem which consists in finding the uniformly distributed random variable $U$ such that ${{\bf}{E}}(X|U)={{\bf}{E}}(X)$ (mean independence) and which is maximally correlated with $Y$.
The purpose of the present paper is to understand what these variational approaches tell about the dependence between $Y$ and $X$ in the general case i.e. without assuming any particular form for the conditional quantile. Our main results are the following:
- **A general representation of dependence:** we will characterize the solution of the optimal transport problem with a mean-independence constraint from [@ccg] and relate it to a relaxed form of vector quantile regression. To be more precise, our theorem \[qrasfoc\] below will provide the following general representation of the distribution of $\left( X,Y\right) $: $$\begin{split}
Y& \in \partial \Phi _{X}^{\ast \ast }(U)\mbox{ with $X\mapsto \Phi_X(U)$
affine}, \\
& \Phi _{X}(U)=\Phi _{X}^{\ast \ast }(U)\mbox{ almost surely,} \\
& \mbox{ $U$ uniformly distributed on $[0,1]^d$},\;{{\bf}{E}}(X|U)={{\bf}{E}}(X),
\end{split}$$where $\Phi _{x}^{\ast \ast }$ denotes the convex envelope of $u\rightarrow
\Phi _{x}\left( u\right) $ for a fixed $x$, and $\partial $ denotes the subdifferential. The main ingredients are convex duality and an existence theorem for optimal dual variables. The latter is a non-trivial extension of Kantorovich duality: indeed, the existence of a Lagrange multiplier associated to the mean-independence constraint is not straightforward and we shall prove it thanks to Komlós’ theorem (theorem \[existdual\]). Vector quantile regression is *under correct specification* if $\Phi _{x}\left( u\right) $ is convex for all $x$ in the support, in which case one can write $$\begin{split}
Y& =\nabla \Phi _{X}(U)\mbox{ with $\Phi_X(.)$ convex, $X\mapsto \Phi_X(U)$
affine}, \\
& \mbox{ $U$ uniformly distributed on $[0,1]^d$},\;{{\bf}{E}}(X|U)={{\bf}{E}}(X).
\end{split}$$While our previous paper [@ccg] focused on the case with correct specification, the results we obtain in the present paper are general.
- **A precise link with classical quantile regression in the univariate case:** it was shown in [@ccg] that in the particular case when $d=1$ and under correct specification, classical quantile regression and vector quantile regression are equivalent. Going beyond correct specification here, we shall see that the optimal transport approach is equivalent (theorem \[equivkb\]) to a variant of (\[kobas\]) where one further imposes the monotonicity constraint that $t\mapsto U_{t}$ is nonincreasing (which is consistent with the fact that the true quantile $Q(x,t)$ is nondecreasing with respect to $t$).
The paper is organized as follows. Section \[mvquant\] introduces vector quantiles through optimal transport. Section \[mvqr\] is devoted to a precise, duality-based analysis of vector quantile regression beyond correct specification. Finally, in section \[univar\] we revisit the univariate case and carefully relate the Koenker and Bassett approach to that of [@ccg].
Vector quantiles and optimal transport {#mvquant}
======================================
Let $(\Omega ,{\mathcal{F}},{\bf}{P})$ be some nonatomic probability space, and let $\left( X,Y\right) $ be a random vector, where the vector of explanatory variables $X$ is valued in ${{\bf}{R}}^{N}$ and the vector of dependent variables $Y$ is valued in ${{\bf}{R}}^{d}$.
Vector quantiles by correlation maximization {#mvquant1}
--------------------------------------------
The notion of vector quantile was recently introduced by Ekeland, Galichon and Henry [@egh], Galichon and Henry [@gh] and was used in the framework of quantile regression in our companion paper [@ccg]. The starting point for this approach is the correlation maximization problem $$\max \{{{\bf}{E}}(V\cdot Y),\;\mathop{\mathrm{Law}}\nolimits(V)=\mu \}
\label{otmd}$$where $\mu :=\mathop{\mathrm{uniform}}\nolimits([0,1]^{d})$ is the uniform measure on the unit $d$-dimensional cube $[0,1]^{d}$. This problem is equivalent to the optimal transport problem which consists in minimizing ${{\bf}{E}}(|Y-V|^{2})$ among uniformly distributed random vectors $V$. As shown in the seminal paper of Brenier [@brenier], this problem has a solution $U$ which is characterized by the condition $$Y=\nabla \varphi (U)$$for some (essentially uniquely defined) convex function $\varphi $ which is again obtained by solving a dual formulation of (\[otmd\]). Arguing that gradients of convex functions are the natural multivariate extension of monotone nondecreasing functions, the authors of [@egh] and [@gh] considered the function $Q:=\nabla \varphi $ as the vector quantile of $Y$. This function $Q=\nabla \varphi $ is by definition Brenier’s map, i.e. the optimal transport map (for the quadratic cost) between the uniform measure on $[0,1]^{d}$ and $\mathop{\mathrm{Law}}\nolimits(Y)$:
**(Brenier’s theorem)** If $Y$ is a square-integrable random vector valued in ${\bf}{R}^{d}$, there is a unique map of the form $T=\nabla \varphi $ with $\varphi $ convex on $[0,1]^{d}$ such that $\nabla \varphi _{\#}\mu =\mathop{\mathrm{Law}}\nolimits(Y)$; this map is by definition the vector quantile function of $Y$.
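For intuition, the correlation maximization (\[otmd\]) can be discretized: with two equally weighted empirical samples, maximizing ${{\bf}{E}}(V\cdot Y)$ over rearrangements reduces to a linear assignment problem, and the optimal map is monotone, the discrete counterpart of being the gradient of a convex function. A minimal numerical sketch (the sample size, dimension and distribution of $Y$ below are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, d = 120, 2

# empirical sample of Y and an empirical uniform sample on [0, 1]^2
Y = rng.normal(size=(n, d)) @ np.array([[1.0, 0.4], [0.0, 0.8]])
U = rng.uniform(size=(n, d))

# maximizing sum_i u_i . y_sigma(i) over permutations sigma is an
# assignment problem with cost matrix -u_i . y_j
row, col = linear_sum_assignment(-(U @ Y.T))
Q = Y[col]                       # empirical vector quantile map: u_i -> Q[i]

# monotonicity of the optimal map: M[i, j] = (u_i - u_j).(Q_i - Q_j) >= 0
diag = np.sum(U * Q, axis=1)
M = diag[:, None] + diag[None, :] - U @ Q.T - Q @ U.T
```

The pairwise inequality $M_{ij}\ge 0$ is exactly the two-point exchange optimality of the assignment, i.e. cyclical monotonicity of the discrete Brenier map.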
We refer to the textbooks [@villani], [@villani2] and [@sanbook] for a presentation of optimal transport theory, and to [@otme] for a survey of applications to economics.
Conditional vector quantiles {#mvquant2}
----------------------------
Take an $N$-dimensional random vector $X$ of regressors, $\nu :=\mathop{\mathrm{Law}}\nolimits(X,Y)$, $m:=\mathop{\mathrm{Law}}\nolimits(X)$, and $\nu =\nu ^{x}\otimes m$ where $\nu ^{x}$ is the law of $Y$ given $X=x$. One can consider $Q(x,u)=\nabla \varphi (x,u)$ as the optimal transport map between $\mu $ and $\nu ^{x}$. Under some regularity assumptions on $\nu ^{x}$, one can invert $Q(x,.)$: $Q(x,.)^{-1}=\nabla
_{y}\varphi (x,.)^{\ast }$ (where the Legendre transform is taken for fixed $x$) and one can define $U$ through $$U=\nabla _{y}\varphi ^{\ast }(X,Y),\;Y=Q(X,U)=\nabla _{u}\varphi (X,U)$$$Q(X,.)$ is then the conditional vector quantile of $Y$ given $X$. There is, as we will see in dimension one, a variational principle behind this definition:
- $U$ is uniformly distributed, independent from $X$ and solves:
$$\label{otmdindep}
\max \{ {{\bf}{E}}(V\cdot Y), \; \mathop{\mathrm{Law}}\nolimits(V)=\mu,
V\perp \! \! \! \perp X\}$$
- the conditional quantile $Q(x,.)$ and its inverse are given by $Q(x,u)=\nabla_u \varphi(x,u)$, $F(x,y)=\nabla_y \psi(x,y)$ (the link between $F$ and $Q$ being $F(x, Q(x,u))=u$), the potentials $\psi$ and $\varphi$ are convex conjugates ($x$ being fixed) and solve
$$\min \int \varphi(x,u) m(dx) \mu(du) + \int \psi(x,y) \nu(dx, dy) \; : \;
\psi(x,y)+\varphi(x,u)\ge y\cdot u.$$
Note that if the conditional quantile function is affine in $X$ and $Y=Q(X,U)=\alpha(U) +\beta(U) X$ where $U$ is uniform and independent from $X$, the function $u\mapsto \alpha(u)+\beta(u)x$ should be the gradient of some function of $u$ which requires $$\alpha=\nabla \varphi, \; \beta=Db^T$$ for some potential $\varphi$ and some vector-valued function $b$ in which case, $Q(x,.)$ is the gradient of $u\mapsto \varphi(u)+b(u)\cdot x$. Moreover, since quantiles are gradients of convex potentials one should also have $$u\in [0,1]^d \mapsto \varphi(u) +b(u) \cdot x \mbox{ is convex}.$$
Vector quantile regression {#mvqr}
==========================
In the next paragraphs, we will impose a parametric form of the dependence of the vector quantile $Q(x,u)$ with respect to $x$. More specifically, we shall assume that $Q(x,u)$ is affine in $x$. In the scalar case ($d=1$), this problem is called quantile regression; we shall investigate that case in section \[univar\] below.
Correlation maximization {#mvqr1}
------------------------
Without loss of generality we normalize $X$ so that it is centered $${{\bf}{E}}(X)=0.$$Our approach to vector quantile regression is based on a variation of the correlation maximization problem (\[otmdindep\]), where the independence constraint is replaced by a mean-independence constraint, that is$$\max \{{{\bf}{E}}(V\cdot Y),\;\mathop{\mathrm{Law}}\nolimits(V)=\mu ,\;{{\bf}{E}}(X|V)=0\}. \label{maxcorrmimd}$$where $\mu =\mathop{\mathrm{uniform}}\nolimits([0,1]^{d})$ is the uniform measure on the unit $d$-dimensional cube.
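Problem (\[maxcorrmimd\]) becomes a finite linear program once discretized. As a hedged illustration (grid size, sample size and the data generating process below are arbitrary assumptions made for this sketch, with a scalar regressor $N=1$ and $d=1$), one can solve for a coupling $\pi_{ij}$ between a grid $u_i$ and sample points $(x_j,y_j)$ subject to the two marginal constraints and the mean-independence constraint:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 8, 40                     # u-grid size, number of (x, y) sample points

# toy data with a scalar regressor (N = 1), centered as in the text
x = rng.uniform(-1.0, 1.0, size=n)
x -= x.mean()
y = 1.0 + 0.5 * x + rng.gamma(2.0, 1.0, size=n)
u = (np.arange(m) + 0.5) / m     # discretization of the uniform measure mu

# decision variable pi[i, j] ~ P(V = u_i, (X, Y) = (x_j, y_j)), flattened
c = -np.outer(u, y).ravel()      # maximize sum_ij pi_ij u_i y_j

A_eq, b_eq = [], []
for i in range(m):               # Law(V) = mu: sum_j pi_ij = 1/m
    row = np.zeros((m, n)); row[i, :] = 1.0
    A_eq.append(row.ravel()); b_eq.append(1.0 / m)
for j in range(n):               # (X, Y)-marginal: sum_i pi_ij = 1/n
    row = np.zeros((m, n)); row[:, j] = 1.0
    A_eq.append(row.ravel()); b_eq.append(1.0 / n)
for i in range(m):               # mean independence: sum_j pi_ij x_j = 0
    row = np.zeros((m, n)); row[i, :] = x
    A_eq.append(row.ravel()); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
pi = res.x.reshape(m, n)
```

Since $x$ is centered, the independent coupling $\pi_{ij}=1/(mn)$ is feasible, so the optimal value is at least the value of the independent coupling; the optimal $\pi$ concentrates more correlation between $u$ and $y$ while keeping $x$ mean-independent of $u$.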
An obvious connection with the specification of vector quantile regression (i.e. the validity of an affine in $x$ form for the conditional quantile) is given by:
If $Y=\nabla \varphi(U)+ Db(U)^T X$ with
- $u\mapsto \varphi(u)+ b(u)\cdot x$ convex and smooth for $m$-a.e $x$,
- $\mathop{\mathrm{Law}}\nolimits(U)=\mu$, ${{\bf}{E}}(X\vert U)=0$,
then $U$ solves (\[maxcorrmimd\]).
This result follows from [@ccg], but for the sake of completeness, we give a proof: $$Y=\nabla \Phi_X(U), \mbox{ with } \Phi_X(t)=\varphi(t)+b(t)\cdot X.$$ Let $V$ be such that ${\mathop{\mathrm{Law}}\nolimits}(V)=\mu$, ${{{\bf}E}}(X\vert V)=0$; then by Young’s inequality $$V\cdot Y \le \Phi_X(V)+\Phi_X^*(Y)$$ but $Y=\nabla \Phi_X(U)$ implies that $$U\cdot Y = \Phi_X(U)+\Phi_X^*(Y).$$ Taking expectations and noting that ${{{\bf}E}}(\Phi_X(V))={{{\bf}E}}(\varphi(V))+{{{\bf}E}}(b(V)\cdot {{{\bf}E}}(X\vert V))={{{\bf}E}}(\varphi(V))={{{\bf}E}}(\varphi(U))={{{\bf}E}}(\Phi_X(U))$ (using the mean-independence conditions and the fact that $U$ and $V$ have the same law $\mu$) gives ${{{\bf}E}}(V\cdot Y)\le {{{\bf}E}}(U\cdot Y)$, the desired result.
Duality {#mvqr2}
-------
From now on, we do not assume a particular form for the conditional quantile and wish to study which information (\[maxcorrmimd\]) can give regarding the dependence between $X$ and $Y$. Once again, a good starting point is convex duality. As explained in detail in [@ccg], the dual of (\[maxcorrmimd\]) takes the form $$\label{dualmimc}
\inf_{(\psi, \varphi, b)} {{\bf}{E}}(\psi(X,Y)+\varphi(U)) \; : \;
\psi(x,y)+\varphi(t)+b(t)\cdot x\ge t\cdot y.$$ where $U$ is any uniformly distributed random vector on $[0,1]^d$ i.e. $\mathop{\mathrm{Law}}\nolimits(U)=\mu=\mathop{\mathrm{uniform}}\nolimits([0,1]^d)$ and the infimum is taken over continuous functions $\psi\in C(\mathop{\mathrm{spt}}\nolimits(\nu), {{\bf}{R}})$, $\varphi \in
C([0,1]^d, {{\bf}{R}})$ and $b\in C([0,1]^d, {{\bf}{R}}^N)$ satisfying the pointwise constraint $$\label{constraintdudual}
\psi(x,y)+\varphi(t)+b(t)\cdot x\ge t\cdot y, \; \; \forall (x,y,t)\in \mathop{\mathrm{spt}}\nolimits(\nu)\times [0,1]^d.$$
Since for fixed $(\varphi, b)$, the largest $\psi$ which satisfies the pointwise constraint in (\[dualmimc\]) is given by the convex function $$\psi(x,y):=\max_{t\in [0,1]^d} \{ t\cdot y- \varphi(t) -b(t)\cdot x\}$$ one may equivalently rewrite (\[dualmimc\]) as the minimization over continuous functions $\varphi$ and $b$ of $$\int \max_{t\in [0,1]^d} \{ t\cdot y- \varphi(t) -b(t)\cdot x\} \nu(dx,dy)+\int_{[0,1]^d} \varphi(t)\mu(dt).$$ We claim now that the infimum over continuous functions $(\varphi, b)$ coincides with the one over smooth or simply integrable functions. Indeed, let $b\in L^1((0,1)^d)^N$, $\varphi \in L^1((0,1)^d)$ and $\psi$ such that (\[constraintdudual\]) holds. Let $\eps>0$ and extend $\varphi$ and $b$ to $Q_\eps:=[0,1]^d+B_{\eps}$ ($B_\eps$ being the closed Euclidean ball of center $0$ and radius $\eps$): $$\varphi_\eps(t):=\begin{cases}
\varphi(t), \mbox{ if $t\in (0,1)^d$} \\
\frac{1}{\eps} \mbox{ if $t\in Q_\eps \setminus (0,1)^d$}\end{cases}, \; b_\eps(t):=\begin{cases}
b(t), \mbox{ if $t\in (0,1)^d$} \\
0 \mbox{ if $t\in Q_\eps \setminus (0,1)^d$}\end{cases}$$ and for $(x,y)\in \mathop{\mathrm{spt}}\nolimits(\nu)$: $$\psi_\eps(x,y):=\max\Big(\psi(x,y), \max_{t\in Q_\eps \setminus (0,1)^d}
(t\cdot y-\frac{1}{\eps})\Big)$$ then by construction $(\psi_\eps, \varphi_\eps, b_\eps)$ satisfies (\[constraintdudual\]) on $\mathop{\mathrm{spt}}\nolimits(\nu)\times Q_\eps$. Let $\rho \in C_c^{\infty}({{\bf}{R}}^d)$ be a centered, smooth probability density supported on $B_1$, and define the mollifiers $\rho_\delta:=\delta^{-d} \rho(\frac{.}{\delta})$, then for $\delta \in (0, \eps)$, defining the smooth functions $b_{\eps, \delta}:=\rho_\delta \star b_\eps$ and $\varphi_{\eps, \delta}:=\rho_\delta \star \varphi_\eps$, we have that $(\psi_\eps, \varphi_{\eps, \delta}, b_{\eps, \delta})$ satisfies (\[constraintdudual\]). By monotone convergence, $\int \psi_\eps \mbox{d} \nu$ converges to $\int \psi \mbox{d}\nu$, moreover $$\lim_{\delta \to 0} \int_{(0,1)^d} \varphi_{\eps, \delta} = \int_{(0,1)^d}
\varphi_\eps=\int_{(0,1)^d} \varphi,$$ we deduce that the value of the minimization problem (\[dualmimc\]) can indifferently be obtained by minimizing over continuous, smooth or $L^1$ functions $\varphi$ and $b$. The existence of optimal ($L^1$) functions $\psi,
\varphi $ and $b$ is not totally obvious and is proven in the appendix under the following assumptions:
- the support of $\nu$ is of the form $\mathop{\mathrm{spt}}\nolimits(\nu)=\overline{\Omega}$ where $\Omega$ is an open bounded convex subset of ${{\bf}{R}}^N\times {{\bf}{R}}^d$,
- $\nu\in L^{\infty}(\Omega)$,
- $\nu$ is bounded away from zero on compact subsets of $\Omega$, that is, for every compact set $K\subset \Omega$ there exists $\alpha_K>0$ such that $\nu\ge \alpha_K$ a.e. on $K$.
\[existdual\] Under the assumptions above, the dual problem (\[dualmimc\]) admits at least one solution.
Vector quantile regression as optimality conditions {#mvqr3}
---------------------------------------------------
Let $U$ solve (\[maxcorrmimd\]) and $(\psi, \varphi, b)$ solve its dual (\[dualmimc\]). Recall that, without loss of generality, we can take $\psi$ convex, given by $$\label{psiconj}
\psi(x,y)=\sup_{t\in [0,1]^d} \{ t\cdot y- \varphi(t) -b(t)\cdot x\} .$$ The constraint of the dual is $$\label{contraintedual}
\psi(x,y)+\varphi(t)+b(t)\cdot x\ge t\cdot y, \; \forall (x,y,t)\in
\Omega\times [0,1]^d,$$ and the primal-dual relations give that, almost surely, $$\label{dualrel000}
\psi(X,Y)+\varphi(U)+b(U)\cdot X= U\cdot Y.$$ Since $\psi$, given by (\[psiconj\]), is convex, this yields $$(-b(U), U)\in \partial \psi(X,Y), \mbox{ or, equivalently } (X,Y)\in
\partial \psi^*(-b(U),U).$$
Problems (\[maxcorrmimd\]) and (\[dualmimc\]) have thus enabled us to find:
- $U$ uniformly distributed with $X$ mean-independent from $U$,
- $\varphi$ : $[0,1]^d \to {{\bf}{R}}$, $b$ : $[0,1]^d \to {{\bf}{R}}^N$ and $\psi$ : $\Omega\to {{\bf}{R}}$ convex,
such that $(X,Y)\in \partial \psi ^{\ast }(-b(U),U)$. Specification of vector quantile regression rather asks whether one can write $Y=\nabla \varphi (U)+Db(U)^{T}X:=\nabla \Phi _{X}(U)$ with $u\mapsto \Phi _{x}(u):=\varphi (u)+b(u)\cdot x$ convex in $u$ for fixed $x$. The smoothness of $\varphi $ and $b$ is actually related to this specification issue. Indeed, if $\varphi $ and $b$ were smooth then (by the envelope theorem) we would have $$Y=\nabla \varphi (U)+Db(U)^{T}X=\nabla \Phi _{X}(U).$$But even smoothness of $\varphi $ and $b$ is not enough to guarantee that the conditional quantile is affine in $x$, which would also require $u\mapsto \Phi _{x}(u)$ to be convex. Note also that if $\psi $ were smooth, we would then have $$U=\nabla _{y}\psi (X,Y),\;-b(U)=\nabla _{x}\psi (X,Y)$$so that $b$ and $\psi $ should be related by the vectorial Hamilton-Jacobi equation $$\nabla _{x}\psi (x,y)+b(\nabla _{y}\psi (x,y))=0. \label{hj}$$
In general (without assuming any smoothness), define $$\psi_x(y)=\psi(x,y).$$ We then have, thanks to (\[contraintedual\])-(\[dualrel000\]), $$U\in \partial \psi_X(Y) \mbox{ i.e. } Y \in \partial \psi_X^* (U).$$ The constraint of (\[dualmimc\]) also gives $$\psi_x(y) +\Phi_x(t)\ge t\cdot y$$ and since the Legendre transform is order-reversing, this implies $$\label{inegdualps}
\psi_x \ge \Phi_x^*$$ hence $$\psi_x^* \le (\Phi_x)^{**} \le \Phi_x$$ (where $\Phi_x^{**}$ denotes the convex envelope of $\Phi_x$). Duality between (\[maxcorrmimd\]) and (\[dualmimc\]) thus gives:
\[qrasfoc\] Let $U$ solve (\[maxcorrmimd\]), $(\psi, \varphi, b)$ solve its dual (\[dualmimc\]) and set $\Phi_x(t):=\varphi(t)+b(t)\cdot x$ for every $(t,x)\in [0,1]^d \times \mathop{\mathrm{spt}}\nolimits(m)$ then $$\label{relaxspec}
\Phi_X(U)= \Phi_X^{**}(U) \mbox{ and } U \in \partial \Phi_X^*(Y)
\mbox{
i.e. } Y \in \partial \Phi_X^{**}(U)$$ almost surely.
From the duality relation (\[dualrel000\]) and (\[inegdualps\]), we have $$U\cdot Y=\psi_X(Y)+\Phi_X(U)\ge \Phi_X^*(Y)+\Phi_X(U)$$ so that $U\cdot Y= \Phi_X^*(Y)+\Phi_X(U)$ and then $$\Phi_X^{**}(U)\ge U\cdot Y-\Phi_X^*(Y)=\Phi_X(U).$$ Hence, $\Phi_X(U)=\Phi_X^{**}(U)$ and $U\cdot Y=\Phi_X^*(Y)+\Phi_X^{**}(U)$ i.e. $U\in \partial \Phi_X^*(Y)$ almost surely, and the latter is equivalent to the requirement that $Y \in \partial \Phi_X^{**}(U)$.
The previous theorem thus gives the following interpretation of the correlation maximization with a mean independence constraint (\[maxcorrmimd\]) and its dual (\[dualmimc\]). These two variational problems in duality lead to the pointwise relations (\[relaxspec\]) which can be seen as best approximations of a specification assumption: $$Y=\nabla \Phi _{X}(U),\;(X,U)\mapsto \Phi _{X}(U)\mbox{ affine in $X$,
convex in $U$}$$with $U$ uniformly distributed and ${{{\bf}E}}(X\vert U)=0$. Indeed in (\[relaxspec\]), $\Phi _{X}$ is replaced by its convex envelope, the uniform random variable $U$ solving (\[maxcorrmimd\]) is shown to lie a.s. in the contact set $\Phi _{X}=\Phi _{X}^{\ast \ast }$ and the gradient of $\Phi_X$ (which may not be well-defined) is replaced by a subgradient of $\Phi_X^{**}$.
The univariate case {#univar}
===================
We now study in detail the case when the dependent variable $X$ is scalar, i.e. $d=1$. As before, let $(\Omega ,{\mathcal{F}},{\bf}{P})$ be some nonatomic probability space and $Y$ be some *univariate* random variable defined on this space. Denoting by $F_{Y}$ the distribution function of $Y$: $$F_{Y}(\alpha ):={\bf}{P}(Y\leq \alpha ),\;\forall \alpha \in {{\bf}{R}}$$the *quantile* function of $Y$, $Q_{Y}=F_{Y}^{-1}$ is the generalized inverse of $F_{Y}$ given by the formula: $$Q_{Y}(t):=\inf \{\alpha \in {{\bf}{R}}\;:\;F_{Y}(\alpha )>t\}\mbox{ for all
}t\in (0,1). \label{defqy}$$Let us now recall two well-known facts about quantiles:
- $\alpha=Q_Y(t)$ is a solution of the convex minimization problem $$\label{minquant}
\min_{\alpha} \{{{\bf}{E}}((Y-\alpha)_+)+ \alpha(1-t)\}$$
- there exists a uniformly distributed random variable $U$ such that $Y=Q_Y(U)$. Moreover, among uniformly distributed random variables, $U$ is maximally correlated to $Y$ in the sense that it solves $$\label{oteasy}
\max \{ {{\bf}{E}}(VY), \; \mathop{\mathrm{Law}}\nolimits(V)=\mu\}$$ where $\mu:=\mathop{\mathrm{uniform}}\nolimits([0,1])$ is the uniform measure on $[0,1]$.
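Both facts are easy to check numerically. The following sketch (sample size, grid and the lognormal distribution are illustrative assumptions) compares a grid minimizer of (\[minquant\]) with the empirical quantile, and builds the maximally correlated uniform $U$ from the ranks of $Y$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 500, 0.75
Y = rng.lognormal(size=n)

# fact 1: alpha = Q_Y(t) minimizes E((Y - alpha)_+) + alpha (1 - t)
alphas = np.linspace(Y.min(), Y.max(), 2001)
obj = np.mean(np.maximum(Y[None, :] - alphas[:, None], 0.0), axis=1) \
      + alphas * (1.0 - t)
alpha_star = alphas[np.argmin(obj)]

# fact 2: U built from the ranks of Y is uniform, satisfies Y = Q_Y(U), and
# is maximally correlated with Y among uniformly distributed variables
ranks = np.argsort(np.argsort(Y))
U = (ranks + 0.5) / n            # discrete uniform on (0, 1)
V = rng.permutation(U)           # another uniform variable, not comonotone
```

The comparison `np.mean(U * Y) >= np.mean(V * Y)` holds deterministically by the rearrangement inequality: pairing the ranks comonotonically with $Y$ maximizes the correlation.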
Of course, when $\mathop{\mathrm{Law}}\nolimits(Y)$ has no atom, i.e. when $F_Y$ is continuous, $U$ is unique and given by $U=F_Y(Y)$. Problem (\[oteasy\]) is the easiest example of optimal transport problem one can think of. The decomposition of a random variable $Y$ as the composition of a monotone nondecreasing function and a uniformly distributed random variable is called a *polar factorization* of $Y$; the existence of such decompositions goes back to Ryff [@ryff] and the extension to the multivariate case (by optimal transport) is due to Brenier [@brenier].
We therefore see that there are basically two different approaches to study or estimate quantiles:
- the *local* or “$t$ by $t$” approach, which consists, for a fixed probability level $t$, in using directly formula (\[defqy\]) or the minimization problem (\[minquant\]) (or some approximation of it); this can be done very efficiently in practice but has the disadvantage of forgetting the fundamental global property of the quantile function: it should be monotone in $t$,
- the global approach (or polar factorization approach), where quantiles of $Y$ are defined as all nondecreasing functions $Q$ for which one can write $Y=Q(U)$ with $U$ uniformly distributed; in this global approach, one rather tries to recover directly the whole monotone function $Q$ (or the uniform variable $U$ that is maximally correlated to $Y$), using the optimization problem (\[oteasy\]).
Let us assume now that, in addition to the random variable $Y$, we are also given a random vector $X\in {{\bf}{R}}^N$ which we may think of as being a list of explanatory variables for $Y$. We are therefore interested in the dependence between $Y$ and $X$ and in particular the conditional quantiles of $Y$ given $X=x$. In the sequel we shall denote by $\nu$ the joint law of $(X,Y)$, $\nu:=\mathop{\mathrm{Law}}\nolimits(X,Y)$ and assume that $\nu$ is compactly supported on ${{\bf}{R}}^{N+1}$ (i.e. $X$ and $Y$ are bounded). We shall also denote by $m$ the first marginal of $\nu$ i.e. $m:={\Pi_{X}}_\# \nu=\mathop{\mathrm{Law}}\nolimits(X)$. We shall denote by $F(x,y)$ the conditional cdf: $$F(x,y):={\bf}{P}(Y\le y \vert X=x)$$ and $Q(x,t)$ the conditional quantile $$Q(x,t):=\inf\{\alpha\in {{\bf}{R}} \; : \; F(x,\alpha)>t\}.$$ For the sake of simplicity we shall also assume that:
- for $m$-a.e. $x$, $t\mapsto Q(x,t)$ is continuous and increasing (so that for $m$-a.e. $x$, identities $Q(x, F(x,y))=y$ and $F(x, Q(x,t))=t$ hold for every $y$ and every $t$),
- the law of $(X,Y)$ does not charge nonvertical hyperplanes i.e. for every $(\alpha, \beta)\in {{\bf}{R}}^{1+N}$, ${\bf}{P}(Y=\alpha+\beta\cdot X)=0$.
Finally we denote by $\nu^x$ the conditional probability of $Y$ given $X=x$ so that $\nu=m\otimes \nu^x$.
A variational characterization of conditional quantiles {#univar1}
-------------------------------------------------------
Let us define the random variable $U:=F(X,Y)$, then by construction: $$\begin{split}
{\bf}{P}(U< t\vert X=x)&={\bf}{P}(F(x,Y)<t \vert X=x)={\bf}{P}(Y<Q(x,t) \vert X=x) \\
&=F(x,Q(x,t))=t.
\end{split}$$ From this elementary observation we deduce that
- $U$ is independent from $X$ (since its conditional cdf does not depend on $x$),
- $U$ is uniformly distributed,
- $Y=Q(X,U)$ where $Q(x,.)$ is increasing.
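This construction can be illustrated on a toy model where the conditional cdf is known in closed form (the model $Y=(1+X^2)E$ with $E$ a standard exponential independent of $X$ is an assumption made purely for this sketch, not an example from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
X = rng.normal(size=n)
E = rng.exponential(size=n)
Y = (1.0 + X**2) * E             # toy model: Y | X = x ~ Exp(rate 1/(1+x^2))

# conditional cdf F(x, y) = 1 - exp(-y / (1 + x^2)) in closed form here
U = 1.0 - np.exp(-Y / (1.0 + X**2))

# conditional quantile Q(x, t) = -(1 + x^2) log(1 - t) inverts F(x, .)
Y_back = -(1.0 + X**2) * np.log(1.0 - U)
```

Here $U=F(X,Y)=1-e^{-E}$ is exactly uniform and independent of $X$, and inverting $F(X,.)$ recovers $Y=Q(X,U)$: the three bullet points above can be checked empirically on the simulated sample.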
This easy remark leads to a sort of conditional polar factorization of $Y$ with an independence condition between $U$ and $X$. We would like to emphasize now that there is a variational principle behind this conditional decomposition. Recall that we have denoted by $\mu$ the uniform measure on $[0,1]$. Let us consider the variant of the optimal transport problem (\[oteasy\]) where one further requires $U$ to be independent from the vector of regressors $X$: $$\label{otindep1d}
\max \{ {{\bf}{E}}(VY), \; \mathop{\mathrm{Law}}\nolimits(V)=\mu, \; V
\perp \! \! \! \perp X \}.$$ which in terms of joint law $\theta=\mathop{\mathrm{Law}}\nolimits(X,Y, U)$ can be written as $$\label{mk1}
\max_{\theta\in I (\nu, \mu)} \int u\cdot y \; \theta(dx, dy, du)$$ where $I(\nu, \mu)$ consists of probability measures $\theta$ on ${{\bf}{R}}^{N+1}\times [0,1]$ such that the $(X,Y)$ marginal of $\theta$ is $\nu$ and the $(X,U)$ marginal of $\theta$ is $m\otimes \mu$. Problem (\[mk1\]) is a linear programming problem and our assumptions easily imply that it possesses solutions; moreover its dual formulation (see [@ccg] for details) reads as the minimization of $$\label{mk1dual}
\inf J(\varphi, \psi)= \int \varphi(x,u)m(dx) \mu(du)+\int \psi(x,y) \nu(dx,
dy)$$ among pairs of potentials $\varphi$, $\psi$ that pointwise satisfy the constraint $$\label{constr1}
\varphi(x,u)+\psi(x,y)\ge uy.$$ Rewriting $J(\varphi, \psi)$ as $$J(\varphi, \psi)= \int \Big( \int \varphi(x,u) \mu(du)+\int \psi(x,y)
\nu^x(dy) \Big) m(dx)$$ and using the fact that the right hand side of the constraint (\[constr1\]) has no dependence in $x$, we observe that (\[mk1dual\]) can actually be solved “$x$ by $x$”. More precisely, for fixed $x$ in the support of $m$, $\varphi(x,.)$ and $\psi(x,.)$ are obtained by solving $$\inf \int f(u) \mu(du)+ \int g(y) \nu^x(dy) \; : \; f(u)+g(y)\ge uy$$ which appears naturally in optimal transport and is well-known to admit a solution which is given by a pair of convex conjugate functions (see [villani]{} [@villani2]). In other words, the infimum in (\[mk1dual\]) is attained by a pair $\varphi$ and $\psi$ such that for $m$-a.e. $x$, $\varphi(x,.)$ and $\psi(x,.)$ are conjugate convex functions: $$\varphi(x,u)=\sup_{y} \{uy-\psi(x,y)\}, \; \psi(x,y):=\sup_{u}
\{uy-\varphi(x,u)\}.$$ Since $\varphi(x,.)$ is convex it is differentiable a.e. and then $\partial_u \varphi(x,u)$ is defined for a.e. $u$, moreover $\partial_u
\varphi(x,.)_\#\mu=\nu^x$; hence $\partial_u \varphi(x,.)$ is a nondecreasing map which pushes $\mu$ forward to $\nu^x$: it thus coincides with the conditional quantile $$\label{quantilopt}
\partial_u \varphi(x,t)=Q(x,t) \mbox{ for $m$-a.e. $x$ and every $t$}.$$ We then have the following variational characterization of conditional quantiles
Let $\varphi$ and $\psi$ solve (\[mk1dual\]). Then for $m$-a.e. $x$, the conditional quantile $Q(x,.)$ is given by: $$Q(x,.) =\partial_u \varphi(x,.)$$ and the conditional cdf $F(x,.)$ is given by: $$F(x,.)=\partial_y \psi(x,.).$$
Let now $\theta$ solve (\[mk1\]), there is a unique $U$ such that $\mathop{\mathrm{Law}}\nolimits(X,Y, U)=\theta$ (so that $U$ is uniformly distributed and independent from $X$) and it is given by $Y=\partial_u
\varphi(X,U)$ almost surely.
The fact that identity (\[quantilopt\]) holds for every $t$ and $m$-a.e. $x$ comes from the continuity of the conditional quantile. The second identity comes from the continuity of the conditional cdf. Now, duality tells us that the maximum in (\[mk1\]) coincides with the infimum in (\[mk1dual\]), so that if $\theta\in I(\nu, \mu)$ is optimal for (\[mk1\]) and $({\widetilde{X}}, {\widetilde{Y}}, {\widetilde{U}})$ has law $\theta$[^5], we have $${{{\bf}E}}({\widetilde{U}}{\widetilde{Y}})={{{\bf}E}}(\varphi({\widetilde{X}},{\widetilde{U}})+\psi({\widetilde{X}},{\widetilde{Y}})).$$ Hence, almost surely $${\widetilde{U}}{\widetilde{Y}}=\varphi({\widetilde{X}},{\widetilde{U}})+\psi({\widetilde{X}},{\widetilde{Y}}),$$ which, since $\varphi(x,.)$ and $\psi(x,.)$ are conjugate and $\varphi(x,.)$ is differentiable, gives $$\label{sousdiff}
{\widetilde{Y}}= \partial_u \varphi({\widetilde{X}}, {\widetilde{U}})=Q({\widetilde{X}}, {\widetilde{U}}).$$ Since $F(x,.)$ is the inverse of the conditional quantile, we can invert the previous relation as $$\label{sousdiff2}
{\widetilde{U}}= \partial_y \psi({\widetilde{X}}, {\widetilde{Y}})=F({\widetilde{X}}, {\widetilde{Y}}).$$ We then define $$U:= \partial_y \psi(X, Y)=F(X,Y);$$ then, by construction, ${\mathop{\mathrm{Law}}\nolimits}(X,Y, U)=\theta$ and $Y=\partial_u \varphi(X,U)=Q(X,U)$ almost surely. If ${\mathop{\mathrm{Law}}\nolimits}(X,Y, U)=\theta$, then as observed above, necessarily $U=F(X,Y)$, which proves the uniqueness claim.
To sum up, thanks to the two problems (\[mk1\]) and (\[mk1dual\]), we have been able to find a *conditional polar factorization* of $Y$ as $$\label{polarfact1}
Y=Q(X,U), \; \mbox{ $Q$ nondecreasing in $U$, $U$ uniform, $U {\perp \! \! \! \perp}X$}.$$ One obtains $U$ thanks to the correlation maximization problem with an independence constraint (\[otindep1d\]) and one obtains the primitive of $Q(X,.)$ by the dual problem (\[mk1dual\]).
In this decomposition, it is very demanding to ask that $U$ be independent from the regressors $X$; in return, the function $Q(X,.)$ is only required to be monotone nondecreasing. In practice, the econometrician rather looks for a specific form of $Q$ (linear in $X$ for instance), which by duality will amount to relaxing the independence constraint. We shall develop this idea in detail in the next paragraphs and relate it to classical quantile regression.
Quantile regression: from specification to quasi-specification {#univar2}
--------------------------------------------------------------
From now on, we normalize $X$ to be centered i.e. assume (and this is without loss of generality) that $${{\bf}{E}}(X)=0.$$ We also assume that $m:=\mathop{\mathrm{Law}}\nolimits(X)$ is nondegenerate in the sense that its support contains some ball centered at ${{\bf}{E}}(X)=0$.
Since the seminal work of Koenker and Bassett [@kb], it has been widely accepted that a convenient way to estimate conditional quantiles is to stipulate an affine form with respect to $x$ for the conditional quantile. Since a quantile function should be monotone in its second argument, this leads to the following definition
Quantile regression is under correct specification if there exist $(\alpha, \beta)\in C([0,1],
{{\bf}{R}})\times C([0,1], {{\bf}{R}}^N)$ such that for $m$-a.e. $x$ $$\label{monqr}
t\mapsto \alpha(t)+\beta(t)\cdot x \mbox{ is increasing on $[0,1]$}$$ and $$\label{linearcq}
Q(x,t)=\alpha(t)+ x\cdot \beta(t),$$ for $m$-a.e. $x$ and every $t\in [0,1]$. If (\[monqr\])-(\[linearcq\]) hold, quantile regression is under correct specification with regression coefficients $(\alpha, \beta)$.
Specification of quantile regression can be characterized by
Let $(\alpha, \beta)$ be continuous and satisfy (\[monqr\]). Quantile regression is under correct specification with regression coefficients $(\alpha, \beta)$ if and only if there exists $U$ such that $$\label{polarfqrind}
Y=\alpha(U)+X\cdot \beta(U) \mbox{ a.s.} , \; \mathop{\mathrm{Law}}\nolimits(U)=\mu, \; U \perp \! \! \! \perp X.$$
The fact that specification of quantile regression implies decomposition (\[polarfqrind\]) has already been explained in paragraph \[univar1\]. Let us assume (\[polarfqrind\]) and compute $$\begin{split}
F(x, \alpha(t)+\beta(t)\cdot x)&={{\bf}{P}}(Y\leq \alpha(t)+\beta(t) x\vert X=x)\\
&= {{\bf}{P}}(\alpha(U)+x\cdot \beta(U) \leq \alpha(t)+\beta(t) x\vert X=x)\\
&={{\bf}{P}}(U\leq t \vert X=x)={{\bf}{P}}(U\le t)=t
\end{split}$$ so that $Q(x,t)=\alpha(t)+\beta(t)\cdot x$.
Koenker and Bassett showed that, for a fixed probability level $t$, the regression coefficients $(\alpha, \beta)$ can be estimated by quantile regression i.e. the minimization problem $$\label{kb0}
\inf_{(\alpha, \beta) \in {{\bf}{R}}^{1+N}} {{\bf}{E}}(\rho_t(Y-\alpha -
\beta\cdot X))$$ where the penalty $\rho_t$ is given by $\rho_t(z) := tz_+ +(1-t)z_-$ with $z_-$ and $z_+$ denoting the negative and positive parts of $z$. For further use, note that (\[kb0\]) can conveniently be rewritten as $$\label{kb1}
\inf_{(\alpha, \beta) \in {{\bf}{R}}^{1+N}} \{ {{\bf}{E}}((Y-\alpha-\beta \cdot X)_+)+(1-t) \alpha\}.$$ As already noticed by Koenker and Bassett, this convex program admits as dual formulation $$\label{dt}
\sup \{{{\bf}{E}}(U_t Y) \; : \; U_t \in [0,1], \; {{\bf}{E}}(U_t)=(1-t), \; {{\bf}{E}}(U_t X)=0 \}.$$ An optimal $(\alpha, \beta)$ for (\[kb1\]) and an optimal $U_t$ in (\[dt\]) are related by the complementary slackness condition: $$Y>\alpha +\beta \cdot X \Rightarrow U_t=1, \mbox{ and } \; Y<\alpha + \beta
\cdot X \Rightarrow U_t=0.$$ Note that $\alpha$ appears naturally as a Lagrange multiplier associated to the constraint ${{\bf}{E}}(U_t)=(1-t)$ and $\beta$ as a Lagrange multiplier associated to ${{\bf}{E}}(U_t X)=0$. Since $\nu=\mathop{\mathrm{Law}}\nolimits(X,Y)$ gives zero mass to nonvertical hyperplanes, we may simply write $$\label{frombetatoU}
U_t=\mathbf{1}_{\{Y>\alpha +\beta \cdot X\}}$$ and thus the constraints ${{\bf}{E}}(U_t)=(1-t)$, ${{\bf}{E}}(XU_t)=0$ read $$\label{normaleq}
{{\bf}{E}}( \mathbf{1}_{\{Y> \alpha+ \beta \cdot X \}})={\bf}{P}(Y>
\alpha+ \beta \cdot X) = (1-t),\; {{\bf}{E}}(X \mathbf{1}_{\{Y> \alpha+
\beta \cdot X \}} ) =0$$ which simply are the first-order conditions for (\[kb1\]).
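As a numerical illustration (not part of the formal argument), the sketch below estimates the regression coefficients at a fixed level $t$ by brute-force minimisation of the empirical pinball loss, using the standard check function $\rho_t(z)=t\max(z,0)+(1-t)\max(-z,0)$ whose minimiser is the conditional $t$-quantile. The data-generating process and all numerical values are chosen purely for illustration: $Y=U+(1+U)X$ with $U$ uniform and independent of $X$, so that the population coefficients are $(\alpha(t),\beta(t))=(t,1+t)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from a correctly specified linear quantile model:
# Y = alpha(U) + beta(U) X with alpha(u) = u, beta(u) = 1 + u,
# U ~ Uniform[0,1] independent of X, and E[X] = 0.
n = 20_000
X = rng.uniform(-0.9, 0.9, n)
U = rng.uniform(0.0, 1.0, n)
Y = U + (1.0 + U) * X          # increasing in U for every x > -1

def pinball(z, t):
    """Check function rho_t(z) = t max(z, 0) + (1 - t) max(-z, 0)."""
    return t * np.maximum(z, 0.0) + (1.0 - t) * np.maximum(-z, 0.0)

def kb_estimate(t, grid):
    """Brute-force grid minimisation of the empirical pinball loss."""
    best_val, best_ab = np.inf, (None, None)
    for a in grid:
        for b in grid:
            val = pinball(Y - a - b * X, t).mean()
            if val < best_val:
                best_val, best_ab = val, (a, b)
    return best_ab

t = 0.5
a_hat, b_hat = kb_estimate(t, np.linspace(0.0, 2.0, 41))

# The first-order conditions should hold approximately at the optimum:
share_above = np.mean(Y > a_hat + b_hat * X)           # close to 1 - t
orthogonality = np.mean(X * (Y > a_hat + b_hat * X))   # close to 0
```

The estimates land near the population values $(0.5, 1.5)$ up to the grid spacing and sampling noise, and the two empirical first-order conditions mirror the moment equations derived above.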
Any pair $(\alpha, \beta)$ which solves[^6] the optimality conditions (\[normaleq\]) for the Koenker and Bassett approach will be denoted $$\alpha=\alpha^{QR}(t), \beta=\beta^{QR}(t)$$ and the variable $U_t$ solving (\[dt\]) given by (\[frombetatoU\]) will similarly be denoted $U_t^{QR}$ $$\label{utqr}
U_t^{QR}:=\mathbf{1}_{\{Y>\alpha^{QR}(t) +\beta^{QR}(t) \cdot X\}}.$$
Note that in the previous considerations the probability level $t$ is fixed; this is what we called the “$t$ by $t$” approach. For this approach to be consistent with conditional quantile estimation, if we allow $t$ to vary we should add an additional monotonicity requirement:
Quantile regression is under quasi-specification if there exists for each $t$, a solution $(\alpha^{QR}(t), \beta^{QR}(t))$ of (\[normaleq\]) (equivalently the minimization problem (\[kb0\])) such that $t\in [0,1]\mapsto
(\alpha^{QR}(t), \beta^{QR}(t))$ is continuous and, for $m$-a.e. $x$ $$\label{monqrqs}
t\mapsto \alpha^{QR}(t)+\beta^{QR}(t)\cdot x \mbox{ is increasing on $[0,1]$}.$$
A first consequence of quasi-specification is given by
\[qrqsdec\] If quantile regression is under quasi-specification and if we define $U^{QR}:=\int_0^1 U_t^{QR} dt$ (recall that $U_t^{QR}$ is given by (\[utqr\])) then:
- $U^{QR}$ is uniformly distributed,
- $X$ is mean-independent from $U^{QR}$ i.e. ${{\bf}{E}}(X\vert
U^{QR})={{\bf}{E}}(X)=0$,
- $Y=\alpha^{QR}(U^{QR})+ \beta^{QR}(U^{QR})\cdot X$ a.s.
Moreover $U^{QR}$ solves the correlation maximization problem with a mean-independence constraint:
$$\label{maxcorrmi}
\max \{ {{\bf}{E}}(VY), \; \mathop{\mathrm{Law}}\nolimits(V)=\mu, \; {{\bf}{E}}(X\vert V)=0\}.$$
Obviously $$U_t^{QR}=1\Rightarrow U^{QR} \ge t, \mbox{ and } \; U^{QR}>t \Rightarrow U_t^{QR}=1$$ hence ${{\bf}{P}}(U^{QR}\ge t)\ge {{\bf}{P}}(U_t^{QR}=1)={{\bf}{P}}(Y> \alpha^{QR}(t)+\beta^{QR}(t)\cdot X)=(1-t)$ and ${{\bf}{P}}(U^{QR}> t)\le {{\bf}{P}}(U_t^{QR}=1)=(1-t)$ which proves that $U^{QR}$ is uniformly distributed and $\{U^{QR}>t\}$ coincides with $\{U^{QR}_t=1\}$ up to a set of null probability. We thus have ${{{\bf}E}}(X {{\bf 1}}_{U^{QR}>t})={{{\bf}E}}(X U_t^{QR})=0$, by a standard approximation argument we deduce that ${{{\bf}E}}(Xf(U^{QR}))=0$ for every $f\in C([0,1], {{{\bf}R}})$ which means that $X$ is mean-independent from $U^{QR}$.
As already observed, $U^{QR}>t$ implies that $Y>\alpha^{QR}(t)+\beta^{QR}(t)\cdot X$; in particular $Y\ge \alpha^{QR}(U^{QR}-\delta)+\beta^{QR}(U^{QR}- \delta) \cdot X$ for $\delta>0$. Letting $\delta\to 0^+$ and using the continuity of $(\alpha^{QR}, \beta^{QR})$ we get $Y\ge \alpha^{QR}(U^{QR})+\beta^{QR}(U^{QR}) \cdot X$. The converse inequality is obtained similarly by remarking that $U^{QR}<t$ implies that $Y\le \alpha^{QR}(t)+\beta^{QR}(t)\cdot X$.
Let us now prove that $U^{QR}$ solves [(\[maxcorrmi\])]{}. Take $V$ uniformly distributed, such that $X$ is mean-independent from $V$ and set $V_t:={{\bf 1}}_{\{V>t \}}$, we then have ${{{\bf}E}}(X V_t)=0$, ${{{\bf}E}}(V_t)=(1-t)$ but since $U_t^{QR}$ solves [(\[dt\])]{} we have ${{{\bf}E}}(V_t Y)\le {{{\bf}E}}(U_t^{QR}Y)$. Observing that $V=\int_0^1 V_t dt$ and integrating the previous inequality with respect to $t$ gives ${{{\bf}E}}(VY)\le {{{\bf}E}}(U^{QR}Y)$ so that $U^{QR}$ solves [(\[maxcorrmi\])]{}.
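The conclusions of this proposition are easy to check numerically. The sketch below (simulated data; the coefficients $\alpha(t)=t$, $\beta(t)=1+t$ are chosen purely for illustration and satisfy the monotonicity condition for $|x|<1$) builds $U=\int_0^1 \mathbf{1}_{\{Y>\alpha(t)+\beta(t)X\}}\,dt$ on a $t$-grid and verifies that $U$ is uniform, that $X$ is mean-independent from $U$ (here $X$ is in fact independent of the generating $U$, which is stronger), and that $Y=\alpha(U)+\beta(U)X$ up to the grid error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model with known coefficients alpha(t) = t, beta(t) = 1 + t.
n = 50_000
X = rng.uniform(-0.5, 0.5, n)
U_true = rng.uniform(0.0, 1.0, n)
Y = U_true + (1.0 + U_true) * X

# U = int_0^1 1{Y > alpha(t) + beta(t) X} dt, approximated on a t-grid.
ts = np.linspace(0.0, 1.0, 1001)
U = np.zeros(n)
for t in ts:
    U += (Y > t + (1.0 + t) * X)
U /= len(ts)

# (i) U is (approximately) uniformly distributed:
ks = np.max(np.abs(np.sort(U) - (np.arange(n) + 0.5) / n))
# (ii) X is mean-independent from U (here by outright independence):
orth = np.mean(X * (U > 0.5))
# (iii) Y = alpha(U) + beta(U) X, up to the t-grid discretisation:
resid = np.max(np.abs(Y - (U + (1.0 + U) * X)))
```

All three quantities (`ks`, `orth`, `resid`) are small, matching the three bullet points of the proposition up to discretisation and sampling error.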
Let us continue with a uniqueness argument for the mean-independent decomposition given in proposition \[qrqsdec\]:
\[uniquedec\] Let us assume that $$Y=\alpha(U)+\beta(U)\cdot X=\overline{\alpha} (\overline{U})+ \overline{\beta}(\overline{U})\cdot X$$ with:
- both $U$ and $\overline{U}$ uniformly distributed,
- $X$ is mean-independent from $U$ and $\overline{U}$: ${{\bf}{E}}(X\vert U)={{\bf}{E}}(X\vert \overline{U})=0$,
- $\alpha, \beta, \overline{\alpha}, \overline{\beta}$ are continuous on $[0,1]$,
- $(\alpha, \beta)$ and $(\overline{\alpha}, \overline{\beta})$ satisfy the monotonicity condition (\[monqr\]),
then $$\alpha=\overline{\alpha}, \; \beta=\overline{\beta}, \; U=\overline{U}.$$
Let us define for every $t\in [0,1]$ $$\varphi(t):=\int_0^t \alpha(s)ds, \; b(t):=\int_0^t \beta(s)ds.$$ Let us also define for $(x,y)$ in ${{{\bf}R}}^{N+1}$: $$\psi(x,y):=\max_{t\in [0,1]} \{ty-\varphi(t)-b(t)\cdot x\}$$ thanks to the monotonicity condition [(\[monqr\])]{}, the maximization program above is strictly concave in $t$ for every $y$ and $m$-a.e. $x$. We then remark that $Y=\alpha(U)+\beta(U)\cdot X=\varphi'(U)+b'(U)\cdot X$ is exactly the first-order condition for the above maximization problem when $(x,y)=(X,Y)$. In other words, we have $$\label{ineqq}
\psi(x,y)+b(t)\cdot x + \varphi(t)\ge ty, \; \forall (t,x,y)\in [0,1]\times {{{\bf}R}}^N\times {{{\bf}R}}$$ with equality for $(x,y,t)=(X,Y,U)$ i.e. $$\label{eqas}
\psi(X,Y)+b(U)\cdot X + \varphi(U)=UY, \; \mbox{ a.s. }$$ Using the fact that ${\mathop{\mathrm{Law}}\nolimits}(U)={\mathop{\mathrm{Law}}\nolimits}({\overline{U}})$ and the fact that mean-independence gives ${{{\bf}E}}(b(U)\cdot X)={{{\bf}E}}(b({\overline{U}})\cdot X)=0$, we have $${{{\bf}E}}(UY)={{{\bf}E}}( \psi(X,Y)+b(U)\cdot X + \varphi(U))= {{{\bf}E}}( \psi(X,Y)+b({\overline{U}})\cdot X + \varphi({\overline{U}})) \ge {{{\bf}E}}({\overline{U}}Y)$$ but reversing the roles of $U$ and ${\overline{U}}$, we also have ${{{\bf}E}}(UY)\le {{{\bf}E}}({\overline{U}}Y)$ and then $${{{\bf}E}}({\overline{U}}Y)= {{{\bf}E}}( \psi(X,Y)+b({\overline{U}})\cdot X + \varphi({\overline{U}}))$$ so that, thanks to inequality [(\[ineqq\])]{}, $$\psi(X,Y)+b({\overline{U}})\cdot X + \varphi({\overline{U}})={\overline{U}}Y, \; \mbox{ a.s. }$$ which means that ${\overline{U}}$ solves $\max_{t\in [0,1]} \{tY-\varphi(t)-b(t)\cdot X\}$, which, by strict concavity, admits $U$ as unique solution. This proves that $U={\overline{U}}$ and thus $$\alpha(U)-{\overline{\alpha}}(U)=({\overline{\beta}}(U)-\beta(U))\cdot X.$$ Taking the conditional expectation of both sides with respect to $U$, we obtain $\alpha={\overline{\alpha}}$ and thus $\beta(U)\cdot X={\overline{\beta}}(U)\cdot X$ a.s. We then compute $$\begin{split}
F(x, \alpha(t)+\beta(t)\cdot x)&= {{\bf}{P}}(\alpha(U)+\beta(U)\cdot X \le \alpha(t)+\beta(t)\cdot x \vert X=x) \\
&={{\bf}{P}}( \alpha(U)+ \beta(U)\cdot x \le \alpha(t)+\beta(t)\cdot x \vert X=x)\\
&={{\bf}{P}}(U\le t \vert X=x)
\end{split}$$ and similarly $F(x, \alpha(t)+{\overline{\beta}}(t)\cdot x)={{\bf}{P}}(U\le t \vert X=x)=F(x, \alpha(t)+\beta(t)\cdot x)$. Since $F(x,.)$ is increasing for $m$-a.e. $x$, we deduce that $\beta(t)\cdot x={\overline{\beta}}(t)\cdot x$ for $m$-a.e. $x$ and every $t\in[0,1]$. Finally, the previous considerations and the nondegeneracy of $m$ enable us to conclude that $\beta={\overline{\beta}}$.
If quantile regression is under quasi-specification, the regression coefficients $(\alpha^{QR}, \beta^{QR})$ are uniquely defined and if $Y$ can be written as $$Y=\alpha(U)+\beta(U)\cdot X$$ for $U$ uniformly distributed, $X$ being mean-independent from $U$, and $(\alpha, \beta)$ continuous and such that the monotonicity condition (\[monqr\]) holds, then necessarily $$\alpha=\alpha^{QR}, \; \beta=\beta^{QR}.$$
To sum up, we have shown that quasi-specification is equivalent to the validity of the factor linear model: $$Y=\alpha(U)+\beta(U)\cdot X$$ for $(\alpha, \beta)$ continuous and satisfying the monotonicity condition (\[monqr\]) and $U$, uniformly distributed and such that $X$ is mean-independent from $U$. This has to be compared with the decomposition of paragraph \[univar1\] where $U$ is required to be independent from $X$ but the dependence of $Y$ with respect to $U$, given $X$, is given by any nondecreasing function of $U$.
Global approaches and duality {#univar3}
-----------------------------
Now we wish to address quantile regression in the case where neither specification nor quasi-specification can be taken for granted. In such a general situation, keeping in mind the remarks from the previous paragraphs, we can think of two natural approaches.
The first one consists in studying directly the correlation maximization with a mean-independence constraint (\[maxcorrmi\]). The second one consists in getting back to the Koenker and Bassett $t$ by $t$ problem (\[dt\]) but adding as an additional global consistency constraint that $U_t$ should be nonincreasing with respect to $t$:
$$\label{monconstr}
\sup\{{{\bf}{E}}(\int_0^1 U_t Ydt ) \; : \: U_t \mbox{ nonincr.}, U_t\in
[0,1],\; {{\bf}{E}}(U_t)=(1-t), \; {{\bf}{E}}(U_t X)=0\}$$
Our aim is to compare these two approaches (and in particular to show that the maximization problems (\[maxcorrmi\]) and (\[monconstr\]) have the same value) as well as their dual formulations. Before going further, let us remark that (\[maxcorrmi\]) can directly be considered in the multivariate case whereas the monotonicity constrained problem (\[monconstr\]) makes sense only in the univariate case.
As proven in [@ccg], (\[maxcorrmi\]) is dual to $$\label{dualmi}
\inf_{(\psi, \varphi, b)} \{{{\bf}{E}}(\psi(X,Y))+{{\bf}{E}}(\varphi(U))
\; : \; \psi(x,y)+ \varphi(u)\ge uy -b(u)\cdot x\}$$ which can be reformulated as: $$\label{dualmiref}
\inf_{(\varphi, b)} \int \max_{t\in [0,1]} ( ty- \varphi(t) -b(t)\cdot x)
\nu(dx, dy) +\int_0^1 \varphi(t) dt$$ in the sense that $$\label{nodualgap}
\sup (\ref{maxcorrmi})=\inf(\ref{dualmi})=\inf (\ref{dualmiref}).$$
The existence of a solution to (\[dualmi\]) is not straightforward and is established under appropriate assumptions in the appendix directly in the multivariate case. The following result shows that there is a $t$-dependent reformulation of (\[maxcorrmi\]):
\[treform\] The value of (\[maxcorrmi\]) coincides with $$\label{monconstr01}
\sup\{{{\bf}{E}}(\int_0^1 U_t Ydt ) \; : \: U_t \mbox{ nonincr.}, U_t\in
\{0,1\},\; {{\bf}{E}}(U_t)=(1-t), \; {{\bf}{E}}(U_t X)=0\}$$
Let $U$ be admissible for [(\[maxcorrmi\])]{} and define $U_t:={{\bf 1}}_{\{U>t\}}$ then $U=\int_0^1 U_t dt$ and obviously $(U_t)_t$ is admissible for [(\[monconstr01\])]{}, we thus have $\sup {(\ref{maxcorrmi})} \le \sup {(\ref{monconstr01})}$. Take now $(V_t)_t$ admissible for [(\[monconstr01\])]{} and let $V:=\int_0^1 V_t dt$, we then have $$V>t \Rightarrow V_t=1\Rightarrow V\ge t$$ since ${{{\bf}E}}(V_t)=(1-t)$ this implies that $V$ is uniformly distributed and $V_t={{\bf 1}}_{\{V>t\}}$ a.s. so that ${{{\bf}E}}(X {{\bf 1}}_{\{V>t\}})=0$ which implies that $X$ is mean-independent from $V$ and thus ${{{\bf}E}}(\int_0^1 V_t Y dt)\le \sup {(\ref{maxcorrmi})}$. We conclude that $\sup {(\ref{maxcorrmi})} = \sup {(\ref{monconstr01})}$.
Let us now define $$\mathcal{C}:=\{u \; : \; [0,1]\mapsto [0,1], \mbox { nonincreasing}\}$$
Let $(U_t)_t$ be admissible for (\[monconstr\]) and set $$v_t(x,y):={{\bf}{E}}(U_t \vert X=x, Y=y), \; V_t:= v_t(X,Y)$$ it is obvious that $(V_t)_t$ is admissible for (\[monconstr\]) and by construction ${{\bf}{E}}(V_t Y)={{\bf}{E}}(U_t Y)$. Moreover the deterministic function $(t,x,y)\mapsto v_t(x,y)$ satisfies the following conditions: $$\label{CCt}
\mbox{for fixed $(x,y)$, } t\mapsto v_t(x,y) \mbox{ belongs to ${{\cal C}}$,}$$ and for a.e. $t\in [0,1]$, $$\label{moments}
\int v_t(x,y) \nu(dx, dy)=(1-t), \; \int v_t(x,y) x\nu(dx, dy)=0.$$ Conversely, if $(t,x,y)\mapsto v_t(x,y)$ satisfies (\[CCt\])-(\[moments\]), $V_t:=v_t(X,Y)$ is admissible for (\[monconstr\]) and ${{\bf}{E}}(V_t
Y)=\int v_t(x,y) y \nu(dx, dy)$. All this proves that $\sup(\ref{monconstr})$ coincides with $$\label{supvt}
\sup_{(t,x,y)\mapsto v_t(x,y)} \int v_t(x,y) y \nu(dx, dy)dt
\mbox{ subject
to: } (\ref{CCt})-(\ref{moments})$$
\[equivkb\] $$\sup (\ref{maxcorrmi})=\sup (\ref{monconstr}).$$
We know from lemma \[treform\] and the remarks above that $$\sup {(\ref{maxcorrmi})}=\sup {(\ref{monconstr01})} \le \sup {(\ref{monconstr})}=\sup {(\ref{supvt})}.$$ But now we may get rid of constraints [(\[moments\])]{} by rewriting [(\[supvt\])]{} in sup-inf form as $$\begin{split}
\sup_{v_t \text{ satisfies } (\ref{CCt})} \inf_{(\alpha, \beta)} \int v_t(x,y)(y-\alpha(t)-\beta(t) \cdot x) \nu(dx,dy)dt +\int_0^1 (1-t)\alpha(t) dt.
\end{split}$$ Recall that one always has $\sup \inf \le \inf \sup$ so that $\sup{(\ref{supvt})}$ is less than $$\begin{split}
\inf_{(\alpha, \beta)} \sup_{v_t \text{ satisfies } (\ref{CCt})} \int v_t(x,y)(y-\alpha(t)-\beta(t) \cdot x) \nu(dx,dy)dt +\int_0^1 (1-t)\alpha(t) dt\\
\le \inf_{(\alpha, \beta)} \int \Big (\sup_{v\in {{\cal C}}} \int_0^1 v(t)(y-\alpha(t)-\beta(t)x)dt \Big) \nu(dx,dy)+ \int_0^1 (1-t)\alpha(t) dt.
\end{split}$$ It follows from Lemma \[suppC\] below that, for $q\in L^1(0,1)$ defining $Q(t):=\int_0^t q(s) ds$, one has $$\sup_{v\in {{\cal C}}} \int_0^1 v(t) q(t)dt=\max_{t\in [0,1]} Q(t).$$ So setting $\varphi(t):=\int_0^t \alpha(s) ds$, $b(t):=\int_0^t \beta(s)ds$ and remarking that integrating by parts immediately gives $$\int_0^1 (1-t)\alpha(t) dt=\int_0^1 \varphi(t) dt$$ we thus have $$\begin{split}
\sup_{v\in {{\cal C}}} \int_0^1 v(t)(y-\alpha(t)-\beta(t)x)dt + \int_0^1 (1-t)\alpha(t) dt\\
= \max_{t\in[0,1]} \{t y-\varphi(t)-b(t) x\} +\int_0^1 \varphi(t) dt.
\end{split}$$ This yields[^7] $$\sup{(\ref{supvt})} \le \inf_{(\varphi, b)} \int \max_{t\in [0,1]} ( ty- \varphi(t) -b(t)\cdot x) \nu(dx, dy) +\int_0^1 \varphi(t) dt =\inf {(\ref{dualmiref})}$$ but we know from [(\[nodualgap\])]{} that $\inf {(\ref{dualmiref})} =\sup {(\ref{maxcorrmi})}$ which ends the proof.
In the previous proof, we have used the following elementary result (proven in the appendix).
\[suppC\] Let $q\in L^1(0,1)$ and define $Q(t):=\int_0^t q(s) ds$ for every $t\in [0,1]$, one has $$\sup_{v\in \mathcal{C}} \int_0^1 v(t) q(t)dt=\max_{t\in [0,1]} Q(t).$$
Appendix {#appendix .unnumbered}
========
Proof of Lemma \[suppC\] {#proof-of-lemma-suppc .unnumbered}
------------------------
Since $\mathbf{1}_{[0,t]} \in \mathcal{C}$, one obviously first has $$\sup_{v\in \mathcal{C}} \int_0^1 v(s) q(s)ds \ge \max_{t\in [0,1]} \int_0^t
q(s)ds=\max_{t\in [0,1]} Q(t).$$ Let us now prove the converse inequality, taking an arbitrary $v\in \mathcal{C}$. We first observe that $Q$ is absolutely continuous and that $v$ is of bounded variation (its derivative in the sense of distributions being a bounded nonpositive measure which we denote by $\eta$), integrating by parts and using the definition of $\mathcal{C}$ then give: $$\begin{split}
\int_0^1 v(s) q(s)ds &=-\int_0^1 Q \eta + v(1^-) Q(1) \\
& \le (\max_{[0,1]} Q)\times (-\eta([0,1])) + v(1^-) Q(1) \\
&= (\max_{[0,1]} Q) (v(0^+)-v(1^-)) +v(1^-) Q(1) \\
&= (\max_{[0,1]} Q) v(0^+) + (Q(1)- \max_{[0,1]} Q) v(1^-) \\
& \le \max_{[0,1]} Q.
\end{split}$$
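Lemma \[suppC\] lends itself to a simple numerical check (the particular $q$ below is arbitrary and chosen only for illustration). On a discretisation of $[0,1]$, the indicator $v=\mathbf{1}_{[0,t^*]}$ with $t^*\in\arg\max Q$ attains $\max_{[0,1]}Q$, and no randomly drawn nonincreasing $v$ with values in $[0,1]$ does better; the discrete analogue of the Abel summation in the proof guarantees the bound exactly on the grid.

```python
import numpy as np

rng = np.random.default_rng(2)

m = 1000
s = (np.arange(m) + 0.5) / m          # midpoint grid on [0, 1]
q = np.sin(7.0 * s) - 0.3             # an arbitrary q in L^1(0, 1)
Q = np.cumsum(q) / m                  # Q(t) = int_0^t q(s) ds on the grid
max_Q = max(Q.max(), 0.0)             # max over [0,1] includes Q(0) = 0

# The indicator v = 1_{[0, t*]} attains the maximum...
t_star = int(np.argmax(Q))
v_star = (np.arange(m) <= t_star).astype(float)
attained = float(np.sum(v_star * q) / m)

# ...and no other nonincreasing v with values in [0, 1] beats it:
gaps = []
for _ in range(200):
    v = np.sort(rng.uniform(0.0, 1.0, m))[::-1]   # random nonincreasing v
    gaps.append(float(np.sum(v * q) / m) - max_Q)
worst_gap = max(gaps)
```

Here `attained` equals `max_Q` up to floating-point error, and `worst_gap` is nonpositive for every random competitor, in line with the lemma.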
Proof of theorem \[existdual\] {#proof-of-theorem-existdual .unnumbered}
------------------------------
Let us denote by $(0, \overline{y})$ the barycenter of $\nu$: $$\int_{\Omega} x \; \nu(dx, dy)=0, \; \int_{\Omega} y \; \nu(dx, dy)=:\overline{y}$$ and observe that $(0, \overline{y})\in \Omega$ (otherwise, by convexity, $\nu $ would be supported on $\partial \Omega$ which would contradict our assumption that $\nu\in L^{\infty}(\Omega)$).
We wish to prove the existence of optimal potentials for the problem $$\label{duallike}
\inf_{\psi, \varphi, b } \int_{\Omega} \psi(x,y) d \nu(x,y) + \int_{[0,1]^d}
\varphi(u) d\mu(u)$$ subject to the pointwise constraint that $$\label{const}
\psi(x,y)+\varphi(u)\ge u\cdot y -b(u)\cdot x, \; (x,y)\in \overline{\Omega}, \; u\in [0,1]^d.$$ Of course, we can take $\psi$ to be given by $$\psi(x,y):=\sup_{u\in [0,1]^d} \{ u\cdot y -b(u)\cdot x-\varphi(u)\}$$ so that $\psi$ can be chosen convex and $1$-Lipschitz with respect to $y$. In particular, we have $$\label{lip}
\psi(x,\overline{y})-\vert y-\overline{y} \vert \le \psi(x,y) \leq \psi(x,\overline{y})+\vert y-\overline{y} \vert.$$ The problem being invariant by the transform $(\psi, \varphi)\to (\psi+C,
\varphi-C)$ ($C$ being an arbitrary constant), we can add as a normalization the condition that $$\label{normaliz}
\psi(0, \overline{y})=0.$$ This normalization and the constraint (\[const\]) imply that $$\label{fipos}
\varphi(u)\ge u\cdot \overline{y} -\psi(0, \overline{y}) \ge -\vert \overline{y} \vert.$$ We note that there is one extra invariance of the problem: if one adds an affine term $q\cdot x$ to $\psi$ this does not change the cost and neither does it affect the constraint, provided one modifies $b$ accordingly by subtracting from it the constant vector $q$. Take then $q$ in the subdifferential of $x\mapsto \psi(x,\overline{y})$ at $0$ and change $\psi$ into $\psi-q\cdot x$; we obtain a new potential with the same properties as above and with the additional property that $\psi(.,\overline{y})$ is minimal at $x=0$, and thus $\psi(x,\overline{y})\ge 0$. Together with (\[lip\]) this gives the lower bound $$\label{lb}
\psi(x,y)\ge -\vert y-\overline{y} \vert \ge -C$$ where the bound comes from the boundedness of $\Omega$ (from now on, $C$ will denote a generic constant, possibly changing from one line to another).
Now take a minimizing sequence $(\psi_n, \varphi_n, b_n)\in C(\overline{\Omega}, {{\bf}{R}})\times C([0,1]^d, {{\bf}{R}})\times C([0,1]^d, {{\bf}{R}}^N)$ where for each $n$, $\psi_n$ has been chosen with the same properties as above. Since $\varphi_n$ and $\psi_n$ are bounded from below ($\varphi_n \ge -\vert \overline{y}\vert$ and $\psi_n \ge -C$) and since the sequence is minimizing, we deduce immediately that $\psi_n$ and $\varphi_n$ are bounded sequences in $L^1$. Let $z=(x,y)\in \Omega$ and $r>0$ be such that the distance between $z$ and the complement of $\Omega$ is at least $2r$ (so that $B_r(z)$ is in the set of points that are at least at distance $r$ from $\partial \Omega$); by assumption there is an $\alpha_r>0$ such that $\nu \ge \alpha_r$ on $B_r(z)$. We then deduce from the convexity of $\psi_n$: $$-C \le \psi_n(z)\le \frac{1}{\vert B_r(z)\vert }\int_{B_r(z)} \psi_n \leq
\frac{1}{\vert B_r(z)\vert \alpha_r } \int_{B_r(z)} \vert \psi_n\vert \nu
\le \frac{1}{\vert B_r(z)\vert \alpha_r } \Vert \psi_n \Vert_{L^1(\nu)}$$ so that $\psi_n$ is actually bounded in $L^{\infty}_{{\mathrm{loc}}}$ and by convexity, we also have $$\Vert \nabla \psi_n \Vert_{L^{\infty}(B_r(z))} \le \frac{2}{R-r} \Vert
\psi_n \Vert_{L^{\infty}(B_R(z))}$$ whenever $R>r$ and $B_R(z)\subset \Omega$ (see for instance Lemma 5.1 in [@cg] for a proof of such bounds). We can thus conclude that $\psi_n$ is also locally uniformly Lipschitz. Therefore, thanks to Ascoli’s theorem, we can assume, taking a subsequence if necessary, that $\psi_n$ converges locally uniformly to some potential $\psi$.
Let us now prove that $b_n$ is bounded in $L^1$; for this, take $r>0$ such that $B_{2r}(0, \overline{y})$ is included in $\Omega$. For every $x\in
B_r(0)$, any $t\in[0,1]^d $ and any $n$ we then have $$\begin{split}
-b_n(t) \cdot x \le \varphi_n(t)-t \cdot \overline{y} + \Vert \psi_n
\Vert_{L^{\infty} (B_r(0, \overline{y}))} \le C+\varphi_n(t)
\end{split}$$ maximizing in $x \in B_r(0)$ immediately gives $$\vert b_n(t)\vert r \leq C +\varphi_n(t),$$ from which we deduce that $b_n$ is bounded in $L^1$ since $\varphi_n$ is.
From Komlos’ theorem (see [@komlos]), we may find a subsequence such that the Cesàro means $$\frac{1}{n} \sum_{k=1}^n \varphi_k, \; \frac{1}{n} \sum_{k=1}^n b_k$$ converge a.e. respectively to some $\varphi$ and $b$. Clearly $\psi$, $\varphi$ and $b$ satisfy the linear constraint (\[const\]), and since the sequence of Cesàro means $(\psi^{\prime}_n, \varphi^{\prime}_n, b^{\prime}_n):= n^{-1}\sum_{k=1}^n (\psi_k, \varphi_k, b_k)$ is also minimizing, we deduce from Fatou’s lemma $$\begin{split}
&\int_{\Omega} \psi(x,y) d \nu(x,y) + \int_{[0,1]^d} \varphi(u) d\mu(u) \\
& \leq \liminf_n \int_{\Omega} \psi^{\prime }_n(x,y) d \nu(x,y) +
\int_{[0,1]^d} \varphi^{\prime }_n(u) d\mu(u)=\inf(\ref{duallike})
\end{split}$$ which ends the existence proof.
[99]{} A. Belloni, R.L. Winkler, On Multivariate Quantiles Under Partial Orders, The Annals of Statistics, **39** (2), 1125-1179 (2011).
Y. Brenier, Polar factorization and monotone rearrangement of vector-valued functions, Comm. Pure Appl. Math., **44** (4), 375–417 (1991).
G. Carlier, A. Galichon, Exponential convergence for a convexifying equation, ESAIM, Control, Optimisation and Calculus of Variations, **18** (3), 611-620 (2012).
G. Carlier, V. Chernozhukov, A. Galichon, Vector quantile regression: an optimal transport approach, The Annals of Statistics, **44** (3), 1165-1192 (2016)
I. Ekeland, A. Galichon, M. Henry, Comonotonic measures of multivariate risks, Math. Finance, **22** (1), 109-132 (2012).
I. Ekeland, R. Temam, *Convex Analysis and Variational Problems*, Classics in Applied Mathematics, Society for Industrial and Applied Mathematics, Philadelphia (1999).
A. Galichon, *Optimal Transport Methods in Economics*, Princeton University Press (2016).
A. Galichon, M. Henry, Dual theory of choice with multivariate risks, J. Econ. Theory, **147** (4), 1501–1516 (2012).
M. Hallin, D. Paindaveine, M. Siman, Multivariate quantiles and multiple-output regression quantiles: From $L^1$ optimization to halfspace depth, The Annals of Statistics, **38** (2), 635-669 (2010).
R. Koenker, G. Bassett, Regression Quantiles, Econometrica, **46**, 33-50 (1978).
J. Komlos, A generalization of a problem of Steinhaus, *Acta Mathematica Academiae Scientiarum Hungaricae*, **18** (1–2), 1967, pp. 217–229.
G. Puccetti, M. Scarsini, Multivariate comonotonicity, *Journal of Multivariate Analysis* 101, 291–304 (2010).
J.V. Ryff, Measure preserving transformations and rearrangements, *J. Math. Anal. and Applications* **31**, 449–458 (1970).
F. Santambrogio, *Optimal Transport for Applied Mathematicians*, Progress in Nonlinear Differential Equations and Their Applications 87, Birkhäuser Basel, 2015.
C. Villani, *Topics in optimal transportation*, Graduate Studies in Mathematics, 58, American Mathematical Society, Providence, RI, 2003.
C. Villani, *Optimal transport: Old and New*, Grundlehren der mathematischen Wissenschaften, Springer-Verlag, Heidelberg, 2009.
[^1]: [CEREMADE, UMR CNRS 7534, Université Paris IX Dauphine, Pl. de Lattre de Tassigny, 75775 Paris Cedex 16, FRANCE, and MOKAPLAN Inria Paris, `carlier@ceremade.dauphine.fr`]{}
[^2]: [Department of Economics, MIT, 50 Memorial Drive, E52-361B, Cambridge, MA 02142, USA, `vchern@mit.edu`]{}
[^3]: [Economics Department and Courant Institute of Mathematical Sciences, NYU, 70 Washington Square South, New York, NY 10013, USA `ag133@nyu.edu`.]{}
[^4]: There is actually an important literature that aims at generalizing the notion of quantile to a multidimensional setting and various different approaches have been proposed; see in particular [@belloni], [@hallin], [@PuccettiScarsini] and the references therein.
[^5]: The fact that there exists such a triple follows from the nonatomicity of the underlying space.
[^6]: Uniqueness will be discussed later on.
[^7]: The functions $\varphi$ and $b$ constructed above vanish at $0$ and are absolutely continuous but this is by no means a restriction in the minimization problem [(\[dualmiref\])]{} as explained in paragraph \[mvqr2\].
Four-ever? Competition remedies in the audit market
Oxera
In light of recent accounting scandals, there are widespread calls for the UK competition authority to re-examine the audit market. Yet spending a substantial amount of resources on a market investigation, and concluding once again that there is a competition problem, is of little value if a suitable remedy cannot be found. A break-up of the Big Four is perceived by many as a necessary and long-awaited intervention, but is it the right solution? And if not, what would be an alternative remedy?
The UK audit market has gone through some turmoil recently.[1] This month the Financial Reporting Council (FRC), which regulates UK audit, announced a deterioration in audit quality across the ‘Big Four’ firms (KPMG, PwC, Deloitte and EY) compared with the previous year. Most notably, the FRC noted that 50% of KPMG’s FTSE 350 audits failed to reach the FRC’s standard for audit quality.[2] At a global level, the International Forum of Independent Audit Regulators found significant problems in 40% of the 918 audits of listed public interest entities that it inspected last year.[3]
The recent audit failures uncovered by regulators are hardly trivial. In Miller Energy the US Securities and Exchange Commission found that KPMG had overvalued certain assets by more than 100 times.[4] In BHS the FRC noted that PwC had signed off the accounts just days before the company was sold for £1.[5] In the more recent case of Carillion, equity analysts appeared unaware of the warning signs that might have been flagged by a good audit.[6]
These market outcomes in audit services are unsatisfactory from a policy perspective. The Big Four’s joint market share in FTSE 350 audit has been close to 100% for many years, and the Big Four likewise dominate the audit of large companies across the world. It is this high market concentration that is frequently blamed for the poor outcomes,[7] and regulators and competition authorities across the world have raised concerns about concentration ever since the collapse of Arthur Andersen in 2002. This year, two UK Parliamentary Committees have called for a new competition investigation by the Competition and Markets Authority (CMA) that ‘should explicitly include consideration of both breaking up the Big Four into more audit firms, and detaching audit arms from those providing other professional services’.[8] The Chief Executive Officer of the FRC and the CEO of PwC have both expressed support for the idea of having the CMA study the audit market afresh.[9]
Previous remedies in the audit market
The audit market is effectively dominated at the top end by the Big Four, and despite turmoil in financial markets the audit market structure has remained largely unchanged since 2002.[10] Concerns emanating from the high concentration include a lack of choice, a lack of innovation, higher audit fees, conflicts of interest, a lack of independence that weakens auditor professional scepticism, a systemic risk if one Big Four firm should fail, and, above all, poor-quality audit reducing the credibility and reliability of audited financial statements for the world’s largest companies.[11]
The previous investigation by the UK Competition Commission (CC), predecessor to the CMA, put forward a package of seven remedies, the most significant of which was a requirement that FTSE 350 companies put their audit out to tender at least every ten years (‘mandatory tendering’). Shortly thereafter, the EU introduced rules that obliged listed companies to switch their auditor (‘mandatory rotation’) every 20 years.[12] At the conclusion of the previous market investigation the CC expressed confidence in its package of remedies, noting that they should ‘increase choice’ and provide a ‘substantially improved environment for competition’.[13] The CC’s remedies package did not include any structural remedies.
The CC and EU remedies have not solved the problem of attracting more competition from outside the Big Four.[14] Indeed, the leading non-Big Four firms, Grant Thornton and BDO, between them have fewer FTSE 350 clients than before the regulatory interventions. In 2013, just before the new measures to boost competition were enacted, Grant Thornton had six FTSE 350 audit clients. In 2016, this number was unchanged. But in 2018 the firm said that it would exit the market for large audits.[15] In 2013 BDO had eight FTSE 350 clients, falling to five in 2016.[16] The previous rule changes are therefore widely perceived to have failed to remedy concerns over market concentration. The Big Four accountancy firms still audit 97% of FTSE 350 companies, a similar rate to that found by Oxera[17] in its 2006 market study for the FRC.[18]
What could structural remedies achieve?
Vertical separation
There are different types of structural remedies. Vertical separation of the Big Four firms into audit and non-audit services would not increase the basic number of firms participating in the FTSE 350 audit market, but it would increase the effective choice for many companies that have non-audit relationships with Big Four audit firms. These relationships can preclude, whether legally or in terms of company perception,[19] considering all four current audit firms as viable substitute auditors.[20]
Vertical separation would also be oriented towards audit quality, removing the conflicts of interest that can arise when the auditor also supplies valuable non-audit services. Yet the idea was not popular among investors at the time of the previous competition investigation. In 2012, an Oxera investor survey report found that ‘almost all investors surveyed do not want to see structural separation of the Big Four firms into audit and non-audit activities.’[21]
Horizontal separation
Horizontal separation of the Big Four firms would immediately improve choice in the sense of seeing more than four firms in the market, and also choice in terms of seeing several non-conflicted audit firms in every audit tender. Such a separation would therefore also, in general terms, improve competition. It could also serve audit quality by reducing the number of instances where a company involved in a complex transaction cannot realistically find an adviser that is not subject to some conflict of interest.
In the case of Carillion, PwC acted as the company’s pensions consultant (2002–17), then switched to advising the pension scheme trustees on Carillion’s restructuring proposals (from July 2017), and was finally appointed by the government to help manage the defunct Carillion after its collapse (from January 2018).[22] It would appear that PwC was the only viable choice to advise on Carillion’s insolvency, because it was the only Big Four firm that did not have active contracts with Carillion at the time of Carillion’s demise.[23] Expanding the market from a ‘Big Four’ to a ‘Large 6’ seems attractive in the face of such apparent conflicts, but realistically it would be a very difficult exercise if the aim is to create a ‘Large 6’ group of firms of similar size with similar international networks.
Would a break-up increase audit quality?
Audits are for the protection of investors against false accounting by a company’s management. The starting point is therefore that the true customer of audit, the investor, is not the procurer of audit services. This alone creates an environment in which market failures may be expected.
But why does audit quality fall short? Boeing and Airbus, Coca-Cola and Pepsi, and the Silicon Valley giants all operate in concentrated markets—but it seems highly unlikely that half of all new aeroplanes or soft drinks cans contain substantial defects. Market concentration per se does not entail a poor-quality product: even a monopolist will have regard to product quality, knowing that if its product is faulty the financial consequences of fines and compensating consumers will typically be severe.
In equilibrium, a firm would only produce faulty items to the extent that it is rational to do so—i.e. if errors cannot be detected or if the financial consequences of errors are insubstantial. It seems to be widely accepted that audit quality is below the level demanded by investors, on whose behalf the audit is undertaken. The economics literature on audit has studied the link between greater market concentration and higher audit fees, but this does not help us very much in the present circumstances, where the primary concerns are not to do with high prices, or even exclusionary conduct, but with limited choice and sub-optimal quality. Where does the solution lie?
Penalties for poor-quality service
In public services markets (health, education) there is a high degree of regulatory supervision of quality—such as barring doctors who are found to be negligent, and awarding damages to patients harmed by negligence—even when the main providers are state-owned and have no incentive to chase profits at the expense of quality. In 2017, the UK National Health Service (NHS) estimated that the total liability for outstanding medical negligence cases could be as much as £56.1bn, and the £1.5bn annual NHS payout to settle claims is expected to double by 2023.[24] In audit, the strength of regulatory supervision by the FRC is subject to an independent review following concerns that it lacks adequate powers to intervene in the market.[25]
However, the FRC has recently been levying higher fines for audit errors. It fined PwC £6.5m regarding failed UK retailer, BHS;[26] £5.1m for its auditing of accountancy group, RSM Tenon (also, ironically, an auditor);[27] and £5m in relation to the property company, Connaught.[28] The other Big Four firms have also faced heavy fines, in both the UK and USA: £1.8m for EY’s auditing of Tech Data;[29] £4.8m for KPMG’s work on Miller Energy;[30] and £4m for Deloitte relating to the audit of Aero Inventory.[31] The FRC is also fining audit partners whom it finds to be responsible for misconduct—for example, the lead partner for BHS has been fined £325k and banned from working as an auditor for 15 years.[32] These FRC penalties are, however, minor relative to the £38m audit-related settlement reached by the UK’s largest pension scheme, USS, with PwC Brazil as part of a class action lawsuit against troubled oil giant, Petrobras.[33] But note that the FRC has this month implemented an increase in fines to £10m or more for ‘seriously poor audit work by a Big 4 firm’, following an independent review in 2017 of FRC sanctions.[34]
Are audit fines providing optimal enforcement?
From an economics perspective, if the deterrence effect of penalties is sufficiently severe, firms that might otherwise chase market share by cutting prices and their costs for a given audit will be deterred from cutting quality. In other words, when deterrence is weak, there is an opportunity for rent-seeking by firms that cut quality on unobservable dimensions. Although it might be argued that the cost to an accountant’s reputation is great enough to give the right incentives, this point seems difficult to sustain in light of the continued flourishing of firms that have had quite major hits to their professional reputations.
How large would audit fines need to be in order to deter bad audit? This article cannot provide the answer, but it may be instructive to look at a comparison between audit fines and cartel fines (in the EU). The latter are set based on the European Commission’s criteria. As the Commission explains:
The Commission’s policy with regards to competition law infringements is one of prevention … [fines] are ultimately aimed at prevention, and must hence fulfil two objectives: to punish and to deter. Breaking the competition rules is profitable if it goes unpunished – that is why companies do it.[35]
European Commission cartel fines are set based on the gravity and the duration of a competition infringement, and are capped at a maximum of 10% of a company’s total turnover. The 10% turnover ceiling for fines is engaged only when a cartel fine based on the usual criteria would otherwise be set at more than 10% of turnover.
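The two-stage rule just described, a fine set on gravity and duration and then capped at 10% of turnover, amounts to a simple `min()`. A minimal sketch with hypothetical numbers (millions of euros, for illustration only):

```python
def capped_cartel_fine(base_fine, total_turnover, cap_fraction=0.10):
    """Apply the EC-style ceiling: a cartel fine may not exceed 10% of total turnover."""
    return min(base_fine, cap_fraction * total_turnover)

# Hypothetical numbers, millions of euros:
print(capped_cartel_fine(120.0, 1000.0))  # 100.0: the 10% ceiling binds
print(capped_cartel_fine(50.0, 1000.0))   # 50.0: ceiling not engaged
```

The ceiling is engaged only when the fine computed on the usual criteria would exceed 10% of turnover, exactly as the text notes.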
Cartel fines are large compared with audit fines, as Tables 1 and 2 illustrate. Looking at FRC audit fines in the cases mentioned above, the average fine is 0.016% of a Big Four firm’s annual global turnover, as shown in Table 1. The final column of Table 1 indicates that increasing this percentage to 0.5% would lead to fines of a much greater order of magnitude. This is purely illustrative; it is not a recommendation as to the optimal size of audit fines.
Source: FRC and the audit firms’ annual reports for fiscal year 2017.
How do cartel fines compare? Weighted by the number of fines falling into each percentage bracket of turnover, the average European Commission cartel fine is 2.40% of turnover. This means that cartel fines expressed as a percentage of global turnover are about 150 times larger (2.40% divided by 0.016%) than FRC audit fines measured in the same way. Table 2 shows the calculation of the weighted average European Commission cartel fine.[36]
Table 2 European Commission weighted average cartel fines as a percentage of a company’s global turnover
Source: European Commission cartel statistics, last updated 21 March 2018.
It might be argued that increased deterrence for poor audit would come at the cost of competition, such as financial penalties leading to market exit and a ‘Big Three’, or hiking the barriers to entry for non-Big Four audit firms. Likewise, the Commission does not wish to fine a cartel with penalties that are so high that the consequence would be a reduction in the number of market competitors (or else the competition remedy would be self-defeating). Hence the scaling of cartel fines to turnover, and the ‘inability to pay’ test, whereby the Commission can reduce the scale of fines where it is shown that they pose a serious threat to the economic viability of the undertaking concerned. Scaling audit fines to audit firm turnover makes it unlikely that such penalties would deter entry or cause the market exit of one of the Big Four. The cartel fines policy therefore has useful principles, albeit it does not indicate the right order of magnitude for audit fines.
Fines set as a percentage of turnover would of course decline if measured against a smaller metric for revenue. As a hypothetical exercise, taking Big Four audit-only revenues as the denominator, the FRC fines mentioned previously would be on average 0.039% of the firms’ global audit-only revenues. In this scenario cartel fines at 2.40% of global turnover would be about 60 times greater than the recent FRC audit fines (2.40% divided by 0.039%), and a hypothetical fine of 0.5% of global audit-only revenues would amount to between £45m and £60m. The latter figures are much closer to the penalties proposed in last year’s independent review of FRC sanctions—i.e. ‘£10 million or more (before any discount)’. Note also that the independent review recommended that ‘the figure could be well above [£10m] if dishonesty or conscious wrongdoing were involved.’[37]
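The illustrative comparison above is pure arithmetic and can be reproduced directly from the percentages quoted in the text (the figures are the article's, not new data):

```python
# Figures taken from the text; this reproduces the illustrative ratios only.
avg_frc_fine_pct_turnover = 0.016   # recent FRC audit fines as % of global turnover
avg_frc_fine_pct_audit_rev = 0.039  # same fines as % of global audit-only revenue
avg_cartel_fine_pct = 2.40          # weighted average EC cartel fine, % of turnover

# Cartel fines vs. audit fines, measured on the same basis
ratio_turnover = avg_cartel_fine_pct / avg_frc_fine_pct_turnover
ratio_audit_rev = avg_cartel_fine_pct / avg_frc_fine_pct_audit_rev

print(round(ratio_turnover))   # 150: "about 150 times larger"
print(round(ratio_audit_rev))  # ~62: "about 60 times greater" in the text
```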
Evidence on the deterrence effect of cartel fines can be found in the economics literature. Professor Stephen Davies at the ESRC Centre for Competition Policy estimates that cartel deterrence is highly effective:
On the most conservative of our estimates, more than half of all potential cartel harm never occurs, because it is deterred. This is very much a lower bound, and the proportion could be as high as 90%.[38]
Similar research would be required to understand the effects of a different penalty regime for poor audit.
Break-up or shake-up?
There is little doubt that a new CMA investigation would consider a break-up remedy. However, no matter what the divestments and structural changes, the inherent tension within the industry’s ‘client pays’ business model is likely to remain—that is, an auditor’s basic conflict between serving the paying client and serving the greater good.
If it were to address that conflict, the CMA would need to look into penalties and deterrence, as well as studying the effects of a break-up remedy. It is not realistic to expect the CMA to be able to fix every major issue in the market by achieving the goal of reduced concentration in FTSE 350 audit.
The quality of audit might be improved with a more disaggregated market, but this link is not certain. Moreover, it is possible that greater deterrence for bad audit would lead to an organic change in market structure: the Big Four have expertise in advising clients as to when a substantial divestment or restructuring might increase shareholder value. It seems possible that, in a world of greater deterrence, the accounting firms might look inwards using this expertise and shake up the market structure themselves.
Possibly the Big Four firms are already thinking along these lines. According to a letter from the two MPs who led the parliamentary review on Carillion, voluntary break-up scenarios are now under active consideration:
Since our report was published, Bill Michael, Chairman KPMG UK, said his firm had been thinking about break-up scenarios ‘for some time’ as the current business model of the Big Four is ‘unsustainable’. Mr Michael is quoted as saying:
‘The profession, like it or not, is an oligopoly. You can’t be all things to all men and women forever. We have to reduce the level of conflicts and demonstrate why they are manageable and why the public and all stakeholders should trust us.’
Other Big Four firms have reportedly begun making preparations for a break-up.[39]
Finally, the example of cartel fines shows that they are of a different scale to audit fines, raising the question as to whether fines should be reconsidered in the audit market. Penalties for anticompetitive conduct are used for prevention, not retribution. An audit firm with consistent high quality would have a minimal incidence of fines, which would place the high-quality firm at a competitive advantage to an audit firm with lower quality.[40] If audit quality became high across the market, no firm would be faced with very substantial financial penalties, and investor perceptions as to the value of statutory audit might be restored. In summary: prevention is better than cure.
[23] Peter Kyle, Member of the Business, Energy and Industrial Strategy Committee, speaking at the pre-appointment hearing with the Government’s preferred candidate for Chair of the Competition and Markets Authority, HC 985, 24 April 2018. See Transcript of oral evidence, Question 34, p. 19.
[36] The European Commission statistics provide the percentages of fines imposed on undertakings per cartel infringement. Certain cases may comprise several infringements for which multiple counting of undertakings is considered.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
android:paddingBottom="@dimen/activity_vertical_margin"
android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
tools:context="it.tiwiz.rxjavacrunch.part9.Part9Activity">
<TextView
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="Retain configuration instance (value)" />
<TextView
android:id="@+id/currentValue"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:textAppearance="?android:textAppearanceLarge"
android:padding="@dimen/activity_vertical_margin"
android:gravity="center"
tools:text="10"/>
</LinearLayout>
package org.basex.query.func.validate;
import org.basex.query.*;
import org.basex.query.func.*;
import org.basex.query.value.item.*;
import org.basex.util.*;
/**
* Function implementation.
*
* @author BaseX Team 2005-20, BSD License
* @author Christian Gruen
*/
public final class ValidateXsdProcessor extends StandardFunc {
@Override
public Item item(final QueryContext qc, final InputInfo ii) {
return Str.get(ValidateXsd.IMPL[ValidateXsd.OFFSET + 1]);
}
}
I welcome comments and constructive criticism of my images so that I may improve my photography.
Please click on image to enlarge.
Friday, 7 October 2011
Caterpillar and Fungi.
IDs required for this caterpillar and fungi please. The caterpillar was found on the back garden path, so no idea what plant it came from. The fungi was found under a tall bank next to a stream in the Trough of Bowland.
Christian, thanks for your comments. I put the caterpillar on the stick and held it up with one hand and took the photo with the other. Cliff, thanks for your comments. The ID is spot on, thank you very much.
Micro-Loan Program
In order to promote economic development in the City of Alamo, the Alamo EDC established the Alamo Small Business Micro-Loan Program (MLP) with assistance from USDA – Rural Development. The MLP is a self-sustaining project that works by lending money to local businesses, with the money paid back plus interest being reused.
The following documents must be submitted for a loan application:
Application form.
An executive summary with three years of financial projections.
A project budget.
A personal financial statement.
Two years of income tax returns – business and/or personal (for the most current years).
Year-end financial statement(s) from an existing organization.
Balance sheet(s) (yearly).
Profit and loss statement(s) (last quarter).
A minimum of two bids from non-related third-party vendors/contractors.
A credit report (to be conducted by AEDC).
Steps for the loan process:
Fill out the loan application and submit all documents to AEDC via mail or hand delivery.
The AEDC begins the loan application review to determine eligibility.
The applicant will be notified of eligibility status.
If eligible, the loan application will be presented to the Loan Review Committee.
A committee recommendation will be presented to the AEDC Board for final approval.
The applicant will be notified of the board’s decision to approve or deny the loan application as well as loan-specific terms, when applicable.
Playing back a meeting recording
Let me show you how to locate and play back a meeting that you have recorded. First, let's understand how WebEx Meetings stores and prepares your meeting recordings. The meetings are recorded on the WebEx server. WebEx will post the recording to their server within 24 hours of the meeting completion. When your recording is ready, you'll receive an update on your dashboard homepage with the playback link and the recording information. Let me show you how that looks. When you get this notification, you can click the link that says Play Recording, and WebEx will play back the video for you with the WebEx network recording player.
To locate your meeting recording manually, if you miss the notification, the easiest thing to do is look at the meeting space for the meeting that you recorded. First, find the meeting in your meetings list by clicking the Meetings tab. Click the Recent tab. You'll note, in the list, whether it's recorded or not. Click on the meeting title to visit the meeting space page for that meeting.
Released: 6/9/2014
Connect and collaborate across the globe with WebEx Meetings. In this course, author and webinar specialist Sally Norred shows you how to use WebEx Meetings to host, run, and record online meetings. Discover how to set up an online meeting and invite attendees, work with interactivity, let attendees participate and present, and save and record a meeting. Also check out the quick tips sheets (free to all members) for a list of handy shortcuts for hosts, presenters, and attendees alike.
---
abstract: 'We study the suppression of the small-scale power spectrum due to the decay of charged matter to dark matter prior to recombination. Prior to decay, the charged particles couple to the photon-baryon fluid and participate in its acoustic oscillations. However, after decaying to neutral dark matter the photon-baryon fluid is coupled only gravitationally to the newly-created dark matter. This generically leads to suppression of power on length scales that enter the horizon prior to decay. For decay times of $\sim$$3.5$ years this leads to suppression of power on subgalactic scales, bringing the observed number of Galactic substructures in line with observation. Decay times of a few years are possible if the dark matter is purely gravitationally interacting, such as the gravitino in supersymmetric models or a massive Kaluza-Klein graviton in models with universal extra dimensions.'
author:
- Kris Sigurdson
- Marc Kamionkowski
title: 'Charged-particle decay and suppression of small-scale power'
---
The standard inflation-inspired cosmological model, with its nearly scale-invariant power spectrum of primordial perturbations, is in remarkable agreement with observation. It predicts correctly the detailed pattern of temperature anisotropies in the cosmic microwave background (CMB) [@CMB], and accurately describes the large scale clustering of matter in the Universe [@LSS]. However, on subgalactic scales there are possible problems with the standard cosmology that warrant further investigation. Namely, the model overpredicts the number of subgalactic halos by an order of magnitude compared to the 11 observed dwarf satellite galaxies of the Milky Way [@excessCDMpower]. Several possible resolutions have been proposed to this apparent discrepancy, ranging from astrophysical mechanisms that suppress dwarf-galaxy formation in subgalactic halos (see, for example, Ref. [@AstroSol]) to features in the inflaton potential that suppress small-scale power and thus reduce the predicted number of subgalactic halos [@Kamion2000].
In this *Letter*, we show that if dark matter is produced by the out-of-equilibrium decay of a long-lived charged particle, then power will be suppressed on scales smaller than the horizon at the decay epoch. Unlike some other recent proposals, which suppress small-scale power by modifying the dark-matter particle properties [@OtherMods], ours modifies the dark-matter production mechanism. In the model we discuss here, prior to decay, the charged particles couple electromagnetically to the primordial plasma and participate in its acoustic oscillations. After decay, the photon-baryon fluid is coupled only gravitationally to the neutral dark matter. This generically leads to suppression of power for scales that enter the horizon prior to decay. This suppression, reduces the amount of halo substructure on galactic scales while preserving the successes of the standard hierarchical-clustering paradigm on larger scales. Apart from the changes to the model due to the decay process, we adopt the standard flat-geometry $\Lambda$CDM cosmological model with present-day dark-matter density (in units of the critical density) $\Omega_{d}=0.25$, baryon density $\Omega_{b}=0.05$, cosmological constant $\Omega_{\Lambda}=0.70$, Hubble parameter $H_{0}=72~{\rm
km\,s^{-1} Mpc^{-1}}$, and spectral index $n=1$.
In the standard $\Lambda$CDM model the initial curvature perturbations of the Universe, presumably produced by inflation or some inflation-like mechanism, are adiabatic (perturbations in the total density but not the relative density between species) and Gaussian with a nearly scale-invariant spectrum of amplitudes. These initial perturbations grow and react under the influence of gravity and other forces, with the exact nature of their behavior dependent upon the species in question. Because dark-matter particles are, by assumption, cold and collisionless the fractional dark-matter-density perturbation $\delta_{d} \equiv \delta\rho_{d}/\rho_{d}$ can only grow under the influence of gravity. The baryonic species, being charged, are tightly coupled by Coulomb scattering to the electrons, which are themselves tightly coupled to the photons via Thomson scattering. The baryons and photons can thus be described at early times as a single baryon-photon fluid, with the photons providing most of the pressure and inertia and the baryons providing only inertia. Gravity will tend to compress this baryon-photon fluid, while the radiation pressure will support it against this compression. The result is acoustic oscillations, and the baryon density perturbation $\delta_{b} \equiv \delta\rho_{b}/\rho_{b}$ and photon density perturbation $\delta_{\gamma} \equiv \delta\rho_{\gamma}/\rho_{\gamma}$ will oscillate in time for length scales inside the horizon (on length scales larger than the horizon the pressure can have no effect). At early times these perturbations are very small and linear perturbation theory can be applied. This allows an arbitrary density field to be decomposed into a set of independently evolving Fourier modes, labeled by a wavenumber $k$. Fig. \[fig:delta\] shows the growth of dark-matter perturbations under the influence of gravity, and the oscillatory behavior of the baryon perturbation for the same wavenumber.
We choose to work in the synchronous gauge where the time slicing is fixed to surfaces of constant proper time so that particle decays proceed everywhere at the same rate. In the synchronous gauge the standard linearized evolution equations for perturbations in Fourier space are (e.g., [@Ma95])
$$\begin{aligned}
\dot{\delta}_{d} = - \theta_{d}-\frac{1}{2}\dot{h} \, ,
\quad
\dot{\theta}_{d} = -\frac{\dot{a}}{a}\theta_{d} \, ,
\label{eqn:dark_delta}\end{aligned}$$
$$\begin{aligned}
\dot{\delta}_{b} = - \theta_{b}-\frac{1}{2}\dot{h} \, ,\end{aligned}$$
$$\begin{aligned}
\dot{\theta}_{b} = -\frac{\dot{a}}{a}\theta_{b} + c_{s}^2k^2\delta_{b} + \frac{4\rho_{\gamma}}{3\rho_{b}} a n_{e} \sigma_{T} (\theta_{\gamma}-\theta_{b}) \, ,
\label{eqn:baryon_theta}\end{aligned}$$
$$\begin{aligned}
\dot{\delta}_{\gamma} = -\frac{4}{3}\theta_{\gamma}-\frac{2}{3}\dot{h} \, ,\end{aligned}$$
and $$\begin{aligned}
\dot{\theta}_{\gamma} = k^2 \left( \frac{1}{4}\delta_{\gamma} - \Theta_{\gamma} \right) + a n_{e}\sigma_{T}(\theta_{b}-\theta_{\gamma}) \, ,
\label{eqn:gamma_theta}\end{aligned}$$ where $\theta_{b}$, $\theta_{d}$, and $\theta_{\gamma}$ are the divergence of the baryon, dark-matter, and photon fluid velocities respectively and an overdot represents a derivative with respect to the conformal time $\eta$. Here $h$ is the trace of the spatial metric perturbations $h_{ij}$. Its evolution is described by the linearized Einstein equations, which close this system of linearized equations. The last terms on the right-hand-sides of Eqs. (\[eqn:baryon\_theta\]) and (\[eqn:gamma\_theta\]) account for Thomson scattering between baryons and photons, and are responsible for keeping them tightly coupled in the early Universe. In these equations $\sigma_{T}$ is the Thomson cross section, $n_{e}$ is the electron number density, and $c_{s}$ is the intrinsic sound speed of the baryons. During tight coupling the second moment $\Theta_{\gamma}$ of the photon distribution and other higher moments can be neglected, and the radiation can reliably be given the fluid description described above.
We now show how Eqs. (\[eqn:dark\_delta\])–(\[eqn:gamma\_theta\]) are modified by the decay of a long-lived metastable charged particle to dark matter in the early Universe. We assume that the decay is of the form $q^{\pm} \rightarrow \ell^{\pm}d$, so the decay of each charged particle $q^{\pm}$ produces a dark-matter particle $d$ and a charged lepton $\ell^{\pm}$. Denoting the decaying charged component by the subscript ‘$q$’, the background density $\rho_{q}$ evolves according to the equation, $$\begin{aligned}
\dot{\rho}_{q} = -3\frac{\dot{a}}{a}\rho_{q} - \frac{a}{\tau}\rho_{q} \, ,
\label{eqn:background_q}\end{aligned}$$ where $\tau$ is the lifetime of $q^{\pm}$. The first term just accounts for the normal $a^{-3}$ scaling of non-relativistic matter in an expanding universe, while the second leads to the expected exponential decay of the comoving density. For the dark matter we have $$\begin{aligned}
\dot{\rho}_{d} = -3\frac{\dot{a}}{a}\rho_{d} + \lambda \frac{a}{\tau}\rho_{q} \, ,
\label{eqn:background_d}\end{aligned}$$ where $\lambda=m_{d}/m_{q}$ is the ratio of the mass of the dark-matter particle to that of the charged particle. The energy density in photons evolves according to $$\begin{aligned}
\dot{\rho}_{\gamma} = -4\frac{\dot{a}}{a}\rho_{\gamma} + (1-\lambda)\frac{a}{\tau}\rho_{q} \, .
\label{eqn:background_gamma}\end{aligned}$$ This last equation follows from the assumption that the produced lepton initiates an electromagnetic cascade which rapidly (compared to the expansion timescale) thermalizes with the photon distribution. In practice the last term on the right-hand-side of Eq. (\[eqn:background\_gamma\]) is negligibly small because the decay takes place during the radiation dominated era when $\rho_{\gamma} \gg \rho_{q}$. Furthermore, limits on the magnitude of $\mu$-distortions to the blackbody spectrum of the CMB constrain $|1-\lambda|$ to be a small number, as we discuss below.
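In terms of comoving densities, the background equations for the charged species and the dark matter reduce to simple exponential transfer: in proper time $t$, $d(\rho_{q}a^3)/dt = -(\rho_{q}a^3)/\tau$ and $d(\rho_{d}a^3)/dt = +\lambda(\rho_{q}a^3)/\tau$. A minimal forward-Euler sketch of this behavior (not the paper's code; units and initial values are illustrative):

```python
# Sketch of the background decay: comoving densities Q = rho_q * a^3 and
# D = rho_d * a^3, integrated in proper time t. Illustrative values only.
import math

tau = 3.5          # charged-particle lifetime (years)
lam = 0.999        # m_d / m_q; |1 - lam| must be small (CMB mu-distortion limit)
Q0 = 1.0           # initial comoving density of the charged species
dt = 1e-4          # Euler time step (years)

Q, D = Q0, 0.0
t = 0.0
while t < 10 * tau:
    dQ = -(Q / tau) * dt   # exponential decay of the comoving charged density
    Q += dQ
    D -= lam * dQ          # each decay deposits fraction lam of the mass in dark matter
    t += dt

# After many lifetimes essentially all of lam * Q0 has been converted:
print(Q / Q0)          # ~ exp(-10), i.e. ~4.5e-5
print(D / (lam * Q0))  # ~ 1
```

This makes the suppression mechanism concrete: before $t \sim \tau$ most of the eventual dark matter still sits in the charged component, oscillating with the baryon-photon fluid; afterwards the transfer shuts off exponentially.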
Using covariant generalizations of Eqs. (\[eqn:background\_q\])–(\[eqn:background\_gamma\]) we can derive how Eqs. (\[eqn:dark\_delta\])–(\[eqn:gamma\_theta\]) are modified by the transfer of energy and momentum from the ‘$q$’ component to the dark matter during the decay process. Since the charged ‘$q$’ component and the baryons are tightly coupled via Coulomb scattering they share a common velocity $\theta_{\beta} = \theta_{b}=\theta_{q}$. This makes it useful to describe them in terms of a total charged-species component with energy density $\rho_{\beta}=\rho_{b}+\rho_{q}$, which we denote here by the subscript ‘$\beta$’. Because in the synchronous gauge the decay proceeds everywhere at the same rate this description is even more useful as $\delta_{\beta}=\delta_{b}=\delta_{q}$ is maintained at all times for adiabatic initial conditions. In terms of these ‘$\beta$’ variables, then, we have
$$\begin{aligned}
\dot{\delta}_{d} = - \theta_{d}-\frac{1}{2}\dot{h} + \lambda \frac{\rho_{q}}{\rho_{d}}\frac{a}{\tau}(\delta_{\beta}-\delta_{d}) \, ,
\label{eqn:deltadot_d_2}\end{aligned}$$
$$\begin{aligned}
\dot{\theta}_{d} = -\frac{\dot{a}}{a}\theta_{d} + \lambda \frac{\rho_{q}}{\rho_{d}}\frac{a}{\tau}(\theta_{\beta}-\theta_{d}) \, ,
\label{eqn:thetadot_d_2}\end{aligned}$$
$$\begin{aligned}
\dot{\delta}_{\beta} = - \theta_{\beta}-\frac{1}{2}\dot{h} \, ,
\label{eqn:delta_beta_2}\end{aligned}$$
$$\begin{aligned}
\dot{\theta}_{\beta} = -\frac{\dot{a}}{a}\theta_{\beta} + c_{s}^2k^2\delta_{\beta} + \frac{4\rho_{\gamma}}{3\rho_{\beta}} a n_{e} \sigma_{T} (\theta_{\gamma}-\theta_{\beta}) \, ,
\label{eqn:theta_beta_2}\end{aligned}$$
$$\begin{aligned}
\dot{\delta}_{\gamma} = -\frac{4}{3}\theta_{\gamma}-\frac{2}{3}\dot{h} + (1-\lambda) \frac{\rho_{q}}{\rho_{\gamma}}\frac{a}{\tau}(\delta_{\beta}-\delta_{\gamma})\, ,
\label{eqn:deltadot_gamma_2}\end{aligned}$$
and $$\begin{aligned}
\dot{\theta}_{\gamma} = k^2 \left( \frac{1}{4}\delta_{\gamma} - \Theta_{\gamma} \right) &+ a n_{e}\sigma_{T}(\theta_{\beta}-\theta_{\gamma}) \nonumber \\
&+ (1-\lambda) \frac{\rho_{q}}{\rho_{\gamma}}\frac{a}{\tau}\left(\frac{3}{4}\theta_{\beta}-\theta_{\gamma}\right) \, .
\label{eqn:thetadot_gamma_2}\end{aligned}$$
We now describe how small-scale modes that enter the horizon prior to decay are suppressed relative to those modes that enter the horizon after decay. Due to the Thomson collision terms the ‘$\beta$’ component and the photons will be tightly coupled as a ‘$\beta$’-photon fluid at early times and this fluid will support acoustic oscillations. Furthermore, Eqs. (\[eqn:deltadot\_d\_2\]) and (\[eqn:thetadot\_d\_2\]) show that the dark-matter perturbations are strongly sourced by the perturbations of the ‘$\beta$’ component prior to decay, when the ratio $\rho_{q}/\rho_{d}$ is large. Dark-matter modes that enter the horizon prior to decay will thus track the oscillations of the ‘$\beta$’-photon fluid rather than simply growing under the influence of gravity. After decay, when the ratio $\rho_{q}/\rho_{d}$ is small, the source term shuts off and dark-matter modes that enter the horizon undergo the standard growing evolution. In Fig. \[fig:delta\_tau\] we follow the evolution of the dark-matter perturbations through the epoch of decay. We modified [CMBFAST]{} [@CMBfast] to carry out these calculations.
In order to suppress power on subgalactic scales the decay lifetime must be roughly the age of the Universe when the mass enclosed in the Hubble volume equals a galaxy mass; this occurs when $\tau$ is of order a few years. In Fig. \[fig:pow\] we plot the linear power spectrum of matter density fluctuations at the present day for a charged-particle lifetime $\tau = 3.5~{\rm yr}$, assuming a scale-invariant primordial power spectrum. We see that power is suppressed on scales smaller than $k^{-1} \sim 0.3~{\rm Mpc}$ relative to the standard $\Lambda$CDM power spectrum. Suppression of power on these length scales reduces the expected number of subgalactic halos, bringing the predictions in line with observation [@Kamion2000] without violating constraints from the Lyman-alpha forest [@White2000]. Of course, the model reproduces the successes of the standard $\Lambda$CDM model on larger scales and in the CMB.
The requirements of the charged-particle species are that it have a comoving mass density equal to the dark-matter density today and have a lifetime of $\tau \sim$ 3.5 yr. In order to satisfy the constraint to the CMB chemical potential [@Fixsen1996], the fractional mass difference between the charged and neutral particles must be $\Delta m/m < 3.6 \times 10^{-3}$, and in order for the decay to be allowed kinematically the mass difference must be greater than the electron mass. One possibility is the SuperWIMP scenario of Ref. [@Feng2003] in which a charged particle may decay to an exclusively gravitationally interacting particle. For example, in supersymmetric models, the decay of a selectron to an electron and gravitino $\widetilde{e} \rightarrow e\,\widetilde{G}$ with $m_{\widetilde{e}} \approx m_{\widetilde{G}} > 122~{\rm TeV}$ would satisfy these constraints, as would the decay of a KK-electron to an electron and KK-graviton $e^{1} \rightarrow e\,G^{1}$ with $m_{e^{1}} \approx m_{G^{1}} > 72~{\rm TeV}$ in the case of the single universal extra dimension Kaluza-Klein (KK) model discussed in Refs. [@Appel2001; @Feng2003]. Such masses are larger than the unitarity bound for thermal production [@Griest1990], but might be accommodated through nonthermal mechanisms or if the next-to-lightest partner is a squark which might then interact more strongly and thus evade this bound. There may also be viable scenarios involving nearly-degenerate charged and neutral higgsinos.
It should be noted that the recent WMAP evidence for early star formation [@Kogut2003] argues against the suppression of small-scale power, but these results are not yet conclusive. If it does turn out that traditional astrophysical mechanisms can explain the dearth of dwarf galaxies, then our arguments can be turned around to provide constraints to an otherwise inaccessible region of the parameter space for decaying dark matter [@SigKam]. Finally, if the mechanism we propose here is realized in nature, then the dearth of small-scale power, along with the detection of a non-zero CMB chemical potential, would be a powerful probe of the particle spectrum of the new physics responsible for dark matter.
KS acknowledges the support of a Canadian NSERC Postgraduate Scholarship. This work was supported in part by NASA NAG5-9821 and DoE DE-FG03-92-ER40701.
P. de Bernardis et al., Nature (London) [**404**]{}, 955 (2000); S. Hanany et al., Astrophys. J. Lett. [**545**]{}, L5 (2000); N. W. Halverson et al., Astrophys. J. [**568**]{}, 38 (2002); B. S. Mason et al., Astrophys. J. [**591**]{}, 540 (2003); A. Benoit et al., Astron. Astrophys. [**399**]{}, L2 (2003); J. H. Goldstein et al., astro-ph/0212517; D. N. Spergel et al., Astrophys. J. Suppl. [**148**]{}, 175 (2003).
J. A. Peacock et al., Nature (London) [**410**]{}, 169 (2001); W. J. Percival et al., [Mon. Not. Roy. Astron. Soc.]{} [**327**]{}, 1297 (2001); M. Tegmark et al., astro-ph/0310725.
G. Kauffman, S. D. M. White, and B. Guiderdoni, [Mon. Not. Roy. Astron. Soc.]{} [**264**]{}, 201 (1993); A. A. Klypin et al., [Astrophys. J.]{} [**522**]{}, 82 (1999); B. Moore et al., [Astrophys. J. Lett.]{} [**524**]{}, L19 (1999).
A. J. Benson et al., [Mon. Not. Roy. Astron. Soc.]{} [**333**]{}, 177 (2002); R. S. Somerville, [Astrophys. J.]{} [**572**]{}, L23 (2002); L. Verde, S. P. Oh, and R. Jimenez, [Mon. Not. Roy. Astron. Soc.]{} [**336**]{} 541 (2002).
M. Kamionkowski and A. R. Liddle, [Phys. Rev. Lett.]{} [**84**]{}, 4525 (2000).
C. Boehm, P. Fayet, and R. Schaeffer Phys. Lett. B [**518**]{}, 8 (2001); X. Chen, M. Kamionkowski, and X. Zhang, [Phys. Rev. D]{} [**64**]{}, 021302 (2001); X. Chen, S. Hannestad, and R. J. Scherrer, [Phys. Rev. D]{} [**65**]{}, 123515 (2002); C. Boehm et al., astro-ph/0309652; D. N. Spergel and P. J. Steinhardt, [Phys. Rev. Lett.]{} [**84**]{}, 3760 (2000).
C.-P. Ma and E. Bertschinger, [Astrophys. J.]{} [**455**]{}, 7 (1995).
U. Seljak and M. Zaldarriaga, [Astrophys. J.]{} [**469**]{} 437 (1996).
M. White and R. A. C. Croft, [Astrophys. J.]{} [**539**]{} 497 (2000).
D. J. Fixsen et al., [Astrophys. J.]{} [**473**]{}, 576 (1996).
J. L. Feng, A. Rajaraman, and F. Takayama, [Phys. Rev. Lett.]{} [**91**]{}, 011302 (2003); [Phys. Rev. D]{} [**68**]{}, 063504 (2003).
T. Appelquist, H.-C. Cheng, and B. A. Dobrescu, [Phys. Rev. D]{} [**64**]{}, 035002 (2001).
K. Griest and M. Kamionkowski [Phys. Rev. Lett.]{} [**64**]{} 615 (1990).
K. Sigurdson and M. Kamionkowski, in preparation.
A. Kogut et al., Astrophys. J. Suppl. [**148**]{}, 161 (2003).
**B Grade** CNPS12X Ultimate Performance Triple Fan CPU Cooler
Below is the original description for this product; any reference to warranty is to be ignored. Warranty for this item is 90 days, as with all B Grade items.
B Grade items may have been used, have damaged packaging, missing accessories or a combination of these.
Some items may have scuff marks or slight scratches but should otherwise be an operable product.
Renowned for producing some of the world's best CPU coolers, Zalman has now released its newest flagship cooler, the CNPS12X. It is the world's first "out of the box" triple fan cooler and is compatible with Intel's latest LGA2011 Sandy Bridge-E processors.
World's first "out of the box" triple fan CPU cooler
There are many CPU coolers available on the market that can accommodate three fans, but to make this happen at least one additional fan needs to be purchased, which adds to the expense. With the Zalman CNPS12X you get three 120mm blue LED fans built into the cooler, so there are no extra costs. All three fans also run off a single fan header, making them extremely easy to power.
Six W-DTH composite heatpipes for excellent heat transfer
First seen on the CNPS11X, composite heatpipes help transfer heat from the CPU up to 50% faster than standard heatpipes, increasing the performance of the cooler even further. The six heatpipes are U-shaped, which effectively doubles the heat transfer compared to non-U-shaped heatpipes. At the base of the cooler (where the heatpipes make contact with the CPU) the heatpipes utilise what Zalman call Whole-Direct Touch Heatpipes (W-DTH). This allows the heatpipes to make direct contact with the CPU, another feature that helps increase performance. Better still, the direct-touch area covers the whole CPU: even the new Intel CPUs for LGA2011 are fully covered by W-DTH.
100% nickel plated with blue LED fans for amazing aesthetics
Most CPU coolers are hidden inside the computer case where they go about their business unseen. But if you like to show off the internals of your PC you may want a CPU cooler that looks the part, and boy, the CNPS12X does look the part! The entire heatsink of the CNPS12X is plated with "Black-Pearl" nickel for long-term corrosion resistance, while the deep "Black-Pearl" tone, along with the high intensity of the blue LED fans, helps this cooler stand head and shoulders above the rest.
# coding=utf-8
'''
Created on 2015-11-4

@author: zhangtiande
'''
from business.ucenter.account_service import AccountService

# Site-specific Django auth group ids used by set_user_group().
ADMIN_GROUP_ID = 27
MANAGER_GROUP_ID = 28


class VM_AdminUser(object):
    '''
    View-model wrapping a Django User for the admin user list/edit forms.
    '''

    def __init__(self, user, is_create=False):
        self.user = user
        self.is_create = is_create
        self.admin = ""
        self.manager = ""
        self.default_group = ""
        self.set_user_group()

    def user_active(self):
        # CSS classes for the "active" checkbox icon.
        result = "finished-check fa-check-square"
        if not self.user.is_active:
            result = "fa-square-o unfinished-check"
        return result

    def user_name(self):
        return self.user.email

    def user_full_name(self):
        # Prefer "last name + first name" (Chinese name order); fall back
        # to the login name when either part is missing.
        result = self.user.username
        if self.user.last_name and self.user.first_name:
            result = self.user.last_name + self.user.first_name
        return result

    def user_avatar(self):
        result = "/static/global/images/fruit-avatar/Fruit-1.png"
        if self.user.extend_info:
            result = AccountService.get_avatar_url(self.user)
        return result

    def user_groups(self):
        return self.user.groups.all()

    def form_id(self):
        result = "user_edit_form"
        if self.is_create:
            result = "user_create_form"
        return result

    def set_user_group(self):
        # Mark exactly one of the three form states as checked.
        if self.user:
            if self.user.groups.all().filter(id=ADMIN_GROUP_ID):
                self.admin = "checked"
            elif self.user.groups.all().filter(id=MANAGER_GROUP_ID):
                self.manager = "checked"
            else:
                self.default_group = "checked"
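The display logic above can be exercised outside a full Django stack with a lightweight stand-in for the user object (a sketch: `SimpleNamespace` replaces the Django `User` model, and the two helpers mirror `user_full_name` and `user_active` rather than importing the class, which would pull in Django):

```python
from types import SimpleNamespace

def css_for_active(is_active):
    # Mirrors VM_AdminUser.user_active(): checked icon when active.
    return "finished-check fa-check-square" if is_active else "fa-square-o unfinished-check"

def full_name(user):
    # Mirrors VM_AdminUser.user_full_name(): prefer last + first name,
    # fall back to the login name.
    if user.last_name and user.first_name:
        return user.last_name + user.first_name
    return user.username

u = SimpleNamespace(username="ztd", last_name="Zhang", first_name="Tiande")
print(full_name(u))            # names are concatenated, surname first
print(css_for_active(False))   # unchecked icon classes
```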
---
abstract: 'We present a study of the peculiar nebula MF16 associated with the Ultraluminous X-ray Source NGC6946 ULX-1. We use integral-field and long-slit spectral data obtained with the 6-m telescope (Russia). The nebula was long considered to be powered by strong shocks enhancing both high-excitation and low-excitation lines. However, kinematical properties point to rather moderate expansion rates ($V_S \sim 100\div 200$[$km\,s^{-1}\,$]{}). The total power of the emission-line source exceeds by one or two orders of magnitude the power the observed expansion rate can provide, which points towards the existence of an additional source of excitation and ionization. Using the CLOUDY96.01 photoionization code we derive the properties of the photoionizing source. Its total UV/EUV luminosity must be about $10^{40}$ erg/s.'
date: '??? and in revised form ???'
---
Introduction {#sec:intro}
============
Quite a large number of Ultraluminous X-ray Sources (ULXs) are associated with emission-line nebulae (ULX Nebulae, ULXNe), mostly large-scale bubbles powered by shock waves [@pamir (Pakull & Mirioni, 2003)]. However, several exceptions are known, like the nebula associated with HoII X-1 [@lehmann (Lehmann et al., 2005)], which is clearly a photoionized [H [ii]{}]{} region. Another well-known example is the nebula MF16, coincident with the ULX NGC6946 ULX1.
The attention to MF16 was first drawn by [@BF_94], who identified the object as a Supernova Remnant (SNR), according to the emission-line spectrum with bright collisionally-excited lines. It was long considered an unusually luminous SNR, because of its huge optical emission-line and X-ray luminosities ($L_{H\alpha} = 1.9\times10^{39}erg\,s^{-1}$ for a tangential size of $20\times34pc$, according to [@BF_94]; $L_X = 2.5\times10^{39}erg\,s^{-1}$ in the $0.5-8$keV range, according to [@RoCo]).
However, it was shown by [@RoCo] that the spectral, spatial and timing properties of the X-ray source do not agree with the suggestion of a bright SNR, but rather suggest a point source with a typical “ULX-like” X-ray spectrum: a cool Multicolor Disk (MCD) plus a Power Law (PL) component. So, apart from the physical nature of the object, MF16 should be considered a [*ULX nebula*]{}, one of a new class of objects described by [@pamir].
Optical Spectroscopy {#sec:obs}
====================
All the data were obtained with the SAO 6m telescope, Russia. Two spectrographs were used: the panoramic MultiPupil Fiber Spectrograph MPFS [@MPFSdesc (Afanasiev et al., 2001)] and the SCORPIO focal reducer [@scorpio (Afanasiev & Moiseev, 2005)] in long-slit mode. The details of the data reduction process and analysis technique will be presented in [@mf16_main]. Panoramic spectroscopy has the advantage of providing unbiased flux estimates. However, the SCORPIO results have a much higher signal-to-noise ratio and reveal a rich [\[Fe [iii]{}\]]{} emission-line spectrum. We also confirm the estimates of the total nebula emission-line luminosities by [@bfs]. The $H\beta$ line luminosity obtained from our MPFS data is $L(H\beta) = (7.2\pm0.2)\times10^{37}erg\,s^{-1}$.
Using line ratios for the integral spectrum we estimate the mean parameters of emitting gas as: $n_e \simeq 500\pm 100 \,cm^{-3}$, $T_e \simeq (1.9\pm0.2) \times 10^4 K$. Interstellar absorption is estimated as $A_V \sim 1{^{\rm m}\!\!\!.\,}3$, close to the Galactic value ($A_V^{Gal} = 1{^{\rm m}\!\!\!.\,}14$, according to [@schlegel_abs])
We confirm the estimate of the expansion rate obtained by [@dunne], coming to the conclusion that the expansion velocity is $V_S \lesssim 200$[$km\,s^{-1}\,$]{}. In this case the total emission-line luminosity can be estimated using, for example, the equations of [@DoSutI]:
$$\begin{array}{l}
F_{H\beta} = 7.44 \times 10^{-6} \left( \frac{V_s}{100 km\, s^{-1}} \right)^{2.41}
\times \left( \frac{n_2}{cm^{-3}}\right)
+ \\
\qquad{}
9.86 \times 10^{-6} \left( \frac{V_s}{100 km \,s^{-1}} \right)^{2.28}
\times \left( \frac{n_1}{cm^{-3}}\right) \, erg\, cm^{-2} s^{-1}
\end{array}$$
Here $V_S$ is the shock velocity and $n_1$ the pre-shock hydrogen density. If the surface area is known, one can obtain the total luminosity in $H\beta$ from here. For $V_S = 200km/s$ and $n_1 = 1cm^{-3}$ it appears to be $L(H\beta) \simeq 1.6 \times 10^{36}$[ergs s$^{-1}$]{}, which is far too low compared to the observed value. We therefore suggest an additional source of power providing most of the energy of the optical nebula.
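The order of magnitude of this estimate can be reproduced with a short calculation (a sketch; the assumptions here, which are not stated explicitly in the text, are $n_2 \approx n_1 = 1\,cm^{-3}$ and a shell surface approximated by an ellipsoid, $\approx 4\pi ab$, with semi-axes of the quoted $20\times34$ pc nebula):

```python
import math

def f_hbeta(v_s, n1, n2):
    # Dopita & Sutherland H-beta flux per unit shock area [erg cm^-2 s^-1],
    # as in the equation above; v_s in km/s.
    v = v_s / 100.0
    return 7.44e-6 * v**2.41 * n2 + 9.86e-6 * v**2.28 * n1

PC_CM = 3.086e18                  # parsec in cm
a, b = 10 * PC_CM, 17 * PC_CM     # semi-axes of the 20 x 34 pc nebula
area = 4 * math.pi * a * b        # rough ellipsoidal shell surface

L_hbeta = f_hbeta(200.0, 1.0, 1.0) * area
print(f"L(H-beta) ~ {L_hbeta:.2e} erg/s")
```

This lands near the quoted $\sim 1.6\times10^{36}$ ergs s$^{-1}$, more than an order of magnitude below the observed $(7.2\pm0.2)\times10^{37}erg\,s^{-1}$.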
Photoionization Modelling
=========================
We have computed a grid of CLOUDY96.01 [@cloudy98 (Ferland , 1998)] photoionization models in order to fit MF16 spectrum avoiding shock waves. We have fixed X-ray spectrum known from [*Chandra*]{} observations [@RoCo (Roberts & Colbert, 2003)], assuming all the plasma is situated at 10pc from the central point source, and introduced a blackbody source with the temperature changing from $10^3$ to $10^6$K and integral flux densities from 0.01 to 100 $erg\,cm^{-2}\,s^{-1}$.
The best fit parameters are $\lg T(K) = 5.15\pm 0.05, F = 0.6\pm 0.1 erg\,cm^{-2}\,s^{-1}$, that suggests quite a luminous ultraviolet source: $L_{UV} =
(7.5\pm0.5) \times 10^{39} erg\,s^{-1}$. The UV source is more than 100 times brighter than what can be predicted by extrapolating the thermal component of the best-fit model for the X-ray data [@RoCo (Roberts & Colbert, 2003)].
Ultraluminous UV sources?
=========================
At least for one source we have indications that its X-ray spectrum extends into the EUV region. It is interesting to analyse the implications within the frameworks of the two most popular hypotheses explaining the ULX phenomenon.
For the standard disk of [@ss73] the inner temperature scales as:
$$T_{in} \simeq 1~keV\, \left(\frac{M}{M_\odot}\right)^{-1/4} \left(\frac{\dot{M}}{\dot{M}_{cr}}\right)^{1/4}$$
In Fig. \[fig:seds\] we present the reconstructed Spectral Energy Distribution of NGC6946 ULX-1 including optical identification by [@bfs] and the best-fit blackbody from our model. For comparison, a set of MCD SEDs for IMBHs accreting at 1% of critical rate is shown. To explain the high EUV luminosity and roughly flat SED in the EUV region, a rather high IMBH mass is needed, $M \gtrsim 10^4 $[$M_\odot$]{}.
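The IMBH mass estimate can be checked against the inner-temperature scaling above (a sketch, using $M = 10^4\,M_\odot$ and $\dot{M} = 0.01\,\dot{M}_{cr}$ as in the figure):

```python
def t_inner_keV(mass_msun, mdot_over_mcrit):
    # T_in ~ 1 keV * (M/Msun)^(-1/4) * (Mdot/Mdot_cr)^(1/4)
    return 1.0 * mass_msun**-0.25 * mdot_over_mcrit**0.25

t = t_inner_keV(1e4, 0.01)
print(f"T_in ~ {t * 1000:.1f} eV")  # ~32 eV: a disk peaking in the EUV
```

A $10^4\,M_\odot$ black hole accreting at 1% of the critical rate thus has an inner disk temperature of a few tens of eV, consistent with the EUV-bright, roughly flat SED required for NGC6946 ULX-1.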
For a supercritical disk this relation breaks down [@poutanen (Poutanen et al., 2006)], and the outcoming radiation becomes much softer, except for the X-rays escaping along the disk axis [@superkarpov (Fabrika et al., 2007)]. Most of the luminosity is supposed to be reprocessed into EUV and UV quanta, creating the nearly-flat SED of NGC6946 ULX1. In the optical/UV range the contribution of the donor star may become significant.
![NGC6946 ULX1 SED reconstruction. Optical source $d$ [@bfs (Blair , 2000)] is shown by an asterisk, and the upward arrow above indicates the unabsorbed optical luminosity: it is the lower estimate because only Galactic absorption was taken into account, $A_V =
1{^{\rm m}\!\!\!.\,}14$ according to [@schlegel_abs]. Dashed line represents the best-fit blackbody from our CLOUDY fitting. Thin solid lines are MCD models for accreting IMBHs with infinite outer disk radii. Mass accretion rate was set everywhere to $0.01 \dot{M}_{cr}$. []{data-label="fig:seds"}](abolmasov_fig1.eps){width="\textwidth"}
In [@mf16_main] we make estimates for the detectability of ULXs with GALEX, coming to the conclusion that at least some of them (the sources with lower Galactic absorption) may be bright enough targets even for low-resolution spectroscopy.
Conclusions
===========
We conclude that MF16 is most likely a dense shell illuminated from inside. This can be a certain stage of the evolution of a ULXN, when the central source is bright and the shell itself rather compact. We suggest that ULXs must be luminous EUV sources in some cases, and may also be luminous UV sources.
This work was supported by the RFBR grants NN 05-02-19710, 04-02-16349, 06-02-16855.
Abolmasov, P., Fabrika, S., Sholukhova, O. & Afanasiev, V. 2005 in[*Science Perspectives for 3D Spectroscopy* ]{}, ed. M. Kissler-Patig, M. M. Roth. & J. R. Walsh (Springer Berlin / Heidelberg)
Abolmasov, P., Fabrika, S., Sholukhova, O. 2007 [*in preparation*]{}
Afanasiev V.L., Dodonov S.N., Moiseev A.V., 2001, in [ *Stellar dynamics: from classic to modern*]{}, eds. Osipkov L.P., Nikiforov I.I., Saint Petersburg, 103
Afanasiev, V., Moiseev, A., 2005 Astronomy Letters, 31, 194
Begelman, M. C. 2002, [*ApJ*]{}, 568, L97
Blair, W. P., Fesen, R. A. 1994 [*ApJ*]{}, 424, L103
Blair, W. P., Fesen, R. A., Schlegel, E. M. 2001 [*The Astronomical Journal*]{}, 121, 1497
Colbert, E. J. M., Miller, E. C. 2005 in [*The Tenth Marcel Grossmann Meeting*]{}. Eds.: Mário Novello, Santiago Perez Bergliaffa, Remo Ruffini. Singapore: World Scientific Publishing.Part A, p. 530
Dopita, M. A., Sutherland, R. S. 1996 [*ApJSS*]{}, 102, 161
Dunne, B. C., Gruendl, R. A., Chu, Y.-H. 2000, [*AJ*]{}, 119, 1172
van Dyk, S. D., Sramek, R. A., Weiler, K. W. 1994, [*ApJ*]{}, 425, 77
Fabian, A. C., Terlevich, R. 1996 [*MNRAS*]{}, 280, L5
Fabrika, S., Mescheryakov, A., 2001, In [*Galaxies and their Constituents at the Highest Angular Resolutions* ]{}, Proceedings of IAU Symposium N205, R. T. Schilizzi (Ed.), p. 268, astro-ph/0103070
Fabrika, S. [ *Supercritical disk and jets of SS433* ]{} 2004, *ARAA*, vol. 12
Fabrika, S., Abolmasov, P., Sholukhova, O. 2005, in [*Science Perspectives for 3D Spectroscopy* ]{}, eds. Kissler-Patig, M., Roth., M. M. & Walsh, J. R.
Fabrika, S., Karpov, S., Abolmasov, P. 2007 [*in preparation*]{}
Ferland, G. J. Korista, K.T. Verner, D.A. Ferguson, J.W. Kingdon, J.B. Verner, E.M. 1998, [*PASP*]{}, 110, 761
King, A. R., Davies, M. B., Ward, M. J., Fabbiano, G., Elvis, M. 2001, å, 552, 109
Lehmann, I., Becker, T., Fabrika, S., Roth, M., Miyaji, T., Afanasiev, V., Sholukhova, O., Sánchez, S., Greiner, J., Hasinger, G., Constantini, E., Surkov., A, Burenkov, A. 2005 å, 431, 847
Liu, J.-F., Bregman, N. 2005 [*ApJSS*]{}, 157, 59L
Matonick, D. M., Fesen, R. A., 1997 [*ApJSS*]{}, 112, 49
Osterbrock, D. E. “Astrophysics of Gaseous Nebulae” 1974, San Francisco, eds. W. H. Freeman and Company
Pakull, M. W., Mirioni, L. 2003 RevMexAA (Serie de Conferencias), 15, 197
Poutanen, J., Fabrika, S., Butkevich, A., Abolmasov, P. 2006 [*in press*]{}
Roberts, T. P., Colbert, E. J. M. 2003 [*MNRAS*]{}, 341, 49
Schlegel, D. J., Finkbeiner, P. F., Davis, M. 1998, [*ApJ*]{}, 500, 525
Shakura, N. I., Sunyaev, R. A. 1973, å, 24, 337
Swartz, A. D., Ghosh, K. K., Tennant, A. F., Wu, K., 2004 [*ApJSS*]{}, 154, 519
---
abstract: 'We analyze heavy quark free energies in $2$-flavor QCD at finite temperature and the corresponding heavy quark potential at zero temperature. Static quark anti-quark sources in color singlet, octet and color averaged channels are used to probe thermal modifications of the medium. The temperature dependence of the running coupling, $\alpha_{qq}(r,T)$, is analyzed at short and large distances and is compared to zero temperature as well as quenched calculations. In part we also compare our results to recent findings in $3$-flavor QCD. We find that the characteristic length scale below which the running coupling shows almost no temperature dependence is almost twice as large as the Debye screening radius. Our analysis supports recent findings which suggest that $\chi_c$ and $\psi'$ are suppressed already at the (pseudo-) critical temperature and thus provide a probe for quark gluon plasma production in heavy ion collision experiments, while $J/\psi$ may survive the transition and will dissolve at higher temperatures.'
author:
- Olaf Kaczmarek
- Felix Zantow
bibliography:
- 'paper.bib'
title: |
Static quark anti-quark interactions in zero and finite temperature QCD.\
I. Heavy quark free energies, running coupling and quarkonium binding
---
Introduction
============
The study of the fundamental forces between quarks and gluons is an essential key to the understanding of QCD and the occurrence of different phases which are expected to show up when going from low to high temperatures ($T$) and/or baryon number densities. For instance, at small or vanishing temperatures quarks and gluons get confined by the strong force while at high temperatures asymptotic freedom suggests a quite different QCD medium consisting of rather weakly coupled quarks and gluons, the so-called quark gluon plasma (QGP) [@PLP]. On quite general grounds it is therefore expected that the interactions get modified by temperature. For the analysis of these modifications of the strong forces the change in free energy due to the presence of a static quark anti-quark pair separated by a distance $r$ in a QCD-like thermal heat bath has often been used since the early work [@McLerran:1980pk; @McLerran:1981pb]. In fact, the static quark anti-quark free energy which is obtained from Polyakov loop correlation functions calculated at finite temperature plays a similar important role in the discussion of properties of the strong force as the static quark potential does at zero temperature.
The properties of this observable (at $T=0$: potential, at $T\neq0$: free energy) at short and intermediate distances ($rT\;\lsim\;1$) are important for the understanding of in-medium modifications of heavy quark bound states. A quantitative analysis of heavy quark free energies becomes of considerable importance for the discussion of possible signals for quark gluon plasma formation in heavy ion collision experiments [@Matsui:1986dk; @RR]. For instance, recent studies of heavy quarkonium systems within potential models use the quark anti-quark free energy to define an appropriate finite temperature potential which is considered in the non-relativistic Schrödinger equation [@Digal:2001iu; @Digal:2001ue; @Wong:2001kn; @Wong:2001uu]. Such calculations, however, do not quite match the results of direct lattice calculations of the quarkonium dissociation temperatures, which have been obtained so far only for the pure gauge theory [@Datta:2003ww; @Asakawa:2003re]. It was pointed out [@Kaczmarek:2002mc] that the free energy ($F$) of a static quark anti-quark pair can be separated into two contributions, the internal energy ($U$) and the entropy ($S$). The separation of the entropy contribution from the free energy, [*i.e.*]{} the variable $U=F+TS$, could define an appropriate effective potential at finite temperature[^1] [@Kaczmarek:2002mc; @Zantow:2003ui], $V_{\text{eff}}(r,T)\equiv U$, to be used as input in model calculations, and might explain in part the quantitative differences found when comparing solutions of the Schrödinger equation with direct calculations of spectral functions [@Datta:2003ww; @Asakawa:2003re]. First calculations which use the internal energy obtained in our calculations [@Shuryak:2004tx; @Wong:2004kn; @Brown:2004qi; @Park:2005nv] support this expectation. So far, most of these studies have considered quenched QCD. Potentials from the quenched theory, however, describe the interaction of a heavy quark anti-quark pair in a thermal medium made up of gluons only. It is then important to understand how these results change when the thermal heat bath also contains dynamical quarks.
On the other hand, it is the large distance property of the heavy quark interaction which is important for our understanding of the bulk properties of the QCD plasma phase, [*e.g.*]{} the screening property of the quark gluon plasma [@Kaczmarek:1999mm; @Kaczmarek:2004gv], the equation of state [@Beinlich:1997ia; @Karsch:2000ps] and the order parameter (Polyakov loop) [@Kaczmarek:2003ph; @Kaczmarek:2002mc; @Dumitru:2004gd; @Dumitru:2003hp]. In all of these studies deviations from perturbative calculations and the ideal gas behavior are expected and were indeed found at temperatures which are only moderately larger than the deconfinement temperature. This calls for quantitative non-perturbative calculations. Also in this case most of today's discussions of the bulk thermodynamic properties of the QGP and its apparent deviations from the ideal gas behavior rely on results obtained in lattice studies of the pure gauge theory, although several qualitative differences are to be expected when taking into account the influence of dynamical fermions; for instance, the phase transition in full QCD will appear as a crossover rather than a ’true’ phase transition with related singularities in thermodynamic observables. Moreover, in contrast to a steadily increasing confinement interaction in the quenched QCD theory, in full QCD the strong interaction below deconfinement will show a qualitatively different behavior at large quark anti-quark separations. Due to the possibility of pair creation the stringlike interaction between the two test quarks can break, leading to a constant potential and/or free energy already at temperatures below deconfinement [@DeTar:1998qa].
Thus it is quite important to extend our recently developed concepts for the analysis of the quark anti-quark free energies and internal energies in pure gauge theory [@Kaczmarek:2002mc; @Kaczmarek:2003dp; @Kaczmarek:2004gv; @Phd] to the more complex case of QCD with dynamical quarks, and to quantify the qualitative differences which will show up between pure gauge theories and QCD.
$\beta$ $T/T_c$ \# conf. $\beta$ $T/T_c$ \# conf.
--------- --------- ---------- --------- --------- ----------
3.52 0.76 2000 3.72 1.16 2000
3.55 0.81 3000 3.75 1.23 1000
3.58 0.87 3500 3.80 1.36 1000
3.60 0.90 2000 3.85 1.50 1000
3.63 0.96 3000 3.90 1.65 1000
3.65 1.00 4000 3.95 1.81 1000
3.66 1.02 4000 4.00 1.98 4000
3.68 1.07 3600 4.43 4.01 1600
3.70 1.11 2000
: Sample sizes at each $\beta$ value and the temperature in units of the (pseudo-) critical temperature $T_c$.
\[tab:configs\]
For our study of the strong interaction in terms of the quark anti-quark free energies in full QCD lattice configurations were generated for $2$-flavor QCD ($N_f$=2) on $16^3\times 4$ lattices with bare quark mass $ma$=0.1, [*i.e.*]{} $m/T$=0.4, corresponding to a ratio of pion to rho masses ($m_{\pi}/m_{\rho}$) at the (pseudo-) critical temperature of about $0.7$ ($a$ denotes the lattice spacing) [@Karsch:2000kv]. We have used Symanzik improved gauge and p4-improved staggered fermion actions. This combination of lattice actions is known to reduce the lattice cut-off effects in Polyakov loop correlation functions at small quark anti-quark separations seen as an improved restoration of the broken rotational symmetry. For any further details of the simulations with these actions see [@Allton:2002zi; @Allton:2003vx]. In Table \[tab:configs\] we summarize our simulation parameters, [*i.e.*]{} the lattice coupling $\beta$, the temperature $T/T_c$ in units of the pseudo critical temperature and the number of configurations used at each $\beta$-value. The pseudo critical coupling for this action is $\beta_c=3.649(2)$ [@Allton:2002zi]. To set the physical scale we use the string tension, $\sigma a^2$, measured in units of the lattice spacing, obtained from the large distance behavior of the heavy quark potential calculated from smeared Wilson loops at zero temperature [@Karsch:2000kv]. This is also used to define the temperature scale and $a\sqrt{\sigma}$ is used for setting the scale for the free energies and the physical distances. For the conversion to physical units, $\sqrt{\sigma}=420$MeV is used. For instance, we get $T_c=202(4)$ MeV calculated from $T_c/\sqrt{\sigma}=0.48(1)$ [@Karsch:2000kv]. In parts of our analysis of the quark anti-quark free energies we are also interested in the flavor and finite quark mass dependence. 
For this reason we also compare our $2$-flavor QCD results to recent findings in quenched ($N_f$=0) [@Kaczmarek:2002mc; @Kaczmarek:2004gv] and $3$-flavor QCD ($m_\pi/m_\rho\simeq0.4$ [@Peterpriv]) [@Petreczky:2004pz]. Here we use $T_c=270$ MeV for quenched and $T_c=193$ MeV [@Petreczky:2004pz] for the $3$-flavor case.
Our results for the color singlet quark anti-quark free energies, $F_1$, and color averaged free energies, $F_{av}$, are summarized in Fig. \[fes\] as function of distance at several temperatures close to the transition. At distances much smaller than the inverse temperature ($rT\ll1$) the dominant scale is set by distance and the QCD running coupling will be controlled by the distance. In this limit the thermal modification of the strong interaction will become negligible and the finite temperature free energy will be given by the zero temperature heavy quark potential (solid line). With increasing quark anti-quark separation, however, thermal effects will dominate the behavior of the finite temperature free energies ($rT\gg1$). Qualitative and quantitative differences between quark anti-quark free energy and internal energy will appear and clarify the important role of the entropy contribution still present in free energies. The quark anti-quark internal energy will provide a different look on the inter-quark interaction and thermal modifications of the finite temperature quark anti-quark potential. Further details of these modifications on the quark anti-quark free and internal energies will be discussed.
This paper is organized as follows: We start in section \[sect0\] with a discussion of the zero temperature heavy quark potential and the coupling. Both will be calculated from $2$-flavor lattice QCD simulations. We analyze in section \[secfreee\] the thermal modifications on the quark anti-quark free energies and discuss quarkonium binding. Section \[seccon\] contains our summary and conclusions. A detailed discussion of the quark anti-quark internal energy and entropy will be given separately [@pap2].
The zero temperature heavy quark potential and coupling {#sect0}
=======================================================
Heavy quark potential at $T=0$
------------------------------
For the determination of the heavy quark potential at zero temperature, $V(r)$, we have used the measurements of large smeared Wilson loops given in [@Karsch:2000kv] for the same simulation parameters ($N_f$=2 and $ma=0.1$) and action. To eliminate the divergent self-energy contributions we matched these data for all $\beta$-values (different $\beta$-values correspond to different values of the lattice spacing $a$) at large distances to the bosonic string potential, $$\begin{aligned}
V(r) &=& - \frac{\pi}{12}\frac{1}{r} + \sigma r \nonumber\\
&\equiv&-\frac{4}{3}\frac{\alpha_{\text{str}}}{r}+\sigma r\;,
\label{string-cornell}\end{aligned}$$ where we already have separated the Casimir factor so that $\alpha_{\text{str}}\equiv\pi/16$. In this normalization any divergent contributions to the lattice potential are eliminated uniquely. In Fig. \[peik\] we show our results together with the heavy quark potential from the string picture (dashed line). One can see that the data are well described by Eq. (\[string-cornell\]) at large distances, [*i.e.*]{} $r\sqrt{\sigma}\;\gsim\;0.8$, corresponding to $r\;\gsim\;0.4$ fm. At these distances we see no major difference between the 2-flavor QCD potential obtained from Wilson loops and the quenched QCD potential which can be well parameterized within the string model already for $r\;\gsim\;0.4$ fm [@Necco:2001xg; @Luscher:2002qv]. In fact, we also do not see any signal for string breaking in the zero temperature QCD heavy quark potential. This is expected due to the fact that the Wilson loop operator used here for the calculation of the $T=0$ potential has only small overlap with states where string breaking occurs [@Bernard:2001tz; @Pennanen:2000yk]. Moreover, the distances for which we analyze the data for the QCD potential are below $r\;\lsim\;1.2$ fm at which string breaking is expected to set in at zero temperature and similar quark masses [@Pennanen:2000yk].
The coupling at $T=0$ {#couplt=0}
---------------------
Deviations from the string model and from the pure gauge potential, however, are clearly expected to become apparent in the 2-flavor QCD potential at small distances and may already be seen from the short distance part in Fig. \[peik\]. These deviations are expected to arise from an asymptotic weakening of the QCD coupling, [*i.e.*]{} $\alpha=\alpha(r)$, and to some extent also due to the effect of including dynamical quarks, [*i.e.*]{} from leading order perturbation theory one expects $$\begin{aligned}
\alpha(r) \simeq \frac{1}{8\pi} \frac{1}{\beta_0 \log \left(1/(r \Lambda_{\text{ QCD}})\right)}\;,
\label{runningcoupling}\end{aligned}$$ with $$\begin{aligned}
\beta_0 = \frac{33-2N_f}{48 \pi^2}\;,\end{aligned}$$ where $N_f$ is the number of flavors and $\Lambda_{\text{QCD}}$ denotes the corresponding QCD-$\Lambda$-scale. The data in Fig. \[peik\](b) show a slightly steeper slope at distances below $r\sqrt{\sigma}\simeq0.5$ compared to the pure gauge potential given in Ref. [@Necco:2001xg] indicating that the QCD coupling gets stronger in the entire distance range analyzed here when including dynamical quarks. This is in qualitative agreement with (\[runningcoupling\]). To include the effect of a stronger Coulombic part in the QCD potential we test the Cornell parameterization, $$\begin{aligned}
\frac{V(r)}{\sqrt{\sigma}} = -\frac{4}{3}\frac{\alpha}{r\sqrt{\sigma}} + r \sqrt{\sigma}
\label{t=0ansatz}\;,\end{aligned}$$ with a free parameter $\alpha$. From a best-fit analysis of Eq. (\[t=0ansatz\]) to the data ranging from $0.2\;\lsim\;r\sqrt{\sigma}\;\lsim\;2.6$ we find $$\begin{aligned}
\alpha&=&0.212 (3)\;.\label{res}\end{aligned}$$ This may already indicate that the logarithmic weakening of the coupling with decreasing distance does not strongly influence the properties of the QCD potential at these distances, [*i.e.*]{} at $r\;\gsim\;0.1$ fm. However, the value of $\alpha$ is moderately larger than $\alpha_{\text{str}}\;\simeq\;0.196$ introduced above. To compare the relative size of $\alpha$ in full QCD to $\alpha$ in the quenched theory we have again performed a best-fit analysis of the quenched zero temperature potential given in [@Necco:2001xg], using the Ansatz given in Eq. (\[t=0ansatz\]) and a similar distance range. Here we find $\alpha_{\text{quenched}} = 0.195(1)$, which is again smaller than the value for the QCD coupling but quite comparable to $\alpha_{\text{str}}$. In earlier studies of the heavy quark potentials in pure gauge theories and full QCD even larger values for the couplings were reported [@Glassner:1996xi; @Allton:1998gi; @Aoki:1998sb; @Bali:2000vr; @AliKhan:2001tx; @Aoki:2002uc]. To avoid any confusion concerning the value of $\alpha$, we stress that it should not be confused with the QCD coupling constant $\alpha_{QCD}$; it is simply a fit parameter indicating the ’average strength’ of the Coulomb part in the Cornell potential. The QCD coupling proper could only be identified in the entire perturbative distance regime and would be a running coupling, [*i.e.*]{} $\alpha_{\text{QCD}}=\alpha_{\text{QCD}}(r)$.
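Since Eq. (\[t=0ansatz\]) is linear in the single fit parameter $\alpha$ (with $\sigma$ fixed by the string-tension units), the best fit has a closed least-squares form. A sketch on synthetic, noiseless data (the quoted $\alpha=0.212(3)$ comes from the actual lattice potential, not from this toy fit):

```python
# Cornell ansatz in string units: V(r) = -(4/3) * alpha / r + r,
# with r and V measured in units of sqrt(sigma).
ALPHA_TRUE = 0.212

def cornell(r, alpha):
    return -(4.0 / 3.0) * alpha / r + r

# Synthetic "lattice" data over the fitted range 0.2 < r sqrt(sigma) < 2.6
rs = [0.2 + 0.1 * i for i in range(25)]
vs = [cornell(r, ALPHA_TRUE) for r in rs]

# One-parameter least squares: V - r = alpha * x with x = -4/(3r),
# so alpha = sum(x*y) / sum(x*x).
xs = [-4.0 / (3.0 * r) for r in rs]
ys = [v - r for v, r in zip(vs, rs)]
alpha_fit = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(f"alpha_fit = {alpha_fit:.3f}")  # recovers 0.212 on noiseless data
```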
When approaching the short distance perturbative regime the Cornell form will overestimate the value of the coupling due to the perturbative logarithmic weakening of the latter, $\alpha_{\text{QCD}}=\alpha_{\text{QCD}}(r)$. To analyze the short distance properties of the QCD potential and the coupling in more detail, [*i.e.*]{} for $r\;\lsim\;0.4$ fm, and to firmly establish the onset of the perturbative weakening with decreasing distance, it is customary to use non-perturbative definitions of running couplings. Following the discussions of the running of the QCD coupling [@Bali:1992ru; @Peter:1997me; @Schroder:1998vy; @Necco:2001xg; @Necco:2001gh], it appears most convenient to study the QCD force, [*i.e.*]{} $dV(r)/dr$, rather than the QCD potential. In this case one defines the QCD coupling in the so-called $qq$-scheme, $$\begin{aligned}
\alpha_{qq}(r)&\equiv&\frac{3}{4}r^2\frac{dV(r)}{dr}\;.
\label{alp_qq}\end{aligned}$$ In this scheme any undetermined constant contribution to the heavy quark potential cancels out. Moreover, the large distance, non-perturbative confinement contribution to $\alpha_{qq}(r)$ is positive and allows for a smooth matching of the perturbative short distance coupling to the non-perturbative large distance confinement signal. In any case, however, in the non-perturbative regime the value of the coupling will depend on the observable used for its definition.
We have calculated the derivatives of the potential with respect to the distance, $dV(r)/dr$, by using finite difference approximations for neighboring distances on the lattice for each $\beta$-value separately. Our results for $\alpha_{qq}(r)$ as a function of distance in physical units for 2-flavor QCD are summarized in Fig. \[peiks\]. The symbols for the $\beta$-values are chosen as in Fig. \[peik\](a). We again show in that figure the corresponding line for the Cornell fit (solid line). At large distances, $r\;\gsim\;0.4$ fm, the data clearly mimic the non-perturbative confinement part of the QCD force, $\alpha_{qq}(r)\simeq3r^2\sigma/4$. We also compare our data to the recent high statistics calculation in pure gauge theory (thick solid line) [@Necco:2001xg]. These data are available for $r\;\gsim\;0.1$ fm, and within the statistics of the QCD data no significant differences could be identified between the QCD and pure gauge data for $r\;\gsim\;0.4$ fm. At smaller distances ($r\;\lsim\;0.4$ fm), however, the data show some enhancement compared to the coupling in quenched QCD. The data below $0.1$ fm, moreover, fall below the large distance Cornell fit. This may indicate the logarithmic weakening of the coupling. At distances smaller than $0.1$ fm we therefore expect the QCD potential to be influenced by the weakening of the coupling, and $\alpha_{qq}(r)$ will approach values clearly smaller than $\alpha$ deduced from the Cornell Ansatz. Unfortunately we cannot, at present, go to smaller distances to clearly demonstrate this behavior with our data in 2-flavor QCD. Moreover, at small distances cut-off effects may also influence our analysis of the coupling and more detailed studies are required here.
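The finite-difference estimate of Eq. (\[alp\_qq\]) can be sketched as follows, applied here to the Cornell form instead of the lattice data; all inputs and names are illustrative. For the Cornell potential one expects $\alpha_{qq}(r)=\alpha+\frac{3}{4}r^2\sigma$, which the difference quotients reproduce.

```python
import numpy as np

# alpha_qq(r) = (3/4) r^2 dV/dr, Eq. (alp_qq), estimated with finite
# differences between neighboring distances, as done on the lattice.
def alpha_qq(r, v):
    r_mid = 0.5 * (r[1:] + r[:-1])      # midpoint distances
    dvdr = np.diff(v) / np.diff(r)      # difference quotients
    return r_mid, 0.75 * r_mid**2 * dvdr

# Illustrative input: the Cornell form V(r) = -(4/3) alpha / r + sigma r
# with alpha = 0.212 and the string tension set to one.  For this form
# one expects alpha_qq(r) = alpha + (3/4) sigma r^2.
r = np.linspace(0.1, 1.0, 50)
v = -4.0 * 0.212 / (3.0 * r) + r

r_mid, a_qq = alpha_qq(r, v)
# short distances: Coulomb part, alpha_qq -> alpha
# large distances: confinement part, alpha_qq -> (3/4) r^2 sigma
print(a_qq[0], 0.212 + 0.75 * r_mid[0]**2)
```
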
Despite these uncertainties, earlier studies of the coupling in pure gauge theory [@Necco:2001xg; @Necco:2001gh; @Kaczmarek:2004gv] have shown that the perturbative logarithmic weakening already becomes important at distances smaller than $0.2$ fm and that contact with perturbation theory could be established.
As most of our lattice data for the finite temperature quark anti-quark free energies do not reach distances smaller than $0.1$ fm, we use in the following the Cornell form deduced in (\[t=0ansatz\]) as a reference for the zero temperature heavy quark potential.
Quark anti-quark free energy {#secfreee}
============================
We will analyze here the temperature dependence of the change in free energy due to the presence of a heavy (static) quark anti-quark pair in a 2-flavor QCD heat bath. The static quark sources are described by the Polyakov loop, $$\begin{aligned}
L(\vec{x})&=&\frac{1}{3}{{\rm Tr}}W(\vec{x})\;,\label{pol}\end{aligned}$$ with $$\begin{aligned}
W(\vec{x}) = \prod_{\tau=1}^{N_\tau} U_0(\vec{x},\tau)\;,\label{loop}\end{aligned}$$ where we already have used the lattice formulation with $U_0(\vec{x},\tau) \in
SU(3)$ being defined on the lattice link in time direction. The change in free energy due to the presence of the static color sources in color singlet ($F_1$) and color octet ($F_8$) states can be calculated in terms of Polyakov loop correlation functions [@McLerran:1981pb; @Philipsen:2002az; @Nadkarni:1986as; @Nadkarni:1986cz], $$\begin{aligned}
e^{-F_1(r)/T+C}&=&\frac{1}{3} {{\rm Tr}}\langle W(\vec{x}) W^{\dagger}(\vec{y}) \rangle
\label{f1}\;,\\
e^{-F_8(r)/T+C}&=&\frac{1}{8}\langle {{\rm Tr}}W(\vec{x}) {{\rm Tr}}W^{\dagger}(\vec{y})\rangle- \nonumber \\
&&
\frac{1}{24} {{\rm Tr}}\langle W(\vec{x}) W^{\dagger}(\vec{y}) \rangle\; ,
\label{f8}\end{aligned}$$ where $r=|\vec{x}-\vec{y}|$. As it stands, the correlation functions for the color singlet and octet free energies are gauge dependent quantities and thus gauge fixing is needed to define them properly. Here, we follow [@Philipsen:2002az] and fix to Coulomb gauge. In parts we also consider the so-called color averaged free energy defined through the manifestly gauge independent correlation function of two Polyakov loops, $$\begin{aligned}
e^{-F_{\bar q q}(r)/T+C}&=&\frac{1}{9}\langle {{\rm Tr}}W(\vec{x}) {{\rm Tr}}W^{\dagger}(0) \rangle \nonumber\\
&=&\langle L(\vec{x})L^\dagger(\vec{y})\rangle\; .
\label{fav}\end{aligned}$$
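As a sketch of Eqs. (\[pol\]) and (\[loop\]), the following snippet builds Polyakov loops from random temporal $SU(3)$ links on a small lattice. The lattice size, the random-link ensemble (which plays the role of a strong-coupling-like configuration, not a thermalized one), and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_su3():
    # Haar-like random SU(3): QR-decompose a complex Gaussian matrix,
    # fix the column phases, then remove the overall determinant phase.
    z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    q, r = np.linalg.qr(z)
    q = q * (np.diagonal(r) / np.abs(np.diagonal(r)))
    return q / np.linalg.det(q) ** (1.0 / 3.0)

# Temporal links U_0(x, tau) on an N_sigma^3 x N_tau lattice, Eq. (loop)
n_sigma, n_tau = 4, 4
links = np.array([[random_su3() for _ in range(n_tau)]
                  for _ in range(n_sigma**3)])

def polyakov_loop(site):
    # W(x) = ordered product over tau of U_0(x, tau); L(x) = Tr W / 3
    w = np.eye(3, dtype=complex)
    for u in links[site]:
        w = w @ u
    return np.trace(w) / 3.0

L = np.array([polyakov_loop(s) for s in range(n_sigma**3)])
print(np.mean(L))  # |<L>| is small for uncorrelated links
```
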
The constant $C$ appearing in (\[f1\]), (\[f8\]) and (\[fav\]) also includes divergent self-energy contributions which require renormalization. Following [@Kaczmarek:2002mc] the free energies have been normalized such that the color singlet free energy approaches the heavy quark potential (solid line) at the smallest distance available on the lattice, $F_1(r/a=1, T)=V(r)$. In Sec. \[renormalization\] we will explain the connection of this procedure to the renormalized Polyakov loop and show the resulting renormalization constants in Table \[tab:ren\].
Some results for the color singlet, octet and averaged quark anti-quark free energies are shown in Fig. \[saos\] for one temperature below and one temperature above deconfinement, respectively. The free energies calculated in different color channels coincide at large distances and clearly show the effects from string breaking below and color screening above deconfinement. The octet free energies above $T_c$ are repulsive for all distances while below $T_c$ the distances analyzed here are not small enough to show the (perturbatively) expected repulsive short distance part. Similar results are obtained at all temperatures analyzed here. In the remainder of this section we study in detail the thermal modifications of these free energies from short to large distances. We begin our analysis of the free energies at small distances in Sec. \[couplatt\] with a discussion of the running coupling which leads to the renormalization of the free energies in Sec. \[renormalization\]. The separation of small and large distances which characterizes sudden qualitative changes in the free energy will be discussed in Sec. \[secshort\]. Large distance modifications of the quark anti-quark free energy will be studied in Sec. \[colorscreening\] at temperatures above and in Sec. \[stringbreaking\] at temperatures below deconfinement.
Our analysis of thermal modifications of the strong interaction will mainly be performed for the color singlet free energy. In this case a rather simple Coulombic $r$-dependence is suggested by perturbation theory at $T=0$ and short distances as well as for large distances at high temperatures. By contrast, a proper $r$-dependence of $F_{\bar q q}$ is difficult to establish [@Kaczmarek:2002mc]. This may be attributed to contributions from higher excited states [@Jahn:2004qr] or to the repulsive contributions from states with static charges fixed in an octet configuration.
The running coupling at $T\neq0$ {#couplatt}
--------------------------------
We extend here our studies of the coupling at zero temperature to finite temperatures below and above deconfinement following the conceptual approach given in [@Kaczmarek:2004gv]. In this case the appropriate observable is the color singlet quark anti-quark free energy and its derivative. We use the perturbative short and large distance relation from one gluon exchange [@Nadkarni:1986as; @Nadkarni:1986cz; @McLerran:1981pb], [*i.e.*]{} in the limit $r\Lambda_{\text{QCD}}\ll1$ zero temperature perturbation theory suggests $$\begin{aligned}
F_1(r,T)\;\equiv\;V(r)&\simeq&-\frac{4}{3}\frac{\alpha(r)}{r}\;,\label{alp_rT1}\end{aligned}$$ while high temperature perturbation theory, [*i.e.*]{} $rT\gg1$ and $T$ well above $T_c$, yields $$\begin{aligned}
F_1(r,T)&\simeq&-\frac{4}{3}\frac{\alpha(T)}{r}e^{-m_D(T)r}\;.\label{alp_rT2}\end{aligned}$$ In both relations we have neglected any constant contributions to the free energies which, in particular, at high temperatures will dominate the large distance behavior of the free energies. Moreover, we already anticipated here the running of the couplings with the expected dominant scales $r$ and $T$ in both limits. At finite temperature we define the running coupling in analogy to $T=0$ as (see [@Kaczmarek:2002mc; @Kaczmarek:2004gv]),$$\begin{aligned}
\alpha_{qq}(r,T)&\equiv&\frac{3}{4}r^2 \frac{dF_1(r,T)}{dr}\;.\label{alp_rT}\end{aligned}$$ With this definition any undetermined constant contributions to the free energies are eliminated and the coupling defined here at finite temperature will recover the coupling at zero temperature defined in (\[alp\_qq\]) in the limit of small distances. Therefore $\alpha_{qq}(r,T)$ will show the (zero temperature) weakening in the short distance perturbative regime. In the large distance limit, however, the coupling will be dominated by Eq. (\[alp\_rT2\]) and will be suppressed by color screening, $\alpha_{qq}(r,T)\simeq\alpha(T)\exp(-m_D(T)r)$, $rT\gg1$. It thus will exhibit a maximum at some intermediate distance. Although in the large distance regime $\alpha_{qq}(r,T)$ will be suppressed by color screening and thus non-perturbative effects will strongly control the value of $\alpha_{qq}(r,T)$, in this limit the temperature dependence of the coupling, $\alpha(T)$, can be extracted by directly comparing the singlet free energy with the high temperature perturbative relation above deconfinement. Results from such an analysis will be given in Sec. \[colorscreening\].
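The qualitative behavior described here, a rise of $\alpha_{qq}(r,T)$ with $r$ followed by screening at large distances, can be illustrated with a simple screened Cornell model. This is not our lattice data; the functional form, parameter values, and function names are illustrative assumptions.

```python
import numpy as np

# Phenomenological model: a screened Cornell form
#   F_1(r,T) = [-(4/3) alpha / r + sigma r] * exp(-m_D r),
# with alpha = 0.212 from the text and illustrative sigma, m_D
# (units chosen such that sqrt(sigma) = 1).
alpha, sigma, m_d = 0.212, 1.0, 1.0

r = np.linspace(0.05, 3.0, 600)
f1 = (-4.0 * alpha / (3.0 * r) + sigma * r) * np.exp(-m_d * r)

# alpha_qq(r,T) = (3/4) r^2 dF_1/dr, Eq. (alp_rT), via finite differences
r_mid = 0.5 * (r[1:] + r[:-1])
a_qq = 0.75 * r_mid**2 * np.diff(f1) / np.diff(r)

# The coupling approaches alpha at short distances, rises with r
# (remnant of the confinement force), reaches a maximum at some
# intermediate r_max, and is suppressed by screening at large r.
i_max = np.argmax(a_qq)
print(f"r_max = {r_mid[i_max]:.2f}, alpha_qq(r_max) = {a_qq[i_max]:.2f}")
```
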
We calculated the derivative, $dF_1/dr$, of the color singlet free energies with respect to distance by using cubic spline approximations of the $r$-dependence of the free energies for each temperature. We then performed the derivatives on the basis of these splines. Our results for $\alpha_{qq}(r,T)$ calculated in this way are shown in Fig. \[couplt\] and are compared to the coupling at zero temperature discussed already in Sec. \[couplt=0\]. Here the thin solid line corresponds to the coupling in the Cornell Ansatz deduced in Eq. (\[t=0ansatz\]). We again show in this figure the results from $SU(3)$-lattice (thick line) and perturbative (dashed line) calculations at zero temperature from [@Necco:2001gh; @Necco:2001xg]. The strong $r$-dependence of the running coupling near $T_c$ observed already in pure gauge theory [@Kaczmarek:2004gv] is also visible in 2-flavor QCD. Although our data for 2-flavor QCD do not allow for a detailed quantitative analysis of the running coupling at smaller distances, the qualitative behavior is in quite good agreement with the recent quenched results. At large distances the running coupling shows a strong temperature dependence which sets in at shorter separations with increasing temperature. At temperatures close to but above $T_c$, $\alpha_{qq}(r,T)$ coincides with $\alpha_{qq}(r)$ already at separations $r\;\simeq\;0.4$ fm and clearly mimics here the confinement part of $\alpha_{qq}(r)$. This is also apparent in quenched QCD [@Kaczmarek:2004gv]. Remnants of the confinement part of the QCD force may survive the deconfinement transition and could play an important role for the discussion of non-perturbative aspects of quark anti-quark interactions at temperatures moderately above $T_c$ [@Shuryak:2004tx; @Brown:2004qi].
A clear separation between the effects usually described as color screening ($T\;\gsim\;T_c$) and those described as string breaking ($T\;\lsim\;T_c$) is difficult to establish at temperatures in the close vicinity of the confinement deconfinement crossover.
We also analyzed the size of the maximum that the running coupling $\alpha_{qq}(r,T)$ at fixed temperature exhibits at a certain distance, $r_{max}$, [*i.e.*]{} we identify a temperature dependent coupling, $\tilde{\alpha}_{qq}(T)$, defined as $$\begin{aligned}
\tilde{\alpha}_{qq}(T)&\equiv&\alpha_{qq}(r_{max},T)\;.\label{alp_Tdef} \end{aligned}$$ The values for $r_{max}$ will be discussed in Sec. \[secshort\] (see Fig. \[onset\]). Values for $\tilde{\alpha}_{qq}(T)$ are also available in pure gauge theory [@Kaczmarek:2004gv] at temperatures above deconfinement[^2]. Our results for $\tilde{\alpha}_{qq}(T)$ in $2$-flavor QCD and pure gauge theory are shown in Fig. \[alp\_qqT\] as a function of temperature, $T/T_c$. At temperatures above deconfinement we cannot identify significant differences between the data from pure gauge and 2-flavor QCD[^3]. Only at temperatures quite close to but above the phase transition do small differences between full and quenched QCD become visible in $\tilde{\alpha}_{qq}(T)$. Nonetheless, the value of $\tilde{\alpha}_{qq}(T)$ drops from about $0.5$ at temperatures only moderately larger than the transition temperature, $T\;\gsim\;1.2T_c$, to a value of about $0.3$ at $2T_c$. This change of $\tilde{\alpha}_{qq}(T)$ with temperature calculated in $2$-flavor QCD does not appear too dramatic and might indeed be described by the $2$-loop perturbative coupling, $$\begin{aligned}
g_{\text{2-loop}}^{-2}(T)=2\beta_0\ln\left(\frac{\mu T}
{\Lambda_{\overline{MS}}}\right)+\frac{\beta_1}{\beta_0}
\ln\left(2\ln\left(\frac{\mu T}{\Lambda_{\overline{MS}}}\right)\right),\nonumber\\
\label{2loop}\end{aligned}$$ with $$\begin{aligned}
\beta_0&=&\frac{1}{16\pi^2}\left(11-\frac{2N_f}{3}\right)\;,\nonumber\\
\beta_1&=&\frac{1}{(16\pi^2)^2}\left(102-\frac{38N_f}{3}\right)\;,\nonumber\end{aligned}$$ assuming vanishing quark masses. In view of the ambiguity in setting the scale in perturbation theory, $\mu T$, we performed a best-fit analysis to fix the scale for the entire temperature range, $1.2\;\lsim\;T/T_c\;\lsim\;2$. We find here $\mu=1.14(2)\pi$ with $T_c/\Lambda_{\overline{MS}}=0.77(21)$ using $T_c\simeq202(4)$ MeV [@Karsch:2000ps] and $\Lambda_{\overline{MS}}\simeq
261(17)$ MeV [@Gockeler:2005rv], which is still in agreement with the lower limit of the range of scales commonly used to fix perturbative couplings, $\mu=\pi,...,4\pi$. This is shown by the solid line (fit) in Fig. \[alp\_qqT\], including the error band estimated through $\mu=\pi$ to $\mu=4\pi$ and the error on $T_c/\Lambda_{\overline{MS}}$ (dotted lines). We will return to the temperature dependence of the coupling above deconfinement in Sec. \[colorscreening\].
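A sketch evaluating the $2$-loop coupling (\[2loop\]) with the fitted scale: the central values $\mu=1.14\pi$ and $T_c/\Lambda_{\overline{MS}}=0.77$ are taken from the text, while the function names and the neglect of all uncertainties are illustrative.

```python
import math

# Two-loop running coupling of Eq. (2loop) at scale mu*T, for N_f
# massless flavors; returns 1/g^2.
def g2_inv_2loop(t_over_lambda, mu, n_f=2):
    b0 = (11.0 - 2.0 * n_f / 3.0) / (16.0 * math.pi**2)
    b1 = (102.0 - 38.0 * n_f / 3.0) / (16.0 * math.pi**2) ** 2
    log = math.log(mu * t_over_lambda)
    return 2.0 * b0 * log + (b1 / b0) * math.log(2.0 * log)

def alpha_2loop(t_over_tc, mu=1.14 * math.pi, tc_over_lambda=0.77, n_f=2):
    # alpha = g^2 / (4 pi), using the fitted scale mu = 1.14(2) pi and
    # T_c / Lambda_MSbar = 0.77 quoted in the text (central values only)
    g2 = 1.0 / g2_inv_2loop(t_over_tc * tc_over_lambda, mu, n_f)
    return g2 / (4.0 * math.pi)

# The coupling decreases only logarithmically between 1.2 T_c and 2 T_c,
# reaching roughly 0.3 at 2 T_c as stated in the text.
print(alpha_2loop(1.2), alpha_2loop(2.0))
```
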
At temperatures in the vicinity of and below the phase transition temperature, $T\;\lsim\;1.2T_c$, the behavior of $\tilde{\alpha}_{qq}(T)$ is, however, quite different from the perturbative logarithmic change with temperature. The values for $\tilde{\alpha}_{qq}(T)$ rapidly grow with decreasing temperature and approach non-perturbatively large values. This shows that $\alpha_{qq}(r,T)$ still mimics the confinement part of the zero temperature force at relatively large distances and that this behavior persists up to temperatures close to but above deconfinement. It again demonstrates the persistence of confinement forces at $T\;\gsim\;T_c$ and intermediate distances, and the difficulty of clearly separating the effects usually described by color screening and string breaking in the vicinity of the phase transition. We note, however, that similar to the coupling in quenched QCD [@Kaczmarek:2004gv] the coupling which describes the short distance Coulombic part of the free energies is almost temperature independent in this temperature regime, [*i.e.*]{} even at relatively large distances the free energies shown in Fig. \[fes\] show no or only little temperature dependence below deconfinement.
Renormalization of the quark anti-quark free energies and Polyakov loop {#renormalization}
-----------------------------------------------------------------------
On the lattice the expectation value of the Polyakov loop and its correlation functions suffer from linear divergences. These lead to vanishing expectation values in the continuum limit, $a\to0$, at all temperatures. To obtain a meaningful physical observable a proper renormalization is required [@Kaczmarek:2002mc; @Dumitru:2004gd; @deForcrand:2001nd]. We follow here the conceptual approach suggested in [@Kaczmarek:2002mc; @Phd] and extend our earlier studies in pure gauge theory to the present case of 2-flavor QCD. First experiences with this renormalization method in full QCD were already reported in [@Kaczmarek:2003ph; @Petreczky:2004pz].
In the limit of short distances, $r\ll1/T$, thermal modifications of the quark anti-quark free energy become negligible and the running coupling is controlled by distance only. Thus we can fix the free energies at small distances to the heavy quark potential, $F_1(r\ll1/T,T)\simeq V(r)$, and the renormalization group equation (RGE) will lead to $$\begin{aligned}
\lim_{r\to0}T\frac{dF_1(r,T)}{dT}&=&0\;,\label{RGEf}\end{aligned}$$ where we already have assumed that the continuum limit, $a\to0$, has been taken. On basis of the analysis of the coupling in Sec. \[couplatt\] and experiences with the quark anti-quark free energy in pure gauge theory [@Kaczmarek:2002mc; @Kaczmarek:2004gv] we assume here that the color singlet free energies in 2-flavor QCD calculated on finite lattices with temporal extent $N_\tau=4$ already have approached appropriate small distances, $r\ll1/T$, allowing for renormalization.
The (renormalized) color singlet quark anti-quark free energies, $F_1(r,T)$, and the heavy quark potential, $V(r)$ (line), were already shown in Fig. \[fes\](a) as function of distance at several temperatures close to the phase transition. From that figure it can be seen that the quark anti-quark free energy fixed at small distances approaches finite, temperature dependent plateau values at large distances signaling color screening ($T\;\gsim\;T_c$) and string breaking ($T<T_c$). These plateau values, $F_\infty(T)\equiv
F_1(r\to\infty,T)$, are decreasing with increasing temperature in the temperature range analyzed here. In general it is expected that $F_\infty(T)$ will continue to increase at smaller temperatures and will smoothly match $V(r\to\infty)$ [@Digal:2001iu] at zero temperature, while it will become negative at high temperature and asymptotically is expected to become proportional to $g^3T$ [@Gava:1981qd; @Kaczmarek:2002mc]. The plateau value of the quark anti-quark free energy at large distances can be used to define non-perturbatively the renormalized Polyakov loop [@Kaczmarek:2002mc], [*i.e.*]{} $$\begin{aligned}
L^{\text{ren}}(T)&=&\exp\left(-\frac{F_\infty(T)}{2T}\right)\;.\label{renPloop}\end{aligned}$$ As the unrenormalized free energies approach $|\langle L\rangle|^2$ at large distances, this may be reinterpreted in terms of a renormalization constant that has been determined by demanding (\[RGEf\]) to hold at short distances [@Dumitru:2003hp; @Zantow:2003uh], $$\begin{aligned}
L^{\text{ren}}&\equiv&|\langle \left(Z(g,m)\right)^{N_\tau} L \rangle|\;.\end{aligned}$$ The values for $Z(g,m)$ for our simulation parameters are summarized in Table \[tab:ren\]. The normalization constants for the free energies appearing in (\[f1\]-\[fav\]) are then given by $$\begin{aligned}
C=-2 N_\tau \ln Z(g,m)\;.\end{aligned}$$ An analysis of the renormalized Polyakov loop expectation value in high temperature perturbation theory [@Gava:1981qd] suggests, at (resummed) leading order[^4], the behavior $$\begin{aligned}
L^{\text{ren}}(T)&\simeq&1+\frac{2}{3}\frac{m_D(T)}{T}\alpha(T)\label{lrenpert}\;\end{aligned}$$ in the fundamental representation. Thus high temperature perturbation theory suggests that the limiting value at infinite temperature, $L^{\text{ren}}(T\to\infty)=1$, is approached from above. An expansion of (\[renPloop\]) then suggests $F_\infty(T)\simeq-\frac{4}{3}m_D(T)\alpha(T)\simeq-{\cal O}(g^3T)$. We thus expect $F_\infty(T)\to-\infty$ in the high temperature limit.
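The last two statements follow from a one-line expansion of Eq. (\[renPloop\]) matched against Eq. (\[lrenpert\]):

```latex
% Expand L_ren = exp(-F_inf/2T) for small F_inf/T and match Eq. (lrenpert):
\begin{aligned}
e^{-F_\infty(T)/2T} \;\simeq\; 1-\frac{F_\infty(T)}{2T}
\;\overset{!}{=}\; 1+\frac{2}{3}\frac{m_D(T)}{T}\,\alpha(T)
\quad\Rightarrow\quad
F_\infty(T) \;\simeq\; -\frac{4}{3}\,m_D(T)\,\alpha(T)\;.
\end{aligned}
% With m_D ~ gT and alpha ~ g^2 this gives F_inf ~ -O(g^3 T), hence
% F_inf -> -infinity in the high temperature limit.
```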
$\beta$ $Z(g,m)$ $T/T_c$ $L^{\text{ren}}(T)$
--------- ----------- --------- ---------------------
3.52 1.333(19) 0.76 0.033(2)
3.55 1.351(10) 0.81 0.049(2)
3.60 1.370(08) 0.90 0.093(2)
3.63 1.376(07) 0.96 0.160(3)
3.65 1.376(07) 1.00 0.241(5)
3.66 1.375(06) 1.02 0.290(5)
3.68 1.370(06) 1.07 0.398(7)
3.72 1.374(02) 1.16 0.514(3)
3.75 1.379(02) 1.23 0.575(2)
3.80 1.386(01) 1.36 0.656(2)
3.85 1.390(01) 1.50 0.722(2)
3.90 1.394(01) 1.65 0.779(1)
3.95 1.396(13) 1.81 0.828(3)
4.00 1.397(01) 1.98 0.874(1)
4.43 1.378(01) 4.01 1.108(2)
: Renormalization constants, $Z(g,m)$, versus $\beta$ and the renormalized Polyakov loop, $L^{\text{ren}}$, versus $T/T_c$ for 2-flavor QCD with quark mass $m/T=0.4$.
\[tab:ren\]
To avoid here any fit to the complicated $r$- and $T$-dependence of the quark anti-quark free energy we estimate the value of $F_\infty(T)$ from the quark anti-quark free energies at the largest separation available on a finite lattice, $r=N_\sigma/2$. As the free energies in this renormalization scheme coincide at large distances in the different color channels we determine $F_\infty(T)$ from the color averaged free energies, [*i.e.*]{} $F_\infty(T)\equiv F_{\bar q q}(r=N_\sigma/2,T)$. This is a manifestly gauge invariant quantity. In Fig. \[renpol\] we show the results for $L^{\text{ren}}$ in 2-flavor QCD (filled symbols) compared to the quenched results (open symbols) obtained in [@Kaczmarek:2002mc]. In quenched QCD $L^{\text{ren}}$ is zero below $T_c$ as the quark anti-quark free energy signals permanent confinement, [*i.e.*]{} $F_\infty(T\;\lsim\; T_c)=\infty$ in the infinite volume limit, while it jumps to a finite value just above $T_c$. The singularity in the temperature dependence of $L^{\text{ren}}(T)$ located at $T_c$ clearly signals the first order phase transition in $SU(3)$ gauge theory. The renormalized Polyakov loop in $2$-flavor QCD, however, is no longer zero below $T_c$. Due to string breaking the quark anti-quark free energies approach constant values leading to non-zero values of $L^{\text{ren}}$. Although the renormalized Polyakov loop calculated in full QCD is no longer an order parameter for the confinement deconfinement phase transition, it still shows a quite different behavior in the two phases and a clear signal for a qualitative change in the vicinity of the transition. Above deconfinement $L^{\text{ren}}(T)$ yields finite values also in quenched QCD.
In the temperature range $1\;\lsim\;T/T_c\;\lsim\;2$ we find that in 2-flavor QCD $L^{\text{ren}}$ lies below the results in quenched QCD. This, however, may change at higher temperatures. The value for $L^{\text{ren}}$ at $4T_c$ is larger than unity and we find indications that $L^{\text{ren}}_{\mathrm{2-flavor}}(4T_c)\;\gsim\;L^{\text{ren}}_{\mathrm{quenched}}(4T_c)$. The properties of $L^{\text{ren}}$, however, clearly depend on the relative normalization of the quark anti-quark free energies in quenched and full QCD.
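Eq. (\[renPloop\]) can be inverted to recover the plateau values from the renormalized Polyakov loop; a small sketch using entries from Table \[tab:ren\] (the helper name is illustrative):

```python
import math

# Eq. (renPloop): L_ren(T) = exp(-F_inf(T) / 2T), so the plateau value of
# the free energy is recovered from the renormalized Polyakov loop as
#   F_inf/T = -2 ln L_ren.
def f_inf_over_t(l_ren):
    return -2.0 * math.log(l_ren)

# Two entries from Table (tab:ren): L_ren = 0.241 at T = T_c and
# L_ren = 0.874 at T = 1.98 T_c.
print(f_inf_over_t(0.241))  # F_inf/T decreases with increasing T ...
print(f_inf_over_t(0.874))

# At ~4 T_c the table gives L_ren = 1.108 > 1, i.e. F_inf(T) < 0,
# consistent with the expected high-temperature behavior F_inf ~ -O(g^3 T).
print(f_inf_over_t(1.108) < 0.0)  # True
```
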
Short vs. large distances {#secshort}
-------------------------
Having discussed the quark anti-quark free energies at quite small distances where no or only little temperature effects influence the behavior of the free energies and at quite large distances where aside from $T$ no other scale controls the free energy, we now turn to a discussion of medium effects at intermediate distances. The aim is to gain insight into distance scales that can be used to quantify at which distances temperature effects in the quark anti-quark free energies set in and may influence the in-medium properties of heavy quark bound states in the quark gluon plasma.
It can be seen from Fig. \[fes\](a) that the color singlet free energy changes rapidly from the Coulomb-like short distance behavior to an almost constant value at large distances. This change reflects the in-medium properties of the heavy quark anti-quark pair, [*i.e.*]{} the string-breaking property and color screening. To characterize this rapid onset of in-medium modifications in the free energies we introduced in Ref. [@Kaczmarek:2002mc] a scale, $r_{med}$, defined as the distance at which the value of the $T=0$ potential reaches the value $F_\infty(T)$, [*i.e.*]{} $$\begin{aligned}
V(r_{med})&\equiv&F_\infty(T)\;.\label{rmed}\end{aligned}$$ As $F_\infty(T)$ is a gauge invariant observable this relation provides a non-perturbative, gauge invariant definition of the scale $r_{med}$. While in pure gauge theory the color singlet free energies signal permanent confinement at temperatures below $T_c$, so that this scale is properly defined only above deconfinement, in full QCD it can be deduced in the whole temperature range. On the other hand, the change of the coupling $\alpha_{qq}(r,T)$ as a function of distance at fixed temperature mimics the qualitative change in the interaction when going from small to large distances, and the coupling exhibits a maximum at some intermediate distance. The location of this maximum, $r_{max}$, can also be used to define a scale that characterizes the separation between the short distance vacuum-like and the large distance medium-modified interaction between the static quarks [@Kaczmarek:2004gv]. Due to the rapid crossover from short to large distance behavior (see Fig. \[fes\](a)) it should be obvious that $r_{med}$ and $r_{max}$ define similar scales; by construction, however, $r_{max}\;\lsim\;r_{med}$.
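A sketch of solving Eq. (\[rmed\]) by bisection for the Cornell parameterization; the units (string tension set to one), the neglected constant offset of $V(r)$, and all numbers are illustrative assumptions.

```python
# Solving Eq. (rmed), V(r_med) = F_inf(T), for the Cornell form of the
# zero-temperature potential; any constant term of V is assumed to be
# absorbed into F_inf, and units are chosen such that sqrt(sigma) = 1.
def cornell(r, alpha=0.212):
    return -4.0 * alpha / (3.0 * r) + r

def r_med(f_inf, lo=1e-3, hi=10.0):
    # V(r) is monotonically increasing, so bisection is safe
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cornell(mid) < f_inf:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# As the plateau value F_inf(T) decreases with increasing temperature,
# r_med moves to shorter distances:
for f in (1.5, 1.0, 0.5):
    print(f, r_med(f))
```
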
To gain information about the flavor and quark mass dependence of our analysis of these scales in QCD, we also took data for $F_\infty(T)$ from Ref. [@Petreczky:2004pz] at smaller quark mass, $m_\pi/m_\rho\simeq0.4$ [@Peterpriv], and calculated $r_{med}$ in $3$-flavor QCD with respect to the parameterization of $V(r)$ given in [@Petreczky:2004pz]. It is interesting to note here that a study of the flavor and quark mass dependence of $r_{med}$ and $r_{max}$ is independent of any undetermined, and possibly flavor and/or quark mass dependent, overall normalization of the corresponding $V(r)$ at zero temperature. Our results for $r_{max}$ ($N_f$=0,2) and $r_{med}$ ($N_f$=0,2,3) are summarized in Fig. \[onset\] as a function of $T/T_c$. It can be seen that the value $r_{max}\simeq 0.6$ fm is approached at the phase transition in both quenched and $2$-flavor QCD, and in both cases it drops to about $0.25$ fm at temperatures of about $2T_c$. No or only little difference between $r_{max}$ calculated from pure gauge and $2$-flavor QCD could be identified at temperatures above deconfinement. The temperature dependence of $r_{med}$ is similar to that of $r_{max}$ and again we see no major differences between pure gauge ($N_f$=0) and QCD ($N_f$=2,3) results. In the vicinity of the transition temperature and above, both scales almost coincide. In fact, above deconfinement the flavor and finite quark mass dependence of $r_{med}$ appears quite negligible. At high temperature we expect $r_{med}\simeq1/gT$ [@Kaczmarek:2002mc], while in terms of $r_{max}$ we found agreement with $r_{max}=0.48(1)$ fm $T_c/T$ (solid lines) at temperatures ranging up to $12T_c$ [@Kaczmarek:2004gv]. Note that both scales lie well above the smallest distance attainable by us on the lattice, $rT\equiv1/N_\tau=1/4$. This distance is shown by the lower dashed line in Fig. \[onset\].
At temperatures below deconfinement $r_{max}$ and $r_{med}$ rapidly increase and move apart as the temperature decreases. In fact, at temperatures below deconfinement we clearly see differences between $r_{med}$ calculated in $2$- and $3$-flavor QCD. To some extent this is expected due to the smaller quark mass used in the $3$-flavor QCD study, as the string breaking energy gets reduced. It is, however, difficult to clearly separate a finite quark mass effect from a flavor dependence here. In both cases $r_{med}$ approaches, already at $T\simeq0.8T_c$, values quite similar to those reported for the distance at which string breaking at $T=0$ is expected at similar quark masses. In $2$-flavor QCD at $T=0$ and quark mass $m_\pi/m_\rho\simeq0.7$ the string is expected to break at about $1.2-1.4$ fm [@Pennanen:2000yk], while at smaller quark mass, $m_\pi/m_\rho\simeq0.4$, it might break earlier [@Bernard:2001tz].
In contrast to the complicated $r$- and $T$-dependence of the free energy at intermediate distances, high temperature perturbation theory suggests a color screened Coulomb behavior for the singlet free energy at large distances. To analyze this in more detail we show in Fig. \[screeningf\] the subtracted free energies, $r(F_1(\infty,T)-F_1(r,T))$. It can be seen that this quantity indeed decays exponentially at large distances, $rT\;\gsim\;1$. This allows us to study the temperature dependence of the parameters $\alpha(T)$ and $m_D(T)$ given in Eq. (\[alp\_rT2\]). At intermediate and small distances, however, deviations from this behavior are expected and can clearly be seen; they are to some extent due to the onset of the $r$-dependence of the coupling at small distances. These deviations from the simple exponential decay become important already below some characteristic scale, $r_d$, which we can roughly identify here as $r_dT\simeq0.8\;-\;1$. This scale, which defines a lower limit for the applicability of high temperature perturbation theory, is shown by the upper dashed line in Fig. \[onset\] ($r_dT=1$). It lies well above the scales $r_{med}$ and $r_{max}$ which characterize the onset of medium modifications of the quark anti-quark free energy.
Screening properties above deconfinement and the coupling {#colorscreening}
---------------------------------------------------------
### Screening properties and quarkonium binding
We follow here the approach commonly used [@Attig:1988ey; @Nakamura:2004wr; @Kaczmarek:2004gv] and define the non-perturbative screening mass, $m_D(T)$, and the temperature dependent coupling, $\alpha(T)$, from the exponential fall-off of the color singlet free energies at large distances, $rT\;\gsim\;0.8\;$-$\;1$. A consistent definition of screening masses, however, requires a proper definition of the temperature dependent coupling, and only at sufficiently high temperatures is contact with perturbation theory expected [@Kaczmarek:2004gv; @Laine:2005ai]. A similar discussion of the color averaged quark anti-quark free energy is given in Refs. [@Kaczmarek:1999mm; @Petreczky:2001pd; @Zantow:2001yf].
We used the Ansatz (\[alp\_rT2\]) to perform a best-fit analysis of the large distance part of the color singlet free energies, [*i.e.*]{} we used fit functions with the Ansatz $$\begin{aligned}
F_1(r,T)-F_1(r=\infty,T) = - \frac{4a(T)}{3r}e^{-m(T)r},
\label{screenfit}\end{aligned}$$ where the two parameters $a(T)$ and $m(T)$ are used to estimate the coupling $\alpha(T)$ and the Debye mass $m_D(T)$, respectively. The fit range was chosen with respect to our discussion in Sec. \[secshort\], [*i.e.*]{} $rT\;\gsim\;0.8\;$-$\;1$, where we varied the lower fit limit within this range and averaged over the resulting values. The temperature dependent coupling $\alpha(T)$ defined here will be discussed later. Our results for the screening mass, $m_D(T)/T$, are summarized in Fig. \[screenmass\] as a function of $T/T_c$ and are compared to the results obtained in pure gauge theory [@Kaczmarek:2004gv]. The values obtained from our $2$-flavor QCD calculations are somewhat larger than in quenched QCD. Although we do not expect perturbation theory to hold at these low temperatures, this enhancement is in qualitative agreement with leading order perturbation theory, [*i.e.*]{} $$\begin{aligned}
\frac{m_D(T)}{T}&=&\left(1 + \frac{N_f}{6}\right)^{1/2}\;g(T)\;.\label{LOscreen}\end{aligned}$$ However, using the $2$-loop formula (\[2loop\]) to estimate the temperature dependence of the coupling leads to significantly smaller values for $m_D/T$ even when setting the scale by $\mu=\pi$ which commonly is used as an upper bound for the perturbative coupling. We therefore follow [@Kaczmarek:2004gv; @Kaczmarek:1999mm] and introduce a multiplicative constant, $A$, [*i.e.*]{} we allow for a non-perturbative correction defined as $$\begin{aligned}
\frac{m_D(T)}{T}&\equiv& A\;\left(1 + \frac{N_f}{6}\right)^{1/2}\;g_{2-loop}(T)\;,\end{aligned}$$ and fix this constant by best agreement with the non-perturbative data for $m_D(T)/T$ at temperatures $T\;\gsim\;1.2T_c$. Here the scale in the perturbative coupling is fixed by $\mu=2\pi$. This analysis leads to $A=1.417(19)$ and is shown as a solid line with error band (dotted lines). Similar results were already reported in [@Kaczmarek:1999mm; @Kaczmarek:2004gv] for screening masses in pure gauge theory. Using the same fit range, i.e. $T=1.2T_c\;-\;4.1T_c$, for the quenched results, we obtain $A=1.515(17)$. To avoid any confusion concerning $A$ we note that its value depends crucially on the temperature range used to determine it. When approaching the perturbative high temperature limit, $A\to 1$ is expected.
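The fit of Eq. (\[screenfit\]) can be illustrated with a short numerical sketch. Since the Ansatz $F_1(r,T)-F_1(\infty,T)=-\frac{4a}{3r}\,e^{-mr}$ implies $\ln\!\left(-\tfrac{3}{4}rF\right)=\ln a - mr$, both fit parameters can be recovered by an ordinary linear least-squares fit in $r$. The data and parameter values below are purely illustrative and are not the lattice data of this work:

```python
import math

def fit_screened_coulomb(rs, Fs):
    """Fit F(r) = -(4a/3r) * exp(-m*r) by linearizing:
    ln(-3*r*F/4) = ln(a) - m*r, then ordinary least squares in r."""
    ys = [math.log(-3.0 * r * F / 4.0) for r, F in zip(rs, Fs)]
    n = len(rs)
    xbar = sum(rs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(rs, ys)) / \
            sum((x - xbar) ** 2 for x in rs)
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope   # (a, m)

# Synthetic "data" at illustrative parameter values:
a_true, m_true = 0.3, 2.5
rs = [0.8 + 0.05 * i for i in range(20)]
Fs = [-(4.0 * a_true) / (3.0 * r) * math.exp(-m_true * r) for r in rs]

a_fit, m_fit = fit_screened_coulomb(rs, Fs)
```

With real (noisy) data one would additionally restrict the fit to $rT\;\gsim\;0.8$-$1$, vary the lower fit limit, and average over the resulting values, as described in the text.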
It is interesting to note here that the difference in $m_D/T$ apparent in Fig. \[screenmass\] between $2$-flavor QCD and pure gauge theory disappears when converting $m_D(T)$ to physical units. This is obvious from Fig. \[screenradius\] which shows the Debye screening radius, $r_D\equiv1/m_D$. In general $r_D$ is used to characterize the distance at which medium modifications of the quark anti-quark interaction become dominant. It is often used to describe the screening effects in phenomenological inter-quark potentials at high temperatures. From perturbation theory one expects that the screening radius will drop like $1/gT$. A definition of a screening radius, however, will again depend on the ambiguities present in the non-perturbative definition of a screening mass, $m_D(T)$. A different quantity that characterizes the onset of medium effects, $r_{med}$, has already been introduced in Sec. \[secshort\]; this quantity is also expected to drop like $1/(gT)$ at high temperatures and could be considered to give an upper limit for the screening radius [@Kaczmarek:2004gv]. In Fig. \[screenradius\] we show both length scales as a function of temperature, $T/T_c$, and compare them to the findings in quenched QCD [@Kaczmarek:2002mc; @Kaczmarek:2004gv]. It can be seen that in the temperature range analyzed here $r_D(T)\;<\;r_{med}(T)$ and little or no difference between the results from quenched ($N_f$=0) and full ($N_f$=2,3) QCD could be identified. Again we stress that in the perturbative high temperature limit differences are expected to arise as expressed by Eq. (\[LOscreen\]).
It is important to realize that at distances well below $r_{med}$ medium effects become suppressed and the color singlet free energy almost coincides with the zero temperature heavy quark potential (see Fig. \[fes\](a)). In particular, the screening radius estimated from the inverse Debye mass corresponds to distances which are only moderately larger than the smallest distance available in our calculations (compare with the lower dotted line in Fig. \[onset\]). In view of the almost temperature independent behavior of the color singlet free energies at small distances (Fig. \[fes\](a)) it could be misleading to quantify the dominant screening length of the medium in terms of $r_D\equiv1/m_D$. On the other hand the color averaged free energies show already strong temperature dependence at distances similar to $r_D$ (see Fig. \[fes\](b)).
Following [@Karsch:2005ex] we also included in Fig. \[screenradius\] the mean charge radii of the most prominent charmonium states, $J/\psi$, $\chi_c$ and $\psi'$, as horizontal lines. These lines characterize the averaged separation $r$ which enters the effective potential in potential model calculations. It is thus reasonable to expect that the temperature at which these radii equal $r_{med}$ could give a rough estimate for the onset of thermal effects in the charmonium states. From this point of view it appears quite reasonable that the $J/\psi$ may indeed survive the phase transition [@Asakawa:2003re; @Datta:2003ww], while the $\chi_c$ and $\psi'$ are expected to show significant thermal modifications at temperatures close to the transition. Recent potential model calculations support this analysis [@Wong:2004kn]. The wave functions for these states, however, will also reach out to larger distances [@Jacobs:1986gv] and this estimate can only be taken as a first indication for the relevant temperatures. Further details on this issue including also bottomonium states have been given in Ref. [@Karsch:2005ex]. We will turn again to a discussion of thermal modifications of quarkonium states in Ref. [@pap2] using finite temperature quark anti-quark energies.
### Temperature dependence of $\alpha_s$
We finally discuss here the temperature dependence of the QCD coupling, $\alpha(T)$, extracted from the fits also used to determine $m_D$, [*i.e.*]{} from Eq. (\[screenfit\]). From fits of the free energies above deconfinement we find the values shown in Fig. \[cTcomp\] as a function of $T/T_c$ (filled circles). We again show in this figure also the temperature dependent coupling $\tilde{\alpha}_{qq}(T)$ introduced in Sec. \[couplatt\]. It can clearly be seen that the values for both couplings are quite different, $\tilde{\alpha}_{qq}(T)\;\gsim\;\alpha(T)$, at temperatures close to but above deconfinement, while this difference rapidly decreases with increasing temperature. This again demonstrates the ambiguity in defining the coupling in the non-perturbative temperature range due to the different non-perturbative contributions to the observable used for its definition [@Kaczmarek:2004gv]. In fact, at temperatures close to the phase transition temperature we find quite large values for $\alpha(T)$, [*i.e.*]{} $\alpha(T)\simeq 2\; -\; 3$ in the vicinity of $T_c$, while it drops rapidly to values smaller than unity, [*i.e.*]{} $\alpha(T)\;\lsim\;1$, already at temperatures $T/T_c\;\gsim\;1.5$. A similar behavior was also found in [@Kaczmarek:2004gv] for the coupling in pure gauge theory (open symbols). In fact, little or no enhancement of the values calculated in full QCD compared to the values in quenched QCD could be identified here at temperatures $T\;\lsim\;1.5T_c$. We stress again that the large values for $\alpha(T)$ found here should not be confused with the coupling that characterizes the short distance Coulomb part of $F_1(r,T)$. The latter is almost temperature independent at small distances and can to some extent be described by the zero temperature coupling.
String breaking below deconfinement {#stringbreaking}
-----------------------------------
We finally discuss the large distance properties of the free energies below $T_c$. In contrast to the quark anti-quark free energy in quenched QCD where the string between the quark anti-quark pair cannot break and the free energies are linearly rising at large separations, in full QCD the string between two static color charges can break due to the possibility of spontaneously generating $q\bar q$-pairs from the vacuum. Therefore the quark anti-quark free energy reaches a constant value also below $T_c$. In Fig. \[fes\] this behavior is clearly seen.
The distances at which the quark anti-quark free energies approach an almost constant value move to smaller separations at higher temperatures. This can also be seen from the temperature dependence of $r_{med}$ in Fig. \[onset\] at temperatures below $T_c$. By construction $r_{med}$ describes a distance which can be used to estimate a lower limit for the distance where the string breaking will set in. An estimate of the string breaking radius at $T=0$ can be obtained from the lightest heavy-light meson, $r_{\text{breaking}}\simeq1.2-1.4$ fm [@Pennanen:2000yk], and is shown on the left side in Fig. \[onset\] within the dotted band. It can be seen that $r_{med}$ in $2$-flavor QCD does indeed approach such values at temperatures $T\;\lsim\;0.8T_c$. This suggests that the dependence on temperature in $2$-flavor QCD is small below the smallest temperature analyzed here, $0.76T_c$. This can also be seen from the behavior of $F_\infty(T)$ shown in Fig. \[string1\] (see also Fig. \[fes\](a)) compared to the value commonly expected at $T=0$. We use $V(r_{\text{breaking}})\simeq1000-1200$ MeV as reference for the zero temperature string breaking energy at quark mass $m_\pi/m_\rho\simeq0.7$. This estimate is shown on the left side in Fig. \[string1\] as the dotted band. A similar behavior is expected for the free energies in $3$-flavor QCD at smaller quark mass, $m_\pi/m_\rho\simeq0.4$. As also seen in Fig. \[string1\], the values for $F_\infty(T)$ are smaller than in $2$-flavor QCD with larger quark mass. This may indicate that string breaking sets in at smaller distances for smaller quark masses. However, in [@Karsch:2000kv] no mass dependence (in the color averaged free energies) was observed below the quark mass analyzed by us ($m/T$=0.4). At present it is, however, difficult to judge whether the differences seen for $2$- and $3$-flavor QCD at $T/T_c<1$ are due to quark mass or flavor dependence of the string breaking.
Although $F_\infty(T)$ still is close to $V(r_{\text{breaking}})$ at $T\sim0.8T_c$, it rapidly drops to about half of this value in the vicinity of the phase transition, $F_\infty(T_c)\simeq575$ MeV. This value is almost the same in $2$- and $3$-flavor QCD; we find $F_\infty^{N_f=2}(T_c)\simeq575(15)$ MeV and $F_\infty^{N_f=3}(T_c)\simeq548(20)$ MeV. It is interesting to note that also the values of $F_\infty(T)$ in quenched QCD ($N_f$=0) approach a similar value at temperatures just above $T_c$. We find $F_\infty(T_c^+)\simeq481(4)$ MeV where $T_c^+\equiv1.02T_c$ denotes the closest temperature above $T_c$ analyzed in quenched QCD. Of course, the value for $F_\infty(T_c^+)$ will increase when going to temperatures even closer to $T_c$. The flavor and quark mass dependence of $F_\infty(T)$ including also higher temperatures will be discussed in more detail in Ref. [@pap2].
Summary and Conclusions {#seccon}
=======================
Our analysis of the zero temperature heavy quark potential, $V(r)$, calculated in $2$-flavor lattice QCD using large Wilson loops [@Karsch:2000kv] shows no signal for string breaking at distances below $1.3$ fm. This is quite consistent with earlier findings [@Bernard:2001tz; @Pennanen:2000yk]. The $r$-dependence of $V(r)$ becomes comparable to the potential from the bosonic string picture already at distances larger than $0.4$ fm. Similar findings have also been reported in lattice studies of the potential in quenched QCD [@Necco:2001xg; @Luscher:2002qv]. At those distances, $0.4$ fm$\;\lsim\;r\;\lsim\;1.5$ fm, we find little or no difference between lattice data for the potential in quenched ($N_f$=0) QCD given in Ref. [@Necco:2001xg] and full ($N_f$=2) QCD. At smaller distances, however, deviations from the large distance Coulomb term predicted by the string picture, $\alpha_{\text{str}}\simeq0.196$, are found here when performing a best-fit analysis with a free Cornell Ansatz. We find $\alpha\simeq0.212(3)$, which describes the data down to $r\;\simeq\;0.1$ fm. By analyzing the coupling in the $qq$-scheme defined through the force, $dV(r)/dr$, a small enhancement compared to the coupling in quenched QCD is found for $r\;\lsim\;0.4$ fm. At distances substantially smaller than $0.1$ fm the logarithmic weakening of the coupling enters and will dominate the $r$-dependence of $V(r)$. The observed running of the coupling may already signal the onset of the short distance perturbative regime. This is also evident from quenched QCD lattice studies of $V(r)$ [@Necco:2001gh].
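Since the Cornell Ansatz $V(r)=c-\frac{4\alpha}{3r}+\sigma r$ is linear in the parameters $(c,\alpha,\sigma)$, a best-fit analysis of this type reduces to a linear least-squares problem. The following sketch solves the $3\times3$ normal equations on synthetic data; the parameter values are illustrative only and are not the fit results quoted in the text:

```python
def fit_cornell(rs, Vs):
    """Least-squares fit of V(r) = c - (4/3)*alpha/r + sigma*r,
    which is linear in (c, alpha, sigma): solve the 3x3 normal equations."""
    # Design matrix columns: 1, -4/(3r), r
    X = [[1.0, -4.0 / (3.0 * r), r] for r in rs]
    # Normal equations A p = b with A = X^T X, b = X^T V
    A = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    b = [sum(row[i] * V for row, V in zip(X, Vs)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for k in range(3):
        piv = max(range(k, 3), key=lambda i: abs(A[i][k]))
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for i in range(k + 1, 3):
            f = A[i][k] / A[k][k]
            for j in range(k, 3):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    p = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        p[i] = (b[i] - sum(A[i][j] * p[j] for j in range(i + 1, 3))) / A[i][i]
    return p  # (c, alpha, sigma)

# Synthetic potential at illustrative values (alpha near the quoted 0.212):
c0, alpha0, sigma0 = 0.5, 0.212, 1.1
rs = [0.1 + 0.05 * i for i in range(30)]
Vs = [c0 - 4.0 * alpha0 / (3.0 * r) + sigma0 * r for r in rs]
c_fit, alpha_fit, sigma_fit = fit_cornell(rs, Vs)
```

For real lattice data one would additionally weight each point by its statistical error, but the linear structure of the problem is unchanged.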
The running coupling at finite temperature defined in the $qq$-scheme using the derivative of the color singlet quark anti-quark free energy, $dF_1(r,T)/dr$, shows only small qualitative and quantitative differences when changing from pure gauge [@Kaczmarek:2004gv] to full QCD at temperatures well above deconfinement. Again, at small distances the running coupling is controlled by distance and becomes comparable to $\alpha_{qq}(r)$ at zero temperature. The properties of $\alpha_{qq}(r,T)$ at temperatures in the vicinity of the phase transition are to a large extent controlled by the confinement signal at zero temperature. A clear separation of the different effects usually described by the concept of color screening ($T\;\gsim\;T_c$) and the concept of string breaking ($T\;\lsim\;T_c$) is difficult in the crossover region. Remnants of the confinement part of the QCD forces may in part dominate the non-perturbative properties of the QCD plasma at temperatures only moderately larger than $T_c$. This supports similar findings in recent studies of the quark anti-quark free energies in quenched QCD [@Kaczmarek:2004gv].
The properties of the quark anti-quark free energy and the coupling at small distances thus again allow for non-perturbative renormalization of the free energy and Polyakov loop [@Kaczmarek:2002mc]. The crossover from confinement to deconfinement is clearly signaled by the Polyakov loop through a rapid increase at temperatures close to $T_c$. String breaking dominates the quark anti-quark free energies at temperatures well below deconfinement in all color channels, leading to finite values of the Polyakov loop. The string breaking energy, $F_\infty(T)$, and the distance where string breaking sets in, decrease with increasing temperature. The plateau value $F_\infty(T)$ approaches about $95\%$ of the value one usually estimates at zero temperature, $V(r_{\text{breaking}})\simeq1.1$ GeV [@Pennanen:2000yk; @Bernard:2001tz], already for $T\;\simeq\;0.8T_c$. We thus expect that the change in the quark anti-quark free energies is only small when going to smaller temperatures and the quark anti-quark free energy, $F_1(r,T)$, will show only small differences from the heavy quark potential at $T=0$, $V(r)$. Significant thermal modifications of heavy quark bound states can thus be expected only for temperatures above $0.8T_c$. Our analysis of $r_{med}$ indeed suggests a qualitatively similar behavior for the free energies in $3$-flavor QCD. This can also be seen from the behavior of $r_{med}$ shown in Fig. \[screenradius\].
At temperatures well above the (pseudo-) critical temperature, [*i.e.*]{} $1.2\;\lsim\;T/T_c\;\lsim\;4$, little or no qualitative difference in the thermal properties of the quark anti-quark free energies calculated in quenched ($N_f$=0) and full ($N_f$=2,3) QCD could be established here when converting the observables to physical units. Color screening clearly dominates the quark anti-quark free energy at large distances, and screening masses, which are non-perturbatively determined from the exponential fall-off of the color singlet free energies, could be extracted (for $N_f$=2). In accordance with earlier findings in quenched QCD [@Kaczmarek:1999mm; @Kaczmarek:2004gv] we find substantially larger values for the screening masses than given by leading order perturbation theory. The values of the screening masses, $m_D(T)$, again show only marginal differences as a function of $T/T_c$ compared to the values found in quenched QCD (see also Fig. \[screenradius\]). The large screening mass defines a rather small screening radius, $r_D\equiv1/m_D$, which refers to a length scale where the singlet free energy shows almost no deviations from the heavy quark potential at zero temperature. It thus might be misleading to quantify the length scale at which temperature effects dominate thermal modifications of heavy quark bound states by the observable $r_D\equiv1/m_D$ in the non-perturbative temperature regime close to but above $T_c$. On the other hand the color averaged free energies do show strong temperature dependence at distances which could be characterized by $1/m_D$. In view of color changing processes as a mechanism for direct quarkonium dissociation [@Kharzeev:1994pz], the discussion of the color averaged free energy could become important.
We have also compared $r_D$ and $r_{med}$ in Fig. \[screenradius\] to the expected mean charge radii of some charmonium states. It is reasonable that the temperatures at which these radii equal $r_{med}$ give a first indication of the temperature at which thermal modifications become important in the charmonium states. It thus appears quite reasonable that the $J/\psi$ will survive the transition while the $\chi_c$ and $\psi'$ are expected to show strong thermal effects at temperatures in the vicinity of the transition; this may support recent findings [@Wong:2004kn; @Asakawa:2003re; @Petreczky:2003js]. Of course the wave functions of these states will also reach out to larger distances and thus our analysis can only be taken as a first indication of the relevant temperatures. We will return to this issue in Ref. [@pap2]. The analysis of bound states using, for instance, the Schrödinger equation will do better in this respect. It can, however, clearly be seen from Fig. \[screenradius\] that although a common value $r_{med}(T_c)\simeq0.7$ fm is approached for $N_f$=0,2,3, the results for $N_f$=2,3 deviate from each other at smaller temperatures. It thus could be difficult to determine suppression patterns from free energies for quarkonium states which are substantially larger than $0.7$ fm independently of $N_f$ and/or the quark mass.
The analysis presented here has been performed for a single quark mass value that corresponds to a pion mass of about $770$ MeV ($m_\pi/m_\rho\simeq0.7$). In Ref. [@Karsch:2000kv], however, no major quark mass effects were visible in color averaged free energies below this quark mass value. The comparisons of $r_{med}$ and $F_\infty(T)$ calculated in $2$-flavor QCD ($m_\pi/m_\rho\simeq0.7$) with results calculated in $3$-flavor QCD ($m_\pi/m_\rho\simeq0.4$ [@Peterpriv]) support this observation. While at temperatures above deconfinement little or no difference in these observables can be identified, at temperatures below $T_c$ differences can be seen. To what extent these are due to the smaller quark masses used in the $3$-flavor case or whether these differences reflect a flavor dependence of the string breaking distance requires further investigation. The present analysis was carried out on one lattice size ($16^3\times 4$) and therefore an extrapolation to the continuum limit could not be performed with the current data. However, the analysis of the quenched free energies [@Kaczmarek:2002mc; @Kaczmarek:2004gv], where no major differences between the $N_\tau=4$ and $N_\tau=8$ results were visible, and the use of improved actions suggest that cut-off effects might be small. Despite these uncertainties and the fact that parts of our comparisons to results from quenched QCD are on a qualitative level, we find quite important information for the study of heavy quark bound states in the QCD plasma phase. At temperatures well above $T_c$, [*i.e.*]{} $1.2\;\lsim\;T/T_c\;\lsim\;4$, little or no differences appear between results calculated in quenched and full QCD. This might suggest that using thermal parameters extracted from free or internal energy in quenched QCD as input for model calculations of heavy quark bound states [@Shuryak:2004tx; @Wong:2004kn] is a reasonable approximation.
Furthermore this also supports the investigation of heavy quarkonia in quenched lattice QCD calculations using the analysis of spectral functions [@Datta:2003ww; @Asakawa:2003re; @Asakawa:2002xj]. On the other hand, however, most of our $2$- and $3$-flavor QCD results differ from quenched calculations at temperatures in the vicinity of and below the phase transition. Due to these qualitative differences, results from quenched QCD could complicate the discussion of possible signals for quark gluon plasma production in heavy ion collision experiments when temperatures and/or densities close to the transition become important.
We thank the Bielefeld-Swansea collaboration for providing us their configurations with special thanks to S. Ejiri. We would like to thank E. Laermann and F. Karsch for many fruitful discussions. F.Z. thanks P. Petreczky for his continuous support. We thank K. Petrov and P. Petreczky for sending us the data of Ref. [@Petreczky:2004pz]. This work has partly been supported by DFG under grant FOR 339/2-1 and by BMBF under grant No.06BI102 and partly by contract DE-AC02-98CH10886 with the U.S. Department of Energy. At an early stage of this work F.Z. has been supported through a stipend of the DFG funded graduate school GRK881. Some of the results discussed in this article were already presented in proceeding contributions [@Kaczmarek:2003ph; @Kaczmarek:2005uv; @Kaczmarek:2005uw].
[^1]: While a definition of the quark anti-quark potential can be given properly at zero temperature using large Wilson loops, at finite temperature a definition of the thermal modification of an appropriate potential energy between the quark anti-quark pair is complicated [@Karsch:2005ex].
[^2]: In pure gauge theory $r_{max}$ and $\tilde{\alpha}_{qq}(T)$ would be infinite below $T_c$.
[^3]: Note here, however, the change in temperature scale from $T_c=202$ MeV in full and $T_c=270$ MeV in quenched QCD.
[^4]: In Ref. [@Gava:1981qd] the Polyakov loop expectation value is calculated in pure gauge theory and the Debye mass, $m_D(T)/T=\sqrt{N_c/3}g(T)$, enters here through the resummation of the gluon polarization tensor. When changing from pure gauge to full QCD quark loops will contribute to the polarization tensor. In this case resummation will lead to the Debye mass given in (\[LOscreen\]). Thus the flavor dependence in Eq. (\[lrenpert\]) at this level is given only by the Debye mass.
import sqlite3

conn = sqlite3.connect('master.db')
c = conn.cursor()

# Bookmaker odds columns. Kept as a whitelist because SQLite can only
# parameterize values, not identifiers, so column names must be validated
# before they are interpolated into SQL.
COLUMNS = ('Pinnacle', 'WillHill', 'betThreeSixFive', 'Bookmaker',
           'BetOnline', 'TheGreekSportsbook', 'JustBet',
           'SportsInteraction', 'WagerWeb', 'FiveDimes')


def create_table():
    cols = ', '.join('{} REAL'.format(col) for col in COLUMNS)
    c.execute('CREATE TABLE IF NOT EXISTS tennis(player TEXT, ' + cols + ')')


def dynamic_data_entry(column, entry):
    if column != 'player' and column not in COLUMNS:
        raise ValueError('unknown column: ' + column)
    c.execute('INSERT INTO tennis({}) VALUES(?)'.format(column), (str(entry),))
    conn.commit()


def update(player, column, entry):
    if column not in COLUMNS:
        raise ValueError('unknown column: ' + column)
    # Values are parameterized; the column name is interpolated only after
    # the whitelist check above.
    c.execute('UPDATE tennis SET {} = ? WHERE player = ?'.format(column),
              (entry, player))
    conn.commit()


def read_from_db(player):
    c.execute('SELECT * FROM tennis WHERE player = ?', (player,))
    return c.fetchall()


create_table()
package x509util

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"testing"
)

func TestCreateCertificateRequest(t *testing.T) {
	r := rand.Reader
	priv, err := rsa.GenerateKey(r, 1024)
	if err != nil {
		t.Fatal(err)
	}
	template := CertificateRequest{
		CertificateRequest: x509.CertificateRequest{
			Subject: pkix.Name{
				CommonName: "test.acme.co",
				Country:    []string{"US"},
			},
		},
		ChallengePassword: "foobar",
	}
	derBytes, err := CreateCertificateRequest(r, &template, priv)
	if err != nil {
		t.Fatal(err)
	}
	out, err := x509.ParseCertificateRequest(derBytes)
	if err != nil {
		t.Fatalf("failed to create certificate request: %s", err)
	}
	if err := out.CheckSignature(); err != nil {
		t.Errorf("failed to check certificate request signature: %s", err)
	}
	challenge, err := ParseChallengePassword(derBytes)
	if err != nil {
		t.Fatalf("failed to parse challengePassword attribute: %s", err)
	}
	if have, want := challenge, template.ChallengePassword; have != want {
		t.Errorf("have %s, want %s", have, want)
	}
}
Tag Archive for interact
Question by Jon: What is the best way to interact with or select job recruiters?
I work primarily as a front end web developer and get calls from multiple recruitment firms. I do a fair bit of project work, so this is a recurring thing for me. Currently, if they have a position that I’m a fit for I reflexively ask them to present me for that position. Is that an optimal strategy? If one company submits me, another cannot, so I want to be submitted by the best company. Should I therefore be taking steps to identify which recruiting firms are the most effective, give preference to local firms, find which firms ask the target employer for a smaller fee, select (somehow) for firms that have a better relationship with the target client, check out a firm’s reputation online before submittal, etc. Or is all of that wasted work?
I’m looking for things that are efficient yet effective. Thanks for any tips!
Yes, I know I can work with several agencies for different jobs. However I CAN’T work with several agencies for the SAME JOB. Not in California, in any case. Double submissions get thrown out, or else the second agency to submit is rejected.
Best answer:
Answer by Bill
As a freelancer you can usually work "through" several agencies. This creates competition between them to find you more work. Good luck.
Sultan Abdulhamid II
by kirbydog13 | March 27th, 2012
I was searching for a photo of Sultan Abdulhamid II and noticed he was wearing a fez. Compared to the headpiece of the Qizilbash, it is similar. Perhaps the resemblance has a historical basis, or perhaps it is just a coincidence.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout android:layout_width="match_parent"
    android:layout_height="match_parent"
    xmlns:android="http://schemas.android.com/apk/res/android" >

    <Button
        android:id="@+id/btn_crash_restart"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Restart App"
        android:layout_alignParentTop="true"
        />

    <TextView
        android:id="@+id/tv_crash_info"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:singleLine="false"
        android:ellipsize="none"
        android:gravity="left"
        android:inputType="textMultiLine"
        android:layout_below="@id/btn_crash_restart"
        />
</RelativeLayout>
def extractStartlingSurprisesAtEveryStep(item):
    vol, chp, frag, postfix = extractVolChapterFragmentPostfix(item['title'])
    if not (chp or vol or frag) or 'preview' in item['title'].lower():
        return None
    if 'bu bu jing xin' in item['tags']:
        return buildReleaseMessageWithType(item, 'Bu Bu Jing Xin', vol, chp, frag=frag, postfix=postfix)
    return False
---
abstract: 'Directed graphical models provide a useful framework for modeling causal or directional relationships for multivariate data. Prior work has largely focused on identifiability and search algorithms for directed acyclic graphical (DAG) models. In many applications, feedback naturally arises and directed graphical models that permit cycles occur. In this paper we address the issue of identifiability for general directed cyclic graphical (DCG) models satisfying the Markov assumption. In particular, in addition to the faithfulness assumption which has already been introduced for cyclic models, we introduce two new identifiability assumptions, one based on selecting the model with the fewest edges and the other based on selecting the DCG model that entails the maximum number of d-separation rules. We provide theoretical results comparing these assumptions which show that: (1) selecting models with the largest number of d-separation rules is strictly weaker than the faithfulness assumption; (2) unlike for DAG models, selecting models with the fewest edges does not necessarily result in a milder assumption than the faithfulness assumption. We also provide connections between our two new principles and minimality assumptions. We use our identifiability assumptions to develop search algorithms for small-scale DCG models. Our simulation study supports our theoretical results, showing that the algorithms based on our two new principles generally out-perform algorithms based on the faithfulness assumption in terms of selecting the true skeleton for DCG models.'
bibliography:
- 'reference\_DCG.bib'
---
Gunwoong Park$^1$, Garvesh Raskutti$^{1,2,3}$

$^1$ Department of Statistics, University of Wisconsin-Madison
$^2$ Department of Computer Science, University of Wisconsin-Madison
$^3$ Wisconsin Institute for Discovery, Optimization Group
**Keywords:** Directed graphical Models, Identifiability, Faithfulness, Feedback loops.
Introduction {#SecInt}
============
A fundamental goal in many scientific problems is to determine causal or directional relationships between variables in a system. A well-known framework for representing causal or directional relationships are directed graphical models. Most prior work on directed graphical models has focused on directed acyclic graphical (DAG) models, also referred to as Bayesian networks which are directed graphical models with no directed cycles. One of the core problems is determining the underlying DAG $G$ given the data-generating distribution $\mathbb{P}$.
A fundamental assumption in the DAG framework is the *causal Markov condition* (CMC) (see e.g., [@lauritzen1996graphical; @Spirtes2000]). While the CMC is broadly assumed, in order for a directed graph $G$ to be identifiable based on the distribution $\mathbb{P}$, additional assumptions are required. For DAG models, a number of identifiability and minimality assumptions have been introduced [@Glymour1987; @Spirtes2000] and the connections between them have been discussed [@Zhang2013]. In particular, one of the most widely used assumptions for DAG models is the *causal faithfulness condition* (CFC) which is sufficient for many search algorithms. However the CFC has been shown to be extremely restrictive, especially in the limited data setting [@Uhler2013]. In addition two minimality assumptions, the P-minimality and SGS-minimality assumptions have been introduced. These conditions are weaker than the CFC but do not guarantee model identifiability [@Zhang2013]. On the other hand, the recently introduced sparsest Markov representation (SMR) and frugality assumptions [@forster2015frugal; @Raskutti2013; @van2013ell] provide an alternative that is milder than the CFC and is sufficient to ensure identifiability. The main downside of the [SMR]{} and frugality assumptions relative to the CFC is that the [SMR]{} and frugality assumptions are sufficient conditions for model identifiability only when exhaustive searches over the DAG space are possible [@Raskutti2013], while the CFC is sufficient for polynomial-time algorithms [@Glymour1987; @Spirtes1991; @Spirtes2000] for learning equivalence class of sparse graphs.
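The CFC and the [SMR]{}/frugality assumptions are all stated in terms of the d-separation rules entailed by a graph. For DAGs, whether $X$ and $Y$ are d-separated given $Z$ can be tested by the classical moralization criterion: restrict the graph to the ancestors of $X\cup Y\cup Z$, moralize it (connect co-parents and drop edge directions), delete $Z$, and check whether $X$ and $Y$ are disconnected. The following minimal sketch implements this test for DAGs only; the cyclic case treated in this paper requires the more general d-separation calculus for DCGs [@Spirtes1995] and is not covered by this snippet:

```python
def ancestors(parents, nodes):
    """All nodes with a directed path into `nodes`, plus `nodes` itself."""
    seen = set(nodes)
    stack = list(nodes)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, xs, ys, zs):
    """Moralization test for DAGs: xs _|_ ys | zs under d-separation.
    `parents` maps each node to the set of its parents."""
    anc = ancestors(parents, set(xs) | set(ys) | set(zs))
    # Moralize the ancestral subgraph: undirected parent-child edges,
    # plus edges between co-parents of a common child.
    adj = {v: set() for v in anc}
    for v in anc:
        ps = [p for p in parents.get(v, ()) if p in anc]
        for p in ps:
            adj[v].add(p)
            adj[p].add(v)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j])
                adj[ps[j]].add(ps[i])
    # Delete the conditioning set and test reachability from xs to ys.
    blocked = set(zs)
    stack = [x for x in xs if x not in blocked]
    seen = set(stack)
    while stack:
        v = stack.pop()
        if v in ys:
            return False
        for w in adj[v]:
            if w not in seen and w not in blocked:
                seen.add(w)
                stack.append(w)
    return True

# Collider example X -> W <- Y: conditioning on W opens the path.
par = {'W': {'X', 'Y'}}
```

On the collider, `d_separated(par, {'X'}, {'Y'}, set())` holds, while conditioning on the common child `W` renders $X$ and $Y$ d-connected, which is exactly the behavior that distinguishes directed from undirected graphical models.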
While the DAG framework is useful in many applications, it is limited since feedback loops are known to often exist (see e.g., [@Richardson1996; @Richardson1995]). Hence, directed graphs with directed cycles [@Spirtes2000] are more appropriate to model such feedback. However, learning directed cyclic graphical (DCG) models from data is considerably more challenging than learning DAG models [@Richardson1996; @Richardson1995] since the presence of cycles poses a number of additional challenges and introduces additional non-identifiability. Consequently there has been considerably less work focusing on directed graphs with feedback, both in terms of identifiability assumptions and search algorithms. [@Spirtes1995] discussed the CMC, and [@Richardson1996; @Richardson1995] discussed the CFC for DCG models and introduced the polynomial-time cyclic causal discovery (CCD) algorithm [@Richardson1996] for recovering the Markov equivalence class for DCGs. Recently, [@claassen2013learning] introduced the FCI$+$ algorithm for recovering the Markov equivalence class for sparse DCGs, which also assumes the CFC. As with DAG models, the CFC for cyclic models is extremely restrictive, since it is even more restrictive than the CFC for DAG models. In terms of learning algorithms that do not require the CFC, additional assumptions are typically required. For example [@mooij2011causal] proved identifiability for bivariate Gaussian cyclic graphical models with additive noise, which does not require the CFC, while many approaches have been studied for learning graphs from the results of interventions on the graph (e.g., [@hyttinen2010causal; @hyttinen2012causal; @hyttinen2012learning; @hyttinen2013experiment; @hyttinen2013discovering]). However, such additional assumptions are often impractical, and it is often impossible or very expensive to intervene on many variables in the graph. This raises the question of whether milder identifiability assumptions can be imposed for learning DCG models.
In this paper, we address this question in a number of steps. First, we adapt the [SMR]{} and frugality assumptions developed for DAG models to DCG models. Next, we show that unlike for DAG models, the adapted [SMR]{} and frugality assumptions are not strictly weaker than the CFC. Hence we consider a new identifiability assumption based on finding the Markovian DCG entailing the maximum number of d-separation rules (MDR), which we prove is strictly weaker than the CFC and recovers the Markov equivalence class for DCGs for a strict superset of examples compared to the CFC. We also provide a comparison between the [MDR]{}, [SMR]{} and frugality assumptions as well as the minimality assumptions for both DAG and DCG models. Finally, we use the [MDR]{} and [SMR]{} assumptions to develop search algorithms for small-scale DCG models. Our simulation study supports our theoretical results by showing that the algorithms induced by both the [SMR]{} and [MDR]{} assumptions recover the Markov equivalence class more reliably than state-of-the-art algorithms that require the CFC for DCG models. We point out that the search algorithms that result from our identifiability assumptions require exhaustive searches and are not computationally feasible for large-scale DCG models. However, the focus of this paper is to develop the weakest possible identifiability assumption, which is of fundamental importance for directed graphical models.
The remainder of the paper is organized as follows: Section \[SecPriorWork\] provides the background and prior work for identifiability assumptions for both DAG and DCG models. In Section \[SecSMRFrugality\] we adapt the [SMR]{} and frugality assumptions to DCG models and provide a comparison between the [SMR]{} assumption, the CFC, and the minimality assumptions. In Section \[SecMaxDSep\] we introduce our new [MDR]{} principle, finding the Markovian DCG that entails the maximum number of d-separation rules and provide a comparison of the new principle to the CFC, [SMR]{}, frugality, and minimality assumptions. Finally in Section \[SecSimulation\], we use our identifiability assumptions to develop a search algorithm for learning small-scale DCG models, and provide a simulation study that is consistent with our theoretical results.
Prior work on directed graphical models {#SecPriorWork}
=======================================
In this section, we introduce the basic concepts of directed graphical models pertaining to model identifiability. A directed graph $G = (V,E)$ consists of a set of vertices $V$ and a set of directed edges $E$. Suppose that $V=\{1,2,\dots ,p\}$ and there exists a random vector $(X_1, X_2,\cdots,X_p)$ with probability distribution $\mathbb{P}$ over the vertices in $G$. A directed edge from a vertex $j$ to $k$ is denoted by $(j,k)$ or $j\to k$. The set $\mbox{pa}(k)$ of *parents* of a vertex $k$ consists of all nodes $j$ such that $(j,k)\in E$. If there is a directed path $j\to \cdots \to k$, then $k$ is called a *descendant* of $j$ and $j$ is an *ancestor* of $k$. The set $\mbox{de}(k)$ denotes the set of all descendants of a node $k$. The *non-descendants* of a node $k$ are $\mbox{nd}(k) = V\setminus (\{k\}\cup \mbox{de}(k))$. For a subset $S\subset V$, we define $\mbox{an}(S)$ to be the set of nodes $k$ that are in $S$ or are ancestors of some node in $S$. Two nodes that are connected by an edge are called *adjacent*. A triple of nodes $(j,k,\ell)$ is an *unshielded triple* if $j$ and $k$ are adjacent to $\ell$ but $j$ and $k$ are not adjacent. An unshielded triple $(j,k,\ell)$ forms a *v-structure* if $j\to \ell$ and $k \to \ell$; in this case $\ell$ is called a *collider*. Furthermore, let $\pi$ be an undirected path between $j$ and $k$. The path $\pi$ *d-connects* $j$ and $k$ given $S \subset V\setminus\{j,k\}$ if every collider on $\pi$ is in $\mbox{an}(S)$ and every non-collider on $\pi$ is not in $S$; in this case $j$ is *d-connected* to $k$ given $S$. If a directed graph $G$ has no undirected path $\pi$ that d-connects $j$ and $k$ given a subset $S$, then $j$ is *d-separated* from $k$ given $S$:
For distinct vertices $j, k \in V$ and a subset $S \subset V \setminus\{j,k\}$, $j$ is *d-connected* to $k$ given $S$ if and only if there is an undirected path $\pi$ between $j$ and $k$, such that
- If there is an edge between $a$ and $b$ on $\pi$ and an edge between $b$ and $c$ on $\pi$, and $b \in S$, then $b$ is a collider between $a$ and $c$ relative to $\pi$.
- If $b$ is a collider between $a$ and $c$ relative to $\pi$, then there is a descendant $d$ of $b$ and $d \in S$.
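For small graphs, this path-based definition can be checked directly by brute force. The sketch below is an illustration (not the algorithmic machinery of the paper): it enumerates undirected paths without repeating vertices, which we assume suffices here, and opens a collider when it or one of its descendants lies in $S$, mirroring the $\mbox{an}(S)$ condition above. Here `edges` is a set of ordered pairs `(a, b)` standing for $a \to b$.

```python
def d_connected(edges, j, k, S):
    """Brute-force d-connection test for a small directed graph,
    possibly cyclic. `edges` is a set of pairs (a, b) meaning a -> b."""
    S = set(S)

    def desc(v):  # nodes reachable from v along directed edges
        seen, stack = set(), [v]
        while stack:
            u = stack.pop()
            for a, b in edges:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen

    def walk(nodes, dirs):
        # dirs[i] is True iff step i traverses nodes[i] -> nodes[i+1] forwards
        if nodes[-1] == k:
            for i in range(1, len(nodes) - 1):
                b = nodes[i]
                collider = dirs[i - 1] and not dirs[i]   # -> b <-
                if collider and not (({b} | desc(b)) & S):
                    return False  # collider with no descendant in S blocks
                if not collider and b in S:
                    return False  # conditioned non-collider blocks
            return True
        last = nodes[-1]
        for a, b in edges:
            if a == last and b not in nodes and walk(nodes + [b], dirs + [True]):
                return True
            if b == last and a not in nodes and walk(nodes + [a], dirs + [False]):
                return True
        return False

    return walk([j], [])
```

On the chain $1 \to 2 \to 3$, conditioning on $2$ blocks the only path, while for the v-structure $1 \to 2 \leftarrow 3$ conditioning on the collider $2$ opens it.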
Finally, let $X_j {\protect\mathpalette{\protect\independenT}{\perp}}X_k \mid X_S$ with $S \subset V\setminus\{j, k\}$ denote the conditional independence (CI) statement that $X_j$ is conditionally independent (as determined by $\mathbb{P}$) of $X_k$ given the set of variables $X_S = \{ X_{\ell} \mid \ell \in S\}$, and let $X_j {\!\perp\!\!\!\!\not\perp\!}X_k \mid X_S$ denote conditional dependence. The *Causal Markov condition* associates CI statements of $\mathbb{P}$ with a directed graph $G$.
\[Def:CMC\] A probability distribution $\mathbb{P}$ over a set of vertices $V$ satisfies the *Causal Markov condition* with respect to an (acyclic or cyclic) directed graph $G = (V, E)$ if for all $(j, k, S)$ such that $j$ is d-separated from $k$ given $S \subset V \setminus \{j,k\}$ in $G$, $$\begin{aligned}
X_j {\protect\mathpalette{\protect\independenT}{\perp}}X_k \mid X_S ~~\textrm{ according to $\mathbb{P}$}.
\end{aligned}$$
The CMC applies to both acyclic and cyclic graphs (see e.g., [@Spirtes2000]). However, not all directed graphical models satisfy the CMC: in order for a directed graphical model to satisfy the CMC, the joint distribution must admit the *generalized factorization* [@Lauritzen1990].
\[Def:GenFac\] A joint distribution $f$ *factors according to a directed graph* $G$ with vertices $V$ if and only if for every subset $S$ of $V$, $$f(X_{\mbox{an}(S)}) = \prod_{j \in \mbox{an}(S)} g_j (X_{j},X_{\mbox{pa}(j)})$$ where each $g_j$ is a non-negative function.
[@Spirtes1995] showed that the generalized factorization is a necessary and sufficient condition for directed graphical models to satisfy the CMC. For DAG models, $g_j(\cdot)$’s must correspond to a conditional probability distribution function whereas for graphical models with cycles, $g_j(\cdot)$’s need only be non-negative functions. As shown by [@Spirtes1995], a concrete example of a class of cyclic graphs that satisfy the factorization above is structural linear DCG equation models with additive independent errors. We will later use linear DCG models in our simulation study.
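As a concrete illustration (with arbitrarily chosen coefficients, not taken from the paper), a linear DCG model with additive independent errors can be simulated by solving the structural equations $X = BX + \varepsilon$ at equilibrium, $X = (I-B)^{-1}\varepsilon$, provided $I - B$ is invertible:

```python
import numpy as np

# Structural equations X = B X + eps for the cycle 1 -> 2 -> 3 -> 1
# with an extra edge 4 -> 2; B[k, j] != 0 encodes the edge j+1 -> k+1.
# The coefficients 0.5 are illustrative assumptions.
B = np.zeros((4, 4))
B[1, 0] = 0.5   # 1 -> 2
B[2, 1] = 0.5   # 2 -> 3
B[0, 2] = 0.5   # 3 -> 1
B[1, 3] = 0.5   # 4 -> 2

# The equilibrium solution exists since I - B is invertible.
A = np.linalg.inv(np.eye(4) - B)
rng = np.random.default_rng(0)
eps = rng.standard_normal((10_000, 4))
X = eps @ A.T        # each row solves X = B X + eps
Sigma = A @ A.T      # implied covariance of X when eps ~ N(0, I)
```

Note that, unlike in the acyclic case, the cycle contributes a factor to $\det(I-B)$ (here $1 - 0.5^3 = 0.875$), so the model is only well defined when that determinant is nonzero.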
In general, there are many directed graphs entailing the same d-separation rules. These graphs are *Markov equivalent* and the set of Markov equivalent graphs is called a *Markov equivalence class* (MEC) [@Richardson1995; @udea1991equivalence; @Spirtes2000; @verma1992algorithm]. For example, consider the two 2-node graphs $G_1: X_1 \rightarrow X_2$ and $G_2: X_1 \leftarrow X_2$. Both graphs are Markov equivalent because neither entails any d-separation rules. Hence $G_1$ and $G_2$ belong to the same [MEC]{}, and it is impossible to distinguish the two graphs by d-separation rules. The precise definition of the [MEC]{} is provided here.
Two directed graphs $G_1$ and $G_2$ are *Markov equivalent* if any distribution which satisfies the CMC with respect to one graph satisfies the CMC with respect to the other, and vice versa. The set of graphs which are Markov equivalent to $G$ is denoted by $\mathcal{M}(G)$.
The characterization of Markov equivalence classes is different for DAGs and DCGs. For DAGs, [@udea1991equivalence] developed an elegant characterization of Markov equivalence classes defined by the *skeleton* and *v-structures*. The skeleton of a DAG model consists of the edges without directions.
However for DCGs, the presence of feedback means the characterization of the [MEC]{} is considerably more involved; [@Richardson1996] provides a characterization. The presence of directed cycles changes the notion of adjacency between two nodes. In particular, there are *real* adjacencies that result from directed edges in the DCG and *virtual* adjacencies, which are edges that do not exist in the data-generating DCG but cannot be recognized as non-edges from the data. The precise definitions of real and virtual adjacencies are as follows.
\[Def:Adj\] Consider a directed graph $G = (V,E)$.
- For any $j, k \in V$, $j$ and $k$ are *really adjacent* in $G$ if $j \rightarrow k$ or $j \leftarrow k$.
- For any $j, k \in V$, $j$ and $k$ are *virtually adjacent* if $j$ and $k$ have a common child $\ell$ such that $\ell$ is an ancestor of $j$ or $k$.
Note that a virtual adjacency can only occur if there is a cycle in the graph. Hence, DAGs have only real edges while DCGs can have both real edges and virtual edges. Figure \[Fig:Sec2a\] shows an example of a DCG with a virtual edge. In Figure \[Fig:Sec2a\], a pair of nodes $(1,4)$ has a virtual edge (dotted line) because the triple $(1,4,2)$ forms a v-structure and the common child $2$ is an ancestor of $1$. This virtual edge is created by the cycle, $1 \rightarrow 2 \rightarrow 3 \rightarrow 1$.
*Figure \[Fig:Sec2a\]: a DCG with directed edges $1\to 2$, $2\to 3$, $3\to 1$ and $4\to 2$; the dotted line marks the virtual edge between $1$ and $4$.*
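Definition \[Def:Adj\] is easy to check mechanically on small graphs. The sketch below is illustrative code (with `edges` a set of pairs `(a, b)` meaning $a \to b$) that tests the virtual-adjacency condition via reachability:

```python
def virtually_adjacent(edges, j, k):
    """j and k are virtually adjacent if they share a common child
    that is an ancestor of j or of k (possible only with cycles)."""
    def reach(v):  # nodes reachable from v along directed edges
        seen, stack = set(), [v]
        while stack:
            u = stack.pop()
            for a, b in edges:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen

    children = lambda v: {b for a, b in edges if a == v}
    common = children(j) & children(k)
    return any(j in reach(c) or k in reach(c) for c in common)
```

On the graph of Figure \[Fig:Sec2a\] ($1\to 2\to 3\to 1$, $4\to 2$), the pair $(1,4)$ is virtually adjacent because the common child $2$ is an ancestor of $1$, while $(3,4)$ is not, since $3$ and $4$ share no child.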
Virtual edges generate different types of relationships involving unshielded triples: (1) an unshielded triple $(j,k,\ell)$ (that is $j-\ell-k$) is called a *conductor* if $\ell$ is an ancestor of $j$ or $k$; (2) an unshielded triple $(j,k,\ell)$ is called a *perfect non-conductor* if $\ell$ is a descendant of the common child of $j$ and $k$; and (3) an unshielded triple $(j,k,\ell)$ is called an *imperfect non-conductor* if the triple is not a conductor or a perfect non-conductor.
Intuitively, (1) a conductor is analogous to a non-v-structure in DAGs: if an unshielded triple $(j,k,\ell)$ is a conductor, then $j$ is d-connected to $k$ given any set $S$ which does not contain $\ell$. Moreover, (2) a perfect non-conductor is analogous to a v-structure: if $(j,k,\ell)$ is a perfect non-conductor, then $j$ is d-connected to $k$ given any set $S$ which contains $\ell$. However, there is no analogous notion of an imperfect non-conductor for DAG models. We see throughout this paper that this difference creates a major challenge in inferring DCG models from the underlying distribution $\mathbb{P}$. As shown by [@Richardson1994] (Cyclic Equivalence Theorem), a necessary (but not sufficient) condition for two DCGs to belong to the same [MEC]{} is that they share the same real plus virtual edges and the same (1) conductors, (2) perfect non-conductors and (3) imperfect non-conductors. However, unlike for DAGs, this condition is not sufficient for Markov equivalence. A complete characterization of Markov equivalence is provided in [@Richardson1994; @Richardson1995]; since it is quite involved, we do not include it here.
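The three cases can be distinguished programmatically for small graphs. The sketch below follows the wording above, treating a node as a descendant of itself for the perfect-non-conductor check (an assumption of this sketch), and assumes the caller passes an unshielded triple $j - \ell - k$:

```python
def classify_triple(edges, j, ell, k):
    """Classify an unshielded triple j - ell - k as a conductor,
    perfect non-conductor, or imperfect non-conductor.
    Assumes (j, ell, k) has already been verified to be unshielded."""
    def reach(v):  # nodes reachable from v along directed edges
        seen, stack = set(), [v]
        while stack:
            u = stack.pop()
            for a, b in edges:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen

    if j in reach(ell) or k in reach(ell):
        return "conductor"                 # ell is an ancestor of j or k
    children = lambda v: {b for a, b in edges if a == v}
    common = children(j) & children(k)
    if any(ell == c or ell in reach(c) for c in common):
        return "perfect non-conductor"     # ell descends from a common child
    return "imperfect non-conductor"
```

For instance, the chain $1\to 2\to 3$ yields a conductor and the v-structure $1\to 2\leftarrow 3$ a perfect non-conductor, while a triple whose endpoints share no common child (possible when one adjacency is only virtual) is an imperfect non-conductor.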
Even if we weaken the goal to inferring the [MEC]{} for a DAG or DCG, the CMC is insufficient for discovering the true [MEC]{} $\mathcal{M}(G^*)$ because there are many graphs satisfying the CMC, which do not belong to $\mathcal{M}(G^*)$. For example, any fully-connected graph always satisfies the CMC because it does not entail any d-separation rules. Hence, in order to identify the true [MEC]{} given the distribution $\mathbb{P}$, stronger identifiability assumptions that force the removal of edges are required.
Faithfulness and minimality assumptions
---------------------------------------
In this section, we discuss prior work on identifiability assumptions for both DAG and DCG models. To make the notion of identifiability and our assumptions precise, we need to introduce the notion of a true data-generating graphical model $(G^*, \mathbb{P})$. All we observe is the distribution (or samples from) $\mathbb{P}$, and we know the graphical model $(G^*, \mathbb{P})$ satisfies the CMC. Let $CI(\mathbb{P})$ denote the set of conditional independence statements corresponding to $\mathbb{P}$. The graphical model $(G^*, \mathbb{P})$ is *identifiable* if the Markov equivalence class of the graph $\mathcal{M}(G^*)$ can be uniquely determined based on $CI(\mathbb{P})$. For a directed graph $G$, let $E(G)$ denote the set of directed edges, $S(G)$ denote the set of edges without directions, also referred to as the skeleton, and $D_{sep}(G)$ denote the set of d-separation rules entailed by $G$.
One of the most widely imposed identifiability assumptions for both DAG and DCG models is the *causal faithfulness condition* (CFC) [@Spirtes2000] also referred to as the stability condition in [@Pearl2014]. A directed graph is *faithful* to a probability distribution if there is no probabilistic independence in the distribution that is not entailed by the CMC. The CFC states that the graph is faithful to the true probability distribution.
\[Def:CFC\] Consider a directed graphical model $(G^*, \mathbb{P})$. A graph $G^*$ is *faithful* to $\mathbb{P}$ if and only if for any $j,k \in V$ and any subset $S \subset V \setminus \{j,k\}$, $$j \textrm{ d-separated from } k \mid S \iff X_j {\protect\mathpalette{\protect\independenT}{\perp}}X_k \mid X_S \textrm{ according to $\mathbb{P}$}.$$
While the CFC is sufficient to guarantee identifiability for many polynomial-time search algorithms [@claassen2013learning; @Glymour1987; @hyttinen2012causal; @Richardson1996; @Richardson1995; @Spirtes2000] for both DAGs and DCGs, the CFC is known to be a very strong assumption (see e.g., [@forster2015frugal; @Raskutti2013; @Uhler2013]) that is often not satisfied in practice. Hence, milder identifiability assumptions have been considered.
Minimality assumptions, notably the *P-minimality* [@pearl2000] and SGS-minimality [@Glymour1987] assumptions, provide two such alternatives. The P-minimality assumption asserts that for directed graphical models satisfying the CMC, graphs that entail more d-separation rules are preferred. For example, suppose that there are two graphs $G_1$ and $G_2$ which are not Markov equivalent. $G_1$ is *strictly preferred* to $G_2$ if $D_{sep}(G_2) \subset D_{sep}(G_1)$. The P-minimality assumption asserts that no graph is strictly preferred to the true graph $G^*$. The SGS-minimality assumption asserts that there exists no proper sub-graph of $G^*$ that satisfies the CMC with respect to the probability distribution $\mathbb{P}$. To make the term sub-graph precise, $G_1$ is a sub-graph of $G_2$ if $E(G_1) \subset E(G_2)$ and $E(G_1) \neq E(G_2)$. [@Zhang2013] proved that the SGS-minimality assumption is weaker than the P-minimality assumption, which in turn is weaker than the CFC, for both DAG and DCG models. While [@Zhang2013] states the results for DAG models, the result easily extends to DCG models.
\[Thm:Sec2a\] If a directed graphical model $(G^*, \mathbb{P})$ satisfies
- the CFC, it satisfies the P-minimality assumption.
- the P-minimality assumption, it satisfies the SGS-minimality assumption.
Sparsest Markov Representation (SMR) for DAG models
---------------------------------------------------
While the minimality assumptions are milder than the CFC, neither the P-minimality nor SGS-minimality assumption implies identifiability of the MEC for $G^*$. Recent work by [@Raskutti2013] developed the *sparsest Markov representation* (SMR) assumption and a slightly weaker version later referred to as the *frugality* assumption [@forster2015frugal], which applies to DAG models. The [SMR]{} assumption, which we refer to here as the identifiable [SMR]{} assumption, states that the true DAG model is the graph satisfying the CMC with the fewest edges. Here we say that a DAG $G_1$ is *strictly sparser* than a DAG $G_2$ if $G_1$ has *fewer* edges than $G_2$.
\[Def:SMR\] A DAG model $(G^*,\mathbb{P})$ satisfies the identifiable [SMR]{} assumption if $(G^* ,\mathbb{P})$ satisfies the CMC and $|S(G^*)| < |S(G)|$ for every DAG $G$ such that $(G ,\mathbb{P})$ satisfies the CMC and $G \notin \mathcal{M}(G^*)$.
The identifiable SMR assumption is strictly weaker than the CFC while also ensuring that a method known as the Sparsest Permutation (SP) algorithm [@Raskutti2013] recovers the true MEC. Hence the identifiable SMR assumption guarantees identifiability of the MEC for DAGs. A slightly weaker notion, which we refer to as the weak SMR assumption, does not guarantee model identifiability.
\[Def:Fru\] A DAG model $(G^* ,\mathbb{P})$ satisfies the weak [SMR]{} assumption if $(G^* ,\mathbb{P})$ satisfies the CMC and $|S(G^*)| \leq |S(G)|$ for every DAG $G$ such that $(G ,\mathbb{P})$ satisfies the CMC and $G \notin \mathcal{M}(G^*)$.
A comparison of [SMR]{}/frugality to the CFC and the minimality assumptions for DAG models is provided in [@Raskutti2013] and [@forster2015frugal].
\[Thm:Sec2b\] If a DAG model $(G^*, \mathbb{P})$ satisfies
- the CFC, it satisfies the identifiable [SMR]{} assumption and consequently the weak [SMR]{} assumption.
- the weak [SMR]{} assumption, it satisfies the P-minimality assumption and consequently the SGS-minimality assumption.
- the identifiable [SMR]{} assumption, $G^*$ is identifiable up to the true MEC $\mathcal{M}(G^*)$.
It is unclear whether the [SMR]{}/frugality assumptions apply naturally to DCG models since the success of the [SMR]{} assumption relies on the local Markov property which is known to hold for DAGs but not DCGs [@Richardson1994]. In this paper, we investigate the extent to which these identifiability assumptions apply to DCG models and provide a new principle for learning DCG models.
Based on this prior work, a natural question to consider is whether the identifiable and weak [SMR]{} assumptions developed for DAG models apply to DCG models and whether there are similar relationships between the CFC, the identifiable and weak [SMR]{} assumptions, and the minimality assumptions. In this paper we address this question by adapting both the identifiable and weak [SMR]{} assumptions to DCG models. One of the challenges we address is dealing with the distinction between real and virtual edges in DCGs. We show that unlike for DAG models, the identifiable [SMR]{} assumption is not necessarily weaker than the CFC. Consequently, we introduce a new principle, the maximum d-separation rule (MDR) principle, which chooses the Markovian directed graph entailing the greatest number of d-separation rules. We show that our [MDR]{} principle is strictly weaker than the CFC and stronger than the P-minimality assumption, while also guaranteeing model identifiability for DCG models. Our simulation results complement our theoretical results, showing that the [MDR]{} principle is more successful than the CFC in terms of recovering the true [MEC]{} for DCG models.
Sparsity and [SMR]{} for DCG models {#SecSMRFrugality}
===================================
In this section, we extend notions of sparsity and the [SMR]{} assumptions to DCG models. As mentioned earlier, in contrast to DAGs, DCGs can have two different types of edges, real and virtual. In this paper, we define the *sparsest* DCG as the graph with the fewest *total edges*, i.e., virtual edges plus real edges. The main reason we count total edges rather than just real edges is that all DCGs in the same Markov equivalence class (MEC) have the same number of total edges [@Richardson1994], while the number of real edges may differ among graphs even in the same [MEC]{}. For example in Figure \[Fig:Sec3a\], there are two different [MECs]{} and each [MEC]{} contains two graphs: $G_1, G_2 \in \mathcal{M}(G_1)$ and $G_3, G_4 \in \mathcal{M}(G_3)$. $G_1$ and $G_2$ have $9$ total edges while $G_3$ and $G_4$ have $7$ total edges. On the other hand, $G_1$ has $6$ real edges, $G_2$ has $9$ real edges, $G_3$ has $5$ real edges, and $G_4$ has $7$ real edges (a bi-directed edge is counted as 1 total edge). For a DCG $G$, let $S(G)$ denote the *skeleton* of $G$, where $(j,k) \in S(G)$ is a real or virtual edge.
*Figure \[Fig:Sec3a\]: two Markov equivalence classes on nodes $1$–$5$: $\mathcal{M}(G_1)=\{G_1,G_2\}$ with 9 total edges each and $\mathcal{M}(G_3)=\{G_3,G_4\}$ with 7 total edges each; dotted lines mark virtual edges.*
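Under this convention, the skeleton can be computed for small examples by combining real adjacency with the virtual-adjacency test. The sketch below is illustrative only (`edges` is a set of pairs `(a, b)` for $a \to b$):

```python
from itertools import combinations

def skeleton(edges, nodes):
    """S(G): unordered pairs that are really or virtually adjacent."""
    def reach(v):  # nodes reachable from v along directed edges
        seen, stack = set(), [v]
        while stack:
            u = stack.pop()
            for a, b in edges:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen

    children = lambda v: {b for a, b in edges if a == v}
    skel = set()
    for j, k in combinations(nodes, 2):
        real = (j, k) in edges or (k, j) in edges
        common = children(j) & children(k)
        virtual = any(j in reach(c) or k in reach(c) for c in common)
        if real or virtual:
            skel.add(frozenset((j, k)))
    return skel
```

On the graph of Figure \[Fig:Sec2a\], the skeleton has five edges: the four real edges plus the virtual edge between $1$ and $4$.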
Using this definition of the skeleton $S(G)$ for a DCG $G$, the definitions of the identifiable and weak [SMR]{} assumptions carry over from DAG to DCG models. For completeness, we re-state the definitions here.
\[DefSMRDCG\] A DCG model $(G^* ,\mathbb{P})$ satisfies the identifiable [SMR]{} assumption if $(G^* ,\mathbb{P})$ satisfies the CMC and $|S(G^*)| < |S(G)|$ for every DCG $G$ such that $(G ,\mathbb{P})$ satisfies the CMC and $G \notin \mathcal{M}(G^*)$.
\[DefFruDCG\] A DCG model $(G^* ,\mathbb{P})$ satisfies the weak [SMR]{} assumption if $(G^* ,\mathbb{P})$ satisfies the CMC and $|S(G^*)| \leq |S(G)|$ for every DCG $G$ such that $(G ,\mathbb{P})$ satisfies the CMC and $G \notin \mathcal{M}(G^*)$.
Both the [SMR]{} and SGS-minimality assumptions prefer graphs with fewer total edges. The main difference between the SGS-minimality assumption and the [SMR]{} assumptions is that the SGS-minimality assumption requires that there be no DCG with a *strict subset* of the edges, whereas the [SMR]{} assumptions simply require that there be no DCG with *fewer* edges.
Unfortunately, as we observe later, unlike for DAG models the identifiable [SMR]{} assumption is not weaker than the CFC for DCG models. Therefore, the identifiable [SMR]{} assumption does not guarantee identifiability of [MECs]{} for DCG models. On the other hand, while the weak [SMR]{} assumption may not guarantee uniqueness, we prove it is strictly weaker than the CFC. We explore the relationships between the CFC, the identifiable and weak [SMR]{} assumptions, and the minimality assumptions in the next section.
Comparison of SMR, CFC and minimality assumptions for DCG models {#SubSecSMR}
----------------------------------------------------------------
Before presenting our main result in this section, we provide a lemma which highlights the important difference between the [SMR]{} assumptions for graphical models with cycles compared to DAG models. Recall that the [SMR]{} assumptions involve counting the number of edges, whereas the CFC and P-minimality assumption involve d-separation rules. First, we provide a fundamental link between the presence of an edge in $S(G)$ and d-separation/connection rules.
\[Lem:Sec3a\] For a DCG $G$, $(j,k) \in S(G)$ if and only if $j$ is d-connected to $k$ given $S$ for all $S \subset V \setminus \{j,k\}$.
First, we show that if $(j,k) \in S(G)$ then $j$ is d-connected to $k$ given $S$ for all $S \subset V \setminus \{j,k\}$. If $j$ and $k$ are really adjacent, the edge itself d-connects them given any $S$. If they are virtually adjacent with a common child $\ell$ that is an ancestor of (say) $j$, then for any $S$ the path $j \to \ell \leftarrow k$ d-connects them whenever $S$ intersects $\{\ell\} \cup \mbox{de}(\ell)$; otherwise the path from $k$ through $\ell$ along the directed path $\ell \to \cdots \to j$ contains only non-colliders outside $S$ and hence d-connects them. Second, we prove that if $(j,k) \notin S(G)$ then there exists $S \subset V \setminus \{j,k\}$ such that $j$ is d-separated from $k$ given $S$. Let $S = (\mbox{an}(j) \cup \mbox{an}(k)) \setminus \{j,k\}$. Since $(j,k) \notin S(G)$, $j$ and $k$ have no common child that is an ancestor of $j$ or $k$; otherwise they would be virtually adjacent. Then there is no undirected path between $j$ and $k$ that d-connects them given $S$, and therefore $j$ is d-separated from $k$ given $S$. This completes the proof.
Note that the above statement is true for real or virtual edges and not real edges alone. We now state an important lemma which shows the key difference in comparing the [SMR]{} assumptions to other identifiability assumptions (CFC, P-minimality, SGS-minimality) for graphical models with cycles, which does not arise for DAG models.
\[Lem:Sec3b\]
- For any two DCGs $G_1$ and $G_2$, $D_{sep}(G_1) \subseteq D_{sep}(G_2)$ implies $S(G_2) \subseteq S(G_1)$.
- There exist two DCGs $G_1$ and $G_2$ such that $S(G_1) = S(G_2)$ but $D_{sep}(G_1) \subsetneq D_{sep}(G_2)$. For DAGs, no two such graphs exist.
We begin with the proof of (a). Suppose that $S(G_2)$ is not a sub-skeleton of $S(G_1)$, meaning that there exists a pair $(j,k) \in S(G_2)$ with $(j,k) \notin S(G_1)$. By Lemma \[Lem:Sec3a\], $j$ is d-connected to $k$ given $S$ for all $S \subset V \setminus \{j,k\}$ in $G_2$, while there exists $S \subset V \setminus \{j,k\}$ such that $j$ is d-separated from $k$ given $S$ in $G_1$. This d-separation rule belongs to $D_{sep}(G_1)$ but not to $D_{sep}(G_2)$, contradicting $D_{sep}(G_1) \subseteq D_{sep}(G_2)$. For (b), we refer to the example in Figure \[Fig:Sec3b\]. In Figure \[Fig:Sec3b\], the unshielded triple $(1, 4, 2)$ is a conductor in $G_1$ and an imperfect non-conductor in $G_2$ because of the reversed directed edge between $4$ and $5$. By the property of a conductor, $1$ is not d-separated from $4$ given the empty set in $G_1$. In contrast, in $G_2$, $1$ is d-separated from $4$ given the empty set. All other d-separation rules are the same for both $G_1$ and $G_2$.
*Figure \[Fig:Sec3b\]: DCGs $G_1$ and $G_2$ on nodes $1$–$5$ that agree except that the directed edge between $4$ and $5$ is reversed ($5\to 4$ in $G_1$, $4\to 5$ in $G_2$); dotted lines mark the virtual edges $1-3$ and $2-4$.*
Lemma \[Lem:Sec3b\] (a) holds for both DAGs and DCGs, and allows us to conclude a subset-superset relation between the edges in the skeleton and the d-separation rules of a graph $G$. Part (b) is where there is a key difference between DAGs and directed graphs with cycles: it asserts that there are examples in which the skeletons are identical, yet one graph entails a strict superset of the d-separation rules of the other.
Now we present the main result of this section which compares the identifiable and weak [SMR]{} assumptions with the CFC and P-minimality assumption.
\[Thm:Sec3a\] For DCG models,
- the weak [SMR]{} assumption is weaker than the CFC.
- there exists a DCG model $(G, \mathbb{P})$ satisfying the CFC that does not satisfy the identifiable [SMR]{} assumption.
- the identifiable [SMR]{} assumption is stronger than the P-minimality assumption.
- there exists a DCG model $(G, \mathbb{P})$ satisfying the weak [SMR]{} assumption that does not satisfy the P-minimality assumption.
<!-- -->
- The proof for (a) follows from Lemma \[Lem:Sec3b\] (a). If a DCG model $(G^*, \mathbb{P})$ satisfies the CFC, then for any graph $G$ such that $(G, \mathbb{P})$ satisfies the CMC, $D_{sep}(G) \subseteq D_{sep}(G^*)$. Hence based on Lemma \[Lem:Sec3b\] (a), $S(G^*) \subseteq S(G)$ and $(G^*,\mathbb{P})$ satisfies the weak [SMR]{} assumption.
- We refer to the example in Figure \[Fig:Sec3b\] where $(G_2, \mathbb{P})$ satisfies the CFC and fails to satisfy the identifiable [SMR]{} assumption because $S(G_1) = S(G_2)$ and $(G_1, \mathbb{P})$ satisfies the CMC.
- The proof for (c) again follows from Lemma \[Lem:Sec3b\] (a). Suppose that a DCG model $(G^*, \mathbb{P})$ fails to satisfy the P-minimality assumption. This implies that there exists a DCG $G$ such that $(G, \mathbb{P})$ satisfies the CMC, $G \notin \mathcal{M}(G^*)$ and $D_{sep}(G^*) \subset D_{sep}(G)$. Lemma \[Lem:Sec3b\] (a) implies $S(G) \subseteq S(G^*)$. Hence $G^*$ does not uniquely have the fewest edges, and therefore $(G^*, \mathbb{P})$ fails to satisfy the identifiable [SMR]{} assumption.
- We refer to the example in Figure \[Fig:Sec3b\] where $(G_1,\mathbb{P})$ satisfies the weak [SMR]{} assumption and fails to satisfy the P-minimality assumption. Further explanation is given in Figure \[Fig:App2\] in the appendix.
Theorem \[Thm:Sec3a\] shows that if a DCG model $(G, \mathbb{P})$ satisfies the CFC, the weak [SMR]{} assumption is satisfied whereas the identifiable [SMR]{} assumption is not necessarily satisfied. For DAG models, the identifiable [SMR]{} assumption is strictly weaker than the CFC and the identifiable [SMR]{} assumption guarantees identifiability of the true [MEC]{}. However, Theorem \[Thm:Sec3a\] (b) implies that the identifiable [SMR]{} assumption is not strictly weaker than the CFC for DCG models. On the other hand, unlike for DAG models, the weak [SMR]{} assumption does not imply the P-minimality assumption for DCG models, according to (d). In Section \[SecSimulation\], we implement an algorithm that uses the identifiable [SMR]{} assumption and the results seem to suggest that on average for DCG models, the identifiable [SMR]{} assumption is weaker than the CFC.
New principle: Maximum d-separation rules (MDR) {#SecMaxDSep}
===============================================
In light of the fact that the identifiable [SMR]{} assumption does not lead to a strictly weaker assumption than the CFC, we introduce the maximum d-separation rules (MDR) assumption. The [MDR]{} assumption asserts that $G^*$ entails more d-separation rules than any other graph satisfying the CMC according to the given distribution $\mathbb{P}$. We use $CI(\mathbb{P})$ to denote the conditional independence (CI) statements corresponding to the distribution $\mathbb{P}$.
A DCG model $(G^* ,\mathbb{P})$ satisfies the maximum *d-separation* rules (MDR) assumption if $(G^* ,\mathbb{P})$ satisfies the CMC and $|D_{sep}(G)| < |D_{sep}(G^*)|$ for every DCG $G$ such that $(G ,\mathbb{P})$ satisfies the CMC and $G \notin \mathcal{M}(G^*)$.
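For small graphs, the rule count $|D_{sep}(G)|$ can be computed by brute force. The sketch below is a minimal Python implementation for DAGs only, using the standard moralized-ancestral-graph reduction; d-separation in cyclic graphs requires the more general treatment of [@Spirtes1995], which this sketch does not cover. All function names are ours.

```python
from itertools import combinations

def ancestors(edges, nodes):
    """All ancestors of `nodes` in the DAG, including the nodes themselves."""
    seen = set(nodes)
    stack = list(nodes)
    while stack:
        v = stack.pop()
        for u, w in edges:            # directed edge u -> w
            if w == v and u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

def d_separated(edges, x, y, z):
    """Test `x d-sep y | z` in a DAG via the moralized ancestral graph."""
    keep = ancestors(edges, {x, y} | set(z))
    sub = [(u, w) for u, w in edges if u in keep and w in keep]
    # moralize: undirected skeleton plus edges between co-parents
    adj = {v: set() for v in keep}
    for u, w in sub:
        adj[u].add(w); adj[w].add(u)
    for v in keep:
        for p, q in combinations([u for u, w in sub if w == v], 2):
            adj[p].add(q); adj[q].add(p)
    # x and y are d-separated iff disconnected once z is deleted
    frontier, reached = [x], {x}
    while frontier:
        v = frontier.pop()
        for w in adj[v]:
            if w not in reached and w not in z:
                reached.add(w)
                frontier.append(w)
    return y not in reached

def num_dsep_rules(edges, nodes):
    """|D_sep(G)|: count every rule `x d-sep y | z` entailed by the DAG."""
    count = 0
    for x, y in combinations(nodes, 2):
        rest = [v for v in nodes if v not in (x, y)]
        for r in range(len(rest) + 1):
            for z in combinations(rest, r):
                count += d_separated(edges, x, y, set(z))
    return count
```

For instance, the chain $X_1 \to X_2 \to X_3$ entails exactly one rule ($X_1~\mbox{d-sep}~X_3 \mid X_2$), and the collider $X_1 \to X_2 \leftarrow X_3$ also entails exactly one ($X_1~\mbox{d-sep}~X_3 \mid \emptyset$).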
There is a natural and intuitive connection between the MDR assumption and the P-minimality assumption: both favor DCGs that entail more d-separation rules. The key difference is that the P-minimality assumption requires that no Markovian DCG entails a *strict superset* of the d-separation rules, whereas the MDR assumption requires that no Markovian DCG entails a *greater number* of d-separation rules.
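The contrast between the two selection rules can be sketched on abstract rule sets, with each candidate Markovian graph represented simply by the set of d-separation rules it entails (graph names and rule labels below are illustrative; the three candidates mirror the example of Figure \[fig:Sec4a\], where $G_1$ entails two rules and $G_2$ one):

```python
def mdr_models(candidates):
    """candidates: {name: frozenset of entailed d-separation rules},
    all assumed to satisfy the CMC.  MDR keeps the rule-count maximizers."""
    best = max(len(rules) for rules in candidates.values())
    return {g for g, rules in candidates.items() if len(rules) == best}

def p_minimal_models(candidates):
    """P-minimality keeps graphs whose rule set is not strictly contained
    in another Markovian graph's rule set."""
    return {g for g, rules in candidates.items()
            if not any(rules < other
                       for h, other in candidates.items() if h != g)}

# Mirroring Figure [fig:Sec4a]: G1 entails two rules, G2 one, G3 none.
candidates = {"G1": frozenset({"r1", "r2"}),
              "G2": frozenset({"r3"}),
              "G3": frozenset()}
```

Here `mdr_models` returns only `G1`, while `p_minimal_models` returns both `G1` and `G2`: the single rule of `G2` is not a subset of any other candidate's rule set, so `G2` is P-minimal yet violates MDR.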
Comparison of [MDR]{} to CFC and minimality assumptions for DCGs {#SubSecMDROcc}
----------------------------------------------------------------
In this section, we provide a comparison of the MDR assumption to the CFC and P-minimality assumption. For ease of notation, let $\mathcal{G}_{M}(\mathbb{P})$ and $\mathcal{G}_{F}(\mathbb{P})$ denote the set of Markovian DCG models satisfying the MDR assumption and CFC, respectively. In addition, let $\mathcal{G}_{P}(\mathbb{P})$ denote the set of DCG models satisfying the P-minimality assumption.
\[Thm:Sec4a\] Consider a DCG model $(G^*, \mathbb{P})$.
- If $\mathcal{G}_F(\mathbb{P}) \neq \emptyset$, then $\mathcal{G}_F (\mathbb{P}) = \mathcal{G}_{M}(\mathbb{P})$. Consequently if $(G^*, \mathbb{P})$ satisfies the CFC, then $\mathcal{G}_F(\mathbb{P}) = \mathcal{G}_{M}(\mathbb{P}) = \mathcal{M}(G^*)$.
- There exists a distribution $\mathbb{P}$ for which $\mathcal{G}_F(\mathbb{P}) = \emptyset$ while $(G^*, \mathbb{P})$ satisfies the [MDR]{} assumption and $\mathcal{G}_{M}(\mathbb{P}) = \mathcal{M}(G^*)$.
- $\mathcal{G}_{M}(\mathbb{P}) \subseteq \mathcal{G}_{P}(\mathbb{P})$.
- There exists a distribution $\mathbb{P}$ for which $\mathcal{G}_{M}(\mathbb{P}) = \emptyset$ while $(G^*, \mathbb{P})$ satisfies the P-minimality assumption and $\mathcal{G}_{P}(\mathbb{P}) \supseteq \mathcal{M}(G^*)$.
- Suppose that $(G^*, \mathbb{P})$ satisfies the CFC. Then $CI(\mathbb{P})$ corresponds to the set of d-separation rules entailed by $G^*$. Note that if $(G, \mathbb{P})$ satisfies the CMC and $G \notin \mathcal{M}(G^*)$, then $CI(\mathbb{P})$ is a strict superset of the set of d-separation rules entailed by $G$, and therefore $D_{sep}(G) \subset D_{sep}(G^*)$. We conclude that the graphs in $\mathcal{M}(G^*)$ entail the maximum number of d-separation rules among graphs satisfying the CMC. Furthermore, by the CFC, $\mathcal{G}_F(\mathbb{P}) = \mathcal{M}(G^*)$, which completes the proof.
- Suppose that $(G^*,\mathbb{P})$ fails to satisfy the P-minimality assumption. By the definition of the P-minimality assumption, there exists $(G,\mathbb{P})$ satisfying the CMC such that $G \notin \mathcal{M}(G^*)$ and $D_{sep}(G^*) \subset D_{sep}(G)$. Hence, $G^*$ entails strictly fewer d-separation rules than $G$, and therefore $(G^*,\mathbb{P})$ violates the [MDR]{} assumption.
- For (b) and (d), we refer to the example in Figure $\ref{fig:Sec4a}$. Suppose that $X_1$, $X_2$, $X_3$, $X_4$ are random variables with distribution $\mathbb{P}$ with the following CI statements: $$\label{CIrelations}
CI(\mathbb{P}) = \{X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_3 \mid X_2;~X_2 {\protect\mathpalette{\protect\independenT}{\perp}}X_4 \mid X_1, X_3;~X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_2 \mid X_4\}.$$
We show that $(G_1, \mathbb{P})$ satisfies the MDR assumption but not the CFC, whereas $(G_2, \mathbb{P})$ satisfies the P-minimality assumption but not the MDR assumption. Any graph satisfying the CMC with respect to $\mathbb{P}$ may only entail a subset of the three d-separation rules $\{X_1~\mbox{d-sep}~X_3 \mid X_2;~X_2~\mbox{d-sep}~X_4 \mid X_1,X_3;~X_1~\mbox{d-sep}~X_2 \mid X_4 \}$. Clearly $D_{sep}(G_1) = \{X_1 ~\mbox{d-sep} ~X_3 \mid X_2; ~X_2 ~\mbox{d-sep} ~X_4 \mid X_1, X_3\}$, so $(G_1, \mathbb{P})$ satisfies the CMC. It can be shown that no graph other than $G_1$ entails two or more of these d-separation rules. Hence no graph satisfies the CFC with respect to $\mathbb{P}$, since no graph entails all three d-separation rules, and $(G_1, \mathbb{P})$ satisfies the MDR assumption, because no other graph satisfying the CMC with respect to $\mathbb{P}$ entails as many d-separation rules as $G_1$.
- Note that $G_2$ entails the sole d-separation rule, $D_{sep}(G_2) = \{X_1~\mbox{d-sep}~X_2 \mid X_4\}$ and it is clear that $(G_2, \mathbb{P})$ satisfies the CMC. If $(G_2, \mathbb{P})$ does not satisfy the P-minimality assumption, there exists a graph $G$ such that $(G,\mathbb{P})$ satisfies the CMC and $D_{sep}(G_2) \subsetneq D_{sep}(G)$. It can be shown that no such graph exists. Therefore, $(G_2, \mathbb{P})$ satisfies the P-minimality assumption. Clearly, $(G_2, \mathbb{P})$ fails to satisfy the [MDR]{} assumption because $G_1$ entails more d-separation rules.
(Figure \[fig:Sec4a\]. $G_1$: $X_1 \to X_2 \to X_3 \to X_4$ and $X_1 \to X_4$. $G_2$: $X_1 \to X_3$, $X_2 \to X_3$, $X_2 \to X_4$, $X_4 \to X_3$, $X_4 \to X_1$.)
Theorem \[Thm:Sec4a\] (a) asserts that whenever the set of DCG models satisfying the CFC is non-empty, it equals the set of DCG models satisfying the [MDR]{} assumption. Part (b) shows that there exists a distribution for which no DCG model satisfies the CFC, while the set of DCG models satisfying the [MDR]{} assumption consists of the true [MEC]{}. Hence, (a) and (b) show that the [MDR]{} assumption is strictly superior to the CFC in terms of recovering the true [MEC]{}. Part (c) shows that any DCG model satisfying the [MDR]{} assumption lies in the set of DCG models satisfying the P-minimality assumption, and part (d) shows that there exist DCG models satisfying the P-minimality assumption but violating the [MDR]{} assumption. Therefore, (c) and (d) prove that the [MDR]{} assumption is strictly stronger than the P-minimality assumption.
Comparison between the [MDR]{} and [SMR]{} assumptions {#SubSecMDRSMR}
------------------------------------------------------
Now we show that the [MDR]{} assumption is neither weaker nor stronger than the [SMR]{} assumptions for both DAG and DCG models.
\[Lem:Sec4a\]
- There exists a DAG model satisfying the identifiable [SMR]{} assumption that does not satisfy the [MDR]{} assumption. Further, there exists a DAG model satisfying the [MDR]{} assumption that does not satisfy the weak [SMR]{} assumption.
- The same conclusions as in (a) hold for DCG models that are not DAGs.
Our proof of Lemma \[Lem:Sec4a\] constructs two sets of examples, one for DAGs corresponding to (a) and one for cyclic graphs corresponding to (b). For (a), Figure $\ref{fig:Sec4c}$ displays two DAGs, $G_1$ and $G_2$, which are clearly not in the same [MEC]{}. For clarity, we use red arrows to mark the edges/directions that differ between the graphs. We associate the same distribution $\mathbb{P}$ with each DAG, where $CI(\mathbb{P})$ is provided in Appendix \[Proof:lemma(a)\]. With this $CI(\mathbb{P})$, both $(G_1, \mathbb{P})$ and $(G_2, \mathbb{P})$ satisfy the CMC (explained in Appendix \[Proof:lemma(a)\]). The main point of this example is that $(G_2,\mathbb{P})$ satisfies the identifiable and weak [SMR]{} assumptions whereas $(G_1,\mathbb{P})$ satisfies the [MDR]{} assumption, so different graphs are selected depending on the chosen identifiability assumption with respect to the same $\mathbb{P}$. A more detailed proof that $(G_1, \mathbb{P})$ satisfies the [MDR]{} assumption whereas $(G_2,\mathbb{P})$ satisfies the [SMR]{} assumption is provided in Appendix \[Proof:lemma(a)\].
(Figure \[fig:Sec4c\]. $G_1$: $X_1 \to X_3$, $X_2 \to X_1$, $X_2 \to X_4$, $X_2 \to X_5$, $X_4 \to X_5$, $X_5 \to X_1$, $X_5 \to X_3$. $G_2$: $X_1 \to X_2$, $X_1 \to X_3$, $X_4 \to X_1$, $X_2 \to X_5$, $X_5 \to X_3$, $X_4 \to X_5$. Red arrows mark the edges/directions that differ between the two graphs.)
(Figure \[fig:Sec4d\]. Two DCGs on $X_1, \dots, X_{11}$ and $Y$. $G_1$: $X_1 \to X_2$, $X_3 \to X_1$, $X_1 \to X_4$, $X_2 \to X_5$, $X_5 \to X_3$, $X_4 \to X_5$, $X_1 \to Y$, $Y \to X_5$, $X_2 \to \{X_6, X_7, X_8\}$, $X_3 \to \{X_6, \dots, X_{11}\}$, $X_4 \to \{X_9, X_{10}, X_{11}\}$; dotted red lines mark the virtual edges $X_2$–$X_4$, $X_2$–$Y$ and $Y$–$X_4$. $G_2$: $X_1 \to X_2$, $X_1 \to X_3$, $X_1 \to X_4$, $X_2 \to X_5$, $X_5 \to X_3$, $X_4 \to X_5$, $X_1 \to X_5$, $X_1 \to Y$, $Y \to X_5$, with the same fan-out edges to $X_6, \dots, X_{11}$. Red arrows and lines mark the edges, real and virtual, that differ between the graphs.)
For (b), Figure \[fig:Sec4d\] displays two DCGs $G_1$ and $G_2$ which do not belong to the same [MEC]{}. Once again red arrows are used to denote the edges (both real and virtual) that are different between the graphs. We associate the same distribution $\mathbb{P}$ with conditional independence statements $CI(\mathbb{P})$ (provided in Appendix \[Proof:lemma(b)\]) to each graph such that both $(G_1,\mathbb{P})$ and $(G_2,\mathbb{P})$ satisfy the CMC (explained in Appendix \[Proof:lemma(b)\]). Again, the main idea of this example is that $(G_1,\mathbb{P})$ satisfies the [MDR]{} assumption whereas $(G_2,\mathbb{P})$ satisfies the identifiable [SMR]{} assumption. A detailed proof that $(G_1, \mathbb{P})$ satisfies the [MDR]{} assumption whereas $(G_2,\mathbb{P})$ satisfies the identifiable [SMR]{} assumption can be found in Appendix \[Proof:lemma(b)\].
Intuitively, having fewer edges does not necessarily translate into entailing more d-separation rules because the placement of edges relative to the rest of the graph, and the additional paths they create, affects the total number of d-separation rules the graph entails.
In summary, the flow chart in Figure \[Flowchart\] shows how the CFC, SMR, MDR and minimality assumptions are related for both DAG and DCG models:
(Figure \[Flowchart\]. Flow chart relating the CFC, [MDR]{}, [SMR]{}, P-minimality and SGS-minimality assumptions, with each arrow labeled by the corresponding theorem or lemma. Left panel, DAG models: CFC, MDR, SMR, P-min and SGS-min, with the MDR–SMR link given by Lemma \[Lem:Sec4a\] (a). Right panel, DCG models: CFC, MDR, identifiable and weak SMR, P-min and SGS-min, with the MDR–identifiable-SMR link given by Lemma \[Lem:Sec4a\] (b).)
Simulation results {#SecSimulation}
==================
In Sections \[SecSMRFrugality\] and \[SecMaxDSep\], we proved that the [MDR]{} assumption is strictly weaker than the CFC and stronger than the P-minimality assumption for both DAG and DCG models, and the identifiable [SMR]{} assumption is stronger than the P-minimality assumption for DCG models. In this section, we support our theoretical results with numerical experiments on small-scale Gaussian linear DCG models (see e.g., [@Spirtes1995]) using the generic Algorithm \[algorithm\]. We also provide a comparison of Algorithm \[algorithm\] to state-of-the-art algorithms for small-scale DCG models in terms of recovering the skeleton of a DCG model.
Algorithm \[algorithm\]. Step 1: find all conditional independence statements $\widehat{CI}(\mathbb{P})$ using a conditional independence test. Step 2: find the set of graphs $\widehat{\mathcal{G}}$ satisfying the given identifiability assumption, initializing $\widehat{\mathcal{M}}(G) \gets \emptyset$ and $\widehat{S}(G) \gets \emptyset$.
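A minimal sketch of this two-step selection, with the CI estimation, graph enumeration and d-separation oracle abstracted away (all names below are ours; `score` encodes the chosen identifiability assumption):

```python
def recover(ci_hat, candidates, entails, score):
    """Step 2 of the generic algorithm: among candidate graphs, keep those
    satisfying the CMC (entailed rules form a subset of the estimated CI
    statements) that minimize `score`."""
    markov = [(g, entails(g)) for g in candidates]
    markov = [(g, rules) for g, rules in markov if rules <= ci_hat]
    best = min(score(g, rules) for g, rules in markov)
    return {g for g, rules in markov if score(g, rules) == best}

# Toy candidates: each graph is summarized by its entailed rules and edge count.
ents = {"A": frozenset({"r1", "r2"}), "B": frozenset({"r1"}),
        "C": frozenset({"r1", "r3"})}           # "C" violates the CMC below
edges = {"A": 4, "B": 3, "C": 3}
ci_hat = frozenset({"r1", "r2"})
mdr_pick = recover(ci_hat, ents, ents.get, lambda g, r: -len(r))   # MDR: maximize |D_sep|
smr_pick = recover(ci_hat, ents, ents.get, lambda g, r: edges[g])  # SMR: minimize edges
```

Under MDR the pick is `{"A"}` (two entailed rules), while under the identifiable SMR score it is `{"B"}` (fewest edges), illustrating that the two assumptions can select different graphs from the same estimated CI statements.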
DCG model and simulation setup
------------------------------
Our simulation study involves simulating DCG models from $p$-node random Gaussian linear DCG models where the distribution $\mathbb{P}$ is defined by the following linear structural equations: $$\label{eq:GGM}
(X_1,X_2,\cdots,X_p)^T = B^T (X_1,X_2,\cdots,X_p)^T + \epsilon$$ where $B \in \mathbb{R}^{p \times p}$ is an edge weight matrix with $B_{jk} = \beta_{jk}$ and $\beta_{jk}$ is a weight of an edge from $X_j$ to $X_k$. Furthermore, $\epsilon \sim \mathcal{N}(\mathbf{0}_{p}, I_p)$ where $\mathbf{0}_{p} = (0,0,\cdots,0)^T \in \mathbb{R}^{p}$ and $I_p \in \mathbb{R}^{p \times p}$ is the identity matrix.
The matrix $B$ encodes the DCG structure since if $\beta_{jk}$ is non-zero, $X_j \to X_k$ and the pair $(X_j, X_k)$ is *really adjacent*, otherwise there is no directed edge from $X_j$ to $X_k$. In addition if there is a set of nodes $S = (s_1, s_2,\cdots,s_t)$ such that the product of $\beta_{j s_1}, \beta_{k s_1}, \beta_{s_1 s_2}, \cdots, \beta_{s_t j}$ is non-zero, the pair $(X_j, X_k)$ is *virtually adjacent*. Note that if the graph is a DAG, we would need to impose the constraint that $B$ is upper triangular; however for DCGs we impose no such constraints.
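Sampling from these structural equations is a linear solve: stacking the equations gives $X = (I - B^\top)^{-1}\epsilon$, which is well defined whenever $I - B$ is invertible (automatic for DAGs, since $B$ can be permuted to strictly triangular form; an assumption for DCGs). A numpy sketch, with names of our choosing:

```python
import numpy as np

def simulate_dcg(B, n, rng=None):
    """Draw n i.i.d. samples of X = (I - B^T)^{-1} eps, eps ~ N(0, I_p).
    Requires I - B to be invertible (always true for DAGs; assumed for DCGs)."""
    rng = np.random.default_rng(rng)
    p = B.shape[0]
    eps = rng.standard_normal((n, p))
    # row convention: each sample row x satisfies x = eps_row @ (I - B)^{-1}
    return eps @ np.linalg.inv(np.eye(p) - B)
```

The implied covariance is $(I - B^\top)^{-1}(I - B)^{-1}$; for instance, with the single edge $X_1 \to X_2$ of weight $0.8$, $\mathrm{Cov}(X_1, X_2) = 0.8$.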
We present simulation results for two sets of models, DCG models where edges and directions are determined randomly, and DCG models whose edges have a specific graph structure. For the set of random DCG models, the simulation was conducted using $100$ realizations of 5-node random Gaussian linear DCG models where we impose sparsity by assigning a probability that each entry of the matrix $B$ is non-zero and we set the expected neighborhood size range from $1$ (sparse graph) to $4$ (fully connected graph) depending on the non-zero edge weight probability. Furthermore the non-zero edge weight parameters were chosen uniformly at random from the range $\beta_{jk} \in [-1, -0.25] \cup [0.25, 1]$ which ensures the edge weights are bounded away from $0$.
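A sketch of the random-weight generation. The normalization below is our reading of "expected neighborhood size": each directed entry $B_{jk}$, $j \neq k$, is made non-zero with probability $k_{\mathrm{nbhd}}/(2(p-1))$ so that a node's expected number of incident edges equals the target; the paper does not spell this out, so treat it as an assumption.

```python
import numpy as np

def random_dcg_weights(p, expected_nbhd, rng=None):
    """Random edge-weight matrix for a p-node Gaussian linear DCG.
    Non-zero weights are drawn uniformly from [-1,-0.25] U [0.25,1],
    keeping them bounded away from zero as in the simulation setup."""
    rng = np.random.default_rng(rng)
    prob = expected_nbhd / (2.0 * (p - 1))   # assumed normalization
    mask = rng.random((p, p)) < prob
    np.fill_diagonal(mask, False)            # no self-loops
    signs = rng.choice([-1.0, 1.0], size=(p, p))
    magnitudes = rng.uniform(0.25, 1.0, size=(p, p))
    return mask * signs * magnitudes
```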
We also ran simulations using $100$ realizations of 5-node Gaussian linear DCG models with specific graph structures, namely trees, bipartite graphs, and cycles. Figure \[fig:Sec5g\] shows examples of skeletons of these special graphs. We generate these graphs as follows. First, we set the skeleton of the desired graph based on Figure \[fig:Sec5g\] and draw the non-zero edge weights uniformly at random from the range $\beta_{jk} \in [-1, -0.25] \cup [0.25, 1]$. Second, we repeatedly assign randomly chosen directions to the edges until the graph has at least one directed cycle whenever its skeleton permits one. Therefore, the bipartite graphs always have at least one directed cycle, whereas the tree graphs are acyclic because their skeletons contain no cycles. For the cycle graphs, we fix the directions of the edges to form the directed cycle $X_1 \to X_2 \to \cdots \to X_5 \to X_1$.
(Figure \[fig:Sec5g\]. Skeletons of the special 5-node graphs. Tree (1): $X_1$–$X_2$, $X_1$–$X_3$, $X_2$–$X_4$, $X_2$–$X_5$. Tree (2): star with center $X_1$ joined to $X_2, \dots, X_5$. Bipartite: $X_1$ and $X_5$ each joined to $X_2$, $X_3$, $X_4$. Cycle: $X_1$–$X_2$–$X_3$–$X_4$–$X_5$–$X_1$.)
Comparison of assumptions
-------------------------
In this section we provide a simulation comparison between the SMR, MDR, CFC and minimality assumptions. The CI statements were estimated based on $n$ independent samples drawn from $\mathbb{P}$ using Fisher’s conditional correlation test with significance level $\alpha = 0.001$. We detected all directed graphs satisfying the CMC and we measured what proportion of graphs in the simulation satisfy each assumption (CFC, [MDR]{}, identifiable [SMR]{}, P-minimality).
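A self-contained sketch of the CI test (Fisher's z-transform of the sample partial correlation, the standard test in constraint-based search; this is our reconstruction, not the paper's exact code):

```python
import math
import numpy as np

def fisher_z_pvalue(data, i, j, cond):
    """Two-sided p-value for the partial correlation of columns i and j
    of `data` (an n x p array) given the columns listed in `cond`."""
    idx = [i, j] + list(cond)
    prec = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
    r = -prec[0, 1] / math.sqrt(prec[0, 0] * prec[1, 1])
    n = data.shape[0]
    z = 0.5 * math.log((1 + r) / (1 - r))          # Fisher z-transform
    stat = math.sqrt(n - len(cond) - 3) * abs(z)   # ~ N(0,1) under H0
    return math.erfc(stat / math.sqrt(2.0))

def ci_independent(data, i, j, cond, alpha=0.001):
    """Declare conditional independence when the test fails to reject."""
    return fisher_z_pvalue(data, i, j, cond) >= alpha
```

On data generated from a chain $a \to b \to c$, the marginal test on $(a, c)$ rejects while conditioning on $b$ typically does not, matching the single entailed rule $X_1~\mbox{d-sep}~X_3 \mid X_2$.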
In Figures \[fig:Sec5a\], \[fig:Sec5b\] and \[fig:Sec5e\], we examined how restrictive each identifiability assumption (CFC, [MDR]{}, identifiable [SMR]{}, P-minimality) is for random DCG models and specific graph structures with sample sizes $n \in \{100, 200, 500, 1000\}$ and expected neighborhood sizes from $1$ (sparse graph) to $4$ (fully connected graph). As shown in Figures \[fig:Sec5b\] and \[fig:Sec5e\], the proportion of graphs satisfying each assumption increases with the sample size because of fewer errors in the CI tests. Furthermore, more DCG models satisfy the [MDR]{} assumption than the CFC, and fewer DCG models satisfy the [MDR]{} assumption than the P-minimality assumption, for all sample sizes and expected neighborhood sizes. We see similar relationships between the CFC, identifiable [SMR]{} and P-minimality assumptions. The simulation study supports our theoretical result that the [MDR]{} assumption is weaker than the CFC but stronger than the P-minimality assumption, and that the identifiable [SMR]{} assumption is stronger than the P-minimality assumption. Although there are no theoretical guarantees that the identifiable [SMR]{} assumption is stronger than the [MDR]{} assumption and weaker than the CFC, Figures \[fig:Sec5a\] and \[fig:Sec5b\] suggest that, on average, the identifiable [SMR]{} assumption is substantially stronger than the [MDR]{} assumption and weaker than the CFC.
(Figures \[fig:Sec5a\], \[fig:Sec5b\] and \[fig:Sec5e\]: proportions of graphs satisfying each assumption across sample sizes and expected neighborhood sizes; plots omitted.)
Comparison to state-of-the-art algorithms
-----------------------------------------
In this section, we compare Algorithm \[algorithm\] to state-of-the-art algorithms for small-scale DCG models in terms of recovering the skeleton $S(G)$ for the graph. This addresses the issue of how likely Algorithm \[algorithm\] based on each assumption is to recover the skeleton of a graph compared to state-of-the-art algorithms.
Once again we used Fisher’s conditional correlation test with significance level $\alpha = 0.001$ for Step 1) of Algorithm \[algorithm\], and we used the [MDR]{} and identifiable [SMR]{} assumptions for Step 2). For comparison, we used the state-of-the-art GES algorithm [@chickering2002finding] and the FCI$+$ algorithm [@claassen2013learning] for small-scale DCG models. We used the R package ‘pcalg’ [@Kalisch2012] for the FCI$+$ algorithm, and ‘bnlearn’ [@scutari2009learning] for the GES algorithm.
(Figures \[fig:Sec5c\] and \[fig:Sec5d\]: skeleton recovery rates for the compared algorithms across sample sizes and expected neighborhood sizes; plots omitted.)
Figures \[fig:Sec5c\] and \[fig:Sec5d\] show recovery rates of skeletons for random DCG models with sample sizes $n \in \{100, 200, 500, 1000\}$ and expected neighborhood sizes from $1$ (sparse graph) to $4$ (fully connected graph). The accuracy increases with the sample size because of fewer errors in the CI tests. Algorithm \[algorithm\] based on either the [MDR]{} or the identifiable [SMR]{} assumption outperforms the FCI$+$ algorithm on average. For dense graphs, the GES algorithm outperforms the other algorithms because it often prefers dense graphs. However, the GES algorithm is not theoretically consistent and cannot recover directed graphs with cycles, while the other algorithms are designed to recover DCG models (see e.g., Figure \[fig:Sec5f\]).
(Figure \[fig:Sec5f\]: accuracy for each special graph type; plots omitted.)
Figure \[fig:Sec5f\] shows the accuracy for each type of graph (tree, cycle, bipartite) using Algorithm \[algorithm\] based on the [MDR]{} and identifiable [SMR]{} assumptions, and the GES and FCI$+$ algorithms. The simulation results show that Algorithm \[algorithm\] based on the [MDR]{} and identifiable [SMR]{} assumptions compares favorably to the FCI$+$ and GES algorithms for small-scale DCG models.
Acknowledgement {#acknowledgement .unnumbered}
===============
GP and GR were both supported by NSF DMS-1407028 over the duration of this project.
Appendix
========
Examples for Theorem \[Thm:Sec3a\] (d) {#examples-for-theoremthmsec3a-d .unnumbered}
--------------------------------------
(Figure \[Fig:App2\]. $G_1$: $X_1 \xrightarrow{\alpha_1} X_2$, $X_2 \xrightarrow{\alpha_3} X_3$, $X_3 \xrightarrow{\alpha_5} X_2$, $X_4 \xrightarrow{\alpha_4} X_3$, $X_2 \xrightarrow{-\alpha_3\alpha_7} X_5$, $X_3 \xrightarrow{\alpha_7} X_5$, $X_5 \xrightarrow{\alpha_2} X_4$. $G_2$ is identical except that the red edge is reversed: $X_4 \to X_5$ in place of $X_5 \to X_4$.)
Suppose that $(G_1,\mathbb{P})$ is a Gaussian linear DCG model with the edge weights specified in Figure \[Fig:App2\]. With this choice of distribution $\mathbb{P}$ based on $G_1$, the set of CI statements consists of the d-separation rules entailed by $G_1$ together with the additional statements $CI(\mathbb{P}) \supset \{ X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_4 |~ \emptyset \textrm{, or } X_5,~ X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_5 |~ \emptyset \textrm{, or } X_4\}$.
It is clear that $(G_2, \mathbb{P})$ satisfies the CMC, $D_{sep}(G_1) \subset D_{sep}(G_2)$ and $D_{sep}(G_1) \neq D_{sep}(G_2)$ (explained in Section \[SecSMRFrugality\]). This implies that $(G_1, \mathbb{P})$ fails to satisfy the P-minimality assumption.
Now we prove that $(G_1, \mathbb{P})$ satisfies the weak [SMR]{} assumption. Suppose that $(G_1, \mathbb{P})$ does not satisfy the weak [SMR]{} assumption. Then there exists a $G$ such that $(G,\mathbb{P})$ satisfies the CMC and has fewer edges than $G_1$. By Lemma \[Lem:Sec3b\], if $(G, \mathbb{P})$ satisfies the CFC, $G$ satisfies the weak [SMR]{} assumption. Note that $G_1$ does not have edges between $(X_1, X_4)$ and $(X_1, X_5)$. Since the only additional conditional independence statements that are not entailed by $G_1$ are $\{ X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_4 |~ \emptyset \textrm{, or } X_5,~ X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_5 |~ \emptyset \textrm{, or } X_4\}$, no graph that satisfies the CMC with respect to $\mathbb{P}$ can have fewer edges than $G_1$. This leads to a contradiction and hence $(G_1, \mathbb{P})$ satisfies the weak [SMR]{} assumption.
Proof of Lemma \[Lem:Sec4a\] (a) {#Proof:lemma(a)}
---------------------------------
(Figure repeated from Section \[SubSecMDRSMR\]. $G_1$: $X_1 \to X_3$, $X_2 \to X_1$, $X_2 \to X_4$, $X_2 \to X_5$, $X_4 \to X_5$, $X_5 \to X_1$, $X_5 \to X_3$. $G_2$: $X_1 \to X_2$, $X_1 \to X_3$, $X_4 \to X_1$, $X_2 \to X_5$, $X_5 \to X_3$, $X_4 \to X_5$. Red arrows mark the edges/directions that differ between the two graphs.)
Here we show that $(G_1,\mathbb{P})$ satisfies the [MDR]{} assumption and $(G_2,\mathbb{P})$ satisfies the identifiable [SMR]{} assumption, where $\mathbb{P}$ has the following CI statements: $$\begin{aligned}
CI(\mathbb{P}) = \{ & X_2 {\protect\mathpalette{\protect\independenT}{\perp}}X_3 \mid (X_1, X_5) \textrm{ or } (X_1, X_4, X_5); X_2 {\protect\mathpalette{\protect\independenT}{\perp}}X_4 \mid X_1; \\
& X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_4 \mid (X_2, X_5) \textrm{ or } (X_2, X_3, X_5); X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_5 \mid (X_2, X_4); \\
& X_3 {\protect\mathpalette{\protect\independenT}{\perp}}X_4 \mid (X_1, X_5), (X_2, X_5),\textrm{ or } (X_1, X_2, X_5) \}.\end{aligned}$$
Clearly the DAGs $G_1$ and $G_2$ do not belong to the same [MEC]{} since they have different skeletons. To be explicit, we state all d-separation rules entailed by $G_1$ and $G_2$. Both graphs entail the following d-separation rules:
- $X_2$ is d-separated from $X_3$ given $(X_1, X_5)$ or $(X_1, X_4, X_5)$.
- $X_3$ is d-separated from $X_4$ given $(X_1, X_5)$ or $(X_1, X_2, X_5)$.
The d-separation rules entailed by $G_1$ but not by $G_2$ are as follows:
- $X_1$ is d-separated from $X_4$ given $(X_2, X_5)$ or $(X_2, X_3, X_5)$.
- $X_3$ is d-separated from $X_4$ given $(X_2, X_5)$.
Furthermore, the d-separation rules entailed by $G_2$ but not by $G_1$ are as follows:
- $X_1$ is d-separated from $X_5$ given $(X_2, X_4)$.
- $X_2$ is d-separated from $X_4$ given $X_1$.
With our choice of distribution, both DAG models $(G_1, \mathbb{P})$ and $(G_2, \mathbb{P})$ satisfy the CMC and it is straightforward to see that $G_2$ has fewer edges than $G_1$ while $G_1$ entails more d-separation rules than $G_2$.
It can be shown by an exhaustive search that there is no graph $G \neq G_2$ with at most as many edges as $G_2$ such that $(G, \mathbb{P})$ satisfies the CMC. Moreover, again by an exhaustive search, $G_1$ uniquely entails the maximum number of d-separation rules amongst graphs satisfying the CMC with respect to $\mathbb{P}$. Therefore $(G_1, \mathbb{P})$ satisfies the [MDR]{} assumption and $(G_2, \mathbb{P})$ satisfies the identifiable [SMR]{} assumption.
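The individual d-separation claims above can be checked mechanically. Below is a small self-contained checker based on the standard moral-graph criterion, run against an edge list for $G_2$ reconstructed from the rules stated above (the reconstruction is our assumption, since the figure does not survive extraction):

```python
def d_separated(edges, x, y, z):
    """Test whether x is d-separated from y given the set z in a DAG.

    Uses the moral-graph criterion: restrict the DAG to the ancestors
    of {x, y} | z, moralize (marry co-parents, drop edge directions),
    delete the nodes in z, and check that x cannot reach y.
    """
    nodes = {u for e in edges for u in e}
    parents = {v: set() for v in nodes}
    for u, v in edges:          # edge u -> v
        parents[v].add(u)

    # Ancestral closure of {x, y} | z.
    anc, stack = set(), [x, y, *z]
    while stack:
        v = stack.pop()
        if v not in anc:
            anc.add(v)
            stack.extend(parents[v])

    # Moralized, undirected adjacency on the ancestral subgraph.
    adj = {v: set() for v in anc}
    for v in anc:
        ps = parents[v] & anc
        for p in ps:
            adj[p].add(v)
            adj[v].add(p)
            adj[p].update(ps - {p})   # marry co-parents

    # Reachability from x to y, avoiding the conditioning set z.
    seen, stack = set(z), [x]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return y not in seen


# Edge list for G_2, reconstructed from the d-separation rules above.
G2 = [("X1", "X2"), ("X1", "X3"), ("X4", "X1"),
      ("X2", "X5"), ("X4", "X5"), ("X5", "X3")]

print(d_separated(G2, "X2", "X4", {"X1"}))        # rule entailed only by G_2
print(d_separated(G2, "X1", "X5", {"X2", "X4"}))  # rule entailed only by G_2
print(d_separated(G2, "X1", "X5", set()))         # False: X1 -> X2 -> X5 is open
```

With this edge set the checker reproduces the rules attributed to $G_2$, which is consistent with (though of course weaker than) the exhaustive-search argument in the text.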
Proof of Lemma \[Lem:Sec4a\] (b) {#Proof:lemma(b)}
---------------------------------
[Figure \[fig:Sec4dA\]: DAGs $G_1$ (left) and $G_2$ (right) on $X_1,\dots,X_{11}$ and $Y$. Both graphs contain the edges $X_1 \to X_2$, $X_1 \to X_4$, $X_2 \to X_5$, $X_4 \to X_5$, $X_5 \to X_3$, $X_1 \to Y$, $Y \to X_5$, $X_2 \to \{X_6, X_7, X_8\}$, $X_3 \to \{X_6, X_7, X_8, X_9, X_{10}, X_{11}\}$ and $X_4 \to \{X_9, X_{10}, X_{11}\}$. $G_1$ additionally contains the edge $X_3 \to X_1$, and its dotted lines indicate virtual edges between $(X_2, X_4)$, $(X_2, Y)$ and $(X_4, Y)$. In $G_2$ the edge $X_3 \to X_1$ is replaced by $X_1 \to X_3$ with weight $\beta_1$, the edge $X_5 \to X_3$ has weight $\beta_2$, and there is an additional edge $X_1 \to X_5$ with weight $\beta_1 \beta_2$.]
Suppose that the pair $(G_2,\mathbb{P})$ is a Gaussian linear DCG model with the edge weights specified in Figure \[fig:Sec4dA\], where the unspecified edge weights can be chosen arbitrarily. Once again, to be explicit, we state all d-separation rules entailed by $G_1$ and $G_2$. Both graphs entail the following d-separation rules:
- For any node $A \in \{X_6,X_7,X_8\}$ and $B \in \{X_1,X_5\}$, $A$ is d-separated from $B$ given $\{X_2, X_3\} \cup C$ for any $C \subset \{ X_1,X_4,X_5,X_6,X_7,X_8, X_9, X_{10}, X_{11},Y \} \setminus \{A,B\}$.
- For any node $A \in \{X_9,X_{10},X_{11}\}$ and $B \in \{X_1,X_5\}$, $A$ is d-separated from $B$ given $\{X_3, X_4\} \cup C$ for any $C \subset \{X_1, X_2, X_3, X_5,X_6,X_7,X_8, X_9, X_{10}, X_{11},Y \} \setminus \{A,B\}$.
- For any nodes $A,B \in \{X_6,X_7, X_8\}$, $A$ is d-separated from $B$ given $\{X_2, X_3\} \cup C$ for any $C \subset \{X_1,X_4,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11},Y \}\setminus\{A,B\}$.
- For any nodes $A,B \in \{X_9,X_{10}, X_{11}\}$, $A$ is d-separated from $B$ given $\{X_3,X_4\} \cup C$ for any $C \subset \{X_1,X_2,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11},Y \}\setminus\{A,B\}$.
- For any nodes $A \in \{X_6,X_7, X_8\}$ and $B \in \{X_4\}$, $A$ is d-separated from $B$ given $\{X_2,X_3\} \cup C$ for any $C \subset \{X_1,X_4,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11},Y \}\setminus\{A,B\}$, or given $\{X_1,X_2,X_5\} \cup D$ for any $D \subset \{X_4,X_6,X_7,X_8,Y \}\setminus\{A,B\}$.
- For any nodes $A \in \{X_6, X_7, X_8\}$ and $B \in \{Y\}$, $A$ is d-separated from $B$ given $\{X_2,X_3\} \cup C$ for any $C \subset \{X_1,X_4,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11},Y \}\setminus\{A,B\}$, or given $\{X_1,X_2,X_5\} \cup D$ for any $D \subset \{X_4,X_6,X_7,X_8,X_9,X_{10},X_{11},Y \}\setminus\{A,B\}$.
- For any nodes $A \in \{X_9,X_{10}, X_{11}\}$ and $B \in \{X_2\}$, $A$ is d-separated from $B$ given $\{X_3,X_4\} \cup C$ for any $C \subset \{X_1,X_2,X_5,X_9,X_{10},X_{11},Y \}\setminus\{A,B\}$, or given $\{X_1,X_4,X_5\} \cup D$ for any $D \subset \{X_2,X_9,X_{10},X_{11},Y \}\setminus\{A,B\}$.
- For any nodes $A \in \{X_9,X_{10}, X_{11}\}$ and $B \in \{Y\}$, $A$ is d-separated from $B$ given $\{X_3,X_4\} \cup C$ for any $C \subset \{X_1,X_2,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11},Y \}\setminus\{A,B\}$, or given $\{X_1,X_4,X_5\} \cup D$ for any $D \subset \{X_2,X_6,X_7,X_8,X_9,X_{10},X_{11},Y \}\setminus\{A,B\}$.
- For any nodes $A\in \{X_6,X_7, X_8\}$, $B \in \{X_9,X_{10}, X_{11}\}$, $A$ is d-separated from $B$ given $\{X_3\} \cup C \cup D$ for $C \subset \{X_1,X_2,X_4\}$, $C \neq \emptyset$ and $D \subset \{X_1,X_2,X_4,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11},Y \}\setminus(\{A,B\}\cup C)$.
- $X_2$ is d-separated from $X_3$ given $\{X_1, X_5\} \cup C$ for any $C \subset \{X_1,X_4,X_5,X_9,X_{10},X_{11},Y\}$.
- $X_3$ is d-separated from $X_4$ given $\{X_1, X_5\} \cup C$ for any $C \subset \{X_1,X_4,X_5,X_6,X_7,X_8,Y\}$.
- $X_3$ is d-separated from $Y$ given $\{X_1, X_5\} \cup C$ for any $C \subset \{X_1,X_4,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11}\}$.
- $X_2$ is d-separated from $X_3$ given $\{X_1, X_5\} \cup C$ for any $C \subset \{X_4,X_9,X_{10},X_{11}, Y\}$.
- $X_4$ is d-separated from $X_3$ given $\{X_1, X_5\} \cup C$ for any $C \subset \{X_2,X_6,X_7,X_8, Y\}$.
- $Y$ is d-separated from $X_3$ given $\{X_1, X_5\} \cup C$ for any $C \subset \{X_2,X_4,X_6,X_7,X_8,X_9,X_{10},X_{11}\}$.
The d-separation rules entailed by $G_1$ but not by $G_2$ are as follows:
- $X_1$ is d-separated from $X_5$ given $\{X_2,X_3,X_4,Y\} \cup C$ for any $C \subset \{X_6,X_7,X_8, X_9,X_{10},X_{11}\}$.
Furthermore, the d-separation rules entailed by $G_2$ but not by $G_1$ are as follows:
- $X_2$ is d-separated from $X_4$ given $X_1$ or $\{ X_1, Y\}$.
- $X_2$ is d-separated from $Y $ given $X_1$ or $\{ X_1, X_4\}$.
- $X_4$ is d-separated from $Y $ given $X_1$ or $\{ X_1, X_2\}$.
It can then be shown, using the coefficients specified for $G_2$ in Figure \[fig:Sec4dA\], that $CI(\mathbb{P})$ is the union of the CI statements implied by the d-separation rules entailed by $G_1$ and $G_2$. Therefore both $(G_1,\mathbb{P})$ and $(G_2,\mathbb{P})$ satisfy the CMC. It is straightforward to see that $G_2$ is sparser than $G_1$, while $G_1$ entails more d-separation rules than $G_2$.
Now we prove that $(G_1, \mathbb{P})$ satisfies the [MDR]{} assumption and $(G_2, \mathbb{P})$ satisfies the identifiable [SMR]{} assumption. First we prove that $(G_2, \mathbb{P})$ satisfies the identifiable [SMR]{} assumption. Suppose that $(G_2,\mathbb{P})$ does not satisfy the identifiable [SMR]{} assumption. Then there exists a graph $G$ such that $(G, \mathbb{P})$ satisfies the CMC and $G$ has at most as many edges as $G_2$. Since the only CI statements that are not implied by the d-separation rules of $G_2$ are $X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_5 \mid \{X_2,X_3,X_4,Y\} \cup C$ for any $C \subset \{X_6,X_7,X_8, X_9,X_{10},X_{11}\}$, and $(G, \mathbb{P})$ satisfies the CMC, we consider two cases: graphs with an edge between $(X_1, X_5)$ and graphs without one. We first consider a graph without an edge between $(X_1, X_5)$. Since $G$ has no edge between $(X_1, X_5)$, by Lemma \[Lem:Sec3a\] $G$ must entail at least one d-separation rule from (a): $X_1$ is d-separated from $X_5$ given $\{X_2,X_3,X_4,Y\} \cup C$ for some $C \subset \{X_6,X_7,X_8, X_9,X_{10},X_{11}\}$. If $G$ has no edge between $(X_2, X_3)$, then by Lemma \[Lem:Sec3a\] $G$ must entail at least one d-separation rule from (10): $X_2$ is d-separated from $X_3$ given $\{X_1, X_5\} \cup C$ for some $C \subset \{X_1,X_4,X_5,X_9,X_{10},X_{11},Y\}$. These two sets of d-separation rules can coexist only if a cycle $X_1 \to X_2 \to X_5 \to X_3 \to X_1$ or $X_1 \leftarrow X_2 \leftarrow X_5 \leftarrow X_3 \leftarrow X_1$ exists. In the same way, if $G$ has no edges between $(X_3, X_4)$ and $(X_3, Y)$, there must be cycles $X_1 \to A \to X_5 \to X_3 \to X_1$ or $X_1 \leftarrow A \leftarrow X_5 \leftarrow X_3 \leftarrow X_1$ for each $A \in \{X_4, Y\}$, as occurs in $G_1$. However, these cycles create virtual edges between $(X_2, X_4)$, $(X_2, Y)$ or $(X_4, Y)$, as occurs in $G_1$.
Therefore $G$ must have at least three additional edges, real or virtual. This contradicts the assumption that $G$ has at most as many edges as $G_2$.
Secondly, we consider a graph $G$ with an edge between $(X_1, X_5)$ such that $(G, \mathbb{P})$ satisfies the CMC and $G$ has fewer edges than $G_2$. Note that $G_1$ entails the maximum number of d-separation rules amongst graphs with an edge between $(X_1, X_5)$ satisfying the CMC, because $CI(\mathbb{P}) \setminus \{X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_5 \mid \{X_2,X_3,X_4,Y\} \cup C \textrm{ for any } C \subset \{X_6,X_7,X_8, X_9, X_{10},X_{11}\}\}$ exactly matches the set of d-separation rules entailed by $G_1$. This implies $D_{sep}(G) \subset D_{sep}(G_1)$ and $D_{sep}(G) \neq D_{sep}(G_1)$. By Lemma \[Lem:Sec3b\], $G$ cannot contain fewer edges than $G_1$. Since $G_2$ has fewer edges than $G_1$, this contradicts the assumption that $G$ has at most as many edges as $G_2$. Therefore $(G_2,\mathbb{P})$ satisfies the identifiable [SMR]{} assumption.
Now we prove that $(G_1, \mathbb{P})$ satisfies the [MDR]{} assumption. Suppose that $(G_1, \mathbb{P})$ fails to satisfy the [MDR]{} assumption. Then there is a graph $G$ such that $(G, \mathbb{P})$ satisfies the CMC and $G$ entails at least as many d-separation rules as $G_1$. Since $(G, \mathbb{P})$ satisfies the CMC, in order for $G$ to entail at least as many d-separation rules as $G_1$, $G$ must entail at least one d-separation rule from each of (b) $X_2$ is d-separated from $X_4$ given $X_1$ or $\{ X_1, Y\}$, (c) $X_2$ is d-separated from $Y$ given $X_1$ or $\{ X_1, X_4\}$, and (d) $X_4$ is d-separated from $Y$ given $X_1$ or $\{ X_1, X_2\}$. By Lemma \[Lem:Sec3a\], this implies that $G$ has no edge between $(X_2, X_4)$, $(X_2, Y)$ or $(X_4, Y)$. As we discussed, no graph satisfies the CMC without the edges $(X_2, X_4)$, $(X_2, Y)$, $(X_4, Y)$ and $(X_1, X_5)$ unless it has additional edges, as occurs in $G_1$. Note that $G$ entails at most six more d-separation rules than $G_1$ (the total number of d-separation rules in (b), (c) and (d)). However, adding any edge to $G$ destroys more than six d-separation rules, because by Lemma \[Lem:Sec3a\] $G$ loses an entire set of d-separation rules from the sets (1) to (15), each of which contains more than six d-separation rules. This contradicts the assumption that $G$ entails at least as many d-separation rules as $G_1$.
Friday, June 1, 2012
Song Story: 'Glory Hallelujah'
Unity is huge. It's not just huge in sports teams and in successful businesses. It's not just huge in committees or even families. Unity is huge to God. It's huge in churches, from the leadership all the way to the last attender and even more it's huge in terms of the Church with a big 'C'... the collective Christ-followers and the churches in which they worship throughout our nation and world. The verses in God's Word that discuss the importance of unity are prolific and the urgency with which the concept is discussed is palpable.
All that said, I wanted to write a worship song that, at its core, could help unify the congregation singing it. I used 'we' language on this one--something I haven't used a ton in the past--because the lyric and theme begged for it, and I searched for words that could articulate the depth of unity that I believe God desires from us. I think the lyric I'm most proud of in this particular song kicks off the second verse:
All of our brothers and sisters through time have sung of the blood of the same sacrifice
This lyric speaks to the beautiful truth that singing of His love and sacrifice for us binds believers together in a way that transcends even time. No matter what melody is being sung, no matter what chords are being played by what instruments, believers have been uniting together for centuries singing about the truths of Christ's glorious death and resurrection and all that they imply. To me that's an amazing reality, and one worth giving some serious real estate in our church services!
As for the song-writing fodder I promised... this one was fun to play around with as I wrote it. I used a hemiola passage in the verse (played quietly with a wurlitzer) and bridge (a little more apparent from an electric guitar) with three notes being repeated all the way through large 4/4 phrases. I also truncated all of the phrases in the bridge--all of them three bars instead of four--just to add to the urgency of the concept sung there. One of the things I think is most fun, though, about this tune is that the chords in the second verse are quite different from the first even though the melody is identical... capped off with a 2sus chord replacing the typical 5 in the first verse. Yup... theory geek stuff for sure!
Hope that gives you some insight into the first song on the project! Come back and visit soon as I'll be discussing my first single next time... track #2, entitled 'Here With You.'
who's Mark Roach?
I'm a Christ-following husband, father, songwriter, worship leader and St. Louis Blues fan.
It's worth noting that, while I am the Worship Arts Director at MSC, all the stuff found here is me... just me... and shouldn't be taken to represent MSC or any other entity, really... beyond just me. :)