Dymas
In Greek mythology, Dymas (Ancient Greek: Δύμας) is the name attributed to the following individuals:
Dymas, a Mariandynian who warned the Argonauts about the cruelty of Amycus, king of the Bebrycians. Both Mariandynians and Bebrycians lived in northwestern Asia Minor.
Dymas, a soldier who fought on the side of the Seven Against Thebes. He took part in the foot-race at Opheltes' funeral games in Nemea. Dymas was wounded in battle and killed himself when the enemy started questioning him.
Dymas, a Dorian and the ancestor of the Dymanes. His father, Aegimius, adopted Heracles' son, Hyllus. Dymas and his brother, Pamphylus, submitted to Hyllus.
Dymas, king of Phrygia and father of Hecuba.
Dymas, perhaps the same as the first. According to Quintus Smyrnaeus, this Dymas was the father of Meges, a Trojan whose sons fought at Troy.
Dymas, an Aulian warrior, who came to fight at Troy under the leadership of Archesilaus. He died at the hands of Aeneas.
Dymas, a Trojan soldier who fought with Aeneas and was killed at Troy.
Dymas, mentioned in Homer's Odyssey as a Phaeacian captain whose daughter was a friend of the princess Nausicaa.
Category:Kings of Phrygia
Category:Characters in Greek mythology
Category:Dorian mythology
|
{
"pile_set_name": "wikipedia_en"
}
|
Sand Ridge State Forest
Sand Ridge State Forest is a conservation area in the U.S. state of Illinois. Containing , it is the largest state forest in Illinois. It is located in northern Mason County; the nearest town is Manito, Illinois, and the nearest numbered highway is U.S. Highway 136. The forest sits on a low bluff, or "sand ridge", overlooking the Illinois River, hence the name. The sand ridge is believed to be an artifact of the post-glacial Kankakee Torrent.
The Sand Ridge State Forest largely dates back to 1939, when the state of Illinois purchased parcels of submarginal sandy farmland for conservation purposes. The Civilian Conservation Corps planted pine trees on much of the land. Today, the state forest contains of dryland oak-hickory woodlands, of pine woodlands, and of open fields and sand prairies. Native species include the prickly pear cactus (Opuntia), a plant more often associated with Mexico and the U.S. Southwest.
The Sand Ridge State Forest contains the Clear Lake Site, an archeological site listed on the National Register of Historic Places.
Current status
As of the 2010s, Sand Ridge is managed by the Illinois Department of Natural Resources (IDNR) as open space for active recreation, especially whitetail deer hunting. Revis Hill Prairie, also located within Mason County, is operated by IDNR as a disjunct area of Sand Ridge State Forest.
In early 2012, Sand Ridge State Forest lost about to a fire started by a man burning brush in high winds.
External links
Illinois DNR Sand Ridge State Forest site
Category:1939 establishments in Illinois
Category:Civilian Conservation Corps in Illinois
Category:Illinois River
Category:Illinois state forests
Category:Protected areas established in 1939
Category:Protected areas of Mason County, Illinois
|
{
"pile_set_name": "wikipedia_en"
}
|
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX ns: <http://example.org/ns#>

# Select the title and discounted price of every item
# whose discounted price is below 20.
SELECT ?title ?price
WHERE {
  ?x ns:price ?p .
  ?x ns:discount ?discount .
  # Bind the computed discounted price to ?price so it can be
  # filtered on and projected.
  BIND (?p * (1 - ?discount) AS ?price)
  FILTER (?price < 20)
  ?x dc:title ?title .
}
|
{
"pile_set_name": "github"
}
|
#include <bits/stdc++.h>
#define sd(x) scanf("%d",&x)
#define sd2(x,y) scanf("%d%d",&x,&y)
#define sd3(x,y,z) scanf("%d%d%d",&x,&y,&z)
#define fi first
#define se second
#define pb(x) push_back(x)
#define mp(x,y) make_pair(x,y)
#define LET(x, a) __typeof(a) x(a)
#define foreach(it, v) for(LET(it, v.begin()); it != v.end(); it++)
#define _ ios_base::sync_with_stdio(false);cin.tie(NULL);cout.tie(NULL);
#define __ freopen("input.txt","r",stdin);freopen("output.txt","w",stdout);
#define func __FUNCTION__
#define line __LINE__
using namespace std;
template<typename S, typename T>
ostream& operator<<(ostream& out, pair<S, T> const& p){out<<'('<<p.fi<<", "<<p.se<<')'; return out;}
template<typename T>
ostream& operator<<(ostream& out, vector<T> const & v){
int l = v.size(); for(int i = 0; i < l-1; i++) out<<v[i]<<' '; if(l>0) out<<v[l-1]; return out;}
void tr(){cout << endl;}
template<typename S, typename ... Strings>
void tr(S x, const Strings&... rest){cout<<x<<' ';tr(rest...);}
const int N = 100100;
int n, p;
int l[N], r[N];
int main(){
    sd2(n, p);
    for(int i = 0; i < n; i++){
        sd2(l[i], r[i]);
    }
    // Close the cycle: treat range n as a copy of range 0,
    // so the loop below covers every adjacent pair.
    l[n] = l[0];
    r[n] = r[0];
    long double res = 0;
    for(int i = 1; i <= n; i++){
        // v1, v2: number of multiples of p in [l[i], r[i]] and [l[i-1], r[i-1]].
        long long v1 = (r[i]/p) - ((l[i]-1)/p);
        long long v2 = (r[i-1]/p) - ((l[i-1]-1)/p);
        long long l1 = r[i]-l[i]+1;     // size of range i
        long long l2 = r[i-1]-l[i-1]+1; // size of range i-1
        // t: pairs drawn from the two ranges with neither value divisible by p.
        long long t = (l1-v1)*(l2-v2);
        // prob: probability that at least one value of the pair is divisible
        // by p (renamed from `p`, which shadowed the global divisor).
        long double prob = 1.0L - (long double) t / (long double) (l1*l2);
        res += prob * 2000;
    }
    printf("%.9f\n", (double)res);
    return 0;
}
|
{
"pile_set_name": "github"
}
|
log.level=${log.level}
log.path=${log.path}
dubbo.registry.address=${dubbo.registry.address}
dubbo.protocal.port=${dubbo.protocal.port}
dubbo.service.version=${dubbo.service.version}
ws.connect.path=${ws.connect.path}
ws.connect.port=${ws.connect.port}
ws.connect.bus.port=${ws.connect.bus.port}
service.name=ws_server
service.version=1.0
service.bus.name=bus_ws_server
service.bus.version=1.0
consul.host=${consul.host}
consul.port=${consul.port}
|
{
"pile_set_name": "github"
}
|
**A subdiffusive behaviour of recurrent random walk**
**in random environment on a regular tree**
by
Yueyun Hu $\;$and$\;$ Zhan Shi
*Université Paris XIII & Université Paris VI*
This version: March 11, 2006
[***Summary.***]{} We are interested in the random walk in random environment on an infinite tree. Lyons and Pemantle [@lyons-pemantle] give a precise recurrence/transience criterion. Our paper focuses on the almost sure asymptotic behaviours of a recurrent random walk $(X_n)$ in random environment on a regular tree, which is closely related to Mandelbrot [@mandelbrot]’s multiplicative cascade. We prove, under some general assumptions upon the distribution of the environment, the existence of a new exponent $\nu\in (0, {1\over 2}]$ such that $\max_{0\le i \le n} |X_i|$ behaves asymptotically like $n^{\nu}$. The value of $\nu$ is explicitly formulated in terms of the distribution of the environment.
[***Keywords.***]{} Random walk, random environment, tree, Mandelbrot’s multiplicative cascade.
[***2000 Mathematics Subject Classification.***]{} 60K37, 60G50.
Introduction {#s:intro}
============
Random walk in random environment (RWRE) is a fundamental object in the study of random phenomena in random media. RWRE on $\z$ exhibits rich regimes in the transient case (Kesten, Kozlov and Spitzer [@kesten-kozlov-spitzer]), as well as a slow logarithmic movement in the recurrent case (Sinai [@sinai]). On $\z^d$ (for $d\ge 2$), the study of RWRE remains a big challenge to mathematicians (Sznitman [@sznitman], Zeitouni [@zeitouni]). The present paper focuses on RWRE on a regular rooted tree, which can be viewed as an infinite-dimensional RWRE. Our main result reveals a rich regime à la Kesten–Kozlov–Spitzer, but this time even in the recurrent case; it also strongly suggests the existence of a slow logarithmic regime à la Sinai.
Let $\T$ be a $\deg$-ary tree ($\deg\ge 2$) rooted at $e$. For any vertex $x\in \T \backslash \{ e\}$, let ${\buildrel \leftarrow \over x}$ denote the first vertex on the shortest path from $x$ to the root $e$, and $|x|$ the number of edges on this path (notation: $|e|:= 0$). Thus, each vertex $x\in \T \backslash \{ e\}$ has one parent ${\buildrel \leftarrow \over x}$ and $\deg$ children, whereas the root $e$ has $\deg$ children but no parent. We also write ${\buildrel \Leftarrow \over x}$ for the parent of ${\buildrel \leftarrow \over x}$ (for $x\in \T$ such that $|x|\ge 2$).
Let $\omega:= (\omega(x,y), \, x,y\in \T)$ be a family of non-negative random variables such that $\sum_{y\in \T} \omega(x,y)=1$ for any $x\in \T$. Given a realization of $\omega$, we define a Markov chain $X:= (X_n, \, n\ge 0)$ on $\T$ by $X_0 =e$, and whose transition probabilities are $$P_\omega(X_{n+1}= y \, | \, X_n =x) = \omega(x, y) .$$
Let $\P$ denote the distribution of $\omega$, and let $\p (\cdot) := \int P_\omega (\cdot) \P(\! \d \omega)$. The process $X$ is a $\T$-valued RWRE. (By informally taking $\deg=1$, $X$ would become a usual RWRE on the half-line $\z_+$.)
For general properties of tree-valued processes, we refer to Peres [@peres] and Lyons and Peres [@lyons-peres]. See also Duquesne and Le Gall [@duquesne-le-gall] and Le Gall [@le-gall] for continuous random trees. For a list of motivations to study RWRE on a tree, see Pemantle and Peres [@pemantle-peres1], p. 106.
We define $$A(x) := {\omega({\buildrel \leftarrow \over x},
x) \over \omega({\buildrel \leftarrow \over x},
{\buildrel \Leftarrow \over x})} , \qquad
x\in \T, \; |x|\ge 2.
\label{A}$$
Following Lyons and Pemantle [@lyons-pemantle], we assume throughout the paper that $(\omega(x,\bullet))_{x\in \T\backslash \{ e\} }$ is a family of i.i.d. [*non-degenerate*]{} random vectors and that $(A(x), \; x\in \T, \; |x|\ge 2)$ are identically distributed. We also assume the existence of $\varepsilon_0>0$ such that $\omega(x,y) \ge \varepsilon_0$ if either $x= {\buildrel \leftarrow \over y}$ or $y= {\buildrel \leftarrow \over x}$, and $\omega(x,y) =0$ otherwise; in words, $(X_n)$ is a nearest-neighbour walk, satisfying an ellipticity condition.
Let $A$ denote a generic random variable having the common distribution of $A(x)$ (for $|x| \ge 2$). Define $$p := \inf_{t\in [0,1]} \E (A^t) .
\label{p}$$
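[**Example.**]{} (A toy illustration; the two-point distribution below is our own choice, made only to show how $p$ is computed.) Take $\deg=2$ and let $A$ be uniformly distributed on $\{ {1\over 9}, \, {1\over 4} \}$. Since $A<1$ almost surely, $t\mapsto \E(A^t)$ is decreasing on $[0,1]$, so the infimum in (\[p\]) is attained at $t=1$: $$p = \E(A) = {1\over 2} \left( {1\over 9} + {1\over 4} \right) = {13\over 72} < {1\over \deg} \, ,$$ and Theorem A below shows that such an environment yields a positive recurrent walk.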
We recall a recurrence/transience criterion from Lyons and Pemantle ([@lyons-pemantle], Theorem 1 and Proposition 2).
[**Theorem A (Lyons and Pemantle [@lyons-pemantle])**]{} [*With $\p$-probability one, the walk $(X_n)$ is recurrent or transient, according to whether $p\le {1\over \deg}$ or $p>{1\over \deg}$. It is, moreover, positive recurrent if $p<{1\over \deg}$.*]{}
We study the recurrent case $p\le {1\over \deg}$ in this paper. Our first result, which is not deep, concerns the positive recurrent case $p< {1\over \deg}$.
\[t:posrec\] If $p<{1\over \deg}$, then $$\lim_{n\to \infty} \, {1\over \log n} \,
\max_{0\le i\le n} |X_i| =
{1\over \log[1/(q\deg)]},
\qquad \hbox{\rm $\p$-a.s.},
\label{posrec}$$ where the constant $q$ is defined in $(\ref{q})$, and lies in $(0, {1\over \deg})$ when $p<{1\over
\deg}$.
Despite the warning of Pemantle [@pemantle] (“there are many papers proving results on trees as a somewhat unmotivated alternative …to Euclidean space"), it seems to be of particular interest to study the more delicate situation $p={1\over \deg}$ that turns out to possess rich regimes. We prove that, similarly to the Kesten–Kozlov–Spitzer theorem for [*transient*]{} RWRE on the line, $(X_n)$ enjoys, even in the recurrent case, an interesting subdiffusive behaviour.
To state our main result, we define $$\begin{aligned}
\kappa
&:=& \inf\left\{ t>1: \; \E(A^t) = {1\over \deg}
\right\} \in (1, \infty], \qquad (\inf
\emptyset=\infty)
\label{kappa}
\\
\psi(t)
&:=& \log \E \left( A^t \right) , \qquad t\ge 0.
\label{psi}\end{aligned}$$
We use the notation $a_n \approx b_n$ to denote $\lim_{n\to \infty} \, {\log a_n \over \log b_n} =1$.
\[t:nullrec\] If $p={1\over \deg}$ and if $\psi'(1)<0$, then $$\max_{0\le i\le n} |X_i| \; \approx\; n^\nu,
\qquad \hbox{\rm $\p$-a.s.},
\label{nullrec}$$ where $\nu=\nu(\kappa)$ is defined by $$\nu := 1- {1\over \min\{ \kappa, 2\} } =
\left\{
\begin{array}{ll}
(\kappa-1)/\kappa,
& \mbox{if $\;\kappa \in (1,2]$},
\\
\\
1/2
& \mbox{if $\;\kappa \in (2, \infty].$}
\end{array} \right.
\label{theta}$$
[**Remark.**]{} (i) It is known (Menshikov and Petritis [@menshikov-petritis]) that if $p={1\over \deg}$ and $\psi'(1)<0$, then for $\P$-almost all environment $\omega$, $(X_n)$ is null recurrent.
\(ii) For the value of $\kappa$, see Figure 1. Under the assumptions $p={1\over \deg}$ and $\psi'(1)<0$, the value of $\kappa$ lies in $(2, \infty]$ if and only if $\E (A^2) < {1\over \deg}$; and $\kappa=\infty$ if moreover $\hbox{ess sup}(A) \le 1$.
\(iii) Since the walk is recurrent, $\max_{0\le i\le n} |X_i|$ cannot be replaced by $|X_n|$ in (\[posrec\]) and (\[nullrec\]).
\(iv) Theorem \[t:nullrec\], which could be considered as a (weaker) analogue of the Kesten–Kozlov–Spitzer theorem, shows that tree-valued RWRE has even richer regimes than RWRE on $\z$. In fact, recurrent RWRE on $\z$ is of order of magnitude $(\log n)^2$, and has no $n^a$ (for $0<a<1$) regime.
\(v) The case $\psi'(1)\ge 0$ leads to a phenomenon similar to Sinai’s slow movement, and is studied in a forthcoming paper.
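[**Example.**]{} (A toy illustration; the distribution below is our own choice.) Take $\deg=2$ and let $A$ be uniformly distributed on $\{ {1\over 3}, \, {2\over 3} \}$. Then $p=\E(A)={1\over 2}={1\over \deg}$ (the infimum being attained at $t=1$, since $A<1$ almost surely), and $\psi'(1) = \E(A\log A)/\E(A) < 0$. Moreover, $\E(A^t) < {1\over \deg}$ for every $t>1$, so $\kappa=\infty$ in (\[kappa\]), in agreement with Remark (ii) above (here $\hbox{ess sup}(A) = {2\over 3} \le 1$). Theorem \[t:nullrec\] then gives $\nu={1\over 2}$, i.e. $\max_{0\le i\le n} |X_i| \approx n^{1/2}$, $\p$-a.s.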
The rest of the paper is organized as follows. Section \[s:posrec\] is devoted to the proof of Theorem \[t:posrec\]. In Section \[s:proba\], we collect some elementary inequalities, which will be of frequent use later on. Theorem \[t:nullrec\] is proved in Section \[s:nullrec\], by means of a result (Proposition \[p:beta-gamma\]) concerning the solution of a recurrence equation which is closely related to Mandelbrot’s multiplicative cascade. We prove Proposition \[p:beta-gamma\] in Section \[s:beta-gamma\].
Throughout the paper, $c$ (possibly with a subscript) denotes a finite and positive constant; we write $c(\omega)$ instead of $c$ when the value of $c$ depends on the environment $\omega$.
Proof of Theorem \[t:posrec\] {#s:posrec}
=============================
We first introduce the constant $q$ in the statement of Theorem \[t:posrec\], which is defined without the assumption $p< {1\over \deg}$. Let $$\varrho(r) := \inf_{t\ge 0} \left\{ r^{-t} \, \E(A^t) \right\} , \qquad r>0.$$
Let $\underline{r} >0$ be such that $$\log \underline{r} = \E(\log A) .$$
We mention that $\varrho(r)=1$ for $r\in (0, \underline{r}]$, and that $\varrho(\cdot)$ is continuous and (strictly) decreasing on $[\underline{r}, \, \Theta)$ (where $\Theta:= \hbox{ess sup}(A) < \infty$), and $\varrho(\Theta) = \P (A= \Theta)$. Moreover, $\varrho(r)=0$ for $r> \Theta$. See Chernoff [@chernoff].
We define $$\overline{r} := \inf\left\{ r>0: \; \varrho(r) \le {1\over \deg} \right\}.$$
Clearly, $\underline{r} < \overline{r}$.
We define $$q:= \sup_{r\in [\underline{r}, \, \overline{r}]}
r \varrho(r).
\label{q}$$
The following elementary lemma tells us that, instead of $p$, we can also use $q$ in the recurrence/transience criterion of Lyons and Pemantle.
\[l:pq\] We have $q>{1\over \deg}$ $($resp., $q={1\over
\deg}$, $q<{1\over \deg})$ if and only if $p>{1\over \deg}$ $($resp., $p={1\over \deg}$, $p<{1\over \deg})$.
[*Proof of Lemma \[l:pq\].*]{} By Lyons and Pemantle ([@lyons-pemantle], p. 129), $p= \sup_{r\in (0, \, 1]} r \varrho (r)$. Since $\varrho(r) =1$ for $r\in (0, \, \underline{r}]$, there exists $\min\{\underline{r}, 1\}\le r^* \le 1$ such that $p= r^* \varrho (r^*)$.
\(i) Assume $p<{1\over \deg}$. Then $\varrho (1) \le \sup_{r\in (0, \, 1]} r \varrho (r) = p < {1\over \deg}$, which, by definition of $\overline{r}$, implies $\overline{r} < 1$. Therefore, $q \le p <{1\over \deg}$.
\(ii) Assume $p\ge {1\over \deg}$. We have $\varrho (r^*) \ge p \ge {1\over \deg}$, which yields $r^* \le \overline{r}$. If $\underline{r} \le 1$, then $r^*\ge \underline{r}$, and thus $p=r^* \varrho (r^*) \le q$. If $\underline{r} > 1$, then $p=1$, and thus $q\ge \underline{r}\, \varrho (\underline{r}) = \underline{r} > 1=p$.
We have therefore proved that $p\ge {1\over \deg}$ implies $q\ge p$.
If moreover $p>{1\over \deg}$, then $q \ge p>{1\over \deg}$.
\(iii) Assume $p={1\over \deg}$. We already know from (ii) that $q \ge p$.
On the other hand, $\varrho (1) \le \sup_{r\in (0, \, 1]} r \varrho (r) = p = {1\over \deg}$, implying $\overline{r} \le 1$. Thus $q \le p$.
As a consequence, $q=p={1\over \deg}$.$\Box$
Having defined $q$, the next step in the proof of Theorem \[t:posrec\] is to compute invariant measures $\pi$ for $(X_n)$. We first introduce some notation on the tree. For any $m\ge 0$, let $$\T_m := \left\{x \in \T: \; |x| = m \right\} .$$
For any $x\in \T$, let $\{ x_i \}_{1\le i\le \deg}$ be the set of children of $x$.
If $\pi$ is an invariant measure, then $$\pi (x) = {\omega ({\buildrel \leftarrow \over x}, x) \over \omega (x, {\buildrel \leftarrow \over x})} \, \pi({\buildrel \leftarrow \over x}), \qquad \forall \, x\in \T \backslash \{ e\}.$$
By induction, this leads to (recalling $A$ from (\[A\])): for $x\in \T_m$ ($m\ge 1$), $$\pi (x) = {\pi(e)\over \omega (x, {\buildrel \leftarrow \over x})} {\omega (e, x^{(1)}) \over A(x^{(1)})} \exp\left( \, \sum_{z\in ]\! ] e, x]\! ]} \log A(z) \right) ,$$
where $]\! ] e, x]\! ]$ denotes the shortest path $x^{(1)}$, $x^{(2)}$, $\cdots$, $x^{(m)} =: x$ from the root $e$ (but excluded) to the vertex $x$. The identity holds for [*any*]{} choice of $(A(e_i), \, 1\le i\le \deg)$. We choose $(A(e_i), \, 1\le i\le \deg)$ to be a random vector independent of $(\omega(x,y), \, |x|\ge 1, \, y\in \T)$, and distributed as $(A(x_i), \, 1\le i\le \deg)$, for any $x\in \T_m$ with $m\ge 1$.
By the ellipticity condition on the environment, we can take $\pi(e)$ to be sufficiently small so that for some $c_0\in (0, 1]$, $$c_0\, \exp\left( \, \sum_{z\in
]\! ] e, x]\! ]} \log A(z) \right) \le \pi (x)
\le \exp\left( \, \sum_{z\in
]\! ] e, x]\! ]} \log A(z) \right) .
\label{pi}$$
By Chebyshev’s inequality, for any $r>\underline{r}$, $$\max_{x\in \T_n} \P \left\{ \pi (x)
\ge r^n\right\} \le \varrho(r)^n.
\label{chernoff}$$
Since $\# \T_n = \deg^n$, this gives $\E (\#\{ x\in \T_n: \; \pi (x)\ge r^n \} ) \le \deg^n \varrho(r)^n$. By Chebyshev’s inequality and the Borel–Cantelli lemma, for any $r>\underline{r}$ and $\P$-almost surely for all large $n$, $$\#\left\{ x\in \T_n: \; \pi (x) \ge r^n \right\}
\le n^2 \deg^n \varrho(r)^n.
\label{Jn-ub1}$$
On the other hand, by (\[chernoff\]), $$\P \left\{ \exists x\in \T_n: \pi (x) \ge r^n\right\} \le \deg^n \varrho (r)^n.$$
For $r> \overline{r}$, the expression on the right-hand side is summable in $n$. By the Borel–Cantelli lemma, for any $r>\overline{r}$ and $\P$-almost surely for all large $n$, $$\max_{x\in \T_n} \pi (x) < r^n.
\label{Jn-ub}$$
[*Proof of Theorem \[t:posrec\]: upper bound.*]{} Fix $\varepsilon>0$ such that $q+ 3\varepsilon < {1\over \deg}$.
We follow the strategy given in Liggett ([@liggett], p. 103) by introducing a positive recurrent birth-and-death chain $(\widetilde{X_j}, \, j\ge 0)$, starting from $0$, with transition probability from $i$ to $i+1$ (for $i\ge 1$) equal to $${1\over \widetilde{\pi} (i)} \, \sum_{x\in \T_i} \pi(x) (1- \omega(x, {\buildrel \leftarrow \over x})) ,$$
where $\widetilde{\pi} (i) := \sum_{x\in \T_i} \pi(x)$. We note that $\widetilde{\pi}$ is a finite invariant measure for $(\widetilde{X_j})$. Let $$\tau_n := \inf \left\{ i\ge 1: \, X_i \in \T_n\right\}, \qquad n\ge 0.$$
By Liggett ([@liggett], Theorem II.6.10), for any $n\ge 1$, $$P_\omega (\tau_n< \tau_0) \le \widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0),$$
where $\widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0)$ is the probability that $(\widetilde{X_j})$ hits $n$ before returning to $0$. According to Hoel et al. ([@hoel-port-stone], p. 32, Formula (61)), $$\widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0) = c_1(\omega) \left( \, \sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x) (1- \omega(x, {\buildrel \leftarrow \over x}))} \right)^{\! \! -1} ,$$
where $c_1(\omega)\in (0, \infty)$ depends on $\omega$. We arrive at the following estimate: for any $n\ge 1$, $$P_\omega (\tau_n< \tau_0) \le c_1(\omega) \,
\left( \, \sum_{i=0}^{n-1} {1\over \sum_{x\in
\T_i} \pi(x)} \right)^{\! \! -1} .
\label{liggett}$$
We now estimate $\sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)}$. For any fixed $0=r_0< \underline{r} < r_1 < \cdots < r_\ell = \overline{r} <r_{\ell +1}$, $$\sum_{x\in \T_i} \pi(x) \le \sum_{j=1}^{\ell+1} (r_j)^i \# \left\{ x\in \T_i: \pi(x) \ge (r_{j-1})^i \right\} + \sum_{x\in \T_i: \, \pi(x) \ge (r_{\ell +1})^i} \pi(x).$$
By (\[Jn-ub\]), $\sum_{x\in \T_i: \, \pi(x) \ge (r_{\ell +1})^i} \pi(x) =0$ $\P$-almost surely for all large $i$. It follows from (\[Jn-ub1\]) that $\P$-almost surely, for all large $i$, $$\sum_{x\in \T_i} \pi(x) \le (r_1)^i \deg^i + \sum_{j=2}^{\ell+1} (r_j)^i i^2 \, \deg^i \varrho (r_{j-1})^i.$$
Recall that $q= \sup_{r\in [\underline{r}, \, \overline{r}] } r \, \varrho(r) \ge \underline{r} \, \varrho (\underline{r}) = \underline{r}$. We choose $r_1:= \underline{r} + \varepsilon \le q+\varepsilon$. We also choose $\ell$ sufficiently large and $(r_j)$ sufficiently close to each other so that $r_j \, \varrho(r_{j-1}) < q+\varepsilon$ for all $2\le j\le \ell+1$. Thus, $\P$-almost surely for all large $i$, $$\sum_{x\in \T_i} \pi(x) \le (r_1)^i \deg^i + \sum_{j=2}^{\ell+1} i^2 \, \deg^i (q+\varepsilon)^i = (r_1)^i \deg^i + \ell \, i^2 \, \deg^i (q+\varepsilon)^i,$$
which implies (recall: $\deg(q+\varepsilon)<1$) that $\sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)} \ge {c_2\over n^2\, \deg^n (q+\varepsilon)^n}$. Plugging this into (\[liggett\]) yields that, $\P$-almost surely for all large $n$, $$P_\omega (\tau_n< \tau_0) \le c_3(\omega)\, n^2\, \deg^n (q+\varepsilon)^n \le [(q+2\varepsilon)\deg]^n.$$
In particular, by writing $L(\tau_n):= \# \{ 1\le i \le \tau_n: \, X_i = e\}$, we obtain: $$P_\omega \left\{ L(\tau_n) \ge j \right\} = \left[ P_\omega (\tau_n> \tau_0) \right]^j \ge \left\{ 1- [(q+2\varepsilon)\deg]^n \right\}^j ,$$
which, by the Borel–Cantelli lemma, yields that, $\P$-almost surely for all large $n$, $$L(\tau_n) \ge {1\over [(q+3\varepsilon) \deg]^n} , \qquad \hbox{\rm $P_\omega$-a.s.}$$
Since $\{ L(\tau_n) \ge j \} \subset \{ \max_{0\le k \le 2j} |X_k| < n\}$, and since $\varepsilon$ can be as close to 0 as possible, we obtain the upper bound in Theorem \[t:posrec\].$\Box$
[*Proof of Theorem \[t:posrec\]: lower bound.*]{} Assume $p< {1\over \deg}$. Recall that in this case, we have $\overline{r}<1$. Let $\varepsilon>0$ be small. Let $r \in (\underline{r}, \, \overline{r})$ be such that $\varrho(r) > {1\over \deg} \ee^\varepsilon$ and that $r\varrho(r) \ge q\ee^{-\varepsilon}$. Let $L$ be a large integer with $\deg^{-1/L} \ge \ee^{-\varepsilon}$ and satisfying (\[GW\]) below.
We start by constructing a Galton–Watson tree $\G$, which is a certain subtree of $\T$. The first generation of $\G$, denoted by $\G_1$ and defined below, consists of vertices $x\in \T_L$ satisfying a certain property. The second generation of $\G$ is formed by applying the same procedure to each element of $\G_1$, and so on. To be precise, $$\G_1 = \G_1 (L,r) := \left\{ x\in \T_L: \, \min_{z\in ]\! ] e, \, x ]\! ]} \prod_{y\in ]\! ] e, \, z]\! ]} A(y) \ge r^L \right\} ,$$
where $]\! ]e, \, x ]\! ]$ denotes as before the set of vertices (excluding $e$) lying on the shortest path relating $e$ and $x$. More generally, if $\G_i$ denotes the $i$-th generation of $\G$, then $$\G_{n+1} := \bigcup_{u\in \G_n } \left\{ x\in \T_{(n+1)L}: \, \min_{z\in ]\! ] u, \, x ]\! ]} \prod_{y\in ]\! ] u, \, z]\! ]} A(y) \ge r^L \right\} , \qquad n=1,2, \dots$$
We claim that it is possible to choose $L$ sufficiently large such that $$\E(\# \G_1) \ge \ee^{-\varepsilon L} \deg^L
\varrho(r)^L .
\label{GW}$$
Note that $\ee^{-\varepsilon L} \deg^L \varrho(r)^L>1$, since $\varrho(r) > {1\over \deg} \ee^\varepsilon$.
We admit (\[GW\]) for the moment, which implies that $\G$ is super-critical. By the theory of branching processes (Harris [@harris], p. 13), when $n$ goes to infinity, ${\# \G_{n/L} \over [\E(\# \G_1)]^{n/L} }$ converges almost surely (and in $L^2$) to a limit $W$ with $\P(W>0)>0$. Therefore, on the event $\{ W>0\}$, for all large $n$, $$\# (\G_{n/L}) \ge c_4(\omega)
[\E(\# \G_1)]^{n/L}.
\label{GnL}$$
(For notational simplification, we only write our argument for the case when $n$ is a multiple of $L$. It is clear that our final conclusion holds for all large $n$.)
Recall that according to the Dirichlet principle (Griffeath and Liggett [@griffeath-liggett]), $$\begin{aligned}
2\pi(e) P_\omega \left\{ \tau_n < \tau_0
\right\}
&=&\inf_{h: \, h(e)=1, \, h(z)=0, \, \forall |z|
\ge n} \sum_{x,y\in \T} \pi(x) \omega(x,y)
(h(x)- h(y))^2
\nonumber
\\
&\ge& c_5\, \inf_{h: \, h(e)=1, \, h(z)=0, \,
\forall z\in \T_n} \sum_{|x|<n} \sum_{y: \, x=
{\buildrel \leftarrow \over y}} \pi(x) (h(x)-
h(y))^2,
\label{durrett}\end{aligned}$$
the last inequality following from the ellipticity condition on the environment. Clearly, $$\begin{aligned}
\sum_{|x|<n} \sum_{y: \, x= {\buildrel
\leftarrow \over y}} \pi(x) (h(x)- h(y))^2
&=&\sum_{i=0}^{(n/L)-1} \sum_{x: \, iL \le |x| <
(i+1) L} \sum_{y: \, x= {\buildrel \leftarrow
\over y}} \pi(x) (h(x)- h(y))^2
\\
&:=&\sum_{i=0}^{(n/L)-1} I_i,\end{aligned}$$
with obvious notation. For any $i$, $$I_i \ge \deg^{-L} \sum_{v\in \G_{i+1}} \, \sum_{x\in [\! [ v^\uparrow, v[\! [} \, \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2,$$
where $v^\uparrow \in \G_i$ denotes the unique element of $\G_i$ lying on the path $[ \! [ e, v ]\! ]$ (in words, $v^\uparrow$ is the parent of $v$ in the Galton–Watson tree $\G$), and the factor $\deg^{-L}$ comes from the fact that each term $\pi(x) (h(x)- h(y))^2$ is counted at most $\deg^L$ times in the sum on the right-hand side.
By (\[pi\]), for $x\in [\! [ v^\uparrow, v[\! [$, $\pi(x) \ge c_0 \, \prod_{u\in ]\! ]e, x]\! ]} A(u)$, which, by the definition of $\G$, is at least $c_0 \, r^{(i+1)L}$. Therefore, $$\begin{aligned}
I_i
&\ge& c_0 \, \deg^{-L} \sum_{v\in \G_{i+1}} \, \sum_{x\in [\!
[ v^\uparrow, v[\! [} \, \sum_{y: \, x=
{\buildrel \leftarrow \over y}} r^{(i+1)L}
(h(x)- h(y))^2
\\
&\ge&c_0 \, \deg^{-L} r^{(i+1)L} \sum_{v\in \G_{i+1}} \,
\sum_{y\in ]\! ] v^\uparrow, v]\! ]}
(h({\buildrel \leftarrow \over y})- h(y))^2 .\end{aligned}$$
By the Cauchy–Schwarz inequality, $\sum_{y\in ]\! ] v^\uparrow, v]\! ]} (h({\buildrel \leftarrow \over y})- h(y))^2 \ge {1\over L} (h(v^\uparrow)-h(v))^2$. Accordingly, $$I_i \ge c_0 \, {\deg^{-L} r^{(i+1)L}\over L} \sum_{v\in \G_{i+1}} (h(v^\uparrow)-h(v))^2 ,$$
which yields $$\begin{aligned}
\sum_{i=0}^{(n/L)-1} I_i
&\ge& c_0 \, {\deg^{-L}\over L} \sum_{i=0}^{(n/L)-1}
r^{(i+1)L} \sum_{v\in \G_{i+1}} (h(v^\uparrow)-
h(v))^2
\\
&\ge& c_0 \, {\deg^{-L}\over L} \deg^{-n/L} \sum_{v\in \G_{n/L}}
\sum_{i=0}^{(n/L)-1} r^{(i+1)L} (h(v^{(i)})-
h(v^{(i+1)}))^2 ,\end{aligned}$$
where, $e=: v^{(0)}$, $v^{(1)}$, $v^{(2)}$, $\cdots$, $v^{(n/L)} := v$, is the shortest path (in $\G$) from $e$ to $v$, and the factor $\deg^{-n/L}$ results from the fact that each term $r^{(i+1)L} (h(v^{(i)})- h(v^{(i+1)}))^2$ is counted at most $\deg^{n/L}$ times in the sum on the right-hand side.
By the Cauchy–Schwarz inequality, for all $h: \T\to \r$ with $h(e)=1$ and $h(z)=0$ ($\forall z\in \T_n$), we have $$\begin{aligned}
\sum_{i=0}^{(n/L)-1} r^{(i+1)L} (h(v^{(i)})-
h(v^{(i+1)}))^2
&\ge&{1\over \sum_{i=0}^{(n/L)-1} r^{-(i+1)L}} \,
\left( \sum_{i=0}^{(n/L)-1} (h(v^{(i)})-
h(v^{(i+1)})) \right)^{\! \! 2}
\\
&=&{1\over \sum_{i=0}^{(n/L)-1} r^{-(i+1)L}} \ge
c_6 \, r^n.\end{aligned}$$
Therefore, $$\sum_{i=0}^{(n/L)-1} I_i \ge c_0c_6 \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \# (\G_{n/L}) \ge c_0 c_6 c_4(\omega) \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \, [\E (\# \G_1)]^{n/L}\, {\bf 1}_{ \{ W>0 \} },$$
the last inequality following from (\[GnL\]). Plugging this into (\[durrett\]) yields that for all large $n$, $$P_\omega \left\{ \tau_n < \tau_0 \right\} \ge c_7(\omega) \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \, [\E (\# \G_1)]^{n/L}\, {\bf 1}_{ \{ W>0 \} } .$$
Recall from (\[GW\]) that $\E(\# \G_1) \ge \ee^{-\varepsilon L} \deg^L \varrho(r)^L$. Therefore, on $\{W>0\}$, for all large $n$, $P_\omega \{ \tau_n < \tau_0 \} \ge c_8(\omega) (\ee^{-\varepsilon} \deg^{-1/L} \deg r \varrho(r))^n$, which is no smaller than $c_8(\omega) (\ee^{-3\varepsilon} q \deg)^n$ (since $\deg^{-1/L} \ge \ee^{-\varepsilon}$ and $r \varrho(r) \ge q \ee^{-\varepsilon}$ by assumption). Thus, by writing $L(\tau_n) := \#\{ 1\le i\le \tau_n: \; X_i = e \}$ as before, we have, on $\{ W>0 \}$, $$P_\omega \left\{ L(\tau_n) \ge j \right\} = \left[ P_\omega (\tau_n> \tau_0) \right]^j \le [1- c_8(\omega) (\ee^{-3\varepsilon} q \deg)^n ]^j.$$
By the Borel–Cantelli lemma, for $\P$-almost all $\omega$, on $\{W>0\}$, we have, $P_\omega$-almost surely for all large $n$, $L(\tau_n) \le 1/(\ee^{-4\varepsilon} q \deg)^n$, i.e., $$\max_{0\le k\le \tau_0(\lfloor 1/(\ee^{-4\varepsilon} q \deg)^n\rfloor )} |X_k| \ge n ,$$
where $0<\tau_0(1)<\tau_0(2)<\cdots$ are the successive return times to the root $e$ by the walk (thus $\tau_0(1) = \tau_0$). Since the walk is positive recurrent, $\tau_0(\lfloor 1/(\ee^{-4\varepsilon} q \deg)^n\rfloor ) \sim {1\over (\ee^{-4\varepsilon} q \deg)^n} E_\omega [\tau_0]$ (for $n\to \infty$), $P_\omega$-almost surely ($a_n \sim b_n$ meaning $\lim_{n\to \infty} {a_n \over b_n} =1$). Therefore, for $\P$-almost all $\omega \in \{ W>0\}$, $$\liminf_{n\to \infty} {\max_{0\le k\le n} |X_k| \over \log n} \ge {1\over \log[1/(q\deg)]}, \qquad \hbox{\rm $P_\omega$-a.s.}$$
Recall that $\P\{ W>0\}>0$. Since modifying a finite number of transition probabilities does not change the value of $\liminf_{n\to \infty} {\max_{0\le k\le n} |X_k| \over \log n}$, we obtain the lower bound in Theorem \[t:posrec\].
It remains to prove (\[GW\]). Let $(A^{(i)})_{i\ge 1}$ be an i.i.d. sequence of random variables distributed as $A$. Clearly, for any $\delta\in (0,1)$, $$\begin{aligned}
\E( \# \G_1)
&=& \deg^L \, \P\left( \, \sum_{i=1}^\ell \log
A^{(i)} \ge L \log r , \, \forall 1\le \ell \le
L\right)
\\
&\ge& \deg^L \, \P \left( \, (1-\delta) L
\log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L
\log r , \, \forall 1\le \ell \le L\right) .\end{aligned}$$
We define a new probability $\Q$ by $${\mathrm{d} \Q \over \mathrm{d}\P} := {\ee^{t \log A} \over \E(\ee^{t \log A})} = {A^t \over \E(A^t)},$$
for some $t\ge 0$. Then $$\begin{aligned}
\E(\# \G_1)
&\ge& \deg^L \, \E_\Q \left[ \, {[\E(A^t)]^L \over
\exp\{ t \sum_{i=1}^L \log A^{(i)}\} }\,
{\bf 1}_{\{ (1-\delta) L \log r \ge
\sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \,
\forall 1\le \ell \le L\} } \right]
\\
&\ge& \deg^L \, {[\E(A^t)]^L \over r^{t (1-
\delta) L} } \, \Q \left( (1-
\delta) L \log r \ge \sum_{i=1}^\ell \log
A^{(i)} \ge L \log r , \, \forall 1\le \ell \le
L \right).\end{aligned}$$
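We record in passing the standard tilting identity (a routine computation): $$\E_\Q (\log A) = {\E (A^t \log A) \over \E(A^t)} = \psi'(t) .$$ In particular, if $t^*$ minimizes $t\mapsto \widetilde{r}^{\, -t}\, \E(A^t)$ for a given $\widetilde{r}$, the first-order condition reads $\psi'(t^*) = \log \widetilde{r}$, so that $\E_\Q(\log A) = \log \widetilde{r}$ under the corresponding tilting.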
To choose an optimal value of $t$, we fix $\widetilde{r}\in (r, \, \overline{r})$ with $\widetilde{r} < r^{1-\delta}$. Our choice of $t=t^*$ is such that $\varrho(\widetilde{r}) = \inf_{t\ge 0} \{ \widetilde{r}^{-t} \E(A^t)\} = \widetilde{r}^{-t^*} \E(A^{t^*})$. With this choice, we have $\E_\Q(\log A)=\log \widetilde{r}$, so that $\Q \{ (1- \delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\} \ge c_9$. Consequently, $$\E(\# \G_1) \ge c_9 \, \deg^L \, {[\E(A^{t^*})]^L \over r^{t^* (1- \delta) L} }= c_9 \, \deg^L \, {[ \widetilde{r}^{\,t^*} \varrho(\widetilde{r})]^L \over r^{t^* (1- \delta) L} } \ge c_9 \, r^{\delta t^* L} \deg^L \varrho(\widetilde{r})^L .$$
Since $\delta>0$ can be as close to $0$ as possible, the continuity of $\varrho(\cdot)$ on $[\underline{r}, \, \overline{r})$ yields (\[GW\]), and thus completes the proof of Theorem \[t:posrec\].$\Box$
Some elementary inequalities {#s:proba}
============================
We collect some elementary inequalities in this section. They will be of use in the next sections, in the study of the null recurrence case.
\[l:exp\] Let $\xi\ge 0$ be a random variable.
[(i)]{} Assume that $\e(\xi^a)<\infty$ for some $a>1$. Then for any $x\ge 0$, $${\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over
x+\xi})]^a} \le {\e (\xi^a) \over [\e \xi]^a} .
\label{RSD}$$
[(ii)]{} If $\e (\xi) < \infty$, then for any $0
\le \lambda \le 1$ and $t \ge 0$, $$\e \left\{ \exp \left( - t\, { (\lambda+\xi)/
(1+\xi) \over \e [(\lambda+\xi)/
(1+\xi)] } \right) \right\} \le \e \left\{
\exp\left( - t\, { \xi \over \e (\xi)}
\right) \right\} .
\label{exp}$$
[**Remark.**]{} When $a=2$, (\[RSD\]) is a special case of Lemma 6.4 of Pemantle and Peres [@pemantle-peres2].
[*Proof of Lemma \[l:exp\].*]{} We actually prove a very general result, stated as follows. Let $\varphi : (0, \infty) \to \r$ be a convex ${\cal C}^1$-function. Let $x_0 \in \r$ and let $I$ be an open interval containing $x_0$. Assume that $\xi$ takes values in a Borel set $J \subset \r$ (for the moment, we do not assume $\xi\ge 0$). Let $h: I \times J \to (0, \infty)$ and ${\partial h\over \partial x}: I \times J \to \r$ be measurable functions such that
- $\e \{ h(x_0, \xi)\} <\infty$ and $\e \{
|\varphi ({ h(x_0,\xi) \over \e h(x_0, \xi)}
)| \} < \infty$;
- $\e[\sup_{x\in I} \{ | {\partial h\over
\partial x} (x, \xi)| + |\varphi' ({h(x,
\xi) \over \e h(x, \xi)} ) | \,
({| {\partial h\over \partial x} (x, \xi) |
\over \e \{ h(x, \xi)\} } + {h(x, \xi) \over
[\e \{ h(x, \xi)\}]^2 } | \e \{ {\partial
h\over \partial x} (x, \xi) \} | )\} ] <
\infty$;
- both $y \to h(x_0, y)$ and $y \to {
\partial \over \partial x} \log
h(x,y)|_{x=x_0}$ are monotone on $J$.
Then $${\d \over \d x} \e \left\{ \varphi\left({
h(x,\xi) \over \e h(x, \xi)}\right) \right\}
\Big|_{x=x_0} \ge 0, \qquad \hbox{\rm or}\qquad
\le 0,
\label{monotonie}$$
depending on whether $h(x_0, \cdot)$ and ${\partial \over \partial x} \log h(x_0,\cdot)$ have the same monotonicity.
To prove (\[monotonie\]), we observe that by the integrability assumptions, $$\begin{aligned}
& &{\d \over \d x} \e \left\{ \varphi\left({
h(x,\xi) \over \e h(x,\xi)}\right) \right\}
\Big|_{x=x_0}
\\
&=&{1 \over ( \e h(x_0, \xi))^2}\, \e \left(
\varphi'( h(x_0, \xi) ) \left[ {\partial h \over
\partial x} (x_0, \xi) \e h(x_0, \xi) -
h(x_0, \xi) \e {\partial h \over \partial x}
(x_0, \xi) \right] \right) .\end{aligned}$$
Let $\widetilde \xi$ be an independent copy of $\xi$. The expectation expression $\e(\varphi'( h(x_0, \xi) ) [\cdots])$ on the right-hand side is $$\begin{aligned}
&=& \e \left(
\varphi'( h(x_0, \xi) ) \left[ {\partial h \over
\partial x} (x_0, \xi) h(x_0, \widetilde\xi) -
h(x_0, \xi) {\partial h \over \partial x}
(x_0, \widetilde\xi) \right] \right)
\\
&=& {1 \over 2}\, \e \left(
\left[ \varphi'( h(x_0, \xi) ) - \varphi'(
h(x_0, \widetilde\xi) )\right]
\left[ {\partial h \over \partial x} (x_0, \xi)
h(x_0, \widetilde\xi) - h(x_0, \xi) {\partial
h \over \partial x} (x_0, \widetilde\xi)
\right] \right)
\\
&=& {1 \over 2}\, \e \left( h(x_0, \xi) h(x_0,
\widetilde \xi) \, \eta \right) ,\end{aligned}$$
where $$\eta := \left[ \varphi'( h(x_0, \xi) ) - \varphi'( h(x_0, \widetilde\xi) ) \right] \, \left[ {\partial \log h \over \partial x} (x_0, \xi) - {\partial \log h \over \partial x} (x_0, \widetilde\xi) \right] .$$
Therefore, $${\d \over \d x} \e \left\{ \varphi\left({ h(x,\xi) \over \e h(x,\xi)}\right) \right\} \Big|_{x=x_0}
\; = \; {1 \over 2( \e h(x_0, \xi))^2}\, \e \left( h(x_0, \xi) h(x_0, \widetilde \xi) \, \eta \right) .$$
Since $\eta \ge 0$ or $\le 0$ depending on whether $h(x_0, \cdot)$ and ${\partial \over \partial x} \log h(x_0,\cdot)$ have the same monotonicity, this yields (\[monotonie\]).
To prove (\[RSD\]) in Lemma \[l:exp\], we take $x_0\in (0,\, \infty)$, $J= \r_+$, $I$ a finite open interval containing $x_0$ and away from 0, $\varphi(z)= z^a$, and $h(x,y)= { y \over x+ y}$, to see that the function $x\mapsto {\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a}$ is non-decreasing on $(0, \infty)$. By dominated convergence, $$\lim_{x \to\infty} {\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a}= \lim_{x \to\infty} {\e[({\xi\over 1+\xi/x})^a] \over [\e ( {\xi\over 1+\xi/x})]^a} = {\e (\xi^a) \over [\e \xi]^a} ,$$
yielding (\[RSD\]).
The proof of (\[exp\]) is similar. Indeed, applying (\[monotonie\]) to the functions $\varphi(z)= \ee^{-t z}$ and $ h(x, y) = {x + y \over 1+ y}$ with $x\in (0,1)$, we get that the function $x \mapsto \e \{ \exp ( - t {
(x+\xi)/(1+\xi) \over \e [(x+\xi)/(1+\xi)]} )\}$ is non-increasing on $(0,1)$; hence for $\lambda \in [0,\, 1]$, $$\e \left\{ \exp \left( - t { (\lambda+\xi)/(1+\xi) \over \e [(\lambda+\xi)/(1+\xi)] } \right) \right\} \le \e \left\{ \exp \left( - t { \xi /(1+\xi) \over \e [\xi/(1+\xi)] } \right) \right\}.$$
On the other hand, we take $\varphi(z)= \ee^{-t z}$ and $h(x,y) = {y \over 1+ xy}$ (for $x\in (0, 1)$) in (\[monotonie\]) to see that $x \mapsto \e \{ \exp ( - t { \xi /(1+x \xi) \over \e [\xi /(1+x\xi)] } ) \}$ is non-increasing on $(0,1)$. Therefore, $$\e \left\{ \exp \left( - t { \xi /(1+\xi) \over \e [\xi/(1+\xi)] } \right) \right\} \le \e \left\{ \exp\left( - t \, { \xi \over \e (\xi)}\right) \right\} ,$$
which implies (\[exp\]).$\Box$
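Inequality (\[RSD\]) can be checked in closed form on a small discrete law. In the sketch below (a hypothetical two-point $\xi$, chosen only for illustration), both sides are computed exactly.

```python
# Exact sanity check of (RSD) for a hypothetical two-point xi
# (xi = 1 or 3 with probability 1/2 each), with a = 2 and x = 1.
a, x = 2.0, 1.0
vals, probs = [1.0, 3.0], [0.5, 0.5]

def E(f):
    """Expectation of f(xi) under the two-point law."""
    return sum(p * f(v) for v, p in zip(vals, probs))

lhs = E(lambda v: (v / (x + v)) ** a) / E(lambda v: v / (x + v)) ** a
rhs = E(lambda v: v ** a) / E(lambda v: v) ** a
```

Here `rhs` equals $\e(\xi^2)/(\e\xi)^2 = 5/4$, while `lhs` is about $1.04$: the normalized ratio on the left is indeed the smaller one.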
\[l:moment\] Let $\xi_1$, $\cdots$, $\xi_k$ be independent non-negative random variables such that for some $a\in [1,\, 2]$, $\e(\xi_i^a)<\infty$ $(1\le i\le
k)$. Then $$\e \left[ (\xi_1 + \cdots + \xi_k)^a \right] \le
\sum_{i=1}^k \e(\xi_i^a) + (k-1) \left(
\sum_{i=1}^k \e \xi_i \right)^a.$$
[*Proof.*]{} By induction on $k$, we only need to prove the lemma in case $k=2$. Let $$h(t) := \e \left[ (\xi_1 + t\xi_2)^a \right] - \e(\xi_1^a) - t^a \e(\xi_2^a) - (\e \xi_1 + t \e \xi_2)^a, \qquad t\in [0,1].$$
Clearly, $h(0) = - (\e \xi_1)^a \le 0$. Moreover, $$h'(t) = a \e \left[ (\xi_1 + t\xi_2)^{a-1} \xi_2 \right] - a t^{a-1} \e(\xi_2^a) - a(\e \xi_1 + t \e \xi_2)^{a-1} \e(\xi_2) .$$
Since $(x+y)^{a-1} \le x^{a-1} + y^{a-1}$ (for $1\le a\le 2$), we have $$\begin{aligned}
h'(t)
&\le& a \e \left[ (\xi_1^{a-1} + t^{a-1}\xi_2^{a
-1}) \xi_2 \right] - a t^{a-1} \e(\xi_2^a) -
a(\e \xi_1)^{a-1} \e(\xi_2)
\\
&=& a \e (\xi_1^{a-1}) \e(\xi_2) - a(\e \xi_1)^{a
-1} \e(\xi_2) \le 0,\end{aligned}$$
by Jensen’s inequality (for $1\le a\le 2$). Therefore, $h \le 0$ on $[0,1]$. In particular, $h(1) \le 0$, which implies Lemma \[l:moment\].$\Box$
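Lemma \[l:moment\] can also be verified exactly by enumeration on small discrete laws. The laws below are an illustrative choice (not from the paper), with $k=3$ and $a=3/2$.

```python
import itertools

# Exact check of the moment inequality of Lemma [l:moment] for three
# independent variables taking values in {0, 1, 2} (illustrative laws).
a = 1.5
laws = [
    [(0, 0.5), (1, 0.3), (2, 0.2)],
    [(0, 0.2), (1, 0.5), (2, 0.3)],
    [(0, 0.4), (1, 0.4), (2, 0.2)],
]
k = len(laws)

# E[(xi_1 + xi_2 + xi_3)^a], computed exactly over the 27 outcomes.
lhs = sum(
    p1 * p2 * p3 * (v1 + v2 + v3) ** a
    for (v1, p1), (v2, p2), (v3, p3) in itertools.product(*laws)
)
moments_a = [sum(p * v ** a for v, p in law) for law in laws]
means = [sum(p * v for v, p in law) for law in laws]
rhs = sum(moments_a) + (k - 1) * sum(means) ** a
```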
The following inequality, borrowed from page 82 of Petrov [@petrov], will be of frequent use.
\[f:petrov\] Let $\xi_1$, $\cdots$, $\xi_k$ be independent random variables. We assume that for any $i$, $\e(\xi_i)=0$ and $\e(|\xi_i|^a) <\infty$, where $1\le a\le 2$. Then $$\e \left( \, \left| \sum_{i=1}^k \xi_i \right| ^a
\, \right) \le 2 \sum_{i=1}^k \e( |\xi_i|^a).$$
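The inequality of Fact \[f:petrov\] admits a simple exact check for centered two-point variables; the choice $\xi_i = \pm 1$, $a=3/2$, $k=6$ below is purely illustrative.

```python
import itertools

# Exact check of Fact [f:petrov] for centered xi_i = +/-1 with equal
# probability (illustrative choice), a = 1.5 and k = 6.
a, k = 1.5, 6
lhs = sum(
    abs(sum(signs)) ** a
    for signs in itertools.product((-1, 1), repeat=k)
) / 2 ** k
rhs = 2.0 * k          # 2 * sum_i E|xi_i|^a, since E|xi_i|^a = 1 here
```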
\[l:abc\] Fix $a >1$. Let $(u_j)_{j\ge 1}$ be a sequence of positive numbers, and let $(\lambda_j)_{j\ge 1}$ be a sequence of non-negative numbers.
[(i)]{} If there exists some constant $c_{10}>0$ such that for all $n\ge 2$, $$u_{j+1} \le \lambda_n + u_j - c_{10}\, u_j^{a},
\qquad \forall 1\le j \le n-1,$$ then we can find a constant $c_{11}>0$ independent of $n$ and $(\lambda_j)_{j\ge 1}$, such that $$u_n \le c_{11} \, ( \lambda_n^{1/a} +
n^{- 1/(a-1)}), \qquad \forall n\ge 1.$$
[(ii)]{} Fix $K>0$. Assume that $\lim_{j\to\infty} u_j=0$ and that $\lambda_n \in
[0, \, {K\over n}]$ for all $n\ge 1$. If there exist $c_{12}>0$ and $c_{13}>0$ such that for all $n\ge 2$, $$u_{j+1} \ge \lambda_n + (1- c_{12} \lambda_n) u_j -
c_{13} \, u_j^a , \qquad \forall 1 \le j \le n-1,$$ then for some $c_{14}>0$ independent of $n$ and $(\lambda_j)_{j\ge 1}$ $(c_{14}$ may depend on $K)$, $$u_n \ge c_{14} \, ( \lambda_n^{1/a} + n^{- 1/(a-1)}
), \qquad \forall n\ge 1.$$
[*Proof.*]{} (i) Put $\ell = \ell(n) := \min\{n, \, \lambda_n^{- (a-1)/a} \}$. There are two possible situations.
First situation: there exists some $j_0 \in [n- \ell, n-1]$ such that $u_{j_0} \le ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a}$. Since $u_{j+1} \le \lambda_n + u_j$ for all $j\in [j_0, n-1]$, we have $$u_n \le (n-j_0 ) \lambda_n + u_{j_0} \le \ell \lambda_n + ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a} \le (1+ ({2 \over c_{10}})^{1/a})\, \lambda_n^{1/a},$$
which implies the desired upper bound.
Second situation: $u_j > ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a}$, $\forall \, j \in [n- \ell, n-1]$. Then $c_{10}\, u_j^{a} > 2\lambda_n$, which yields $$u_{j+1} \le u_j - {c_{10} \over 2} u_j^a, \qquad \forall \, j \in [n- \ell, n-1].$$
Since $a>1$ and $(1-y)^{1-a} \ge 1+ (a-1) y$ (for $0< y< 1$), this yields, for $j \in [n- \ell, n-1]$, $$u_{j+1}^{1-a} \ge u_j^{1-a} \, \left( 1 - {c_{10} \over 2} u_j^{a-1} \right)^{ 1-a} \ge u_j^{ 1-a} \, \left( 1 + {c_{10} \over 2} (a-1)\, u_j^{a-1} \right) = u_j^{1-a} + {c_{10} \over 2} (a-1) .$$
Therefore, $u_n^{1-a} \ge c_{15}\, \ell$ with $c_{15}:= {c_{10} \over 2} (a-1)$. As a consequence, $u_n \le (c_{15}\, \ell)^{- 1/(a-1)} \le (c_{15})^{- 1/(a-1)} \, ( n^{- 1/(a-1)} + \lambda_n^{1/a} )$, as desired.
\(ii) Let us first prove: $$\label{c7}
u_n \ge c_{16}\, n^{- 1/(a-1)}.$$
To this end, let $n$ be large and define $v_j := u_j \, (1- c_{12} \lambda_n)^{ -j} $ for $1 \le j \le n$. Since $u_{j+1} \ge (1- c_{12} \lambda_n) u_j - c_{13} u_j^a $ and $\lambda_n \le K/n$, we get $$v_{j+1} \ge v_j - c_{13} (1- c_{12} \lambda_n)^{(a-1)j-1}\, v_j^a\ge v_j - c_{17} \, v_j^a, \qquad \forall\, 1\le j \le n-1.$$
Since $u_j \to 0$, there exists some $j_0>0$ such that for all $n>j \ge j_0$, we have $c_{17} \, v_j^{a-1} < 1/2$, and $$v_{j+1}^{1-a} \le v_j^{1-a}\, \left( 1- c_{17} \, v_j^{a-1}\right)^{1-a} \le v_j^{1-a}\, \left( 1+ c_{18} \, v_j^{a-1}\right) = v_j^{1-a} + c_{18}.$$
It follows that $v_n^{1-a} \le c_{18}\, (n-j_0) + v_{j_0}^{1-a}$, which implies (\[c7\]).
It remains to show that $u_n \ge c_{19} \, \lambda_n^{1/a}$. Consider a large $n$. The function $h(x):= \lambda_n + (1- c_{12} \lambda_n) x - c_{13} x^a$ is increasing on $[0, c_{20}]$ for some fixed constant $c_{20}>0$. Since $u_j \to 0$, there exists $j_0$ such that $u_j \le c_{20}$ for all $j \ge j_0$. We claim there exists $j \in [j_0, n-1]$ such that $u_j > ({\lambda_n\over 2c_{13}})^{1/a}$: otherwise, we would have $c_{13}\, u_j^a \le {\lambda_n\over 2} \le \lambda_n$ for all $j \in [j_0, n-1]$, and thus $$u_{j+1} \ge (1- c_{12}\, \lambda_n) u_j \ge \cdots \ge (1- c_{12}\,\lambda_n)^{j-j_0} \, u_{j_0} ;$$
in particular, $u_n \ge (1- c_{12}\, \lambda_n)^{n-j_0} \, u_{j_0}$ which would contradict the assumption $u_n \to 0$ (since $\lambda_n \le K/n$).
Therefore, $u_j > ({\lambda_n\over 2c_{13}})^{1/a}$ for some $j\ge j_0$. By monotonicity of $h(\cdot)$ on $[0, c_{20}]$, $$u_{j+1} \ge h(u_j) \ge h\left(({\lambda_n\over 2
c_{13}})^{1/a}\right) \ge ({\lambda_n\over 2 c_{13}})^{1/a},$$
the last inequality being elementary. This leads to: $u_{j+2} \ge h(u_{j+1}) \ge h(({\lambda_n\over 2 c_{13}})^{1/a} ) \ge ({\lambda_n\over 2 c_{13}})^{1/a}$. Iterating the procedure, we obtain: $u_n \ge ({\lambda_n\over 2 c_{13}})^{1/a}$ for all $n> j_0$, which completes the proof of the lemma.$\Box$
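The two regimes in Lemma \[l:abc\](i) can be seen numerically by iterating the extremal recursion with equality; the parameter values below are illustrative, not from the paper.

```python
# Numerical sketch of Lemma [l:abc](i): iterate
#     u_{j+1} = lam + u_j - c10 * u_j**a
# (the extremal case of the hypothesis) and compare u_n with the
# claimed bound lam**(1/a) + n**(-1/(a-1)).  Illustrative parameters.
a, c10, n = 2.0, 0.5, 10_000

def iterate(lam, steps, u1=0.5):
    """Run the recursion up to index `steps` and return the final value."""
    u = u1
    for _ in range(steps - 1):
        u = lam + u - c10 * u ** a
    return u

u_vals = {lam: iterate(lam, n) for lam in (0.0, 1e-6, 1e-4)}
```

For $\lambda=0$ the iterates decay like $n^{-1/(a-1)}$; for $\lambda>0$ they stabilize near the fixed-point level of order $\lambda^{1/a}$, matching the two terms of the bound.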
Proof of Theorem \[t:nullrec\] {#s:nullrec}
==============================
Let $n\ge 2$, and let as before $$\tau_n := \inf\left\{ i\ge 1: X_i \in \T_n
\right\} .$$
We start with a characterization of the distribution of $\tau_n$ via its Laplace transform $\e ( \ee^{- \lambda \tau_n} )$, for $\lambda \ge 0$. To state the result, we define $\alpha_{n,\lambda}(\cdot)$, $\beta_{n,\lambda}(\cdot)$ and $\gamma_n(\cdot)$ by $\alpha_{n,\lambda}(x) = \beta_{n,\lambda} (x) = 1$ and $\gamma_n(x)=0$ (for $x\in \T_n$), and $$\begin{aligned}
\alpha_{n,\lambda}(x)
&=& \ee^{-\lambda} \, {\sum_{i=1}^\deg A(x_i)
\alpha_{n,\lambda} (x_i) \over 1+
\sum_{i=1}^\deg A(x_i) \beta_{n,\lambda}
(x_i)},
\label{alpha}
\\
\beta_{n,\lambda}(x)
&=& {(1-\ee^{-2\lambda}) + \sum_{i=1}^\deg A(x_i)
\beta_{n,\lambda} (x_i) \over 1+
\sum_{i=1}^\deg A(x_i) \beta_{n,\lambda} (x_i)},
\label{beta}
\\
\gamma_n(x)
&=& {[1/\omega(x, {\buildrel \leftarrow \over x} )]
+ \sum_{i=1}^\deg A(x_i) \gamma_n(x_i) \over 1+
\sum_{i=1}^\deg A(x_i) \beta_n(x_i)} , \qquad
1\le |x| < n,
\label{gamma}\end{aligned}$$
where $\beta_n(\cdot) := \beta_{n,0}(\cdot)$, and for any $x\in \T$, $\{x_i\}_{1\le i\le \deg}$ stands as before for the set of children of $x$.
\[p:tau\] We have, for $n\ge 2$, $$\begin{aligned}
E_\omega\left( \ee^{- \lambda \tau_n} \right)
&=&\ee^{-\lambda} \, {\sum_{i=1}^\deg \omega (e,
e_i) \alpha_{n,\lambda} (e_i) \over
\sum_{i=1}^\deg \omega (e, e_i)
\beta_{n,\lambda} (e_i)}, \qquad \forall
\lambda \ge 0,
\label{Laplace-tau}
\\
E_\omega(\tau_n)
&=& {1+ \sum_{i=1}^\deg \omega(e,e_i) \gamma_n
(e_i) \over \sum_{i=1}^\deg \omega(e,e_i)
\beta_n(e_i)}.
\label{E(tau)}
\end{aligned}$$
[*Proof of Proposition \[p:tau\].*]{} Identity (\[E(tau)\]) can be found in Rozikov [@rozikov]. The proof of (\[Laplace-tau\]) follows similar lines, so we only give an outline. Let $g_{n, \lambda}(x) := E_\omega (\ee^{- \lambda \tau_n} \, | \, X_0=x)$. By the Markov property, $g_{n, \lambda}(x) = \ee^{-\lambda} \sum_{i=1}^\deg \omega(x, x_i)g_{n, \lambda}(x_i) + \ee^{-\lambda} \omega(x, {\buildrel \leftarrow \over x}) g_{n, \lambda}({\buildrel \leftarrow \over x})$, for $|x| < n$. By induction on $|x|$ (such that $1\le |x| \le n-1$), we obtain: $g_{n, \lambda}(x) = \ee^\lambda (1- \beta_{n, \lambda} (x)) g_{n, \lambda}({\buildrel \leftarrow \over x}) + \alpha_{n, \lambda} (x)$, from which (\[Laplace-tau\]) follows.
Probabilistic interpretation: for $1\le |x| <n$, if $T_{\buildrel \leftarrow \over x} := \inf \{ k\ge 0: X_k= {\buildrel \leftarrow \over x} \}$, then $\alpha_{n, \lambda} (x) = E_\omega [ \ee^{-\lambda \tau_n} {\bf 1}_{ \{ \tau_n < T_{\buildrel \leftarrow \over x} \} } \, | \, X_0=x]$, $\beta_{n, \lambda} (x) = 1- E_\omega [ \ee^{-\lambda (1+ T_{\buildrel \leftarrow \over x}) } {\bf 1}_{ \{ \tau_n > T_{\buildrel \leftarrow \over x} \} } \, | \, X_0=x]$, and $\gamma_n (x) = E_\omega [ (\tau_n \wedge T_{\buildrel \leftarrow \over x}) \, | \, X_0=x]$. We do not use these identities in the paper.$\Box$
It turns out that $\beta_{n,\lambda}(\cdot)$ is closely related to Mandelbrot’s multiplicative cascade [@mandelbrot]. Let $$M_n := \sum_{x\in \T_n} \prod_{y\in ] \! ] e, \,
x] \! ] } A(y) , \qquad n\ge 1,
\label{Mn}$$
where $] \! ] e, \,x] \! ]$ denotes as before the shortest path relating $e$ to $x$. We mention that $(A(e_i), \, 1\le i\le \deg)$ is a random vector independent of $(\omega(x,y), \, |x|\ge 1, \, y\in \T)$, and is distributed as $(A(x_i), \, 1\le i\le \deg)$, for any $x\in \T_m$ with $m\ge 1$.
Let us recall some properties of $(M_n)$ from Theorem 2.2 of Liu [@liu00] and Theorem 2.5 of Liu [@liu01]: under the conditions $p={1\over \deg}$ and $\psi'(1)<0$, $(M_n)$ is a martingale, bounded in $L^a$ for any $a\in [1, \kappa)$; in particular, $$M_\infty := \lim_{n\to \infty} M_n \in (0,
\infty),
\label{cvg-M}$$
exists $\P$-almost surely and in $L^a(\P)$, and $$\E\left( \ee^{-s M_\infty} \right) \le
\exp\left( - c_{21} \, s^{c_{22}}\right), \qquad
\forall s\ge 1;
\label{M-lowertail}$$
furthermore, if $1<\kappa< \infty$, then we also have $${c_{23}\over x^\kappa} \le \P\left( M_\infty >
x\right) \le {c_{24}\over x^\kappa}, \qquad
x\ge 1.
\label{M-tail}$$
We now summarize the asymptotic properties of $\beta_{n,\lambda}(\cdot)$ which will be needed later on.
\[p:beta-gamma\] Assume $p= {1\over \deg}$ and $\psi'(1)<0$.
[(i)]{} For any $1\le i\le \deg$, $n\ge 2$, $t\ge
0$ and $\lambda \in [0, \, 1]$, we have $$\E \left\{ \exp \left[ -t \, {\beta_{n,
\lambda} (e_i) \over \E[\beta_{n, \lambda}
(e_i)]} \right] \right\} \le \left\{\E \left(
\ee^{-t\, M_n/\Theta} \right)
\right\}^{1/\deg} ,
\label{comp-Laplace}$$ where, as before, $\Theta:= \hbox{\rm ess sup}(A) <
\infty$.
[(ii)]{} If $\kappa\in (2, \infty]$, then for any $1\le i\le \deg$ and all $n\ge 2$ and $\lambda \in
[0, \, {1\over n}]$, $$c_{25} \left( \sqrt {\lambda} + {1\over n}
\right) \le \E[\beta_{n, \lambda}(e_i)]
\le c_{26} \left( \sqrt {\lambda} + {1\over
n} \right).
\label{E(beta):kappa>2}$$
[(iii)]{} If $\kappa\in (1,2]$, then for any $1\le i\le \deg$, when $n\to \infty$ and uniformly in $\lambda \in [0, {1\over n}]$, $$\E[\beta_{n, \lambda}(e_i)] \; \approx \;
\lambda^{1/\kappa} + {1\over n^{1/(\kappa-1)}}
,
\label{E(beta):kappa<2}$$ where $a_n \approx b_n$ denotes as before $\lim_{n\to \infty} \, {\log a_n \over \log b_n}
=1$.
The proof of Proposition \[p:beta-gamma\] is postponed until Section \[s:beta-gamma\]. Admitting it for the moment, we can prove Theorem \[t:nullrec\].
[*Proof of Theorem \[t:nullrec\].*]{} Assume $p= {1\over \deg}$ and $\psi'(1)<0$.
Let $\pi$ be an invariant measure. By (\[pi\]) and the definition of $(M_n)$, $\sum_{x\in \T_n} \pi(x) \ge c_0 \, M_n$. Therefore by (\[cvg-M\]), we have $\sum_{x\in \T} \pi(x) =\infty$, $\P$-a.s., implying that $(X_n)$ is null recurrent.
We proceed to prove the lower bound in (\[nullrec\]). By (\[gamma\]) and the ellipticity condition on the environment, $\gamma_n (x) \le {1\over \omega(x, {\buildrel \leftarrow \over x} )} + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i) \le c_{27} + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i)$. Iterating the argument yields $$\gamma_n (e_i) \le c_{27} \left( 1+ \sum_{j=2}^{n-1} M_j^{(e_i)}\right), \qquad n\ge 3,$$
where $$M_j^{(e_i)} := \sum_{x\in \T_j} \prod_{y\in ] \! ] e_i, x] \! ]} A(y).$$
For future use, we also observe that $$\label{defMei1}
M_n= \sum_{i=1}^\deg \, A(e_i) \, M^{(e_i)}_n,
\qquad n\ge 2.$$
Let $1\le i\le \deg$. Since $(M_j^{(e_i)}, \, j\ge 2)$ is distributed as $(M_{j-1}, \, j\ge 2)$, it follows from (\[cvg-M\]) that $M_j^{(e_i)}$ converges (when $j\to \infty$) almost surely, which implies $\gamma_n (e_i) \le c_{28}(\omega) \, n$. Plugging this into (\[E(tau)\]), we see that for all $n\ge 3$, $$E_\omega \left( \tau_n \right) \le {c_{29}(\omega) \, n \over \sum_{i=1}^\deg
\omega(e,e_i) \beta_n(e_i)} \le {c_{30}(\omega)
\, n \over \beta_n(e_1)},
\label{toto2}$$
the last inequality following from the ellipticity assumption on the environment.
We now bound $\beta_n(e_1)$ from below (for large $n$). Let $1\le i\le \deg$. By (\[comp-Laplace\]), for $\lambda \in [0,\, 1]$ and $s\ge 0$, $$\E \left\{ \exp \left[ -s \, {\beta_{n, \lambda}
(e_i) \over \E [\beta_{n, \lambda} (e_i)]} \right] \right\} \le \left\{ \E \left( \ee^{-s \, M_n/\Theta} \right) \right\}^{1/\deg} \le \left\{ \E \left(\ee^{-s \, M_\infty/\Theta} \right) \right\}^{1/\deg} ,$$
where, in the last inequality, we used the fact that $(M_n)$ is a uniformly integrable martingale. Let $\varepsilon>0$. Applying (\[M-lowertail\]) to $s:= n^{\varepsilon}$, we see that $$\sum_n \E \left\{ \exp \left[ -n^{\varepsilon}
{\beta_{n, \lambda} (e_i) \over \E[\beta_{n,
\lambda} (e_i)]} \right] \right\} <\infty .
\label{toto3}$$
In particular, $\sum_n \exp [ -n^{\varepsilon} {\beta_n (e_1) \over \E [\beta_n (e_1)]} ]$ is $\P$-almost surely finite (by taking $\lambda=0$; recalling that $\beta_n (\cdot) := \beta_{n, 0} (\cdot)$). Thus, for $\P$-almost all $\omega$ and all sufficiently large $n$, $\beta_n (e_1) \ge n^{-\varepsilon} \, \E [\beta_n (e_1)]$. Going back to (\[toto2\]), we see that for $\P$-almost all $\omega$ and all sufficiently large $n$, $$E_\omega \left( \tau_n \right) \le {c_{30}(\omega) \, n^{1+\varepsilon} \over \E [\beta_n (e_1)]}.$$
Let $m(n):= \lfloor {n^{1+2\varepsilon} \over \E [\beta_n (e_1)]} \rfloor$. By Chebyshev’s inequality, for $\P$-almost all $\omega$ and all sufficiently large $n$, $P_\omega ( \tau_n \ge m(n) ) \le c_{31}(\omega) \, n^{-\varepsilon}$. Considering the subsequence $n_k:= \lfloor k^{2/\varepsilon}\rfloor$, we see that $\sum_k P_\omega ( \tau_{n_k} \ge m(n_k) )< \infty$, $\P$-a.s. By the Borel–Cantelli lemma, for $\P$-almost all $\omega$, we have $P_\omega$-almost surely $\tau_{n_k} < m(n_k)$ for all sufficiently large $k$, which implies that for $n\in [n_{k-1}, n_k]$ and large $k$, we have $\tau_n < m(n_k) \le {n_k^{1+2\varepsilon} \over \E [\beta_{n_k} (e_1)]} \le {n^{1+3\varepsilon} \over \E [\beta_n(e_1)]}$ (the last inequality following from the estimate of $\E [\beta_n(e_1)]$ in Proposition \[p:beta-gamma\]). In view of Proposition \[p:beta-gamma\], and since $\varepsilon$ can be as small as possible, this gives the lower bound in (\[nullrec\]) of Theorem \[t:nullrec\].
To prove the upper bound, we note that $\alpha_{n,\lambda}(x) \le \beta_n(x)$ for any $\lambda\ge 0$ and any $0<|x|\le n$ (this is easily checked by induction on $|x|$). Thus, by (\[Laplace-tau\]), for any $\lambda\ge 0$, $$E_\omega\left( \ee^{- \lambda \tau_n} \right) \le {\sum_{i=1}^\deg \omega (e, e_i) \beta_n (e_i) \over \sum_{i=1}^\deg \omega (e, e_i) \beta_{n,\lambda} (e_i)} \le \sum_{i=1}^\deg {\beta_n (e_i) \over \beta_{n,\lambda} (e_i)}.$$
We now fix $r\in (1, \, {1\over \nu})$, where $\nu:= 1- {1\over \min\{ \kappa, \, 2\} }$ is defined in (\[theta\]). It is possible to choose a small $\varepsilon>0$ such that $${1\over \kappa -1} - {r\over \kappa}> 3\varepsilon \quad \hbox{if }\kappa \in (1, \, 2], \qquad 1 - {r\over 2}> 3\varepsilon \quad \hbox{if }\kappa \in (2, \, \infty].$$
Let $\lambda = \lambda(n) := n^{-r}$. By (\[toto3\]), we have $\beta_{n,n^{-r}} (e_i) \ge n^{-\varepsilon}\, \E [\beta_{n,n^{-r}} (e_i)]$ for $\P$-almost all $\omega$ and all sufficiently large $n$, which yields $$E_\omega\left( \ee^{- n^{-r} \tau_n} \right) \le n^\varepsilon \sum_{i=1}^\deg {\beta_n (e_i) \over \E [\beta_{n, n^{-r}} (e_i)]} .$$
It is easy to bound $\beta_n (e_i)$. For any given $x\in \T \backslash \{ e\}$ with $|x|\le n$, $n\mapsto \beta_n (x)$ is non-increasing (this is easily checked by induction on $|x|$). Chebyshev’s inequality, together with the Borel–Cantelli lemma (applied to a subsequence, as we did in the proof of the lower bound) and the monotonicity of $n\mapsto \beta_n(e_i)$, readily yields $\beta_n (e_i) \le n^\varepsilon \, \E [\beta_n (e_i)]$ for almost all $\omega$ and all sufficiently large $n$. As a consequence, for $\P$-almost all $\omega$ and all sufficiently large $n$, $$E_\omega\left( \ee^{- n^{-r} \tau_n} \right) \le n^{2\varepsilon} \sum_{i=1}^\deg {\E [\beta_n (e_i)] \over \E [\beta_{n, n^{-r}} (e_i)]} .$$
By Proposition \[p:beta-gamma\], this yields $E_\omega ( \ee^{- n^{-r} \tau_n} ) \le n^{-\varepsilon}$ (for $\P$-almost all $\omega$ and all sufficiently large $n$; this is where we use ${1\over \kappa -1} - {r\over \kappa}> 3\varepsilon$ if $\kappa \in (1, \, 2]$, and $1 - {r\over 2}> 3\varepsilon$ if $\kappa \in (2, \, \infty]$). In particular, for $n_k:= \lfloor k^{2/\varepsilon} \rfloor$, we have $\P$-almost surely, $E_\omega ( \sum_k \ee^{- n_k^{-r} \tau_{n_k}} ) < \infty$, which implies that, $\p$-almost surely for all sufficiently large $k$, $\tau_{n_k} \ge n_k^r$. This implies that $\p$-almost surely for all sufficiently large $n$, $\tau_n \ge {1\over 2}\, n^r$. The upper bound in (\[nullrec\]) of Theorem \[t:nullrec\] follows.$\Box$
Proposition \[p:beta-gamma\] is proved in Section \[s:beta-gamma\].
Proof of Proposition \[p:beta-gamma\] {#s:beta-gamma}
=====================================
Let $\theta \in [0,\, 1]$. Let $(Z_{n,\theta})$ be a sequence of random variables such that $Z_{1,\theta} \; \buildrel law \over = \; \sum_{i=1}^\deg A_i$, where $(A_i, \, 1\le i\le \deg)$ is distributed as $(A(x_i), \, 1\le i\le \deg)$ (for any $x\in \T$), and such that $$Z_{j+1,\theta} \; \buildrel law \over = \;
\sum_{i=1}^\deg A_i {\theta +
Z_{j,\theta}^{(i)} \over
1+ Z_{j,\theta}^{(i)} } , \qquad \forall\,
j\ge 1,
\label{ZW}$$
where $Z_{j,\theta}^{(i)}$ (for $1\le i \le \deg$) are independent copies of $Z_{j,\theta}$, and are independent of the random vector $(A_i, \, 1\le i\le \deg)$.
Then, for any given $n\ge 1$ and $\lambda\ge 0$, $$Z_{n, 1-\ee^{-2\lambda}} \; \buildrel law \over
= \; \sum_{i=1}^\deg A_i\, \beta_{n,
\lambda}(e_i) ,
\label{Z=beta}$$
provided $(A_i, \, 1\le i\le \deg)$ and $(\beta_{n, \lambda}(e_i), \, 1\le i\le \deg)$ are independent.
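The distributional recursion (\[ZW\]) can be explored numerically with a particle ("pool") approximation of the law of $Z_{j,\theta}$. The sketch below is purely illustrative (hypothetical two-point law for $A$ with $\deg\,\E(A)=1$, arbitrary pool size and number of iterations); it only exhibits the qualitative behaviour of $\E(Z_{j,\theta})$.

```python
import random

# Pool-based Monte Carlo sketch of the recursion (ZW):
#     Z_{j+1} = sum_i A_i * (theta + Z_j^(i)) / (1 + Z_j^(i)),
# with deg = 2 and a two-point law for A such that deg * E(A) = 1.
random.seed(1)
DEG, THETA, POOL = 2, 1e-4, 5000

def draw_A():
    return 0.3 if random.random() < 0.5 else 0.7

pool = [draw_A() + draw_A() for _ in range(POOL)]   # approximates the law of Z_1
means = [sum(pool) / POOL]
for _ in range(40):
    pool = [
        sum(draw_A() * (THETA + z) / (1 + z)
            for z in random.sample(pool, DEG))
        for _ in range(POOL)
    ]
    means.append(sum(pool) / POOL)
```

Starting from $\E(Z_{1,\theta})=\deg\,\E(A)=1$, the empirical means decrease with $j$, consistently with the upper bound (\[53\]) below, while staying bounded away from zero by the $\theta$ source term.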
\[p:concentration\] Assume $p={1\over \deg}$ and $\psi'(1)<0$. Let $\kappa$ be as in $(\ref{kappa})$. For all $a\in (1, \kappa) \cap (1, 2]$, we have $$\sup_{\theta \in [0,1]} \sup_{j\ge 1}
{\E [(Z_{j,\theta})^a ] \over (\E
Z_{j,\theta})^a} < \infty.$$
[*Proof of Proposition \[p:concentration\].*]{} Let $a\in (1,2]$. Conditioning on $A_1$, $\dots$, $A_\deg$, we can apply Lemma \[l:moment\] to see that $$\begin{aligned}
&&\E \left[ \left( \, \sum_{i=1}^\deg A_i
{\theta+ Z_{j,\theta}^{(i)} \over 1+
Z_{j,\theta}^{(i)} }
\right)^a \Big| A_1, \dots, A_\deg \right]
\\
&\le& \sum_{i=1}^\deg A_i^a \, \E \left[ \left( {\theta+ Z_{j,\theta}
\over 1+ Z_{j,\theta} }\right)^a \;
\right] + (\deg-1) \left[ \sum_{i=1}^\deg A_i\,
\E \left( {\theta+ Z_{j,\theta} \over 1+
Z_{j,\theta} }
\right) \right]^a
\\
&\le& \sum_{i=1}^\deg A_i^a \, \E \left[ \left( {\theta+ Z_{j,\theta}
\over 1+ Z_{j,\theta} }\right)^a \;
\right] + c_{32} \left[ \E \left( {\theta+ Z_{j,\theta} \over 1+
Z_{j,\theta} } \right) \right]^a,\end{aligned}$$
where $c_{32}$ depends on $a$, $\deg$ and the bounds on $A$ (recall that $A$ is bounded away from 0 and infinity). Taking expectations on both sides, and in view of (\[ZW\]), we obtain: $$\E[(Z_{j+1,\theta})^a] \le \deg \E(A^a) \E \left[ \left( {\theta+ Z_{j,\theta}\over
1+ Z_{j,\theta} }\right)^a \; \right] + c_{32} \left[ \E \left( {\theta+
Z_{j,\theta} \over 1+ Z_{j,\theta} } \right) \right]^a.$$
We divide by $(\E Z_{j+1,\theta})^a = [ \E({\theta+Z_{j,\theta}\over 1+
Z_{j,\theta} })]^a$ on both sides, to see that $${\E[(Z_{j+1,\theta})^a]\over (\E Z_{j+1,\theta})^a} \le \deg \E(A^a) {\E[ ({\theta+ Z_{j,\theta}
\over 1+ Z_{j,\theta} })^a] \over [\E ({\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} })]^a }
+ c_{32}.$$
Put $\xi = \theta+ Z_{j,\theta}$. By (\[RSD\]), we have $${\E[ ({\theta+Z_{j,\theta} \over 1+Z_{j,\theta} })^a] \over [\E ({\theta+Z_{j,\theta} \over 1+ Z_{j,\theta} })]^a } = {\E[ ({\xi \over 1- \theta+ \xi })^a] \over [\E ({ \xi \over 1- \theta+ \xi })]^a } \le {\E[\xi^a] \over [\E \xi ]^a } .$$
Applying Lemma \[l:moment\] to $k=2$ yields that $\E[\xi^a] = \E[( \theta+ Z_{j,\theta} )^a] \le \theta^a + \E[( Z_{j,\theta} )^a] + (\theta + \E( Z_{j,\theta} ))^a $. It follows that ${\E[ \xi^a] \over [\E \xi ]^a } \le {\E[ (Z_{j,\theta})^a] \over [\E Z_{j,\theta}]^a } +2$, which implies that for $j\ge 1$, $${\E[(Z_{j+1,\theta})^a]\over (\E Z_{j+1,\theta})^a} \le \deg \E(A^a) {\E[(Z_{j,\theta})^a]\over (\E Z_{j,\theta})^a} + (2 \deg \E(A^a)+ c_{32}).$$
Thus, if $\deg \E(A^a)<1$ (which is the case if $1<a<\kappa$), then $$\sup_{j\ge 1} {\E[ (Z_{j,\theta})^a] \over (\E Z_{j,\theta})^a} < \infty,$$
uniformly in $\theta \in [0, \, 1]$.$\Box$
We now turn to the proof of Proposition \[p:beta-gamma\]. For the sake of clarity, the proofs of (\[comp-Laplace\]), (\[E(beta):kappa>2\]) and (\[E(beta):kappa<2\]) are presented in three distinct parts.
Proof of (\[comp-Laplace\]) {#subs:beta}
---------------------------
By (\[exp\]) and (\[ZW\]), we have, for all $\theta\in [0, \, 1]$ and $j\ge 1$, $$\E \left\{ \exp\left( - t \, { Z_{j+1, \theta} \over \E (Z_{j+1, \theta})}\right) \right\} \le \E \left\{ \exp\left( - t \sum_{i=1}^\deg A_i { Z^{(i)}_{j, \theta} \over \E (Z^{(i)}_{j, \theta}) }\right) \right\}, \qquad t\ge 0.$$
Let $f_j(t) := \E \{ \exp ( - t { Z_{j, \theta} \over \E Z_{j, \theta}} )\}$ and $g_j(t):= \E (\ee^{ -t\, M_j})$ (for $j\ge 1$). We have $$f_{j+1}(t) \le \E \left( \prod_{i=1}^\deg f_j(t A_i) \right), \quad j\ge 1.$$
On the other hand, by (\[defMei1\]), $$g_{j+1}(t) = \E \left\{ \exp\left( - t \sum_{i=1}^\deg A(e_i) M^{(e_i)}_{j+1} \right) \right\} = \E \left( \prod_{i=1}^\deg g_j(t A_i) \right), \qquad j\ge 1.$$
Since $f_1(\cdot)= g_1(\cdot)$, it follows by induction on $j$ that for all $j\ge 1$, $f_j(t) \le g_j(t)$; in particular, $f_n(t) \le g_n(t)$. We take $\theta = 1- \ee^{-2\lambda}$. In view of (\[Z=beta\]), we have proved that $$\E \left\{ \exp\left( - t \sum_{i=1}^\deg A(e_i)
{\beta_{n, \lambda}(e_i) \over \E [\beta_{n,
\lambda}(e_i)] }\right) \right\} \le \E \left\{
\ee^{- t \, M_n} \right\} ,
\label{beta_n(e)}$$
which yields (\[comp-Laplace\]).$\Box$
[**Remark.**]{} Let $$\beta_{n,\lambda}(e) := {(1-\ee^{-2\lambda})+ \sum_{i=1}^\deg A(e_i) \beta_{n,\lambda}(e_i) \over 1+ \sum_{i=1}^\deg A(e_i) \beta_{n,\lambda}(e_i)}.$$
By (\[beta\_n(e)\]) and (\[exp\]), if $\E(A)= {1\over \deg}$, then for $\lambda\ge 0$, $n\ge 1$ and $t\ge 0$, $$\E \left\{ \exp\left( - t {\beta_{n, \lambda}(e) \over \E [\beta_{n, \lambda}(e)] }\right) \right\} \le \E \left\{ \ee^{- t \, M_n} \right\} .$$
Proof of (\[E(beta):kappa>2\]) {#subs:kappa>2}
---------------------------------
Assume $p={1\over \deg}$ and $\psi'(1)<0$. Since $Z_{j, \theta}$ is bounded uniformly in $j$, we have, by (\[ZW\]), for $1\le j \le n-1$, $$\begin{aligned}
\E(Z_{j+1, \theta})
&=& \E\left( {\theta+Z_{j, \theta} \over 1+Z_{j,
\theta} } \right)
\nonumber
\\
&\le& \E\left[(\theta+ Z_{j, \theta} )(1 - c_{33}\,
Z_{j, \theta} )\right]
\nonumber
\\
&\le & \theta + \E(Z_{j, \theta}) - c_{33}\,
\E\left[(Z_{j, \theta})^2\right]
\label{E(Z2)}
\\
&\le & \theta + \E(Z_{j, \theta}) - c_{33}\,
\left[ \E Z_{j, \theta} \right]^2.
\nonumber\end{aligned}$$
By Lemma \[l:abc\], we have, for any $K>0$ and uniformly in $\theta\in [0, \, {K\over n}]$, $$\label{53}
\E (Z_{n, \theta}) \le c_{34} \left( \sqrt
{\theta} + {1\over n} \right) \le {c_{35} \over
\sqrt{n}}.$$
We mention that this holds for all $\kappa \in (1, \, \infty]$. In view of (\[Z=beta\]), this yields the upper bound in (\[E(beta):kappa>2\]).
To prove the lower bound, we observe that $$\E(Z_{j+1, \theta}) \ge \E\left[(\theta+ Z_{j,
\theta} )(1 - Z_{j, \theta} )\right] = \theta+
(1-\theta) \E(Z_{j, \theta}) - \E\left[(Z_{j,
\theta})^2\right] .
\label{51}$$
If furthermore $\kappa \in (2, \infty]$, then $\E [(Z_{j, \theta})^2 ] \le c_{36}\, (\E Z_{j, \theta})^2$ (see Proposition \[p:concentration\]). Thus, for all $1\le j\le n-1$, $$\E(Z_{j+1, \theta}) \ge \theta+ (1-\theta) \E(Z_{j, \theta}) - c_{36}\, (\E Z_{j,\theta})^2 .$$
By (\[53\]), $\E (Z_{n, \theta}) \to 0$ uniformly in $\theta\in [0, \, {K\over n}]$ (for any given $K>0$). An application of (\[Z=beta\]) and Lemma \[l:abc\] readily yields the lower bound in (\[E(beta):kappa>2\]).$\Box$
Proof of (\[E(beta):kappa<2\]) {#subs:kappa<2}
---------------------------------
We assume in this part $p={1\over \deg}$, $\psi'(1)<0$ and $1<\kappa \le 2$.
Let $\varepsilon>0$ be small. Since $(Z_{j, \theta})$ is bounded, we have $\E[(Z_{j, \theta})^2] \le c_{37} \, \E [(Z_{j, \theta})^{\kappa-\varepsilon}]$, which, by Proposition \[p:concentration\], implies $$\E\left[ (Z_{j, \theta})^2 \right] \le c_{38} \,
\left( \E Z_{j, \theta} \right)^{\kappa-
\varepsilon} .
\label{c38}$$
Therefore, (\[51\]) yields that $$\E(Z_{j+1, \theta}) \ge \theta+ (1-\theta) \E(Z_{j, \theta}) - c_{38} \, (\E Z_{j, \theta})^{\kappa-\varepsilon} .$$
By (\[53\]), $\E (Z_{n, \theta}) \to 0$ uniformly in $\theta\in [0, \, {K\over n}]$ (for any given $K>0$). An application of Lemma \[l:abc\] implies that for any $K>0$, $$\E (Z_{\ell, \theta}) \ge c_{14} \left(
\theta^{1/(\kappa-\varepsilon)} + {1\over
\ell^{1/(\kappa -1 - \varepsilon)}} \right),
\qquad \forall \, \theta\in [0, \, {K\over n}],
\; \; \forall \, 1\le \ell \le n.
\label{ell}$$
The lower bound in (\[E(beta):kappa<2\]) follows from (\[Z=beta\]).
It remains to prove the upper bound. Define $$Y_{j, \theta} := {Z_{j, \theta} \over \E(Z_{j, \theta})} , \qquad 1\le j\le n.$$
We take $Z_{j-1, \theta}^{(x)}$ (for $x\in \T_1$) to be independent copies of $Z_{j-1, \theta}$, and independent of $(A(x), \; x\in \T_1)$. By (\[ZW\]), for $2\le j\le n$, $$\begin{aligned}
Y_{j, \theta}
&\; {\buildrel law \over =} \;& \sum_{x\in \T_1}
A(x) {(\theta+ Z_{j-1, \theta}^{(x)} )/ (1+
Z_{j-1, \theta}^{(x)}) \over \E
[(\theta+ Z_{j-1, \theta}^{(x)} )/ (1+ Z_{j-1,
\theta}^{(x)}) ]} \ge
\sum_{x\in \T_1}
A(x) {Z_{j-1, \theta}^{(x)} /
(1+ Z_{j-1, \theta}^{(x)}) \over \theta+ \E
[Z_{j-1, \theta}]}
\\
&=& { \E [Z_{j-1, \theta}]\over \theta+ \E
[Z_{j-1, \theta}]} \sum_{x\in \T_1}
A(x)
Y_{j-1, \theta}^{(x)} - { \E [Z_{j-1,
\theta}]\over \theta+ \E
[Z_{j-1, \theta}]} \sum_{x\in \T_1}
A(x) {(Z_{j-1, \theta}^{(x)})^2/\E(Z_{j-1,
\theta}) \over 1+Z_{j-1, \theta}^{(x)}}
\\
&\ge& \sum_{x\in \T_1}
A(x) Y_{j-1, \theta}^{(x)} -
\Delta_{j-1, \theta} \; ,\end{aligned}$$
where $$\begin{aligned}
Y_{j-1, \theta}^{(x)}
&:=&{Z_{j-1, \theta}^{(x)} \over \E(Z_{j-1,
\theta})} ,
\\
\Delta_{j-1, \theta}
&:=&{\theta\over \theta+ \E [Z_{j-1, \theta}]}
\sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} +
\sum_{x\in \T_1} A(x) {(Z_{j-1, \theta}^{(x)})^2
\over \E(Z_{j-1, \theta})} .\end{aligned}$$
By (\[c38\]), $\E[ {(Z_{j-1, \theta}^{(i)})^2 \over \E(Z_{j-1, \theta})}]\le c_{38}\, (\E Z_{j-1, \theta})^{\kappa-1-\varepsilon}$. On the other hand, by (\[ell\]), $\E(Z_{j-1, \theta}) \ge c_{14}\, \theta^{1/(\kappa-\varepsilon)}$ for $2\le j \le n$, and thus ${\theta\over \theta+ \E [Z_{j-1, \theta}]} \le c_{39}\, (\E Z_{j-1, \theta})^{\kappa-1- \varepsilon}$. As a consequence, $\E( \Delta_{j-1, \theta} ) \le c_{40}\, (\E Z_{j-1, \theta})^{\kappa-1-\varepsilon}$.
If we write $\xi \; {\buildrel st. \over \ge} \; \eta$ to denote that $\xi$ is stochastically greater than or equal to $\eta$, then we have proved that $Y_{j, \theta} \; {\buildrel st. \over \ge} \; \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} - \Delta_{j-1, \theta}$. Applying the same argument to each of $(Y_{j-1, \theta}^{(x)}, \, x\in \T_1)$, we see that, for $3\le j\le n$, $$Y_{j, \theta} \; {\buildrel st. \over \ge} \; \sum_{u\in \T_1} A(u) \sum_{v\in \T_2: \; u={\buildrel \leftarrow \over v}} A(v) Y_{j-2, \theta}^{(v)} - \left( \Delta_{j-1, \theta}+ \sum_{u\in \T_1} A(u) \Delta_{j-2, \theta}^{(u)} \right) ,$$
where $Y_{j-2, \theta}^{(v)}$ (for $v\in \T_2$) are independent copies of $Y_{j-2, \theta}$, and are independent of $(A(w), \, w\in \T_1 \cup \T_2)$, and $(\Delta_{j-2, \theta}^{(u)}, \, u\in \T_1)$ are independent of $(A(u), \, u\in \T_1)$ and are such that $\E[\Delta_{j-2, \theta}^{(u)}] \le c_{40}\, (\E Z_{j-2, \theta})^{\kappa-1-\varepsilon}$.
By induction, we arrive at: for $j>m \ge 1$, $$Y_{j, \theta} \; {\buildrel st. \over \ge}\;
\sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x
]\! ]} A(y) \right) Y_{j-m, \theta}^{(x)} -
\Lambda_{j,m,\theta},
\label{Yn>}$$
where $Y_{j-m, \theta}^{(x)}$ (for $x\in \T_m$) are independent copies of $Y_{j-m, \theta}$, and are independent of the random vector $(A(w), \, 1\le |w| \le m)$, and $\E(\Lambda_{j,m,\theta}) \le c_{40}\, \sum_{\ell=1}^m (\E Z_{j-\ell, \theta})^{\kappa-1-\varepsilon} $.
Since $\E(Z_{i, \theta}) = \E({\theta+ Z_{i-1, \theta} \over 1+ Z_{i-1, \theta}}) \ge
\E(Z_{i-1, \theta}) - \E[(Z_{i-1, \theta})^2] \ge \E(Z_{i-1, \theta}) - c_{38}\, [\E Z_{i-1, \theta}
]^{\kappa-\varepsilon}$ (by (\[c38\])), we have, for all $j\in (j_0, n]$ (with a large but fixed integer $j_0$) and $1\le \ell \le j-j_0$, $$\begin{aligned}
\E(Z_{j, \theta})
&\ge&\E(Z_{j-\ell, \theta}) \prod_{i=1}^\ell
\left\{ 1- c_{38}\, [\E Z_{j-i, \theta}
]^{\kappa-1-\varepsilon}\right\}
\\
&\ge&\E(Z_{j-\ell, \theta}) \prod_{i=1}^\ell
\left\{ 1- c_{41}\, (j-i)^{-(\kappa-1-
\varepsilon)/2}\right\} ,\end{aligned}$$
the last inequality being a consequence of (\[53\]). Thus, for $j\in (j_0, n]$ and $1\le \ell \le j^{(\kappa-1-\varepsilon)/2}$, $\E(Z_{j, \theta}) \ge c_{42}\, \E(Z_{j-\ell, \theta})$, which implies that for all $m\le j^{(\kappa-1-\varepsilon)/2}$, $\E(\Lambda_{j,m, \theta}) \le c_{43} \, m (\E Z_{j, \theta})^{\kappa-1-\varepsilon}$. By Chebyshev’s inequality, for $j\in (j_0, n]$, $m\le j^{(\kappa-1-\varepsilon)/2}$ and $r>0$, $$\P\left\{ \Lambda_{j,m, \theta} > \varepsilon
r\right\} \le {c_{43} \, m (\E Z_{j,
\theta})^{\kappa -1-\varepsilon} \over
\varepsilon r}.
\label{toto4}$$
Let us go back to (\[Yn>\]), and study the behaviour of $\sum_{x\in \T_m} ( \prod_{y\in ]\! ] e, x ]\! ]} A(y) ) Y_{j-m, \theta}^{(x)}$. Let $M^{(x)}$ (for $x\in \T_m$) be independent copies of $M_\infty$ and independent of all other random variables. Since $\E(Y_{j-m, \theta}^{(x)})= \E(M^{(x)})=1$, we have, by Fact \[f:petrov\], for any $a\in (1, \, \kappa)$, $$\begin{aligned}
&&\E \left\{ \left| \sum_{x\in \T_m} \left(
\prod_{y\in ]\! ] e, x ]\! ]} A(y) \right)
(Y_{j- m, \theta}^{(x)} - M^{(x)}) \right|^a
\right\}
\\
&\le&2 \E \left\{ \sum_{x\in \T_m} \left(
\prod_{y\in ]\! ] e, x ]\! ]} A(y)^a \right)
\, \E\left( | Y_{j-m, \theta}^{(x)} -
M^{(x)}|^a \right) \right\}.\end{aligned}$$
By Proposition \[p:concentration\] and the fact that $(M_n)$ is a martingale bounded in $L^a$, we have $\E ( | Y_{j-m, \theta}^{(x)} - M^{(x)}|^a ) \le c_{44}$. Thus, $$\begin{aligned}
\E \left\{ \left| \sum_{x\in \T_m} \left(
\prod_{y\in ]\! ] e, x ]\! ]} A(y) \right)
(Y_{j- m, \theta}^{(x)} - M^{(x)}) \right|^a
\right\}
&\le& 2c_{44} \E \left\{ \sum_{x\in \T_m}
\prod_{y\in ]\! ] e, x ]\! ]} A(y)^a \right\}
\\
&=& 2c_{44} \, \deg^m \, [\E(A^a)]^m.\end{aligned}$$
By Chebyshev’s inequality, $$\P \left\{ \left| \sum_{x\in \T_m} \left(
\prod_{y\in ]\! ] e, x ]\! ]} A(y) \right)
(Y_{j- m, \theta}^{(x)} - M^{(x)}) \right| >
\varepsilon r\right\} \le {2c_{44} \, \deg^m
[\E(A^a)]^m \over \varepsilon^a r^a}.
\label{toto6}$$
Clearly, $\sum_{x\in \T_m} (\prod_{y\in ]\! ] e, x ]\! ]} A(y) ) M^{(x)}$ is distributed as $M_\infty$. We can thus plug (\[toto6\]) and (\[toto4\]) into (\[Yn>\]), to see that for $j\in [j_0, n]$, $m\le j^{(\kappa-1-\varepsilon)/2}$ and $r>0$, $$\P \left\{ Y_{j, \theta} > (1-2\varepsilon)
r\right\} \ge \P \left\{ M_\infty > r\right\} -
{c_{43}\, m (\E Z_{j, \theta})^{\kappa-1-
\varepsilon} \over \varepsilon r} - {2c_{44} \,
\deg^m [\E(A^a)]^m \over
\varepsilon^a r^a} .
\label{Yn-lb}$$
We choose $m:= \lfloor j^\varepsilon \rfloor$. Since $a\in (1, \, \kappa)$, we have $\deg \E(A^a) <1$, so that $\deg^m [\E(A^a)]^m \le \exp( - j^{\varepsilon/2})$ for all large $j$. We choose $r= {1\over (\E Z_{j, \theta})^{1- \delta}}$, with $\delta := {4\kappa \varepsilon \over \kappa -1}$. In view of (\[M-tail\]), we obtain: for $j\in [j_0, n]$, $$\P \left\{ Y_{j, \theta} > {1-2\varepsilon\over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge c_{23} \, (\E Z_{j, \theta})^{(1- \delta) \kappa} - {c_{43}\over \varepsilon} \, j^\varepsilon\, (\E Z_{j, \theta})^{\kappa-\varepsilon-\delta} - {2c_{44} \, (\E Z_{j, \theta})^{(1- \delta)a} \over \varepsilon^a \exp(j^{\varepsilon/2})} .$$
Since $c_{14}/j^{1/(\kappa-1- \varepsilon)} \le \E(Z_{j, \theta}) \le c_{35}/j^{1/2}$ (see (\[ell\]) and (\[53\]), respectively), we can choose $\varepsilon$ sufficiently small, so that for $j\in [j_0, n]$, $$\P \left\{ Y_{j, \theta} > {1-2\varepsilon\over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge {c_{23} \over 2} \, (\E Z_{j, \theta})^{(1-\delta) \kappa}.$$
Recall that by definition, $Y_{j, \theta} = {Z_{j, \theta} \over \E(Z_{j, \theta})}$. Therefore, for $j\in [j_0, n]$, $$\E[(Z_{j, \theta})^2] \ge [\E Z_{j, \theta}]^2 \, {(1-2\varepsilon)^2\over (\E Z_{j, \theta})^{2(1- \delta)}} \P \left\{ Y_{j, \theta} > {1-2\varepsilon \over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge c_{45} \, (\E Z_{j, \theta})^{\kappa+ (2- \kappa)\delta}.$$
Of course, the inequality holds trivially for $0\le j < j_0$ (with possibly a different value of the constant $c_{45}$). Plugging this into (\[E(Z2)\]), we see that for $1\le j\le n-1$, $$\E(Z_{j+1, \theta}) \le \theta + \E(Z_{j, \theta}) - c_{46}\, (\E Z_{j, \theta})^{\kappa+ (2- \kappa)\delta} .$$
By Lemma \[l:abc\], this yields $\E(Z_{n, \theta}) \le c_{47} \, \{ \theta^{1/[\kappa+ (2- \kappa)\delta]} + n^{- 1/ [\kappa -1 + (2- \kappa)\delta]}\}$. An application of (\[Z=beta\]) implies the desired upper bound in (\[E(beta):kappa<2\]).$\Box$
[**Remark.**]{} A close inspection of our argument shows that under the assumptions $p= {1\over \deg}$ and $\psi'(1)<0$, we have, for any $1\le i \le \deg$ and uniformly in $\lambda \in [0, \, {1\over n}]$, $$\left( {\alpha_{n, \lambda}(e_i) \over \E[\alpha_{n, \lambda}(e_i)]} ,\; {\beta_{n, \lambda}(e_i) \over \E[\beta_{n, \lambda}(e_i)]} , \; {\gamma_n(e_i) \over \E[\gamma_n (e_i)]} \right) \; {\buildrel law \over \longrightarrow} \; (M_\infty, \, M_\infty, \, M_\infty),$$
where “${\buildrel law \over \longrightarrow}$" stands for convergence in distribution, and $M_\infty$ is the random variable defined in $(\ref{cvg-M})$.$\Box$
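The distributional fixed point $M_\infty$ appearing in the Remark can be explored by simulation. Below is a minimal sketch with an illustrative setup of our own choosing (binary tree, i.i.d. weights $A$ uniform on $(0,1)$, so that $\deg \E(A)=1$ and $\psi'(1)<0$; this weight law has all moments finite, whereas the paper's setting allows heavier tails): we iterate the smoothing transform $M_n \,{\buildrel law \over =}\, \sum_{i=1}^{\deg} A_i M_{n-1}^{(i)}$ on a population of samples.

```python
import random

# Simulation sketch (illustrative setup, not from the paper): the limit
# M_infty of the additive martingale on a binary tree (deg = 2) with
# i.i.d. weights A ~ Uniform(0, 1), so that deg * E[A] = 1.
# We iterate the smoothing transform M_n = sum_i A_i * M_{n-1}^(i)
# on a population of samples; E[M_n] = 1 is preserved at every step.
random.seed(0)
deg = 2

pop = [1.0] * 20000              # samples of M_0 = 1
for _ in range(15):              # 15 iterations of the transform
    pop = [sum(random.random() * random.choice(pop) for _ in range(deg))
           for _ in range(len(pop))]

mean = sum(pop) / len(pop)
print(mean)                      # fluctuates around 1
```

The empirical mean stays near $1$, consistent with $(M_n)$ being a mean-one martingale, while the empirical distribution of `pop` approximates the law of $M_\infty$.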
[**Acknowledgements**]{}
We are grateful to Philippe Carmona and Marc Yor for helpful discussions.
[99]{}
Chernoff, H. (1952). A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. [*Ann. Math. Statist.*]{} [**23**]{}, 493–507.
Duquesne, T. and Le Gall, J.-F. (2002). [*Random Trees, Lévy Processes and Spatial Branching Processes.*]{} Astérisque [**281**]{}. Société Mathématique de France, Paris.
Griffeath, D. and Liggett, T.M. (1982). Critical phenomena for Spitzer’s reversible nearest particle systems. [*Ann. Probab.*]{} [**10**]{}, 881–895.
Harris, T.E. (1963). [*The Theory of Branching Processes.*]{} Springer, Berlin.
Hoel, P., Port, S. and Stone, C. (1972). [*Introduction to Stochastic Processes.*]{} Houghton Mifflin, Boston.
Kesten, H., Kozlov, M.V. and Spitzer, F. (1975). A limit law for random walk in a random environment. [*Compositio Math.*]{} [**30**]{}, 145–168.
Le Gall, J.-F. (2005). Random trees and applications. [*Probab. Surveys*]{} [**2**]{}, 245–311.
Liggett, T.M. (1985). [*Interacting Particle Systems.*]{} Springer, New York.
Liu, Q.S. (2000). On generalized multiplicative cascades. [*Stoch. Proc. Appl.*]{} [**86**]{}, 263–286.
Liu, Q.S. (2001). Asymptotic properties and absolute continuity of laws stable by random weighted mean. [*Stoch. Proc. Appl.*]{} [**95**]{}, 83–107.
Lyons, R. and Pemantle, R. (1992). Random walk in a random environment and first-passage percolation on trees. [*Ann. Probab.*]{} [**20**]{}, 125–136.
Lyons, R. and Peres, Y. (2005+). [*Probability on Trees and Networks.*]{} (Forthcoming book) [http://mypage.iu.edu/\~rdlyons/prbtree/prbtree.html]{}
Mandelbrot, B. (1974). Multiplications aléatoires itérées et distributions invariantes par moyenne pondérée aléatoire. [*C. R. Acad. Sci. Paris*]{} [**278**]{}, 289–292.
Menshikov, M.V. and Petritis, D. (2002). On random walks in random environment on trees and their relationship with multiplicative chaos. In: [*Mathematics and Computer Science II (Versailles, 2002)*]{}, pp. 415–422. Birkhäuser, Basel.
Pemantle, R. (1995). Tree-indexed processes. [*Statist. Sci.*]{} [**10**]{}, 200–213.
Pemantle, R. and Peres, Y. (1995). Critical random walk in random environment on trees. [*Ann. Probab.*]{} [**23**]{}, 105–140.
Pemantle, R. and Peres, Y. (2005+). The critical Ising model on trees, concave recursions and nonlinear capacity. [ArXiv:math.PR/0503137.]{}
Peres, Y. (1999). Probability on trees: an introductory climb. In: [*École d’Été St-Flour 1997*]{}, Lecture Notes in Mathematics [**1717**]{}, pp. 193–280. Springer, Berlin.
Petrov, V.V. (1995). [*Limit Theorems of Probability Theory.*]{} Clarendon Press, Oxford.
Rozikov, U.A. (2001). Random walks in random environments on the Cayley tree. [*Ukrainian Math. J.*]{} [**53**]{}, 1688–1702.
Sinai, Ya.G. (1982). The limit behavior of a one-dimensional random walk in a random environment. [*Theory Probab. Appl.*]{} [**27**]{}, 247–258.
Sznitman, A.-S. (2005+). Random motions in random media. (Lecture notes of minicourse at Les Houches summer school.) [http://www.math.ethz.ch/u/sznitman/]{}
Zeitouni, O. (2004). Random walks in random environment. In: [*École d’Été St-Flour 2001*]{}, Lecture Notes in Mathematics [**1837**]{}, pp. 189–312. Springer, Berlin.
-- ------------------------------ ---------------------------------------------------
Yueyun Hu Zhan Shi
Département de Mathématiques Laboratoire de Probabilités et Modèles Aléatoires
Université Paris XIII Université Paris VI
99 avenue J-B Clément 4 place Jussieu
F-93430 Villetaneuse F-75252 Paris Cedex 05
France France
-- ------------------------------ ---------------------------------------------------
---
abstract: 'This issue of *Statistical Science* draws its inspiration from the work of James M. Robins. Jon Wellner, the Editor at the time, asked the two of us to edit a special issue that would highlight the research topics studied by Robins and the breadth and depth of Robins’ contributions. Between the two of us, we have collaborated closely with Jamie for nearly 40 years. We agreed to edit this issue because we recognized that we were among the few in a position to relate the trajectory of his research career to date.'
address:
- 'Thomas S. Richardson is Professor and Chair, Department of Statistics, University of Washington, Box 354322, Seattle, Washington 98195, USA.'
- 'Andrea Rotnitzky is Professor, Department of Economics, Universidad Torcuato Di Tella & CONICET, Av. Figueroa Alcorta 7350, Sáenz Valiente 1010, Buenos Aires, Argentina.'
author:
-
-
title: 'Causal Etiology of the Research of James M. Robins'
---
Many readers may be unfamiliar with Robins’ singular career trajectory and in particular how his early practical experience motivated many of the inferential problems with which he was subsequently involved. Robins majored in mathematics at Harvard College, but then, in the spirit of the times, left college to pursue more activist social and political goals. Several years later, Robins enrolled in Medical School at Washington University in St. Louis, graduating in 1976. His M.D. degree remains his only degree, other than his high school diploma.
After graduating, he interned in medicine at Harlem Hospital in New York. After completing the internship, Robins spent a year working as a primary care physician in a community clinic in the Roxbury neighborhood of Boston. During that year, he helped organize a vertical Service Employees International Union affiliate that included all salaried personnel, from maintenance to physicians, working at the health center. In retaliation, he was dismissed by the director of the clinic and found that he was somewhat unwelcome at the other Boston community clinics. Unable to find a job and with his unemployment insurance running out, he surprisingly was able to obtain a prestigious residency in Internal Medicine at Yale University, a testament, he says with some irony, to the enduring strength of one’s Ivy League connections.
At Yale, Robins and his college friend Mark Cullen, now head of General Medicine at Stanford Medical School, founded an occupational health clinic, with the goal of working with trade unions in promoting occupational health and safety. When testifying in workers’ compensation cases, Robins was regularly asked whether it was “more probable than not that a worker’s death or illness was *caused* by exposure to chemicals in the workplace.” Robins’ lifelong interest in causal inference began with his need to provide an answer. As the relevant scientific papers consisted of epidemiologic studies and biostatistical analyses, Robins enrolled in biostatistics and epidemiology classes at Yale. He was dismayed to learn that the one question he needed to answer was the one question excluded from formal discussion in the mainstream biostatistical literature.[^1] At the time, most biostatisticians insisted that evidence for causation could only be obtained through randomized controlled trials; since, for ethical reasons, potentially harmful chemicals could not be randomly assigned, it followed that statistics could play little role in disentangling causation from spurious correlation.
Confounding
===========
In his classes, Robins was struck by the gap between the informal, yet insightful, language of epidemiologists such as @miettinen:1981 ([-@miettinen:1981]) expressed in terms of “confounding, comparability, and bias,” and the technical language of mathematical statistics in which these terms either did not have analogs or had other meanings. Robins’ first major paper “The foundations of confounding in Epidemiology” written in 1982, though only published in [-@robins:foundation:1987], was an attempt to bridge this gap. As one example, he offered a precise mathematical definition for the informal epidemiologic concept of a “confounding variable” that has apparently stood the test of time (see @vanderweele2013, [-@vanderweele2013]). As a second example, @efron:hinkley:78 ([-@efron:hinkley:78]) had formally considered inference accurate to order $n^{-3/2}$ in variance conditional on exact or approximate ancillary statistics. Robins showed, surprisingly, that long before their paper, epidemiologists had been intuitively and informally referring to an estimator as “unbiased” just when it was asymptotically unbiased conditional on either exact or approximate ancillary statistics; furthermore, they intuitively required that the associated conditional Wald confidence interval be accurate to $O(n^{-3/2})$ in variance. As a third example, he solved the problem of constructing the tightest Wald-type intervals guaranteed to have conservative coverage for the average causal effect among the $n$ study subjects participating in a completely randomized experiment with a binary response variable; he showed that this interval can be strictly narrower than the usual binomial interval even under the Neyman null hypothesis of no average causal effect. To do so, he constructed an estimator of the variance of the empirical difference in treatment means that improved on a variance estimator earlier proposed by @neyman:sur:1923 ([-@neyman:sur:1923]).
@aronow2014 ([-@aronow2014]) have recently generalized this result in several directions including to nonbinary responses.
Time-dependent Confounding and the -formula {#sec:time-dependent}
===========================================
It was also in 1982 that Robins turned his attention to the subject that would become his grail: causal inference from complex longitudinal data with time-varying treatments, that eventually culminated in his revolutionary papers @robins:1986 ([-@robins:1986; -@robins:1987:addendum]). His interest in this topic was sparked by (i) a paper of @gilbert:1982 ([-@gilbert:1982])[^2] on the healthy worker survivor effect in occupational epidemiology, wherein the author raised a number of questions Robins answered in these papers and (ii) his medical experience of trying to optimally adjust a patient’s treatments in response to the evolution of the patient’s clinical and laboratory data.
Overview
--------
Robins career from this point on became a “quest” to solve this problem, and thereby provide methods that would address central epidemiological questions, for example, *is a given long-term exposure harmful or a treatment beneficial?* *If beneficial, what interventions, that is, treatment strategies, are optimal or near optimal?*
In the process, Robins created a “bestiary” of causal models and analytic methods.[^3] There are the basic “phyla” consisting of the g-formula, marginal structural models and structural nested models. These phyla then contain “species,” for example, structural nested failure time models, structural nested distribution models, structural nested (multiplicative additive and logistic) mean models and yet further “subspecies”: direct-effect structural nested models and optimal-regime structural nested models.
Each subsequent model in this taxa was developed to help answer particular causal questions in specific contexts that the “older siblings” were not quite up to. Thus, for example, Robins’ creation of structural nested and marginal structural models was driven by the so-called null paradox, which could lead to falsely finding a treatment effect where none existed, and was a serious nonrobustness of the estimated g-formula, his then current methodology. Similarly, his research on higher-order influence function estimators was motivated by a concern that, in the presence of confounding by continuous, high dimensional confounders, even doubly robust methods might fail to adequately control for confounding bias.
This variety also reflects Robins’ belief that the best analytic approach varies with the causal question to be answered, and, even more importantly, that confidence in one’s substantive findings only comes when multiple, nearly orthogonal, modeling strategies lead to the same conclusion.
Causally Interpreted Structured Tree Graphs {#sec:tree-graph}
-------------------------------------------
Suppose one wishes to estimate from longitudinal data the causal effect of time-varying treatment or exposure, say cigarette smoking, on a failure time outcome such as all-cause mortality. In this setting, a time-dependent confounder is a time-varying covariate (e.g., presence of emphysema) that is a predictor of both future exposure and of failure. In 1982, the standard analytic approach was to model the conditional probability (i.e., the hazard) of failure at time $t$ as a function of past exposure history using a time-dependent Cox proportional hazards model. Robins formally showed that, even when confounding by unmeasured factors and model misspecification are absent, this approach may result in effect estimates that fail to have a causal interpretation, regardless of whether or not one also adjusts for the measured time-dependent confounders in the analysis. In fact, if previous exposure also predicts the subsequent evolution of the time-dependent confounders (e.g., since smoking is a cause of emphysema, it predicts this disease) then the standard approach can find an artifactual exposure effect even under the sharp null hypothesis of no net, direct or indirect effect of exposure on the failure time of any subject.
Prior to @robins:1986 ([-@robins:1986]), although informal discussions of net, direct and indirect (i.e., mediated) effects of time varying exposures were to be found in the discussion sections of most epidemiologic papers, no formal mathematical definitions existed. To address this, @robins:1986 ([-@robins:1986]) introduced a new counterfactual model, the *finest fully randomized causally interpreted structured tree graph* (FFRCISTG)[^4] model that extended the point treatment counterfactual model of @neyman:sur:1923 ([-@neyman:sur:1923]) and @rubin:estimating:1974 ([-@rubin:estimating:1974; -@Rubi:baye:1978])[^5] to longitudinal studies with time-varying treatments, direct and indirect effects and feedback of one cause on another. Due to his lack of formal statistical training, the notation and formalisms in @robins:1986 ([-@robins:1986]) differ from those found in the mainstream literature; as a consequence the paper can be a difficult read.[^6] @richardson:robins:2013 ([-@richardson:robins:2013], Appendix C) present the FFRCISTG model using a more familiar notation.[^7]
![Causal tree graph depicting a simple scenario with treatments at two times $A_1$, $A_2$, a response $L$ measured prior to $A_2$, and a final response $Y$. Blue circles indicate evolution of the process determined by Nature; red dots indicate potential treatment choices.[]{data-label="fig:event-tree"}](505f01.eps)
We illustrate the basic ideas using a simplified example. Suppose that we obtain data from an observational or randomized study in which $n$ patients are treated at two times. Let $A_{1}$ and $A_{2}$ denote the treatments. Let $L$ be a measurement taken just prior to the second treatment and let $Y$ be a final outcome, higher values of which are desirable. To simplify matters, for now we will suppose that all of the treatments and responses are binary. As a concrete example, consider a study of HIV infected subjects with $(A_{1},L,A_{2},Y)$, respectively, being binary indicators of anti-retroviral treatment at time $1$, high CD4 count just before time $2$, anti-retroviral therapy at time $2$, and survival at time $3$ (where for simplicity we assume no deaths prior to assignment of $A_2$). There are $2^{4}=16$ possible observed data sequences for $(A_{1},L,A_{2},Y)$; these may be depicted as an event tree as in Figure \[fig:event-tree\].[^8] @robins:1986 ([-@robins:1986]) referred to such event trees as “structured tree graphs.”
We wish to assess the effect of the two treatments $(a_1, a_2)$ on $Y$. In more detail, for a given subject we suppose the existence of four potential outcomes $Y(a_{1},a_{2})$ for $a_{1},a_{2}\in\{0,1\}$ which are the outcome a patient would have if (possibly counter-to-fact) they were to receive the treatments $a_{1}$ and $a_{2}$. Then $E[Y(a_{1},a_{2})]$ is the mean outcome (e.g., the survival probability) if everyone in the population were to receive the specified level of the two treatments. The particular instance of this regime under which everyone is treated at both times, so $a_{1}=a_{2}=1$, is depicted in Figure \[fig:event-tree-reg\](a). We are interested in estimation of these four means since the regime $(a_{1},a_{2})$ that maximizes $E[Y(a_{1},a_{2})]$ is the regime a new patient exchangeable with the $n$ study subjects should follow.
There are two extreme scenarios: If in an observational study, the treatments are assigned, for example, by doctors, based on additional unmeasured predictors $U$ of $Y$ then $E[Y(a_{1},a_{2})]$ is not identified since those receiving $(a_{1},a_{2})$ within the study are not representative of the population as a whole.
At the other extreme, if the data comes from a completely randomized clinical trial (RCT) in which treatment is assigned independently at each time by the flip of a coin, then it is simple to see that the counterfactual $Y( a_{1},a_{2}) $ is independent of the treatments $ (A_{1},A_{2} ) $ and that the average potential outcomes are identified since those receiving $(a_{1},a_{2})$ in the study are a simple random sample of the whole population. Thus, $$\begin{aligned}
Y( a_{1},a_{2}) &{\protect\mathpalette{\protect\independenT}{\perp}}& \{ A_{1},A_{2}
\} , \label{eq:full-rand}
\\
E\bigl[Y(a_{1},a_{2})\bigr] &=& E[ Y\mid A_{1}
= a_{1},A_{2} = a_{2}], \label{eq:asscaus}\end{aligned}$$ where the right-hand side of (\[eq:asscaus\]) is a function of the observed data distribution. In a completely randomized experiment, association is causation: the associational quantity on the right-hand side of (\[eq:asscaus\]) equals the causal quantity on the left-hand side. Robins, however, considered an intermediate trial design in which both treatments are randomized, but the probability of receiving $A_{2}$ is dependent on both the treatment received initially ($A_{1}$) and the observed response ($L$); a scenario now termed a *sequential randomized trial*. Robins viewed his analysis as also applicable to observational data as follows. In an observational study, the role of an epidemiologist is to use subject matter knowledge to try to collect in $L$ sufficient data to eliminate confounding by unmeasured factors, and thus to have the study mimic a sequential RCT. If successful, the only difference between an actual sequential randomized trial and an observational study is that in the former the randomization probabilities $\Pr(A_{2}=1 \mid L,A_{1})$ are known by design while in the latter they must be estimated from the data.[^9] Robins viewed the sequential randomized trial as a collection of five trials in total: the original trial at $t=1$, plus a set of four randomized trials at $t=2$ nested within the original trial.[^10] Let the counterfactual $L( a_{1}) $ be the outcome $L$ when $A_{1}$ is set to $a_{1}$. Since the counterfactuals $Y(a_{1},a_{2})$ and $L( a_{1}) $ do not depend on the actual treatment received, they can be viewed, like a subject’s genetic make-up, as a fixed (possibly unobserved) characteristic of a subject and therefore independent of the randomly assigned treatment conditional on pre-randomization covariates. That is, for each $(a_{1},a_{2})$ and $l$: $$\begin{aligned}
\bigl\{ Y(a_{1},a_{2}),L(a_{1}) \bigr\} & {\protect\mathpalette{\protect\independenT}{\perp}}& A_{1}, \label{eq:ind1}
\\
Y(a_{1},a_{2})& {\protect\mathpalette{\protect\independenT}{\perp}}& A_{2} \mid
A_{1} = a_{1},\quad L = l. \label{eq:ind2}\end{aligned}$$
These independences suffice to identify the joint density $f_{Y(a_{1},a_{2}),L(a_{1})}(y,l)$ of $(Y(a_{1},a_{2}), L(a_{1}))$ from the distribution of the factual variables by the “g-computation algorithm formula” (or simply *g-formula*) density $$f_{a_{1},a_{2}}^{\ast}(y,l)\equiv f(y \mid a_{1},l,a_{2})f(l
\mid a_{1})$$ provided the conditional probabilities on the right-hand side are well-defined (@robins:1986, [-@robins:1986], page 1423). Note that $f_{a_{1},a_{2}}^{\ast}(y,l)$ is obtained from the joint density of the factuals by removing the treatment terms $f(a_{2} \mid a_{1},l)f(a_{1})$. This is in line with the intuition that $A_{1}$ and $A_{2}$ cease to be random since, under the regime, they are set by intervention to constants $a_{1}$ and $a_{2}$. The g-formula was later referred to as the “manipulated density” by @cps93 ([-@cps93]) and the “truncated factorization” by @pearl:2000 ([-@pearl:2000]).
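To make the g-formula concrete, here is a small simulation sketch (the data-generating probabilities are our own illustrative choices, not taken from the text): a sequential randomized trial with an unmeasured common cause $H$ of $L$ and $Y$, in which the g-formula $\sum_{l} f(y \mid a_{1},l,a_{2}) f(l \mid a_{1})$ recovers $E[Y(a_{1},a_{2})]$, while the unadjusted conditional mean $E[Y \mid A_{1}=a_{1}, A_{2}=a_{2}]$ does not.

```python
import random

# Simulation sketch of the g-formula; all probabilities below are our
# own illustrative choices, not taken from the text.  H is an unmeasured
# common cause of L and Y; A1 is randomized; A2 is randomized given
# (A1, L), as in a sequential randomized trial.
random.seed(1)

def bern(p):
    return 1 if random.random() < p else 0

def one_trial():
    h = bern(0.5)
    a1 = bern(0.5)
    l = bern(0.2 + 0.5 * h + 0.2 * a1)
    a2 = bern(0.3 + 0.4 * l)
    y = bern(0.1 + 0.3 * h + 0.2 * a1 + 0.3 * a2)  # structural eqn for Y
    return a1, l, a2, y

data = [one_trial() for _ in range(200000)]

def mean_y(keep):
    ys = [y for (a1, l, a2, y) in data if keep(a1, l, a2)]
    return sum(ys) / len(ys)

def p_l(l0, a1_0):
    ls = [l for (a1, l, a2, y) in data if a1 == a1_0]
    return sum(1 for l in ls if l == l0) / len(ls)

# g-formula estimate of E[Y(1,1)]: sum_l E[Y|A1=1,L=l,A2=1] * P(L=l|A1=1)
g = sum(mean_y(lambda a1, l, a2, l0=l0: (a1, l, a2) == (1, l0, 1)) * p_l(l0, 1)
        for l0 in (0, 1))
naive = mean_y(lambda a1, l, a2: (a1, a2) == (1, 1))  # no adjustment for L
truth = 0.1 + 0.3 * 0.5 + 0.2 + 0.3                   # E[Y(1,1)] = 0.75
print(g, naive, truth)   # g is close to truth; naive differs
```

The identifying conditions hold here because $A_{2}$ depends only on the observed $(A_{1}, L)$; if $A_{2}$ were also allowed to depend on $H$, the g-formula estimate would be biased as well.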
@robins:1987:addendum ([-@robins:1987:addendum]) showed that under the weaker condition that replaces (\[eq:ind1\]) and (\[eq:ind2\]) with $$\begin{aligned}
\label{eq:statrand} Y(a_{1},a_{2}) &{\protect\mathpalette{\protect\independenT}{\perp}}& A_{1}
\quad\hbox{and}
\nonumber
\\[-8pt]
\\[-8pt]
Y(a_{1},a_{2}) &{\protect\mathpalette{\protect\independenT}{\perp}}& A_{2} \mid A_{1}
= a_{1}, \quad L = l,
\nonumber\end{aligned}$$ the marginal density of $Y(a_{1},a_{2})$ is still identified by $$\label{eq:g-formula-for-y} f_{a_{1},a_{2}}^{\ast}(y)=\sum
_{l}f(y \mid a_{1},l,a_{2})f(l \mid
a_{1}),$$ the marginal under $f_{a_{1},a_{2}}^{\ast}(y,l)$.[^11] Robins called (\[eq:statrand\]) *randomization w.r.t. $Y$*.[^12] Furthermore, he provided substantive examples of observational studies in which only the weaker assumption would be expected to hold. These studies are much easier to describe in terms of graphical representations of causal systems, namely Directed Acyclic Graphs and Single World Intervention Graphs, neither of which existed when ([-@robins:1987:addendum]) was written.
Causal DAGs and Single World Intervention Graphs (SWIGs) {#sec:dags}
--------------------------------------------------------
Causal DAGs were first introduced in the seminal work of @cps93 ([-@cps93]); the theory was subsequently developed and extended by @pearl:biom ([-@pearl:biom; -@pearl:2000]) among others.
A causal DAG with random variables $V_{1},\ldots,V_{M}$ as nodes is a graph in which (1) the lack of an arrow from node $V_{j}$ to $V_{m}$ can be interpreted as the absence of a direct causal effect of $V_{j}$ on $V_{m}$ (relative to the other variables on the graph), (2) all common causes, even if unmeasured, of any pair of variables on the graph are themselves on the graph, and (3) the Causal Markov Assumption (CMA) holds. The CMA links the causal structure represented by the Directed Acyclic Graph (DAG) to the statistical data obtained in a study. It states that the distribution of the factual variables factors according to the DAG. A distribution factors according to the DAG if nondescendants of a given variable $V_{j}$ are independent of $V_{j}$ conditional on $\hbox{pa}_{j}$, the parents of $V_{j}$. The CMA is mathematically equivalent to the statement that the density $f(v_{1},\ldots,v_{M})$ of the variables on the causal DAG $\mathcal{G}$ satisfies the Markov factorization $$\label{eq:dag-factor} f(v_{1},\ldots,v_{M})=\prod
_{j=1}^{M}f(v_{j}\mid\mathrm{pa}_{j}).$$ A graphical criterion, called d-separation ([-@pearl:1988]), characterizes all the marginal and conditional independences that hold in every distribution obeying the Markov factorization (\[eq:dag-factor\]).
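A toy numerical check of this connection (our own example, not from the text): for the chain DAG $A \rightarrow L \rightarrow Y$, the Markov factorization $f(a)\,f(l \mid a)\,f(y \mid l)$ forces the d-separation independence of $A$ and $Y$ given $L$, which can be verified by direct enumeration of the joint distribution.

```python
from itertools import product

# Toy check (our own numbers): joint distribution for the chain A -> L -> Y
# built from the Markov factorization f(a) f(l|a) f(y|l).
pA = [0.6, 0.4]
pL_given_A = [[0.7, 0.3], [0.2, 0.8]]   # pL_given_A[a][l]
pY_given_L = [[0.9, 0.1], [0.4, 0.6]]   # pY_given_L[l][y]

joint = {(a, l, y): pA[a] * pL_given_A[a][l] * pY_given_L[l][y]
         for a, l, y in product((0, 1), repeat=3)}
assert abs(sum(joint.values()) - 1.0) < 1e-12   # a proper distribution

def cond_p_y(a, l):
    """P(Y = 1 | A = a, L = l) computed from the joint by conditioning."""
    den = joint[(a, l, 0)] + joint[(a, l, 1)]
    return joint[(a, l, 1)] / den

# d-separation (A blocked from Y by L) predicts these pairs agree exactly:
for l in (0, 1):
    print(l, cond_p_y(0, l), cond_p_y(1, l))
```

Conversely, any conditional independence *not* licensed by d-separation (here, marginal independence of $A$ and $Y$) will generally fail for such a factorization.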
Causal DAGs may also be used to represent the joint distribution of the observed data under the counterfactual FFRCISTG model of @robins:1986 ([-@robins:1986]). This follows because an FFRCISTG model over the variables $\{V_{1},\ldots,V_{M}\}$ induces a distribution that factors as (\[eq:dag-factor\]). Figure \[fig:seq-rand\](a) shows a causal Directed Acyclic Graph (DAG) corresponding to the sequentially randomized experiment described above: vertex $H$ represents an unmeasured common cause (e.g., immune function) of CD4 count $L$ and survival $Y$. Randomization of treatment implies $A_{1}$ has no parents and $A_{2}$ has only the observed variables $A_{1}$ and $L$ as parents.
Single-World Intervention Graphs (SWIGs), introduced in ([-@richardson:robins:2013]), provide a simple way to derive the counterfactual independence relations implied by an FFRCISTG model. SWIGs were designed to unify the graphical and potential outcome approaches to causality. The nodes on a SWIG are the counterfactual random variables associated with a specific hypothetical intervention on the treatment variables. The SWIG in Figure \[fig:seq-rand\](b) is derived from the causal DAG in Figure \[fig:seq-rand\](a) corresponding to a sequentially randomized experiment. The SWIG represents the counterfactual world in which $A_{1}$ and $A_{2}$ have been set to $(a_{1},a_{2})$, respectively. @richardson:robins:2013 ([-@richardson:robins:2013]) show that under the (naturally associated) FFRCISTG model the distribution of the counterfactual variables on the SWIG factors according to the graph. Applying Pearl’s d-separation criterion to the SWIG we obtain the independences (\[eq:ind1\]) and (\[eq:ind2\]).[^13]
@robins:1987:addendum ([-@robins:1987:addendum]), in one of the aforementioned substantive examples, described an observational study of the effect of formaldehyde exposure on the mortality of rubber workers, which can be represented by the causal graph in Figure \[fig:seq-rand-variant\](a). (This graph cannot represent a sequential RCT because the treatment variable $A_{1}$ and the response $L$ have an unmeasured common cause.) Follow-up begins at the time of hire, time $1$ on the graph. The vertices $H_{1}$, $A_{1}$, $H_{2}$, $L$, $A_{2}$, $Y$ are indicators of sensitivity to eye irritants, formaldehyde exposure at time $1$, lung cancer, current employment, formaldehyde exposure at time $2$ and survival. Data on eye sensitivity and lung cancer were not collected. Formaldehyde is a known eye irritant. The presence of an arrow from $H_{1}$ to $A_{1}$ but not from $H_{1}$ to $A_{2}$ reflects the fact that subjects who believe their eyes to be sensitive to formaldehyde are given the discretion to choose a job without formaldehyde exposure at the time of hire but not later. The arrow from $H_{1}$ to $L$ reflects the fact that eye sensitivity causes some subjects to leave employment. The arrows from $H_{2}$ to $L$ and $Y$ reflect the fact that lung cancer causes both loss of employment and death. The independence of $H_{1}$ and $H_{2}$ reflects the fact that eye sensitivity is unrelated to the risk of lung cancer.
From the SWIG in Figure \[fig:seq-rand-variant\](b), we can see that (\[eq:statrand\]) holds so we have randomization with respect to $Y$ but $L( a_{1}) $ is not independent of $A_{1}$. It follows that the g-formula $f_{a_{1},a_{2}}^{\ast}(y)$ equals the density of $Y(a_{1},a_{2})$ even though (i) the distribution of $L( a_{1}) $ is not identified and (ii) neither of the individual terms $f(l \mid a_{1})$ and $f(y \mid a_{1},l,a_{2})$ occurring in the g-formula has a causal interpretation.[^14]
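The identification claim can be checked numerically by exact enumeration. The sketch below (all probability tables hypothetical) encodes a law in which $A_{1}$ is randomized, $A_{2}$ depends only on the observed past $(A_{1},L)$, and an unmeasured $H$ affects both $L$ and $Y$; the g-formula, assembled from the observable factors $f(l \mid a_{1})$ and $E[Y \mid a_{1},l,a_{2}]$, reproduces $E[Y(a_{1},a_{2})]$ exactly.

```python
from itertools import product

# Hypothetical binary sequentially randomized law: H is an unmeasured common
# cause of L and Y; A1 is randomized; A2 depends only on the observed (A1, L).
pH = {0: 0.5, 1: 0.5}
def pL(a1, h):         # P(L=1 | do(A1=a1), H=h)
    return 0.2 + 0.3 * a1 + 0.3 * h
def pY(a1, a2, l, h):  # P(Y=1 | do(A1=a1, A2=a2), L=l, H=h)
    return 0.05 + 0.1 * a1 + 0.25 * a2 + 0.2 * l + 0.3 * h

def truth(a1, a2):     # E[Y(a1, a2)] computed directly from the structural model
    return sum(pH[h] * (pL(a1, h) * pY(a1, a2, 1, h)
                        + (1 - pL(a1, h)) * pY(a1, a2, 0, h))
               for h in (0, 1))

def g_formula(a1, a2):  # sum_l E[Y | a1, l, a2] f(l | a1)
    total = 0.0
    for l in (0, 1):
        # f(l | a1); and E[Y | a1, l, a2], where a2 drops out of the
        # conditional law of H given the observed past since A2 dep. on (A1, L)
        f_l = sum(pH[h] * (pL(a1, h) if l else 1 - pL(a1, h)) for h in (0, 1))
        e_y = sum(pH[h] * (pL(a1, h) if l else 1 - pL(a1, h)) * pY(a1, a2, l, h)
                  for h in (0, 1)) / f_l
        total += e_y * f_l
    return total

for a1, a2 in product((0, 1), repeat=2):
    assert abs(g_formula(a1, a2) - truth(a1, a2)) < 1e-12
```

Here $H$ is used only to construct the observable conditionals, as an analyst would estimate them directly from data.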
Subsequently, @tian02general ([-@tian02general]) developed a graphical algorithm for nonparametric identification that is “complete” in the sense that if the algorithm fails to derive an identifying formula, then the causal quantity is not identified (@shpitser06id, [-@shpitser06id]; @huang06do, [-@huang06do]). This algorithm strictly extends the set of causal identification results obtained by Robins for static regimes.
Dynamic Regimes {#sec:dynamic-regimes}
---------------
The “g” in “g-formula” and elsewhere in Robins’ work refers to generalized treatment regimes $g$. The set $\mathbb{G}$ of all such regimes includes *dynamic* regimes in which a subject’s treatment at time $2$ depends on the response $L$ to the treatment at time $1$. An example of a dynamic regime is the regime in which all subjects receive anti-retroviral treatment at time $1$, but continue to receive treatment at time $2$ only if their CD4 count at time $2$ is low, indicating that they have not yet responded to anti-retroviral treatment. In our study with no baseline covariates and $A_{1}$ and $A_{2}$ binary, a dynamic regime $g$ can be written as $g= (a_{1},g_{2}( l) ) $ where the function $g_{2}(l)$ specifies the treatment to be given at time $2$. The dynamic regime above has $(a_{1} = 1,g_{2}(l) = 1-l)$ and is highlighted in Figure \[fig:event-tree-reg\]. If $L$ is binary, then $\mathbb{G}$ consists of $8$ regimes comprised of the $4$ earlier static regimes $ (a_{1},a_{2} ) $ and $4$ *dynamic* regimes. The *g-formula* density associated with a regime $g= ( a_{1},g_{2}(l) ) $ is $$f_{g}^{\ast}(y,l)\equiv f( l \mid a_{1})f\bigl(y \mid
A_{1}=a_{1},L=l,A_{2}=g_{2}( l)
\bigr).$$ Letting $Y(g)$ be a subject’s counterfactual outcome under regime $g$, @robins:1987:addendum ([-@robins:1987:addendum]) proves that if both of the following hold: $$\begin{aligned}
\label{eq:indg} Y(g)& {\protect\mathpalette{\protect\independenT}{\perp}}& A_{1},
\nonumber
\\[-8pt]
\\[-8pt]
Y(g)& {\protect\mathpalette{\protect\independenT}{\perp}}& A_{2} \mid A_{1} = a_{1}, \quad L =
l
\nonumber\end{aligned}$$ then $f_{Y(g)}(y)$ is identified by the g-formula density for $Y$: $$\begin{aligned}
f_{g}^{\ast}(y) &=&\sum_{l}f_{g}^{\ast}(y,l)
\\
&=&\sum_{l}f\bigl(y \mid A_{1}=a_{1},L=l,A_{2}=g_{2}(l) \bigr)
\\
&&\hspace*{15pt}{} \cdot f(l \mid a_{1}).\end{aligned}$$ @robins:1987:addendum ([-@robins:1987:addendum]) refers to (\[eq:indg\]) as the assumption that regime $g$ *is randomized with respect to $Y$*. Given a causal DAG, Dynamic SWIGs (dSWIGS) can be used to check whether (\[eq:indg\]) holds. @tian:dynamic:2008 ([-@tian:dynamic:2008]) gives a complete graphical algorithm for identification of the effect of dynamic regimes based on DAGs.

Independences (\[eq:ind1\]) and (\[eq:ind2\]) imply that (\[eq:indg\]) is true for all $g\in\mathbb{G}$. For a drug treatment, for which, say, higher outcome values are better, the optimal regime $g_{\mathrm{opt}}$ maximizing $E[ Y(g)] $ over $g\in\mathbb{G}$ is almost always a dynamic regime, as treatment must be discontinued when toxicity, a component of $L$, develops.
@robins:iv:1989 ([-@robins:iv:1989; -@robins:1986], page 1423) used the g-notation $f(y \mid g)$ as a shorthand for $f_{Y(g)}(y)$ in order to emphasize that this was the density of $Y$ *had intervention $g$ been applied to the population*. In the special case of static regimes $ ( a_{1},a_{2} ) $, he wrote $f(y \mid g=(a_{1},a_{2}))$.[^15]
Statistical Limitations of the Estimated g-Formulae
---------------------------------------------------
Consider a sequentially randomized experiment. In this context, the randomization probabilities $f(a_{1})$ and $f(a_{2} \mid a_{1},l)$ are known by design; however, the densities $f(y \mid a_{1},a_{2},l)$ and $f(l \mid a_{1})$ are not known and, therefore, they must be replaced by estimates $\widehat{f}(y \mid a_{1},a_{2},l)$ and $\widehat{f}(l \mid a_{1})$ in the g-formula. If the sample size is moderate and $l$ is high dimensional, these estimates must come from fitting dimension-reducing models. Model misspecification will then lead to biased estimators of the mean of $Y(a_{1},a_{2})$. @robins:1986 ([-@robins:1986]) and @robins97estimation ([-@robins97estimation]) described a serious nonrobustness of the g-formula: the so-called “null paradox.” In biomedical trials, it is frequently of interest to consider the possibility that the sharp causal null hypothesis of no effect of either $A_{1}$ or $A_{2}$ on $Y$ holds. Under this null, the causal DAG generating the data is as in Figure \[fig:seq-rand\] except without the arrows from $A_{1}$, $A_{2}$ and $L$ into $Y$.[^16] Then, under this null, although $f_{a_{1},a_{2}}^{\ast}(y)=\sum_{l}f(y \mid a_{1},l,a_{2})f(l \mid a_{1})$ does not depend on $ ( a_{1},a_{2} )$, both $f(y \mid a_{1},l,a_{2})$ and $f(l \mid a_{1})$ will, in general, depend on $a_{1}$ (as may be seen via d-connection).[^17] In general, if $L$ has discrete components, it is not possible for standard nonsaturated parametric models (e.g., logistic regression models) for both $f(y\mid a_{1},a_{2},l)$ and $f(l \mid a_{1})$ to be correctly specified, and thus depend on $a_{1}$, and yet for $f_{a_{1},a_{2}}^{\ast}(y)$ not to depend on $a_{1}$.[^18] As a consequence, inference based on the estimated g-formula must result in the sharp null hypothesis being falsely rejected with probability going to $1$ as the trial size increases, even when it is true.
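The null paradox can be exhibited by exact calculation. In the sketch below (hypothetical probability tables), the sharp null holds: $Y$ depends only on the unmeasured $H$. Each factor of the g-formula nevertheless depends on $a_{1}$, while the g-formula itself does not; a pair of standard nonsaturated parametric models for the two factors cannot, in general, reproduce this cancellation.

```python
# Sharp null: Y depends only on the unmeasured H; L depends on (A1, H).
# All probability tables are hypothetical.
pH = 0.5
def pL(a1, h): return 0.2 + 0.3 * a1 + 0.4 * h   # P(L=1 | a1, h)
def pY(h):     return 0.3 + 0.4 * h              # P(Y=1 | h): no A1, A2, L effect

def p_h1_given(a1, l):     # P(H=1 | a1, l) depends on a1, since L has parents A1, H
    num = pH * (pL(a1, 1) if l else 1 - pL(a1, 1))
    den = num + (1 - pH) * (pL(a1, 0) if l else 1 - pL(a1, 0))
    return num / den

def f_y1_given(a1, l):     # f(Y=1 | a1, l, a2); a2 drops out under the null
    p1 = p_h1_given(a1, l)
    return p1 * pY(1) + (1 - p1) * pY(0)

def f_l1_given(a1):        # f(L=1 | a1)
    return pH * pL(a1, 1) + (1 - pH) * pL(a1, 0)

def g_formula_y1(a1):      # sum_l f(Y=1 | a1, l, a2) f(l | a1)
    fl1 = f_l1_given(a1)
    return f_y1_given(a1, 1) * fl1 + f_y1_given(a1, 0) * (1 - fl1)

# both factors depend on a1 ...
assert abs(f_y1_given(1, 1) - f_y1_given(0, 1)) > 1e-3
assert abs(f_l1_given(1) - f_l1_given(0)) > 1e-3
# ... but the g-formula does not: it returns P(Y=1) = 0.5 in both arms
assert abs(g_formula_y1(1) - g_formula_y1(0)) < 1e-12
```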
Structural Nested Models {#sec:snm}
------------------------
To overcome the null paradox, @robins:iv:1989 ([-@robins:iv:1989]) and @robins:pneumocystis:carinii:1992 ([-@robins:pneumocystis:carinii:1992]) introduced semiparametric structural nested distribution models (SNDMs) for continuous outcomes $Y$ and structural nested failure time models (SNFTMs) for time-to-event outcomes. See @robins:snftm ([-@robins:longdata; -@robins:snftm]) for additional details.
@robins:1986 ([-@robins:1986], Section 6) defined the *$g$-null hypothesis* as $$\begin{aligned}
\label{g-null} && H_{0}\dvtx \mbox{the distribution of }Y(g)
\nonumber
\\[-8pt]
\\[-8pt]
&& \hspace*{20pt} \mbox{is the same for all }g\in\mathbb{G}.
\nonumber\end{aligned}$$ This hypothesis is implied by the sharp null hypothesis of no effect of $A_{1}$ or $A_{2}$ on any subject’s $Y$. If (\[eq:indg\]) holds for all $g\in\mathbb{G}$, then the $g$-null hypothesis is equivalent to any one of the following assertions:
(i) $f_{g}^{\ast}(y)$ equals the factual density $f( y) $ for all $g\in\mathbb{G}$;
(ii) $Y{\protect\mathpalette{\protect\independenT}{\perp}}A_1$ and $Y{\protect\mathpalette{\protect\independenT}{\perp}}A_2 \mid L,A_1$;
(iii) $f_{a_{1},a_{2}}^{\ast}(y)$ does not depend on $ (a_{1},a_{2} ) $ and $Y{\protect\mathpalette{\protect\independenT}{\perp}}A_{2} \mid L,A_1$;
see @robins:1986 ([-@robins:1986], Section 6). In addition, any one of these assertions exhausts all restrictions on the observed data distribution implied by the sharp null hypothesis.
Robins’ goal was to construct a causal model indexed by a parameter $\psi^{\ast}$ such that in a sequentially randomized trial (i) $\psi^{\ast}=0$ if and only if the $g$-null hypothesis (\[g-null\]) was true and (ii) if known, one could use the randomization probabilities to both construct an unbiased estimating function for $\psi^{\ast}$ and to construct tests of $\psi^{\ast}=0$ that were guaranteed (asymptotically) to reject under the null at the nominal level. The SNDMs and SNFTMs accomplish this goal for continuous and failure time outcomes $Y$. @robins:iv:1989 ([-@robins:iv:1989]) and @robins:cis:correcting:1994 ([-@robins:cis:correcting:1994]) also constructed additive and multiplicative structural nested mean models (SNMMs) which satisfied the above properties except with the hypothesis replaced by the *$g$-null mean hypothesis*: $$\label{g-null-mean} H_{0}\dvtx E\bigl[Y(g)\bigr]=E[Y]\quad\mbox{for all }g
\in\mathbb{G}.$$ As an example, we consider an additive structural nested mean model. Define $$\begin{aligned}
&& \gamma(a_{1},l,a_{2})
\\
&&\quad= E\bigl[ Y(a_{1},a_{2})-Y(a_{1},0) \mid
L=l, A_{1} = a_{1},
\\
&&\hspace*{168pt} A_{2} = a_{2}\bigr]\end{aligned}$$ and $$\gamma(a_{1})=E\bigl[ Y(a_{1},0)-Y(0,0) \mid A_{1}
= a_{1}\bigr] .$$ Note $\gamma(a_{1},l,a_{2})$ is the effect of the last blip of treatment $a_{2}$ at time $2$ among subjects with observed history $ (a_{1},l,a_{2} ) $, while $\gamma( a_{1})$ is the effect of the last blip of treatment $a_{1}$ at time $1$ among subjects with history $a_{1}$. An additive SNMM specifies parametric models $\gamma(a_{1},l,a_{2};\psi_{2})$ and $\gamma(a_{1};\psi_{1})$ for these blip functions with $\gamma(a_{1};0)=\gamma(a_{1},l,a_{2};0)=0$. Under the independence assumptions (\[eq:indg\]), $H_{2}(\psi_{2})\, d(L,A_1) \{ A_{2}-E[ A_{2} \mid L,A_{1}] \} $ and $H_{1}( \psi) \{ A_{1}-E[ A_{1}] \} $ are unbiased estimating functions for the true $\psi^{\ast}$, where $H_{2}( \psi_{2}) =Y-\gamma(A_{1},L,A_{2};\psi_{2})$, $H_{1}(\psi)=H_{2}( \psi_{2}) -\gamma(A_{1};\psi_{1})$, and $d(L,A_1)$ is a user-supplied function of the same dimension as $\psi_2$. Under the $g$-null mean hypothesis (\[g-null-mean\]), the SNMM is guaranteed to be correctly specified with $\psi^{\ast}=0$. Thus, these estimating functions, when evaluated at $\psi^{\ast}=0$, can be used in the construction of an asymptotically $\alpha$-level test of the $g$-null mean hypothesis when $f(a_{1})$ and $f(a_{2} \mid a_{1},l)$ are known (or are consistently estimated).[^19] When $L$ is a high-dimensional vector, the parametric blip models may well be misspecified when the $g$-null mean hypothesis is false. However, because the functions $\gamma(a_{1},l,a_{2})$ and $\gamma(a_{1})$ are nonparametrically identified under assumptions (\[eq:indg\]), one can construct consistent tests of the correctness of the blip models $\gamma(a_{1},l,a_{2};\psi_{2})$ and $\gamma(a_{1};\psi_{1})$. Furthermore, one can also estimate the blip functions using cross-validation ([-@robins:optimal:2004]) and/or flexible machine learning methods in lieu of a prespecified parametric model ([-@van2011targeted]).
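To illustrate g-estimation, the Monte Carlo sketch below simulates a hypothetical sequentially randomized law whose blip functions are truly linear, $\gamma(a_{1},l,a_{2};\psi_{2})=\psi_{2}a_{2}$ and $\gamma(a_{1};\psi_{1})=\psi_{1}a_{1}$, and solves the two estimating equations (taking $d\equiv1$ and using the known treatment probabilities).

```python
import numpy as np

# Hypothetical generating law with true blips (psi1, psi2) = (1.0, -0.5);
# H is an unmeasured common cause of L and Y, so an outcome regression on
# (A1, L, A2) would be confounded, but g-estimation needs only the known
# randomization probabilities.
rng = np.random.default_rng(0)
n, psi1, psi2 = 200_000, 1.0, -0.5

H = rng.binomial(1, 0.5, n)
A1 = rng.binomial(1, 0.5, n)                      # randomized, P(A1=1) = 1/2
L = rng.binomial(1, 0.2 + 0.3 * A1 + 0.4 * H)
e2 = 0.3 + 0.2 * A1 + 0.3 * L                     # known P(A2=1 | A1, L)
A2 = rng.binomial(1, e2)
Y = 0.5 + psi1 * A1 + psi2 * A2 + H + rng.normal(0, 1, n)

# Solve P_n[(Y - psi2*A2)(A2 - E[A2 | L, A1])] = 0 for psi2 ...
psi2_hat = np.sum(Y * (A2 - e2)) / np.sum(A2 * (A2 - e2))
# ... then P_n[(H2(psi2) - psi1*A1)(A1 - E[A1])] = 0 for psi1
H2 = Y - psi2_hat * A2
psi1_hat = np.sum(H2 * (A1 - 0.5)) / np.sum(A1 * (A1 - 0.5))
```

Both equations are linear in the unknown and hence solvable in closed form; with a higher-dimensional $\psi_{2}$ one would solve the corresponding linear system for a chosen $d(L,A_{1})$.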
A recent modification of a multiplicative SNMM, the structural nested cumulative failure time model, designed for censored time to event outcomes has computational advantages compared to a SNFTM, because, in contrast to a SNFTM, parameters are estimated using an unbiased estimating function that is differentiable in the model parameters; see @picciotto2012structural ([-@picciotto2012structural]).
@robins:optimal:2004 ([-@robins:optimal:2004]) also introduced optimal-regime SNMMs drawing on the seminal work of @Murp:opti:2003 ([-@Murp:opti:2003]) on semiparametric methods for the estimation of optimal treatment strategies. Optimal-regime SNMM estimation, called A-learning in computer science, can be viewed as a semiparametric implementation of dynamic programming (@bellman:1957, [-@bellman:1957]).[^20] Optimal-regime SNMMs differ from standard SNMMs only in that $\gamma(a_{1})$ is redefined to be $$\begin{aligned}
\gamma(a_{1}) &=& E\bigl[Y\bigl(a_{1},g_{2,\mathrm{opt}}
\bigl(a_{1},L(a_{1})\bigr)\bigr)
\\
&&\hspace*{11pt}{} - Y\bigl(0,g_{2,\mathrm{opt}}\bigl(0,L(0)\bigr)\bigr)
\mid A_{1} =a_{1}\bigr],\end{aligned}$$ where $g_{2,\mathrm{opt}}(a_{1},l)=\arg\max_{a_{2}}\gamma(a_{1},l,a_{2})$ is the optimal treatment at time $2$ given past history $ ( a_{1},l ) $. The overall optimal treatment strategy $g_{\mathrm{opt}}$ is then $ (a_{1,\mathrm{opt}},g_{2,\mathrm{opt}}( a_{1},l) ) $ where $a_{1,\mathrm{opt}}=\arg\max_{a_{1}}\gamma(a_{1})$. More on the estimation of optimal treatment regimes can be found in @schulte:2014 ([-@schulte:2014]) in this volume.
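Given a fitted time-2 blip function, the optimal time-2 rule is obtained pointwise. Below is a minimal sketch with a hypothetical blip in the spirit of the earlier CD4 example, where treatment at time $2$ helps only when the intermediate response $L$ is low:

```python
# Hypothetical fitted time-2 blip gamma2(a1, l, a2): the effect of a final
# blip of treatment a2 given history (a1, l); positive only when l = 0.
def gamma2(a1, l, a2):
    return a2 * (0.4 - 0.6 * l + 0.1 * a1)

def g2_opt(a1, l):
    # backward-induction step: choose a2 maximizing the time-2 blip
    return max((0, 1), key=lambda a2: gamma2(a1, l, a2))

regime = {(a1, l): g2_opt(a1, l) for a1 in (0, 1) for l in (0, 1)}
# under this blip, the optimal rule treats at time 2 exactly when l = 0
```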
Instrumental Variables and Bounds for the Average Treatment Effect
------------------------------------------------------------------
@robins:iv:1989 ([-@robins:iv:1989; -@robins1993analytic]) also noted that structural nested models can be used to estimate treatment effects when assumptions (\[eq:indg\]) do not hold but data are available on a time dependent instrumental variable. As an example, patients sometimes fail to fill their prescriptions and thus do not comply with their prescribed treatment. In that case, we can take $A_{j}=( A_{j}^{p},A_{j}^{d}) $ for each time $j$, where $A_{j}^{p}$ denotes the treatment *prescribed* and $A_{j}^{d}$ denotes the *dose* of treatment actually received at time $j$. Robins defined $A_{j}^{p}$ to be *an instrumental variable* if (\[eq:indg\]) still holds after replacing $A_{j}$ by $A_{j}^{p}$ and for all subjects $Y( a_{1},a_{2})$ depends on $a_{j}=( a_{j}^{p},a_{j}^{d}) $ only through the actual dose $a_{j}^{d}$. Robins noted that unlike the case of full compliance (i.e., $A_{j}^{p}=A_{j}^{d}$ with probability $1)$ discussed earlier, the treatment effect functions $\gamma$ are not nonparametrically identified. Consequently, identification can only be achieved by correctly specifying (sufficiently restrictive) parametric models for $\gamma$.
If we are unwilling to rely on such parametric assumptions, then the observed data distribution only implies bounds for the $\gamma$’s. In particular, in the setting of a point treatment randomized trial with noncompliance and the instrument $A_{1}^{p}$ being the assigned treatment, @robins:iv:1989 ([-@robins:iv:1989]) obtained bounds on the average causal effect $E[Y(a_{d}=1)-Y(a_{d}=0)]$ of the received treatment $A_{d}$. To the best of our knowledge, this paper was the first to derive bounds for nonidentified causal effects defined through potential outcomes.[^21] The study of such bounds has become an active area of research. Other early papers include @manski:1990 ([-@manski:1990]) and @balke:pearl:1994 ([-@balke:pearl:1994]).[^22] See @richardson:hudgens:2014 ([-@richardson:hudgens:2014]) in this volume for a survey of recent research on bounds.
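To see in the simplest terms why only bounds are available, consider the elementary no-assumptions bounds for a binary outcome and binary received treatment. The sketch below ignores the instrument entirely (the instrument-based bounds cited above are sharper) and illustrates only the source of the problem: $Y(a_{d}=1)$ is unobserved whenever $A_{d}=0$, and vice versa.

```python
# No-assumptions bounds on E[Y(1)] - E[Y(0)] for binary Y and received
# treatment A_d, using only the observed joint law (inputs are hypothetical).
def no_assumption_bounds(p_y1_a1, p_a1, p_y1_a0):
    """p_y1_a1 = P(Y=1, A_d=1); p_a1 = P(A_d=1); p_y1_a0 = P(Y=1, A_d=0)."""
    ey1_lo, ey1_hi = p_y1_a1, p_y1_a1 + (1 - p_a1)  # E[Y(1)] missing when A_d=0
    ey0_lo, ey0_hi = p_y1_a0, p_y1_a0 + p_a1        # E[Y(0)] missing when A_d=1
    return ey1_lo - ey0_hi, ey1_hi - ey0_lo

lo, hi = no_assumption_bounds(p_y1_a1=0.3, p_a1=0.5, p_y1_a0=0.1)
# without further assumptions the interval always has width (1 - p_a1) + p_a1 = 1
```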
Limitations of Structural Nested Models {#sec:limits-of-snms}
---------------------------------------
@robins00marginal ([-@robins00marginal]) noted that there exist causal questions for which SNMs are not altogether satisfactory. As an example, for $Y$ binary, @robins00marginal ([-@robins00marginal]) proposed a structural nested logistic model in order to ensure that estimates of the counterfactual mean of $Y$ were between zero and one. However, he noted that knowledge of the randomization probabilities did not allow one to construct an unbiased estimating function for its parameter $\psi^{\ast}$. More importantly, SNMs do not directly model the final object of public health interest, namely the distribution or mean of the outcome $Y$ as a function of the regime $g$; these distributions are generally functions not only of the parameters of the SNM but also of the conditional law of the time-dependent covariates $L$ given the past history. In addition, SNMs constitute a rather large conceptual leap from the standard associational regression models familiar to most statisticians. @Robi:marg:1997 ([-@Robi:marg:1997; -@robins00marginal]) introduced a new class of causal models, marginal structural models (MSMs), that overcame these particular difficulties. Robins also pointed out that MSMs have their own shortcomings, which we discuss below. @robins00marginal ([-@robins00marginal]) concluded that the best causal model to use will vary with the causal question of interest.
Dependent Censoring and Inverse Probability Weighting {#sec:censoring}
-----------------------------------------------------
Marginal structural models grew out of Robins’ work on censoring and *inverse probability of censoring weighted* (IPCW) estimators. Robins’ work on dependent censoring was motivated by the familiar clinical observation that patients who did not return to the clinic, and were thus censored, differed from other patients on important risk factors, for example, measures of cardio-pulmonary reserve. In the 1970s and 1980s, the analysis of right-censored data was a major area of statistical research, driven by the introduction of the proportional hazards model ([-@cox:1972:jrssb]; [-@Kalb:Pren:stat:1980]) and by martingale methods for their analysis ([-@Aalen:counting:1978]; [-@andersen:borgan:gill:keiding:1992]; [-@Flem:Harr:coun:1991]). This research, however, was focused on independent censoring. An important insight in @robins:1986 ([-@robins:1986]) was the recognition that by reframing the problem of censoring as a causal inference problem, as we will now explain, it was possible to adjust for dependent censoring with the g-formula.
@Rubi:baye:1978 ([-@Rubi:baye:1978]) had pointed out previously that counterfactual causal inference could be viewed as a missing data problem. @robins:1986 ([-@robins:1986], page 1491) recognized that the converse was indeed also true: a missing data problem could be viewed as a problem in counterfactual causal inference.[^23] Robins conceptualized right censoring as just another time dependent “treatment” $A_{t}$ and one’s inferential goal as the estimation of the outcome $Y$ under the static regime $g$ “never censored.” Inference based on the g-formula was then licensed provided that censoring was explainable in the sense that (\[eq:statrand\]) holds. This approach to dependent censoring subsumed independent censoring as the latter is a special case of the former.
Robins, however, recognized once again that inference based on the estimated g-formula could be nonrobust. To overcome this difficulty, [@robins:rotnitzky:recovery:1992] introduced IPCW tests and estimators, whose properties are easiest to explain in the context of a two-armed RCT of a single treatment ($A_{1}$). The standard intention-to-treat (ITT) analysis for comparing the survival distributions in the two arms is a log-rank test. However, data are often collected on covariates, both pre- and post-randomization, that are predictive of the outcome as well as (possibly) of censoring. An ITT analysis that tries to adjust for dependent censoring by IPCW uses estimates of the arm-specific hazards of censoring as functions of past covariate history. The proposed IPCW tests have two advantages over the log-rank test. First, if censoring is dependent but explainable by the covariates, the log-rank test is not asymptotically valid. In contrast, IPCW tests asymptotically reject at their nominal level provided the arm-specific hazard estimators are consistent. Second, when censoring is independent, although both the IPCW tests and the log-rank test asymptotically reject at their nominal level, the IPCW tests, by making use of covariates, can be more powerful than the log-rank test even against proportional-hazards alternatives. Even under independent censoring, tests based on the estimated g-formula are not guaranteed to be asymptotically $\alpha$-level, and hence are not robust.
To illustrate, we consider here an RCT with $A_{1}$ being the randomization indicator, $L$ a post-randomization covariate, $A_{2}$ the indicator of censoring and $Y$ the indicator of survival. For simplicity, we assume that any censoring occurs at time $2$ and that there are no failures prior to time $2$. The IPCW estimator $\widehat{\beta}$ of the ITT effect $\beta^{\ast}=E[ Y \mid A=1] -E[ Y \mid A=0] $ is defined as the solution to $$\mathbb{P}_{n}\bigl[ I ( A_{2}=0 ) U(\beta)/\widehat{
\Pr}(A_{2}=0 \mid L,A_{1})\bigr] =0, \hspace*{-25pt}$$ where $U(\beta)= ( Y-\beta A_{1} ) ( A_{1}-1/2 ) $; throughout, $\mathbb{P}_{n}$ denotes the empirical mean operator, and $\widehat{\Pr}( A_{2}=0 \mid L,A_{1}) $ is an estimator of the arm-specific conditional probability of being uncensored. When first introduced in 1992, IPCW estimators, even when taking the form of simple Horvitz–Thompson estimators, were met with both surprise and suspicion as they violated the then widely held belief that one should never adjust for a post-randomization variable affected by treatment in an RCT.
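A Monte Carlo sketch of the estimator (the generating law is hypothetical): censoring depends on a post-randomization covariate $L$, so the unweighted complete-case contrast is biased for the ITT effect $\beta^{\ast}$, while the IPCW estimating equation recovers it using the known censoring probabilities.

```python
import numpy as np

# Hypothetical two-arm trial: A1 randomized, L post-randomization, censoring
# indicator A2 depends on L, and Y is effectively observed only when A2 = 0.
rng = np.random.default_rng(1)
n = 200_000
A1 = rng.binomial(1, 0.5, n)
L = rng.binomial(1, 0.3 + 0.4 * A1)
Y = 0.2 * A1 + 1.0 * L + rng.normal(0, 1, n)   # ITT effect: 0.2 + 1.0*0.4 = 0.6
p_unc = 0.9 - 0.6 * L                          # P(A2=0 | L, A1), known by design
A2 = rng.binomial(1, 1 - p_unc)                # A2 = 1 means censored

# Solve P_n[ I(A2=0) (Y - beta*A1)(A1 - 1/2) / P(A2=0 | L, A1) ] = 0 for beta:
w = (A2 == 0) / p_unc
beta_ipcw = np.sum(w * Y * (A1 - 0.5)) / np.sum(w * A1 * (A1 - 0.5))

# The unweighted complete-case contrast is biased, since censoring depends on L:
beta_cc = Y[(A2 == 0) & (A1 == 1)].mean() - Y[(A2 == 0) & (A1 == 0)].mean()
```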
Marginal Structural Models {#sec:msm}
--------------------------
@robins1993analytic ([-@robins1993analytic], Remark A1.3, pages 257–258) noted that, for any treatment regime $g$, if randomization w.r.t. $Y$, that is, (\[eq:indg\]), holds, $\Pr\{Y(g)>y\}$ can be estimated by IPCW if one defines a person’s censoring time as the first time he/she fails to take the treatment specified by the regime. In this setting, he referred to IPCW as *inverse probability of treatment weighted* (IPTW). In actual longitudinal data in which either (i) treatment $A_{k}$ is measured at many times $k$ or (ii) the $A_{k}$ are discrete with many levels or continuous, one often finds that few study subjects follow any particular regime. In response, @Robi:marg:1997 ([-@Robi:marg:1997; -@robins00marginal]) introduced MSMs. These models address the aforementioned difficulty by borrowing information across regimes. Additionally, MSMs represent another response to the $g$-null paradox complementary to Structural Nested Models.
To illustrate, suppose that in our example of Section \[sec:time-dependent\], $A_{1}$ and $A_{2}$ now have many levels. An instance of an MSM for the counterfactual means $E[ Y(a_{1},a_{2})]$ is a model that specifies that $$\Phi^{-1}\bigl\{E\bigl[Y(a_{1},a_{2})\bigr]\bigr
\}=\beta_{0}^{\ast} + \gamma\bigl(a_{1},a_{2};
\beta_{1}^{\ast}\bigr),$$ where $\Phi^{-1}$ is a given link function such as the logit, log, or identity link and $\gamma( a_{1},a_{2};\beta_{1}) $ is a known function satisfying $\gamma( a_{1},a_{2};0) =0$. In this model, $\beta_{1}=0$ encodes the *static-regime mean null hypothesis* that $$\label{static null}
H_{0}\dvtx E\bigl[ Y(a_{1},a_{2})\bigr] \mbox{ is the same for all } (a_{1},a_{2} ) .
\hspace*{-15pt}$$ @Robi:marg:1997 ([-@Robi:marg:1997]) proposed IPTW estimators $( \widehat{\beta}_{0},\widehat{\beta}_{1}) $ of $ ( \beta_{0}^{\ast},\beta_{1}^{\ast} )$. When the treatment probabilities are known, these estimators are defined as the solution to $$\begin{aligned}
\label{eq:msm2}
\qquad && \mathbb{P}_{n}\bigl[ \vphantom{\hat{P}}Wv(A_{1},A_{2})
\bigl( Y-\Phi\bigl\{\beta_{0}+\gamma(A_1,A_2;
\beta_{1})\bigr\} \bigr) \bigr]
\nonumber\\[-8pt]
\\[-8pt]
&&\quad=0
\nonumber\end{aligned}$$ for a user supplied vector function $v(A_{1},A_{2})$ of the dimension of $ ( \beta_{0}^{\ast},\beta_{1}^{\ast} ) $ where $$W=1/ \bigl\{ f( A_{1}) f(A_{2} \mid A_{1},L) \bigr
\}.$$ Informally, the product $f(A_{1})f(A_{2} \mid A_{1},L)$ is the “probability that a subject had the treatment history he did indeed have.”[^24] When the treatment probabilities are unknown, they are replaced by estimators.
Intuitively, the reason why the estimating function of (\[eq:msm2\]) has mean zero at $ ( \beta_{0}^{\ast},\beta_{1}^{\ast} ) $ is as follows: Suppose the data had been generated from a sequentially randomized trial represented by the DAG in Figure \[fig:seq-rand\]. We may create a pseudo-population by making $1/\{f(A_{1})f( A_{2}\mid A_{1},L)\}$ copies of each study subject. It can be shown that in the resulting pseudo-population $A_{2}{\protect\mathpalette{\protect\independenT}{\perp}}\{ L,A_{1} \}$, so that the pseudo-population is represented by the DAG in Figure \[fig:seq-rand\], except with both arrows into $A_{2}$ removed. In the pseudo-population, treatment is completely randomized (i.e., there is no confounding by either measured or unmeasured variables), and hence causation is association. Further, the mean of $Y(a_{1},a_{2})$ takes the same value in the pseudo-population as in the actual population. Thus if, for example, $\gamma(a_{1},a_{2};\beta_{1}) =\beta_{1,1}a_{1}+\beta_{1,2}a_{2}$ and $\Phi^{-1} $ is the identity link, we can estimate $ ( \beta_{0}^{\ast},\beta_{1}^{\ast} ) $ by OLS in the pseudo-population. However, OLS in the pseudo-population is precisely weighted least squares in the actual study population with weights $1/\{f(A_{1})f( A_{2}\mid A_{1},L)\}$.[^25]
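The pseudo-population argument translates directly into code. In the Monte Carlo sketch below (hypothetical law, correctly specified linear MSM with identity link), weighted least squares with weights $W=1/\{f(A_{1})f(A_{2}\mid A_{1},L)\}$ recovers the parameters of $E[Y(a_{1},a_{2})]=\beta_{0}+\beta_{1,1}a_{1}+\beta_{1,2}a_{2}$ despite confounding of $A_{2}$ by $L$.

```python
import numpy as np

# Hypothetical sequentially randomized law: H unmeasured, L confounds A2.
rng = np.random.default_rng(2)
n = 200_000
H = rng.binomial(1, 0.5, n)
A1 = rng.binomial(1, 0.5, n)
L = rng.binomial(1, 0.2 + 0.3 * A1 + 0.4 * H)
e2 = 0.2 + 0.5 * L                             # f(A2=1 | A1, L)
A2 = rng.binomial(1, e2)
Y = 0.3 + 1.0 * A1 + 0.5 * A2 + H + rng.normal(0, 1, n)
# truth: E[Y(a1, a2)] = 0.8 + 1.0*a1 + 0.5*a2, where 0.8 = 0.3 + E[H]

# IPTW: OLS in the pseudo-population = WLS in the study population
f_a2 = np.where(A2 == 1, e2, 1 - e2)
W = 1.0 / (0.5 * f_a2)                         # 1 / {f(A1) f(A2 | A1, L)}
X = np.column_stack([np.ones(n), A1, A2])
sw = np.sqrt(W)
beta = np.linalg.lstsq(X * sw[:, None], Y * sw, rcond=None)[0]

# For contrast, unweighted OLS is biased for the A2 coefficient: subjects
# with high L (hence, on average, high H and high Y) are more often treated.
beta_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
```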
@robins00marginal ([-@robins00marginal], Section 4.3) also noted that the weights $W$ can be replaced by the so-called stabilized weights $SW= \{ f( A_{1})
f(A_{2} \mid A_{1}) \} / \{ f( A_{1}) f(A_{2}\mid A_{1},L) \}$, and described settings where, for efficiency reasons, using $SW$ is preferable to using $W$.
MSMs are not restricted to models for the dependence of the mean of $Y(a_{1},a_{2})$ on $ ( a_{1},a_{2} ) $. Indeed, one can consider MSMs for the dependence of any functional of the law of $Y(a_{1},a_{2})$ on $ ( a_{1},a_{2} )$, such as a quantile or the hazard function if $Y$ is a time-to-event variable. If the study is fully randomized, that is, (\[eq:full-rand\]) holds, then an MSM model for a given functional of the law of $Y( a_{1},a_{2}) $ is tantamount to an associational model for the same functional of the law of $Y$ conditional on $A_{1}=a_{1}$ and $A_{2}=a_{2}$. Thus, under (\[eq:full-rand\]), the MSM model can be estimated using standard methods for estimating the corresponding associational model. If the study is only sequentially randomized, that is, (\[eq:statrand\]) holds but (\[eq:full-rand\]) does not, then the model can still be estimated by the same standard methods but weighting each subject by $W$ or $SW$.
@robins00marginal ([-@robins00marginal]) discussed disadvantages of MSMs compared to SNMs. Here, we summarize some of the main drawbacks. Suppose (\[eq:indg\]) holds for all $g \in\mathbb{G}$. If the $g$-null hypothesis (\[g-null\]) is false but the static-regime null hypothesis that the law of $Y( a_{1},a_{2}) $ is the same for all $ ( a_{1},a_{2} ) $ is true, then by (iii) of Section \[sec:snm\], $f( y \mid A_{1}=a_{1},A_{2}=a_{2},L=l) $ will depend on $a_{2}$ for some stratum $ ( a_{1}, l )$, thus implying a causal effect of $A_{2}$ in that stratum; estimation of an SNM would, but estimation of an MSM would not, detect this effect. A second drawback is that estimation of MSMs suffers from marked instability and finite-sample bias in the presence of weights $W$ that are highly variable and skewed. This is not generally an issue in SNM estimation. A third limitation of MSMs is that when (\[eq:statrand\]) fails but an instrumental variable is available, one can still consistently estimate the parameters of an SNM but not of an MSM.[^26]
An advantage of MSMs over SNMs that was not discussed in Section \[sec:limits-of-snms\] is the following. MSMs can be constructed that are indexed by easily interpretable parameters that quantify the overall effects of a subset of all possible dynamic regimes ([-@Miguel:Robins:dominique:2005]; [-@van:Pete:caus:2007]; [-@orellana2010dynamic; -@orellana2010proof]). As an example, consider a longitudinal study of HIV-infected patients with baseline CD4 counts exceeding 600 in which we wish to determine the optimal CD4 count at which to begin anti-retroviral treatment. Let $g_{x}$ denote the dynamic regime that specifies treatment is to be initiated the first time a subject’s CD4 count falls below $x$, $x\in \{1,2,\ldots, 600 \} $. Let $Y(g_{x})$ be the associated counterfactual response and suppose few study subjects follow any given regime. If we assume $E[Y(g_{x})]$ varies smoothly with $x$, we can specify and fit (by IPTW) a dynamic-regime MSM $E[Y(g_{x})]=\beta_{0}^{\ast}+\beta_{1}^{\ast T}h(x)$ where, say, $h(x)$ is a vector of appropriate spline functions.
Direct Effects
==============
Robins’ analysis of sequential regimes leads immediately to the consideration of direct effects. Thus, perhaps not surprisingly, all three of the distinct direct-effect concepts that are now an integral part of the causal literature are to be found in his early papers. Intuitively, all the notions of direct effect consider whether “the outcome ($Y$) would have been different had the cause ($A_{1}$) been different, but the level of ($A_{2}$) remained unchanged.” The notions differ regarding the precise meaning of $A_2$ “remained unchanged.”

Controlled Direct Effects {#sec:cde}
-------------------------
In a setting in which there are temporally ordered treatments $A_{1}$ and $A_{2}$, it is natural to wonder whether the first treatment has any effect on the final outcome were everyone to receive the second treatment. Formally, we wish to compare the potential outcomes $Y(a_{1} = 1,a_{2} =1)$ and $Y(a_{1} = 0,a_{2} = 1)$. @robins:1986 ([-@robins:1986], Section 8) considered such contrasts, that are now referred to as *controlled direct effects*. More generally, the *average controlled direct effect of $A_{1}$ on $Y$ when $A_{2}$ is set to $a_{2}$* is defined to be $$\label{eq:acde}
\hspace*{23pt}
\mathrm{CDE}(a_{2})\equiv E\bigl[Y(a_{1}=1,a_{2})-Y(a_{1}=0,a_{2})\bigr] ,$$ where $Y(a_{1}=1,a_{2})-Y(a_{1}=0,a_{2})$ is the individual level direct effect. Thus, if $A_{2}$ takes $k$-levels then there are $k$ such contrasts.
Under the causal graph shown in Figure \[fig:no-confound\](a), in contrast to Figures \[fig:seq-rand\] and \[fig:seq-rand-variant\], the effect of $A_{2}$ on $Y$ is unconfounded by either measured or unmeasured variables, so that association is causation and thus, under the associated FFRCISTG model: $$\begin{aligned}
\mathrm{CDE}(a_{2}) &=& E[ Y \mid A_{1}=1,A_{2}=a_{2}]
\\
&&{} - E[Y \mid A_{1}=0,A_{2}=a_{2}] .\end{aligned}$$
The CDE can be identified even in the presence of time-dependent confounding. For example, in the context of the FFRCISTG associated with either of the causal DAGs shown in Figures \[fig:seq-rand\] and \[fig:seq-rand-variant\], the $\operatorname{CDE}(a_2)$ will be identified via the difference in the expectations of $Y$ under the g-formula densities $f_{a_1 =
1,a_2}^*(y)$ and $f_{a_1 = 0,a_2}^*(y)$.[^27]
The CDE requires that the potential outcomes $Y(a_{1},\allowbreak a_{2})$ be well-defined for all values of $a_{1}$ and $a_{2}$. This is because the CDE treats both $A_{2}$ and $A_{1}$ as causes, and interprets “$A_{2}$ remained unchanged” to mean “had there been an intervention on $A_2$ fixing it to $a_2$.”
This clearly requires that the analyst be able to describe a well-defined intervention on the mediating variable $A_{2}$.
There are many contexts in which there is no well-defined intervention on $A_{2}$, and thus it is not meaningful to refer to $Y(a_{1},a_{2})$. The CDE is not applicable in such contexts.
Principal Stratum Direct Effects (PSDE) {#sec:psde}
---------------------------------------
@robins:1986 ([-@robins:1986]) considered causal contrasts in the situation described in Section \[sec:censoring\] in which death from a disease of interest, for example, a heart attack, may be censored by death from other diseases. To describe these contrasts, we suppose $A_{1}$ is a treatment of interest, $Y=1$ is the indicator of death from the disease of interest (in a short interval subsequent to a given fixed time $t$) and $A_{2}=0$ is the “at risk indicator” denoting the absence of death either from other diseases or the disease of interest prior to time $t$.
Earlier @Kalb:Pren:stat:1980 ([-@Kalb:Pren:stat:1980]) had argued that if $A_{2}=1$, so that the subject does not survive to time $t$, then the question of whether the subject would have died of heart disease subsequent to $t$ had death before $t$ been prevented is meaningless. In the language of counterfactuals, they were saying (i) that if $A_{1}=a_{1}$ and $A_{2}\equiv A_{2}(a_{1}) =1$, the counterfactual $Y(a_{1},a_{2}=0)$ is not well-defined and (ii) the counterfactual $Y(a_{1},a_{2}=1)$ is never well-defined.
@robins:1986 ([-@robins:1986], Section 12.2) observed that if one accepts this then the only direct effect contrast that is well-defined is $Y(a_{1} =1,a_{2}=0)- Y(a_{1} = 0,a_{2}=0)$ and that is well-defined only for those subjects who would survive to $t$ regardless of whether they received $a_{1} = 0$ or $a_{1} = 1$. In other words, even though $Y(a_{1},a_{2})$ may not be well-defined for all subjects and all $a_{1}$, $a_{2}$, the contrast: $$\begin{aligned}
\label{eq:psde-contrast}
&& E\bigl[ Y(a_{1} = 0,a_{2})
-Y(a_{1}= 1,a_{2}) \mid
\nonumber\\[-8pt]
\\[-8pt]
&&\hspace*{16pt}{}A_{2}(a_{1} =1)=A_{2}(a_{1} = 0)=a_2\bigr]
\nonumber\end{aligned}$$ is still well-defined when $a_{2}=0$. As noted by Robins, this could provide a solution to the problem of defining the causal effect of the treatment $A_{1}$ on the outcome $Y$ in the context of censoring by death due to other diseases.
@Rubi:more:1998 ([-@Rubi:more:1998]) and @Fran:Rubi:addr:1999 ([-@Fran:Rubi:addr:1999; -@Fran:prin:2002]) later used this same contrast to solve precisely the same problem of “censoring by death.”[^28]
In the terminology of @Fran:prin:2002 ([-@Fran:prin:2002]) for a subject with $A_{2}(a_{1} = 1)=A_{2}(a_{1} = 0)=a_{2}$, the *individual principal stratum direct effect* is defined to be:[^29] $$Y(a_{1}=1,a_{2}) - Y(a_{1}=0,a_{2})$$ (here, $A_{1}$ is assumed to be binary). The *average PSDE in principal stratum $a_{2}$* is then defined to be $$\begin{aligned}
\label{eq:psde2}
\qquad
\operatorname{PSDE}(a_{2})
&\equiv & E\bigl[Y(a_{1} = 1,a_{2}) -Y(a_{1} = 0,a_{2})\mid
\nonumber\\
&&\hspace*{16pt}
A_{2}(a_{1} = 1)=A_{2}(a_{1} = 0)=a_{2}\bigr]
\nonumber\\[-8pt]
\\[-8pt]
& =& E\bigl[ Y(a_{1} = 1)-Y(a_{1} = 0)\mid
\nonumber\\
&&\hspace*{13pt}
A_{2}(a_{1} = 1)=A_{2}(a_{1} = 0)=a_{2}\bigr],
\nonumber\end{aligned}$$ where the second equality here follows, since $Y(a_{1},\allowbreak A_{2}(a_{1}))=Y(a_{1})$.[^30] In contrast to the CDE, the PSDE has the advantage that it may be defined, via (\[eq:psde2\]), without reference to potential outcomes involving intervention on $a_{2}$. Whereas the CDE views $A_{2}$ as a treatment, the PSDE treats $A_{2}$ as a response. Equivalently, this contrast interprets “had $A_2$ remained unchanged” to mean “we restrict attention to those people whose value of $A_{2}$ would still have been $a_{2}$, even under an intervention that set $A_{1}$ to a different value.”
Although the PSDE is an interesting parameter in many settings ([-@gilbert:bosch:hudgens:biometrics:2003]), it has drawbacks beyond the obvious (but perhaps less important) ones that neither the parameter itself nor the subgroup conditioned on is nonparametrically identified. In fact, having just defined the PSDE parameter, @robins:1986 ([-@robins:1986]) criticized it for its lack of transitivity when there is a non-null direct effect of $A_1$ and $A_{1}$ has more than two levels; that is, for a given $a_2$, the PSDEs comparing $a_{1}=0$ with $a_{1}=1$ and $a_{1}=1$ with $a_{1}=2$ may both be positive but the PSDE comparing $a_{1}=0$ with $a_{1}=2$ may be negative. @Robi:Rotn:Vans:disc:2007 ([-@Robi:Rotn:Vans:disc:2007]) noted that the PSDE is undefined when $A_{1}$ has an effect on every subject’s $A_{2}$, a situation that can easily occur if $A_2$ is continuous. In that event, a natural strategy would be to, say, dichotomize $A_{2}$. However, @Robi:Rotn:Vans:disc:2007 ([-@Robi:Rotn:Vans:disc:2007]) showed that the PSDE in principal stratum $a_{2}^{\ast}$ of the dichotomized variable may fail to retain any meaningful substantive interpretation.
Pure Direct Effects (PDE) {#sec:pde}
-------------------------
Once it has been established that a treatment $A_{1}$ has a causal effect on a response $Y$, it is natural to ask what “fraction” of the total effect may be attributed to a given causal pathway. As an example, consider an RCT in nonhypertensive smokers of the effect of an anti-smoking intervention ($A_{1}$) on the outcome myocardial infarction (MI) at 2 years ($Y$). For simplicity, assume that everyone in the intervention arm and no one in the placebo arm quit cigarettes, that all subjects were tested for new-onset hypertension $A_{2}$ at the end of the first year, and that no subject suffered an MI in the first year. Hence, $A_{1}$, $A_{2}$ and $Y$ occur in that order. Suppose the trial showed smoking cessation had a beneficial effect on both hypertension and MI. It is natural to consider the query: “What fraction of the total effect of smoking cessation $A_{1}$ on MI $Y$ is through a pathway that does not involve hypertension $A_{2}$?”
@Robi:Gree:iden:1992 formalized this question via the following counterfactual contrast, which they termed the “pure direct effect”: $$Y\bigl\{a_1 = 1,A_2(a_1 = 0)\bigr\}-Y
\bigl\{a_1 = 0,A_2(a_1 = 0)\bigr\}.$$
The second term here is simply $Y(a_{1} = 0)$.[^31] The contrast is thus the difference between two quantities: first, the outcome $Y$ that would result if we set $a_{1}$ to $1$, while “holding fixed” $a_{2}$ at the value $A_{2}(a_{1} = 0)$ that it would have taken had $a_{1}$ been $0$; second, the outcome $Y$ that would result from simply setting $a_{1}$ to $0$ \[and thus having $A_{2}$ again take the value $A_{2}(a_{1} = 0)$\]. Thus, the Pure Direct Effect interprets “had $A_{2}$ remained unchanged” to mean “had (somehow) $A_{2}$ taken the value that it would have taken had we fixed $A_{1}$ to $0$.” The contrast thus represents the effect of $A_{1}$ on $Y$ had the effect of $A_{1}$ on hypertension $A_{2}$ been blocked. As with the CDE, for this contrast to be well-defined, the potential outcomes $Y(a_{1},a_{2})$ must be well-defined. As a summary measure of the direct effect of (a binary variable) $A_{1}$ on $Y$, the PDE has the advantage (relative to the CDE and PSDE) that it is a single number.
The average pure direct effect is defined as[^32] $$\begin{aligned}
\operatorname{PDE} &=& E\bigl[ Y\bigl\{a_{1} = 1,A_{2}(a_{1}
= 0)\bigr\}\bigr]
\\
&&{} -E\bigl[Y\bigl(a_{1} = 0,A_{2}(a_{1} = 0)
\bigr)\bigr] .\end{aligned}$$ Thus, the ratio of the PDE to the total effect $E[ Y\{a_{1} = 1\}] -E[Y\{a_{1} = 0\}] $ is the fraction of the total effect that is through a pathway that does not involve hypertension ($A_2$).
Unlike the PSDE, the PDE is an average over the full population. However, unlike the CDE, the PDE is not nonparametrically identified under the FFRCISTG model associated with the simple DAG shown in Figure \[fig:no-confound\](a). @robins:mcm:2011 ([-@robins:mcm:2011], App. C) computed bounds for the PDE under the FFRCISTG associated with this DAG.
@pearl:indirect:01 ([-@pearl:indirect:01]) obtains identification of the PDE under the DAG in Figure \[fig:no-confound\](a) by imposing stronger counterfactual independence assumptions, via a Nonparametric Structural Equation Model with Independent Errors (NPSEM-IE).[^33] Under these assumptions, @pearl:indirect:01 ([-@pearl:indirect:01]) obtains the following identifying formula: $$\begin{aligned}
\label{eq:mediation}
&& \sum_{a_{2}} \bigl\{ E[Y\mid A_{1} = 1,A_{2} = a_{2}]
\nonumber\\
&&\hspace*{19pt}{}
-E[Y\mid A_{1} = 0,A_{2} = a_{2}] \bigr\}
\\
&&\hspace*{14pt}{} \cdot
P(A_{2} = a_{2}\mid A_{1} = 0),
\nonumber\end{aligned}$$ which he calls the “Mediation Formula.”
@robins:mcm:2011 ([-@robins:mcm:2011]) noted that the additional assumptions made by the NPSEM-IE are not testable, even in principle, via a randomized experiment. Consequently, this formula represents a departure from the principle, originating with @neyman:sur:1923 ([-@neyman:sur:1923]), that causation be reducible to experimental interventions, often expressed in the slogan “no causation without manipulation.”[^34] @robins:mcm:2011 ([-@robins:mcm:2011]) achieve a rapprochement between these opposing positions by showing that the formula (\[eq:mediation\]) is equal to the g-formula associated with an intervention on two treatment variables not appearing on the graph (but having deterministic relations with $A_{1}$) under the assumption that one of the variables has no direct effect on $A_{2}$ and the other has no direct effect on $Y$. Hence, under this assumption and in the absence of confounding, the effect of this intervention on $Y$ is point identified by (\[eq:mediation\]).[^35]
Although there was a literature on direct effects in linear structural equation models (see, e.g., [-@blalock1971causal]) that preceded @robins:1986 ([-@robins:1986]) and @Robi:Gree:iden:1992 ([-@Robi:Gree:iden:1992]), the distinction between the CDE and PDE did not arise since in linear models these notions are equivalent.[^36]

The Direct Effect Null {#sec:direct-null}
----------------------
@robins:1986 ([-@robins:1986], Section 8) considered the null hypothesis that $Y(a_{1},a_{2})$ does not depend on $a_{1}$ for all $a_{2}$, which we term the *sharp null hypothesis of no direct effect of $A_{1}$ on $Y$* (*relative to $A_{2}$*) or more simply the “sharp direct effect null.”
In the context of our running example with data $ (A_{1},L,A_{2},Y )$, under (\[eq:statrand\]) the sharp direct effect null implies the following constraint on the observed data distribution: $$\label{eq:verma-constraint} \quad f_{a_{1},a_{2}}^{\ast}(y) \quad\mbox{is not a
function of } a_{1} \mbox{ for all }a_{2}.$$ @robins:1986 ([-@robins:1986], Sections 8 and 9) noted that this constraint (\[eq:verma-constraint\]) is *not* a conditional independence. This is in contrast to the $g$-null hypothesis which we have seen is equivalent to the independencies in (ii) of Section \[sec:snm\] \[when equation (\[eq:indg\]) holds for all $g\in\mathbb{G}$\].[^37] He concluded that, in contrast to the $g$-null hypothesis, the constraint (\[eq:verma-constraint\]), and thus the sharp direct effect null, cannot be tested using case control data with unknown case and control sampling fractions.[^38] This constraint (\[eq:verma-constraint\]) was later independently discovered by @verma:pearl:equivalence:1990 ([-@verma:pearl:equivalence:1990]) and for this reason is called the “Verma constraint” in the Computer Science literature.
@robins:1999 ([-@robins:1999]) noted that, though (\[eq:verma-constraint\]) is not a conditional independence in the observed data distribution, it does correspond to a conditional independence, but in a weighted distribution with weights proportional to $1/f(A_{2}\mid A_{1},L)$.[^39] This can be understood from the informal discussion following equation (\[eq:msm2\]) in the previous section: there it was noted that given the FFRCISTG corresponding to the DAG in Figure \[fig:seq-rand\], reweighting by $1/f(A_{2}\mid A_{1},L)$ corresponds to removing both edges into $A_{2}$. Hence, if the edges $A_{1}\rightarrow Y$ and $L\rightarrow Y$ are not present, so that the sharp direct effect null holds, as in Figure \[fig:seq-rand2\](a), then the reweighted population is described by the DAG in Figure \[fig:seq-rand2\](b). It then follows from the d-separation relations on this DAG that $Y {\protect\mathpalette{\protect\independenT}{\perp}}A_{1}\mid A_{2}$ in the reweighted distribution.
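This reweighting argument can be checked numerically. In the sketch below (all parameters invented), data are generated under the sharp direct effect null from a DAG with $U\rightarrow L$ and $U\rightarrow Y$: in the observed data $Y$ and $A_1$ are dependent given $A_2$, yet after weighting by $1/f(A_2\mid A_1,L)$ the dependence disappears.

```python
import random

random.seed(2)
N = 500_000
rows = []
for _ in range(N):
    # invented FFRCISTG satisfying the sharp direct effect null (no A1 -> Y, no L -> Y)
    u  = int(random.random() < 0.5)                        # unmeasured: U -> L, U -> Y
    a1 = int(random.random() < 0.5)                        # randomized
    l  = int(random.random() < (0.9 if u else 0.1))
    p2 = 0.2 + 0.3 * a1 + 0.3 * l                          # f(A2 = 1 | A1, L)
    a2 = int(random.random() < p2)
    y  = int(random.random() < 0.1 + 0.3 * a2 + 0.4 * u)   # Y depends only on A2 and U
    w  = 1.0 / (p2 if a2 else 1.0 - p2)                    # weight 1 / f(A2 | A1, L)
    rows.append((a1, a2, y, w))

def mean_y(a1, a2, weighted):
    num = den = 0.0
    for (b1, b2, y, w) in rows:
        if b1 == a1 and b2 == a2:
            wt = w if weighted else 1.0
            num += wt * y
            den += wt
    return num / den

raw_gap = abs(mean_y(1, 1, False) - mean_y(0, 1, False))   # nonzero: Y and A1 dependent given A2
ipw_gap = abs(mean_y(1, 1, True)  - mean_y(0, 1, True))    # ~0: independence in reweighted law
```

The nonzero unweighted gap does not contradict the null: conditioning on the collider $A_2$ induces association between $A_1$ and $U$, exactly why the constraint (\[eq:verma-constraint\]) is not a conditional independence in the observed data.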
This fact can also be seen as follows. If, in our running example from Section \[sec:tree-graph\], $A_{1}$, $A_{2}$, $Y$ are all binary, the sharp direct effect null implies that $\beta_{1}^{\ast}=\beta_{3}^{\ast}=0$ in the saturated MSM with $$\Phi^{-1}\bigl\{E\bigl[Y(a_{1},a_{2})\bigr]\bigr
\}=\beta_{0}^{\ast} + \beta_{1}^{\ast}a_{1}+
\beta_{2}^{\ast}a_{2}+\beta_{3}^{\ast}a_{1}a_{2}.$$ Since $\beta_{1}^{\ast}$ and $\beta_{3}^{\ast}$ are the associational parameters of the weighted distribution, their being zero implies the conditional independence $Y {\protect\mathpalette{\protect\independenT}{\perp}}A_{1}\mid A_{2}$ under this weighted distribution.
In more complex longitudinal settings, with the number of treatment times $k$ exceeding $2$, all the parameters multiplying terms containing a particular treatment variable in a MSM may be zero, yet there may still be evidence in the data that the sharp direct effect null for that variable is false. This is directly analogous to the limitation of MSMs relative to SNMs with regard to the sharp null hypothesis (\[g-null\]) of no effect of any treatment that we noted at the end of Section \[sec:msm\]. To overcome this problem, @robins:1999 ([-@robins:1999]) introduced direct effect structural nested models. In these models, which involve treatment at $k$ time points, if all parameters multiplying a given $a_j$ take the value $0$, then we can conclude that the distribution of the observables does not refute the natural extension of (\[eq:verma-constraint\]) to $k$ times. The latter is implied by the sharp direct effect null that $a_j$ has no direct effect on $Y$ holding $a_{j+1},\ldots,a_k$ fixed.
The Foundations of Statistics and Bayesian Inference
====================================================
@Robins:Ritov:toward:1997 and @Robi:Wass:cond:2000 recognized that the lack of robustness of estimators based on the g-formula in a sequential randomized trial with known randomization probabilities had implications for the foundations of statistics and for Bayesian inference. To make their argument transparent, we will assume in our running example (from Section \[sec:tree-graph\]) that the density of $L$ is known and that $A_{1}=1$ with probability $1$ (hence we drop $A_{1}$ from the notation). We will further assume the observed data are $n$ i.i.d. copies of a random vector $ ( L,A_{2},Y ) $ with $A_{2}$ and $Y$ binary and $L$ a $d\times1$ continuous vector with support on the unit cube $ ( 0,1 )^{d}$. We consider a model for the law of $ ( L,A_{2},Y )$ that assumes that the density $f^{\ast}( l ) $ of $L$ is known, that the treatment probability $\pi^{\ast}( l)\equiv\Pr( A_{2}=1 \mid L=l)$ lies in the interval $ ( c,1-c ) $ for some known $c>0$ and that $b^{\ast}( l,a_{2}) \equiv E[Y \mid L=l,A_{2}=a_{2}] $ is continuous in $l$. Under this model, the likelihood function is $$\mathcal{L}( b, \pi) = \mathcal{L}_{1}( b) \mathcal{L}_{2}(
\pi) ,$$ where $$\begin{aligned}
\mathcal{L}_{1}( b) &=& \prod_{i=1}^{n}f^{\ast}(
L_{i}) b(L_{i},A_{2,i}) ^{Y_{i}}
\nonumber
\\[-8pt]
\\[-8pt]
&&\hspace*{14pt} {} \cdot \bigl\{ 1-b( L_{i},A_{2,i}) \bigr
\} ^{1-Y_{i}},
\nonumber
\\
\qquad \mathcal{L}_{2}( \pi) &=& \prod_{i=1}^{n}
\pi( L_{i})^{A_{2,i}} \bigl\{ 1-\pi(
L_{i}) \bigr\} ^{1-A_{2,i}},\end{aligned}$$ and $ ( b,\pi ) \in\mathcal{B}\times\bolds{\Pi}$. Here $\mathcal{B}$ is the set of continuous functions from $ ( 0,1 )^{d}\times \{ 0,1 \} $ to $ ( 0,1 ) $ and $\bolds{\Pi}$ is the set of functions from $ ( 0,1 ) ^{d}$ to $ (
c,1-c ) $.
We assume the goal is inference about $\mu( b) $ where $\mu( b) =\int b(l,1) f^{\ast}( l) \,{dl}$. Under randomization, that is (\[eq:ind1\]) and (\[eq:ind2\]), $\mu( b^{\ast})$ is the counterfactual mean of $Y$ when treatment is given at both times.
When $\pi^{\ast}$ is unknown, @Robins:Ritov:toward:1997 ([-@Robins:Ritov:toward:1997]) showed that no estimator of $\mu( b^{\ast}) $ exists that is uniformly consistent over all $\mathcal{B}\times\bolds{\Pi}$. They also showed that even if $\pi^{\ast}$ is known, any estimator that does not use knowledge of $\pi^{\ast}$ cannot be uniformly consistent over $\mathcal{B}\times \{ \pi^{\ast} \} $ for all $\pi^{\ast}$. However, there do exist estimators that depend on $\pi^{\ast}$ that are uniformly $\sqrt{n} $-consistent for $\mu(b^{\ast}) $ over $\mathcal{B}\times \{ \pi^{\ast} \} $ for all $\pi^{\ast}$. The Horvitz–Thompson estimator $\mathbb{P}_{n}\{A_{2}Y/\pi^{\ast}( L) \} $ is a simple example.
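A sketch of the Horvitz–Thompson estimator in this setting (the particular $\pi^{\ast}$ and $b^{\ast}$ are invented; here $d=2$). The estimator uses the known $\pi^{\ast}$ and no model whatsoever for $b^{\ast}$:

```python
import math
import random

random.seed(3)

def pi_star(l):      # known randomization probability, bounded away from 0 and 1
    return 0.2 + 0.6 * l[0]

def b_star(l):       # E[Y | L = l, A2 = 1]: unknown to the analyst, possibly very wiggly
    return 0.4 + 0.2 * math.sin(2 * math.pi * l[0]) + 0.2 * l[1]

# mu(b*) = integral of b*(l, 1) over the unit square = 0.4 + 0 + 0.1 = 0.5
N = 200_000
total = 0.0
for _ in range(N):
    l  = (random.random(), random.random())   # L ~ uniform on (0,1)^2, density known
    a2 = random.random() < pi_star(l)
    y  = int(random.random() < b_star(l)) if a2 else 0   # Y only enters when A2 = 1
    total += a2 * y / pi_star(l)                          # Horvitz-Thompson term A2*Y/pi*(L)

mu_ht = total / N
```

Each term has mean $E[\pi^{\ast}(L)b^{\ast}(L)/\pi^{\ast}(L)]=\mu(b^{\ast})$ regardless of how rough $b^{\ast}$ is, which is why knowledge of $\pi^{\ast}$ alone yields uniform $\sqrt{n}$-consistency.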
@Robins:Ritov:toward:1997 ([-@Robins:Ritov:toward:1997]) concluded that, in this example, any method of estimation that obeys the likelihood principle such as maximum likelihood or Bayesian estimation with independent priors on $b$ and $\pi$, must fail to be uniformly consistent. This is because any procedure that obeys the likelihood principle must result in the same inference for $\mu(b^{\ast})$ regardless of $\pi^{\ast}$, even when $\pi^{\ast}$ becomes known. @Robi:Wass:cond:2000 ([-@Robi:Wass:cond:2000]) noted that this example illustrates that the likelihood principle and frequentist performance can be in severe conflict in that any procedure with good frequentist properties must violate the likelihood principle.[^40] @ritov:2014 ([-@ritov:2014]) in this volume extends this discussion in many directions.
Semiparametric Efficiency and Double Robustness in Missing Data and Causal Inference Models {#sec:semipar-eff}
===========================================================================================
@robins:rotnitzky:recovery:1992 ([-@robins:rotnitzky:recovery:1992]) recognized that the inferential problem of estimation of the mean $E[ Y(g)] $ (when identified by the g-formula) of a response $Y$ under a regime $g$ is a special case of the *general problem* of estimating the parameters of an arbitrary semi-parametric model in the presence of data that had been coarsened at random ([-@Heitjan:Rubin:1991]).[^41] This viewpoint led them to recognize that the IPCW and IPTW estimators described earlier were not fully efficient. To obtain efficient estimators, @robins:rotnitzky:recovery:1992 and @robins:rotnitzky:zhao:1994 used the theory of semiparametric efficiency bounds ([-@bickel:klaasen:ritov:wellner:1993]; [-@van:on:1991]) to derive representations for the efficient score, the efficient influence function, the semiparametric variance bound, and the influence function of any asymptotically linear estimator in this *general* problem. The books by @tsiatis:2006 ([-@tsiatis:2006]) and by @vdL:robins:2003 ([-@vdL:robins:2003]) provide thorough treatments. The generality of these results allowed Robins and his principal collaborators Mark van der Laan and Andrea Rotnitzky to solve many open problems in the analysis of semiparametric models. For example, they used the efficient score representation theorem to derive locally efficient semiparametric estimators in many models of importance in biostatistics. Some examples include conditional mean models with missing regressors and/or responses ([-@robins:rotnitzky:zhao:1994]; [-@Rotn:Robi:semi:1995]), bivariate survival ([-@quale2006locally]) and multivariate survival models with explainable dependent censoring ([-@van2002locally]).[^42]
In coarsened at random data models, whether missing data or causal inference models, locally efficient semiparametric estimators are also doubly robust ([-@Scha:Rotn:Robi:adju:1999], pages 1141–1144) and ([-@Robins:Rotnitzky:comment:on:bickel:2001]). See the book ([-@vdL:robins:2003]) for details and for many examples of doubly robust estimators. Doubly robust estimators had been discovered earlier in special cases. In fact, @Firth:Bennet:1998 ([-@Firth:Bennet:1998]) note that the so-called model-assisted regression estimator of a finite population mean of @Cass:Srnd:Wret:some:1976 ([-@Cass:Srnd:Wret:some:1976]) is design consistent which is tantamount to being doubly robust. See @Robins:Rotnitzky:comment:on:bickel:2001 ([-@Robins:Rotnitzky:comment:on:bickel:2001]) for other precursors.
In the context of our running example, from Section \[sec:tree-graph\], suppose (\[eq:statrand\]) holds. An estimator $\widehat{\mu}_{\mathrm{dr}}$ of $\mu=E[ Y( a_{1},a_{2})]=f_{a_{1},a_{2}}^{\ast}(1)$ for, say $a_{1}=a_{2}=1$, is said to be *doubly robust* (DR) if it is consistent when either (i) a model for $\pi(L) \equiv\Pr( A_{2}=1 \mid A_{1}=1,L) $ or (ii) a model for $b( L) \equiv E[ Y \mid A_{1}=1,L,A_{2}=1]$ is correct. When $L$ is high dimensional and, as in an observational study, $\pi(\cdot)$ is unknown, double robustness is a desirable property because model misspecification is generally unavoidable, even when we use flexible, high dimensional, semiparametric models in (i) and (ii). In fact, DR estimators have advantages even when, as is usually the case, the models in (i) and (ii) are both incorrect. This happens because the bias of the DR estimator $\widehat{\mu}_{\mathrm{dr}}$ is of second order, and thus generally less than the bias of a non-DR estimator (such as a standard IPTW estimator). By second order, we mean that the bias of $\widehat{\mu}_{\mathrm{dr}}$ depends on the product of the error made in the estimation of $\Pr(A_{2}=1 \mid A_{1}=1,L) $ times the error made in the estimation of $E[ Y \mid A_{1}=1,L,A_{2}=1] $.
@Scha:Rotn:Robi:adju:1999 ([-@Scha:Rotn:Robi:adju:1999]) noted that the locally efficient estimator of @robins:rotnitzky:zhao:1994 $$\begin{aligned}
\widetilde{\mu}_{\mathrm{dr}}
&=& \bigl\{ \mathbb{P}_{n}[A_{1}] \bigr\}^{-1}
\\
&&{}\cdot \mathbb{P}_{n} \biggl[A_{1} \biggl\{ \frac{A_{2}}{\widehat{\pi}(L) }Y
- \biggl\{ \frac{A_{2}}{\widehat{\pi}( L) }-1
\biggr\} \widehat{b}( L) \biggr\} \biggr]\end{aligned}$$ is doubly robust where $\widehat{\pi}( L) $ and $\widehat{b}( L) $ are estimators of $\pi( L) $ and $b(L)$. Unfortunately, in finite samples this estimator may fail to lie in the parameter space for $\mu$, that is, the interval $[0,1]$ if $Y$ is binary. In response, @Scha:Rotn:Robi:adju:1999 ([-@Scha:Rotn:Robi:adju:1999]) proposed a plug-in DR estimator, the doubly robust regression estimator $$\widehat{\mu}_{\mathrm{dr},\mathrm{reg}} = \bigl\{ \mathbb{P}_{n} [
A_{1} ] \bigr\}^{-1}\mathbb{P}_{n} \bigl\{
A_{1} \widehat{b}( L) \bigr\},$$ where now $\widehat{b}( L) =\operatorname{expit}\{ m( L;\widehat{\eta})
+ \widehat{\theta}/\widehat{\pi}( L)\}$ and $( \widehat{\eta},\widehat{\theta}) $ are obtained by fitting by maximum likelihood the logistic regression model $\Pr(Y=1 \mid A_{1}=1,\allowbreak L,A_{2}=1) =\operatorname{expit}\{ m( L;\eta) +\theta
/\widehat{\pi}( L) \} $ to subjects with $A_{1}=1$, $A_{2}=1$. Here, $m( L;\eta)$ is a user-specified function of $L$ and of the Euclidean parameter $\eta$.
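Double robustness is easy to verify by simulation. The sketch below (invented data-generating law; for simplicity the conditioning on $A_1=1$ is dropped) evaluates an estimator of the form $\widetilde{\mu}_{\mathrm{dr}}$ under three combinations of working models:

```python
import random

random.seed(4)

def pi_true(l): return 0.8 if l else 0.3    # Pr(A2 = 1 | L = l)
def b_true(l):  return 0.4 + 0.4 * l        # E[Y | L = l, A2 = 1]; so mu = E[b(L)] = 0.6

N = 300_000
rows = []
for _ in range(N):
    l  = int(random.random() < 0.5)
    a2 = int(random.random() < pi_true(l))
    y  = int(random.random() < 0.2 + 0.4 * l + 0.2 * a2)
    rows.append((l, a2, y))

def aipw(pi_hat, b_hat):
    # P_n[ A2*Y/pi_hat(L) - (A2/pi_hat(L) - 1) * b_hat(L) ]
    tot = 0.0
    for (l, a2, y) in rows:
        p = pi_hat(l)
        tot += a2 * y / p - (a2 / p - 1.0) * b_hat(l)
    return tot / len(rows)

mu_a = aipw(pi_true, lambda l: 0.5)           # outcome model wrong, treatment model right
mu_b = aipw(lambda l: 0.5, b_true)            # treatment model wrong, outcome model right
mu_c = aipw(lambda l: 0.5, lambda l: 0.5)     # both wrong: biased
```

Here `mu_a` and `mu_b` both recover the truth $0.6$, while `mu_c` does not; the residual bias of the estimator is the product of the two model errors, illustrating the second-order bias property discussed above.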
@Robi:robust:1999 ([-@Robi:robust:1999]) and @Bang:Robi:doub:2005 ([-@Bang:Robi:doub:2005]) obtained plug-in DR regression estimators in longitudinal missing data and causal inference models by reexpressing the g-formula as a sequence of iterated conditional expectations.
@van:Rubi:targ:2006 ([-@van:Rubi:targ:2006]) proposed a clever general method for obtaining plug-in DR estimators called targeted maximum likelihood. In our setting, the method yields an estimator $\widehat{\mu}_{\mathrm{dr},\mathrm{TMLE}}$ that differs from $\widehat{\mu}_{\mathrm{dr},\mathrm{reg}}$ only in that $\widehat{b}( L) $ is now given by $\operatorname{expit}\{ \widehat{m}( L) +\widehat{\theta}_{\mathrm{greedy}}/\widehat{\pi}( L) \} $ where $\widehat{\theta}_{\mathrm{greedy}}$ is again obtained by maximum likelihood but with a fixed offset $\widehat{m}( L) $. This offset is an estimator of $\Pr(Y=1 \mid A_{1}=1,L,A_{2}=1) $ that might be obtained using flexible machine learning methods. Similar comments apply to models considered by Bang and Robins ([-@Bang:Robi:doub:2005]). Since 2006 there has been an explosion of research that has produced doubly robust estimators with much improved large sample efficiency and finite sample performance; @rotnitkzy:vansteelandt:2014 ([-@rotnitkzy:vansteelandt:2014]) give a review.
We note that CAR models are not the only models that admit doubly robust estimators. For example, @Scha:Rotn:Robi:adju:1999 ([-@Scha:Rotn:Robi:adju:1999]) exhibited doubly robust estimators in models with nonignorable missingness. @Robins:Rotnitzky:comment:on:bickel:2001 ([-@Robins:Rotnitzky:comment:on:bickel:2001]) derived sufficient conditions, satisfied by many non-CAR models, that imply the existence of doubly robust estimators. Recently, doubly robust estimators have been obtained in a wide variety of models. See @dudik:2014 ([-@dudik:2014]) in this volume for an interesting example.
Higher Order Influence Functions
================================
It may happen that the second-order bias of a doubly robust estimator $\widehat{\mu}_{\mathrm{dr}}$ decreases to 0 with $n$ more slowly than $n^{-1/2}$, so that the bias exceeds the standard error of the estimator. In that case, confidence intervals for $\mu$ based on $\widehat{\mu}_{\mathrm{dr}}$ fail to cover at their nominal rate even in large samples. Furthermore, in such a case, in terms of mean squared error, $\widehat{\mu}_{\mathrm{dr}}$ does not optimally trade off bias and variance. In an attempt to address these problems, @robins:higher:2008 ([-@robins:higher:2008]) developed a theory of point and interval estimation based on higher order influence functions and used this theory to construct estimators of $\mu$ that improve on $\widehat{\mu}_{\mathrm{dr}}$. Higher order influence functions are higher order U-statistics. The theory of @robins:higher:2008 ([-@robins:higher:2008]) extends to higher order the first order semiparametric inference theory of @bickel:klaasen:ritov:wellner:1993 ([-@bickel:klaasen:ritov:wellner:1993]) and @van:on:1991 ([-@van:on:1991]). In this issue, @vandervaart:2014 ([-@vandervaart:2014]) gives a masterful review of this theory.

Here, we present an interesting result found in @robins:higher:2008 ([-@robins:higher:2008]) that can be understood in isolation from the general theory, and conclude with an open estimation problem. @robins:higher:2008 ([-@robins:higher:2008]) consider the question of whether, for estimation of a conditional variance, random regressors provide for faster rates of convergence than do fixed regressors, and, if so, how? They consider a setting in which $n$ i.i.d. copies of $ ( Y,X ) $ are observed with $X$ a $d$-dimensional random vector with bounded density $f( \cdot) $ absolutely continuous w.r.t. the uniform measure on the unit cube $ (0,1 ) ^{d}$.
The regression function $b( \cdot) =E [Y \mid X=\cdot ]$ is assumed to lie in a given Hölder ball with Hölder exponent $\beta<1$.[^43] The goal is to estimate $E[\hbox{Var} \{ Y \mid X \} ]$ under the homoscedastic semiparametric model $\operatorname{Var}[ Y \mid X] =\sigma^{2}$. Under this model, the authors construct a simple estimator $\widehat{\sigma}^{2}$ that converges at rate $n^{-\fraca{4\beta/d}{1+4\beta/d}}$, when $\beta/d<1/4$.
@Wang:Brow:Cai:Levi:effe:2008 ([-@Wang:Brow:Cai:Levi:effe:2008]) and @Cai2009126 ([-@Cai2009126]) earlier proved that if $X_{i},i=1,\ldots,n$, are nonrandom but equally spaced in $ ( 0,1 )^{d}$, the minimax rate of convergence for the estimation of $\sigma^{2}$ is $n^{-2\beta/d}$ (when $\beta/d<1/4$) which is slower than $n^{-\fraca{4\beta/d}{1+4\beta/d}}$. Thus, randomness in $X$ allows for improved convergence rates even though no smoothness assumptions are made regarding $f( \cdot)$.
To explain how this happens, we describe the estimator of @robins:higher:2008 ([-@robins:higher:2008]). The unit cube in $\mathbb{R}^{d}$ is divided into $k=k( n ) =n^{\gamma}$, $\gamma>1$ identical subcubes each with edge length $k^{-1/d}$. A simple probability calculation shows that the number of subcubes containing at least two observations is $O_{p}( n^{2}/k)$. One may estimate $\sigma^{2}$ in each such subcube by $( Y_{i}-Y_{j})^{2}/2$.[^44] An estimator $\widehat{\sigma}^{2}$ of $\sigma^{2}$ may then be constructed by simply averaging the subcube-specific estimates $ ( Y_{i}-Y_{j} ) ^{2}/2$ over all the sub-cubes with at least two observations. The rate of convergence of the estimator is maximized at $n^{-\fraca{4\beta/d}{1+4\beta/d}}$ by taking $k=n^{\fracz{2}{1+4\beta/d}}$.[^45]
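The construction can be sketched in a few lines for $d=1$ (the regression function, the noise level and the particular choice $k=n^{1.2}$ are invented for illustration):

```python
import math
import random
from collections import defaultdict

random.seed(5)
n = 50_000
k = int(n ** 1.2)        # k = n^gamma subintervals of (0,1) with gamma > 1

cells = defaultdict(list)
for _ in range(n):
    x = random.random()                                   # random design: X ~ U(0,1)
    y = math.sin(6 * x) + 0.5 * random.gauss(0.0, 1.0)    # Var(Y | X) = sigma^2 = 0.25
    cells[int(x * k)].append(y)

# average (Y_i - Y_j)^2 / 2 over subintervals containing at least two observations
pairs = [(ys[0] - ys[1]) ** 2 / 2.0 for ys in cells.values() if len(ys) >= 2]
sigma2_hat = sum(pairs) / len(pairs)
```

With $n=50{,}000$ and $k\approx4.4\times10^{5}$, roughly $n^{2}/(2k)\approx3000$ subintervals receive two or more points; the estimator averages only these within-cell squared differences, so the smoothness of $b$ enters only through the tiny cell width $k^{-1}$, which is the source of the improved bias control in the random design.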
@robins:higher:2008 ([-@robins:higher:2008]) conclude that the random design estimator has better bias control, and hence converges faster than the optimal equal-spaced fixed $X$ estimator, because the random design estimator exploits the $O_{p} (n^{2}/n^{\fracz{2}{1+4\beta/d}} ) $ random fluctuations for which the $X$’s corresponding to two different observations are only a distance of $O ( \{ n^{\fracz{2}{1+4\beta/d}} \}^{-1/d} )$ apart.
An Open Problem[^46] {#an-open-problem .unnumbered}
--------------------
Consider again the above setting with random $X$. Suppose that $\beta/d$ remains less than $1/4$ but now $\beta>1$. Does there still exist an estimator of $\sigma^{2}$ that converges at $n^{-\fraca{4\beta/d}{1+4\beta/d}}$? Analogy with other nonparametric estimation problems would suggest the answer is “yes,” but the question remains unsolved.[^47]
Other Work {#sec:otherwork}
==========
The available space precludes a complete treatment of all of the topics that Robins has worked on. We provide a brief description of selected additional topics and a guide to the literature.
Analyzing Observational Studies as Nested Randomized Trials {#analyzing-observational-studies-as-nested-randomized-trials .unnumbered}
-----------------------------------------------------------
@hernan2008observational ([-@hernan2008observational]) and @hernan2005discussion conceptualize and analyze observational studies of a time-varying treatment as a nested sequence of individual RCTs run by nature. Their analysis is closely related to g-estimation of SNMs (discussed in Section \[sec:snm\]). The critical difference is that in these papers Robins and Hernán do not specify an SNM to coherently link the trial-specific effect estimates. This has benefits in that it makes the analysis easier and also more familiar to users without training in SNMs. The downside is that, in principle, this lack of coherence can result in different analysts recommending, as optimal, contradictory interventions (@robins2007invited [-@robins2007invited]).
Adjustment for “Reverse Causation” {#adjustment-for-reverse-causation .unnumbered}
----------------------------------
Consider an epidemiological study of the effect of a time-dependent treatment (say, cigarette smoking) on time to a disease of interest, say clinical lung cancer. In this setting, uncontrolled confounding by undetected preclinical lung cancer (often referred to as “reverse causation”) is a serious problem. @robins2008causal ([-@robins2008causal]) develops analytic methods that may still provide an unconfounded effect estimate, provided that (i) all subjects with preclinical disease severe enough to affect treatment (i.e., smoking behavior) at a given time $t$ will have their disease clinically diagnosed within the next $x$ (say, $2$) years and (ii) based on subject-matter knowledge an upper bound on $x$, for example, $3$ years, is known.
Causal Discovery {#causal-discovery .unnumbered}
----------------
@cps93 ([-@cps93]) and @pearlverm:tark proposed statistical methods that allowed one to draw causal conclusions from associational data. These methods assume an underlying causal DAG (or equivalently an FFRCISTG). If the DAG is incomplete, then such a model imposes conditional independence relations on the associated joint distribution (via d-separation). @cps93 and @pearlverm:tark ([-@pearlverm:tark]) made the additional assumption that [*all*]{} conditional independence relations that hold in the distribution of the observables are implied by the underlying causal graph, an assumption termed “stability” by @pearlverm:tark ([-@pearlverm:tark]), and “faithfulness” by @cps93. Under this assumption, the underlying DAG may be identified up to a (“Markov”) equivalence class. @cps93 proposed two algorithms that recover such a class, entitled “PC” and “FCI.” While the former presupposes that there are no unobserved common causes, the latter explicitly allows for this possibility.
@robins:impossible:1999 ([-@robins:impossible:1999]) and @robins:uniform:2003 ([-@robins:uniform:2003]) pointed out that although these procedures were consistent, they were not uniformly consistent. More recent papers ([-@kalisch:2007]; [-@Colo:Maat:Kali:Rich:lear:2012]) recover uniform consistency for these algorithms by imposing additional assumptions. @spirtes:2014 ([-@spirtes:2014]) in this volume extend this work by developing a variant of the PC algorithm which is uniformly consistent under weaker assumptions.
@shpitser12parameter ([-@shpitser12parameter; -@shpitser2014introduction]), building on @tian02on and @robins:1999 ([-@robins:1999]), develop a theory of *nested Markov models* that relates the structure of a causal DAG to conditional independence relations that arise after re-weighting; see Section \[sec:direct-null\]. This theory, in combination with the theory of graphical Markov models based on Acyclic Directed Mixed Graphs ([-@richardson:2002]; [-@richardson:2003]; [-@wermuth:11]; [-@evans2014]; [-@sadeghi2014]), will facilitate the construction of more powerful[^48] causal discovery algorithms that could (potentially) reveal much more information regarding the structure of a DAG containing hidden variables than algorithms (such as FCI) that use conditional independence alone.
Extrapolation and Transportability of Treatment Effects {#extrapolation-and-transportability-of-treatment-effects .unnumbered}
-------------------------------------------------------
High-quality longitudinal data are often available only in high-resource settings. An important question is when and how such data can be used to inform the choice of treatment strategy in low-resource settings. To help answer this question, @robins2008estimation ([-@robins2008estimation]) studied the extrapolation of optimal dynamic treatment strategies between two HIV-infected patient populations. The authors considered treatment strategies $g_x$ of the same form as those defined in Section \[sec:msm\], namely, “start anti-retroviral therapy the first time the measured CD4 count falls below $x$.” Given a utility measure $Y$, the goal is to find the regime $g_{x_{\mathrm{opt}}}$ that maximizes $E[Y(g_x)]$ in the second, low-resource population when good longitudinal data are available only in the first, high-resource population. Owing to differences in resources, the frequency of CD4 testing in the first population is much greater than in the second and, furthermore, for logistical and/or financial reasons, the testing frequencies cannot be altered. In this setting, the authors derived conditions under which data from the first population suffice to identify $g_{x_{\mathrm{opt}}}$, and constructed IPTW estimators of $g_{x_{\mathrm{opt}}}$ under those conditions. A key finding is that, owing to the differential rates of testing, a necessary condition for identification is that CD4 testing have no direct causal effect on $Y$ except through anti-retroviral therapy. In this issue, @pearl:2014 ([-@pearl:2014]) study the related question of transportability between populations using graphical tools.
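For readers less familiar with IPTW, the following minimal point-treatment sketch (a drastic simplification of the time-varying, dynamic-regime estimators in @robins2008estimation; the simulated model is ours) shows how weighting by the inverse probability of the received treatment removes confounding by a measured covariate:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
L = rng.binomial(1, 0.5, n)                  # measured confounder (e.g., low CD4)
e = np.where(L == 1, 0.8, 0.2)               # P(A = 1 | L): treatment depends on L
A = rng.binomial(1, e)                       # received treatment
Y = 1.0 * A + 2.0 * L + rng.normal(size=n)   # outcome; truth: E[Y(1)] = 2.0

naive = Y[A == 1].mean()                 # confounded: close to 2.6, not 2.0
iptw = np.mean((A == 1) * Y / e)         # Horvitz-Thompson IPTW estimate
print(naive, iptw)                       # iptw is close to the true 2.0
```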
Interference, Interactions and Quantum Mechanics {#interference-interactions-and-quantum-mechanics .unnumbered}
------------------------------------------------
Within a counterfactual causal model, @cox1958 ([-@cox1958]) defined there to be *interference between treatments* if the response of some subject depends not only on their own treatment but also on that of others. On the other hand, @Vand:Robi:mini:2009 ([-@Vand:Robi:mini:2009]) defined two binary treatments $(a_{1},a_{2})$ to be *causally interacting* in causing a binary response $Y$ if for some unit $Y(1,1) \neq Y(1,0) = Y(0,1)$; @Vand:epis:2010 ([-@Vand:epis:2010]) defined the interaction to be *epistatic* if $Y(1,1) \neq Y(1,0) = Y(0,1) = Y(0,0)$. VanderWeele with his collaborators has developed a very general theory of empirical tests for causal interaction of different types ([-@Vand:Robi:mini:2009]; [-@Vand:epis:2010], [-@Vand:suff:2010]; [-@vanderweele2012]).
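These definitions are pointwise conditions on a single unit's potential-outcome table and can be transcribed directly into code; the sketch below (our own, for illustration) represents a unit by a dictionary mapping $(a_1, a_2)$ to $Y(a_1, a_2)$:

```python
def causal_interaction(Y):
    """Causal interaction as defined in the text: Y(1,1) != Y(1,0) = Y(0,1)."""
    return Y[(1, 1)] != Y[(1, 0)] == Y[(0, 1)]

def epistatic_interaction(Y):
    """Epistatic interaction: Y(1,1) != Y(1,0) = Y(0,1) = Y(0,0)."""
    return Y[(1, 1)] != Y[(1, 0)] == Y[(0, 1)] == Y[(0, 0)]

# A unit who responds only when both treatments are given:
both_needed = {(1, 1): 1, (1, 0): 0, (0, 1): 0, (0, 0): 0}
# A unit for whom the two treatments cancel each other:
cancelling = {(1, 1): 0, (1, 0): 1, (0, 1): 1, (0, 0): 0}

print(causal_interaction(both_needed), epistatic_interaction(both_needed))  # True True
print(causal_interaction(cancelling), epistatic_interaction(cancelling))    # True False
```

The empirical tests referenced above are, of course, about detecting the *existence* of such units from observed data, not about evaluating these conditions on known potential outcomes.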
@robins2012proof ([-@robins2012proof]) showed, perhaps surprisingly, that this theory could be used to give a simple but novel proof of an important result in quantum mechanics known as Bell’s theorem. The proof was based on two insights: The first was that the consequent of Bell’s theorem could, by using the Neyman causal model, be recast as the statement that there is interference between a certain pair of treatments. The second was to recognize that empirical tests for causal interaction can be reinterpreted as tests for certain forms of interference between treatments, including the form needed to prove Bell’s theorem. @vanderweele2012mapping ([-@vanderweele2012mapping]) used this latter insight to show that existing empirical tests for causal interactions could be used to test for interference and spillover effects in vaccine trials and in many other settings in which interference and spillover effects may be present. The papers @ogburn:2014 ([-@ogburn:2014]) and @vanderweele:2014 in this issue contain further results on interference and spillover effects.
Multiple Imputation {#multiple-imputation .unnumbered}
-------------------
@wang1998large ([-@wang1998large]) and @robins2000inference ([-@robins2000inference]) studied the statistical properties of the multiple imputation approach to missing data ([-@rubin2004multiple]). They derived a variance estimator that is consistent for the asymptotic variance of a multiple imputation estimator even under misspecification and incompatibility of the imputation and the (complete data) analysis model. They also characterized the large sample bias of the variance estimator proposed by @Rubi:mult:1978 ([-@Rubi:mult:1978]).
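The variance estimator at issue is the one given by Rubin's standard combining rules, sketched below for a scalar parameter (the numbers are illustrative): with $m$ imputations yielding point estimates $\hat{Q}_l$ and within-imputation variances $W_l$, the total variance is $T = \bar{W} + (1 + 1/m)B$, where $B$ is the between-imputation variance.

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Combine m completed-data analyses of a scalar parameter
    using Rubin's rules: returns (point estimate, total variance)."""
    q = np.asarray(estimates, dtype=float)
    w = np.asarray(variances, dtype=float)
    m = len(q)
    qbar = q.mean()                    # combined point estimate
    wbar = w.mean()                    # average within-imputation variance
    b = q.var(ddof=1)                  # between-imputation variance
    t = wbar + (1 + 1 / m) * b         # Rubin's total variance
    return qbar, t

# Four hypothetical completed-data analyses:
qbar, t = rubin_combine([1.0, 1.2, 0.9, 1.1], [0.04, 0.05, 0.04, 0.05])
print(qbar, t)   # 1.05, about 0.066
```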
Posterior Predictive Checks {#posterior-predictive-checks .unnumbered}
---------------------------
@robins2000asymptotic ([-@robins2000asymptotic]) studied the asymptotic null distributions of the posterior predictive p-value of @rubin1984bayesianly ([-@rubin1984bayesianly]) and @guttman1967use ([-@guttman1967use]) and of the conditional predictive and partial posterior predictive p-values of @bayarri2000p ([-@bayarri2000p]). They found the latter two p-values to have an asymptotic uniform distribution; in contrast they found that the posterior predictive p-value could be very conservative, thereby diminishing its power to detect a misspecified model. In response, Robins et al. derived an adjusted version of the posterior predictive p-value that was asymptotically uniform.
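The conservativeness is easy to see in a toy example (ours, not from the cited papers): with a flat prior on the mean of a normal model and the sample mean as the discrepancy, the posterior predictive p-value equals $1/2$ for every data set, so its distribution is a point mass rather than uniform. A quick Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, draws = 50, 200, 400
pvals = []
for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)                # data from the null model N(theta, 1)
    xbar = x.mean()
    # flat prior => posterior: theta | x ~ N(xbar, 1/n)
    theta = rng.normal(xbar, 1 / np.sqrt(n), draws)
    # replicate sample means given each posterior draw of theta
    xrep_bar = rng.normal(theta, 1 / np.sqrt(n))
    pvals.append(np.mean(xrep_bar >= xbar))    # posterior predictive p-value
pvals = np.array(pvals)

# Uniform p-values would have sd of about 0.29; these cluster tightly around 0.5.
print(pvals.mean(), pvals.std())
```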
Sensitivity Analysis {#sensitivity-analysis .unnumbered}
--------------------
Understanding that epidemiologists will almost never succeed in collecting data on all covariates needed to fully prevent confounding by unmeasured factors and/or nonignorable missing data, Robins with collaborators Daniel Scharfstein and Andrea Rotnitzky developed methods for conducting sensitivity analyses. See, for example, @Scha:Rotn:Robi:adju:1999, @robins2000sensitivity and @robins2002covariance ([-@robins2002covariance], pages 319–321). In this issue, @richardson:hudgens:2014 ([-@richardson:hudgens:2014]) describe methods for sensitivity analysis and present several applied examples.
Public Health Impact {#public-health-impact .unnumbered}
--------------------
Finally, we have not discussed the large impact of the methods that Robins introduced on the substantive analysis of longitudinal data in epidemiology and other fields. Many researchers have been involved in transforming Robins’ work on time-varying treatments into increasingly reliable, robust analytic tools and in applying these tools to help answer questions of public health importance.
List of Acronyms Used {#acronyms .unnumbered}
=====================
----------- --------------------------------- ---------------------------------------------------------------------
CAR: Section \[sec:semipar-eff\] coarsened at random.
CD4: Section \[sec:tree-graph\] (medical) cell line depleted by HIV.
CDE: Section \[sec:cde\] controlled direct effect.
CMA: Section \[sec:dags\] causal Markov assumption.
DAG: Section \[sec:dags\] directed acyclic graph.
DR: Section \[sec:semipar-eff\] doubly robust.
dSWIG: Section \[sec:dynamic-regimes\] dynamic single-world intervention graph.
FFRCISTG: Section \[sec:tree-graph\] finest fully randomized causally interpreted structured tree graph.
HIV: Section \[sec:tree-graph\] (medical) human immunodeficiency virus.
IPCW: Section \[sec:censoring\] inverse probability of censoring weighted.
IPTW: Section \[sec:msm\] inverse probability of treatment weighted.
ITT: Section \[sec:censoring\] intention to treat.
MI: Section \[sec:pde\] (medical) myocardial infarction.
MSM: Section \[sec:msm\] marginal structural model.
NPSEM: Section \[sec:tree-graph\] nonparametric structural equation model.
NPSEM-IE: Section \[sec:tree-graph\] nonparametric structural equation model with independent errors.
PDE: Section \[sec:pde\] pure direct effects.
PSDE: Section \[sec:psde\] principal stratum direct effects.
RCT: Section \[sec:tree-graph\] randomized clinical trial.
SNM: Section \[sec:snm\] structural nested model.
SNDM: Section \[sec:snm\] structural nested distribution model.
SNFTM: Section \[sec:snm\] structural nested failure time model.
SNMM: Section \[sec:snm\] structural nested mean model.
SWIG: Section \[sec:dags\] single-world intervention graph.
TIE: Section \[sec:pde\] total indirect effect.
----------- --------------------------------- ---------------------------------------------------------------------
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was supported by the US National Institutes of Health Grant R01 AI032475.
References {#references .unnumbered}
==========
Aalen, O. (1978). Nonparametric inference for a family of counting processes. *The Annals of Statistics* [6]{}, 701–726.
Andersen, P., O. Borgan, R. Gill, and N. Keiding (1992). *Statistical models based on counting processes*. Springer.
Aronow, P. M., D. P. Green, and D. K. K. Lee (2014). Sharp bounds on the variance in randomized experiments. *The Annals of Statistics* [42]{}(3), 850–871.
Balke, A. and J. Pearl (1994). Probabilistic evaluation of counterfactual queries. In [*Proceedings of the $\rm12^{th}$ Conference on Artificial Intelligence*]{}, Volume 1, Menlo Park, CA, pp. 230–7. MIT Press.
Bang, H. and J. M. Robins (2005). Doubly robust estimation in missing data and causal inference models. *Biometrics* [61]{}(4), 962–973.
Bayarri, M. J. and J. O. Berger (2000). ${P}$ values for composite null models (with discussion). *Journal of the American Statistical Association* [ 95]{}(452), 1127–1170.
Bellman, R. (1957). *Dynamic Programming* (1 ed.). Princeton, NJ, USA: Princeton University Press.
Bickel, P. J., C. A. J. Klaassen, Y. Ritov, and J. A. Wellner (1993). *Efficient and Adaptive Estimation for Semiparametric Models*. Baltimore: John Hopkins University Press.
Blalock, H. M. (Ed.) (1971). *Causal models in the social sciences*. Chicago.
Cai, T. T., M. Levine, and L. Wang (2009). Variance function estimation in multivariate nonparametric regression with fixed design. *Journal of Multivariate Analysis* [100]{}(1), 126 – 136.
Cassel, C. M., C. E. Särndal, and J. H. Wretman (1976). Some results on generalized difference estimation and generalized regression estimation for finite populations. *Biometrika* [63]{}, 615–620.
Cator, E. A. (2004). On the testability of the car assumption. *The Annals of Statistics* [32]{}(5), 1957–1980.
Colombo, D., M. H. Maathuis, M. Kalisch, and T. S. Richardson (2012). Learning high-dimensional directed acyclic graphs with latent and selection variables. *The Annals of Statistics* [40]{}(1), 294–321.
Cox, D. R. (1958). *Planning of experiments.* Wiley.
Cox, D. R. (1972). Regression models and life-tables (with discussion). *Journal of the Royal Statistical Society, Series B: Methodological* [34]{}, 187–220.
Cox, D. R. and N. Wermuth (1999). Likelihood factorizations for mixed discrete and continuous variables. *Scand. J. Statist.* [26]{}(2), 209–220.
Dudik, M., D. Erhan, J. Langford, and L. Li (2014). Doubly robust policy evaluation and learning. *Statistical Science* [29]{}(4), ??–??
Efron, B. and D. V. Hinkley (1978). Assessing the accuracy of the maximum likelihood estimator: observed versus expected [F]{}isher information. *Biometrika* [65]{}(3), 457–487. With comments by Ole Barndorff-Nielsen, A. T. James, G. K. Robinson and D. A. Sprott and a reply by the authors.
Evans, R. J. and T. S. Richardson (2014). Markovian acyclic directed mixed graphs for discrete data. *The Annals of Statistics* [42]{}(4), 1452–1482.
Firth, D. and K. E. Bennett (1998). Robust models in probability sampling ([with Discussion]{}). *Journal of the Royal Statistical Society, Series B: Statistical Methodology* [60]{}, 3–21.
Fleming, T. R. and D. P. Harrington (1991). *Counting Processes and Survival Analysis*. John Wiley & Sons.
Frangakis, C. E. and D. B. Rubin (1999). Addressing complications of intention-to-treat analysis in the combined presence of all-or-none treatment-noncompliance and subsequent missing outcomes. *Biometrika* [86]{}(2), 365–379.
Frangakis, C. E. and D. B. Rubin (2002). Principal stratification in causal inference. *Biometrics* [58]{}(1), 21–29.
Freedman, D. A. (2006). Statistical [M]{}odels for [C]{}ausation: [W]{}hat [I]{}nferential [L]{}everage [D]{}o [T]{}hey [P]{}rovide? *Evaluation Review* [30]{}(6), 691–713.
Gilbert, E. S. (1982). Some confounding factors in the study of mortality and occupational exposures. *American Journal of Epidemiology* [116]{}(1), 177–188.
Gilbert, P. B., R. J. Bosch, and M. G. Hudgens (2003). Sensitivity [A]{}nalysis for the [A]{}ssessment of [C]{}ausal [V]{}accine [E]{}ffects on [V]{}iral [L]{}oad in [HIV]{} [V]{}accine [T]{}rials. *Biometrics* [59]{}(3), 531–541.
Gill, R. D. (2014). Statistics, causality and [B]{}ell’s theorem. *Statistical Science* [29]{}(4), ??–??
Gill, R. D. and J. M. Robins (2001). Causal inference for complex longitudinal data: The continuous case. *The Annals of Statistics* [29]{}(6), 1785–1811.
Gill, R. D., M. J. van der Laan, and J. M. Robins (1997). Coarsening at random: [C]{}haracterizations, conjectures, counter-examples. In [*Survival Analysis. Proceedings of the first Seattle Symposium in Biostatistics (Lecture Notes in Statistics Vol. 123)*]{}, pp. 255–294. Springer.
Guttman, I. (1967). The use of the concept of a future observation in goodness-of-fit problems. *Journal of the Royal Statistical Society, Series B: Methodological* [29]{}, 83–100.
Heitjan, D. F. and D. B. Rubin (1991). Ignorability and coarse data. *The Annals of Statistics* [19]{}, 2244–2253.
Hern[á]{}n, M. A., J. M. Robins, and L. A. Garc[í]{}a Rodr[í]{}guez (2005). Discussion on “statistical issues arising in the women’s health initiative”. *Biometrics* [61]{}(4), 922–930.
Hern[á]{}n, M. A., E. Lanoy, D. Costagliola, and J. M. Robins (2006). Comparison of dynamic treatment regimes via inverse probability weighting. *Basic [&]{} Clinical Pharmacology [&]{} Toxicology* [ 98]{}(3), 237–242.
Hern[á]{}n, M. A., A. Alonso, R. Logan, F. Grodstein, K. B. Michels, M. J. Stampfer, W. C. Willett, J. E. Manson, and J. M. Robins (2008). Observational studies analyzed like randomized experiments: an application to postmenopausal hormone therapy and coronary heart disease. *Epidemiology (Cambridge, Mass.)* [19]{}(6), 766.
Huang, Y. and M. Valtorta (2006). Pearl’s calculus of interventions is complete. In [*Proceedings of the 22nd Conference On Uncertainty in Artificial Intelligence*]{}.
Kalbfleisch, J. D. and R. L. Prentice (1980). *The Statistical Analysis of Failure Time Data*. John Wiley & Sons.
Kalisch, M. and P. B[ü]{}hlmann (2007). Estimating high-dimensional directed acyclic graphs with the pc-algorithm. *Journal of Machine Learning Research* [8]{}, 613–636.
Keiding, N. and D. Clayton (2014). Standardization and control for confounding in observational studies: a historical perspective. *Statist. Sci.* [29]{}, ??–??
Manski, C. (1990). Non-parametric bounds on treatment effects. *American Economic Review* [80]{}, 351–374.
Miettinen, O. S. and E. F. Cook (1981). Confounding: Essence and detection. *American Journal of Epidemiology* [114]{}(4), 593–603.
Mohan, K., J. Pearl, and J. Tian (2013). Graphical models for inference with missing data. In [*Advances in Neural Information Processing Systems 26*]{}, pp. 1277–1285.
Moore, K. L. and M. J. van der Laan (2009). Covariate adjustment in randomized trials with binary outcomes: Targeted maximum likelihood estimation. *Statistics in medicine* [28]{}(1), 39–64.
Murphy, S. A. (2003). Optimal dynamic treatment regimes. *Journal of the Royal Statistical Society, Series B: Statistical Methodology* [65]{}(2), 331–366.
Neyman, J. (1923). Sur les applications de la théorie des probabilités aux experiences agricoles: [E]{}ssai des principes. *Roczniki Nauk Rolniczych* [X]{}, 1–51. In Polish, English translation by D. Dabrowska and T. Speed in [*Statistical Science*]{} [**5**]{} 463–472, 1990.
Ogburn, E. L. and T. J. Vander[W]{}eele (2014). Causal diagrams for interference and contagion. *Statistical Science* [29]{}(4), ??–??
Orellana, L., A. Rotnitzky, and J. M. Robins (2010a). Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, part i: main content. *The International Journal of Biostatistics* [6]{}(2).
Orellana, L., A. Rotnitzky, and J. M. Robins (2010b). Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, part ii: proofs of results. *The International Journal of Biostatistics* [6]{}(2).
Pearl, J. (1988). *Probabilistic [R]{}easoning in [I]{}ntelligent [S]{}ystems*. San Mateo, CA: Morgan Kaufmann.
Pearl, J. (1995a). Causal diagrams for empirical research (with discussion). *Biometrika* [82]{}, 669–690.
Pearl, J. (1995b). On the testability of causal models with latent and instrumental variables. In [*Proceedings of the Eleventh Annual Conference on Uncertainty in Artificial Intelligence (UAI-95)*]{}, San Francisco, CA, pp. 435–443. Morgan Kaufmann.
Pearl, J. (2000). *Causality*. Cambridge, UK: Cambridge University Press.
Pearl, J. (2001). Direct and indirect effects. In J. S. Breese and D. Koller (Eds.), [*[P]{}roceedings of the 17th Annual Conference on Uncertainty in Artificial Intelligence*]{}, San Francisco, pp. 411–420. Morgan Kaufmann.
Pearl, J. (2012). Eight myths about causality and structural equation models. Technical Report R-393, Computer Science Department, UCLA.
Pearl, J. and E. Bareinboim (2014). External validity: From do-calculus to transportability across populations. *Statistical Science* [29]{}(4), ??–??
Pearl, J. and T. Verma (1991). A theory of inferred causation. In J. Allen, R. Fikes, and E. Sandewall (Eds.), [*Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference*]{}, San Mateo, CA, pp. 441–452. Morgan Kaufmann.
Picciotto, S., M. A. Hern[á]{}n, J. H. Page, J. G. Young, and J. M. Robins (2012). Structural nested cumulative failure time models to estimate the effects of interventions. *Journal of the American Statistical Association* [ 107]{}(499), 886–900.
Quale, C. M., M. J. van Der Laan, and J. M. Robins (2006). Locally efficient estimation with bivariate right-censored data. *Journal of the American Statistical Association* [ 101]{}(475), 1076–1084.
Richardson, T. S. (2003). Markov properties for acyclic directed mixed graphs. *Scand. J. Statist.* [30]{}, 145–157.
Richardson, T. S. and J. M. Robins (2013). Single [W]{}orld [I]{}ntervention [G]{}raphs [(SWIGs)]{}: A unification of the counterfactual and graphical approaches to causality. Technical Report 128, Center for Statistics and the Social Sciences, University of Washington.
Richardson, T. S. and P. Spirtes (2002). Ancestral graph [M]{}arkov models. *Ann. Statist.* [30]{}, 962–1030.
Richardson, A., M. G. Hudgens, P. B. Gilbert, and J. P. Fine (2014). Nonparametric bounds and sensitivity analysis of treatment effects. *Statistical Science* [29]{}(4), ??–??
Ritov, Y., P. J. Bickel, A. C. Gamst, and B. J. K. Kleijn (2014). The [B]{}ayesian analysis of complex, high-dimensional models: [C]{}an it be [CODA]{}? *Statistical Science* [29]{}(4), ??–??
Robins, J. M. (1986). A new approach to causal inference in mortality studies with sustained exposure periods – applications to control of the healthy worker survivor effect. *Mathematical [M]{}odelling* [7]{}, 1393–1512.
Robins, J. M. (1987a). . *J Chronic Dis* [40 Suppl 2]{}, 139S–161S.
Robins, J. M. (1987b). Addendum to: [*A new approach to causal inference in mortality studies with sustained exposure periods – [A]{}pplication to control of the healthy worker survivor effect*]{}. *Computers and Mathematics with Applications* [14]{}, 923–945.
Robins, J. M. (1989). The analysis of randomized and non-randomized [AIDS]{} treatment trials using a new approach to causal inference in longitudinal studies. In L. Sechrest, H. Freeman, and A. Mulley (Eds.), [*Health Service Research Methodology: A focus on [AIDS]{}*]{}. Washington, D.C.: U.S. Public Health Service.
Robins, J. M. (1992). Estimation of the time-dependent accelerated failure time model in the presence of confounding factors. *Biometrika* [79]{}(2), 321–334.
Robins, J. M. (1993). Analytic methods for estimating [HIV]{}-treatment and cofactor effects. In [*Methodological Issues in AIDS Behavioral Research*]{}, pp. 213–288. Springer.
Robins, J. M. (1994). Correcting for non-compliance in randomized trials using structural nested mean models. *Communications in Statistics: Theory and Methods* [23]{}, 2379–2412.
Robins, J. M. (1997a). Causal inference from complex longitudinal data. In M. Berkane (Ed.), [*Latent variable modelling and applications to causality*]{}, Number 120 in Lecture notes in statistics, pp. 69–117. New York: Springer-Verlag.
Robins, J. M. (1997c). Structural nested failure time models. In P. K. Andersen and N. Keiding (Section Eds.), [*Survival Analysis*]{}; P. Armitage and T. Colton (Eds.), [*Encyclopedia of Biostatistics*]{}, pp. 4372–4389. John Wiley [&]{} Sons, Ltd.
Robins, J. M. (1997b). Marginal structural models. In [*ASA Proceedings of the Section on Bayesian Statistical Science*]{}, pp. 1–10. American Statistical Association.
Robins, J. M. (1999a). Robust estimation in sequentially ignorable missing data and causal inference models. In [*ASA Proceedings of the Section on Bayesian Statistical Science*]{}, pp. 6–10. American Statistical Association.
Robins, J. M. (1999b). Testing and estimation of direct effects by reparameterizing directed acyclic graphs with structural nested models. In C. Glymour and G. Cooper (Eds.), [*Computation, Causation, and Discovery*]{}, pp. 349–405. Cambridge, MA: MIT Press.
Robins, J. M. (2000). Marginal structural models versus structural nested models as tools for causal inference. In M. E. Halloran and D. Berry (Eds.), [*Statistical Models in Epidemiology, the Environment, and Clinical Trials*]{}, Volume 116 of [*The IMA Volumes in Mathematics and its Applications*]{}, pp. 95–133. Springer New York.
Robins, J. M. (2002). Comment on “Covariance adjustment in randomized experiments and observational studies”, by [P]{}. [R]{}. [R]{}osenbaum. *Statistical Science* [17]{}(3), 309–321.
Robins, J. M. (2004). Optimal structural nested models for optimal sequential decisions. In D. Y. Lin and P. J. Heagerty (Eds.), [*Proceedings of the Second Seattle Symposium on Biostatistics*]{}, Number 179 in Lecture Notes in Statistics, pp. 189–326. New York: Springer.
Robins, J. M. (2008). Causal models for estimating the effects of weight gain on mortality. *International Journal of Obesity* [32]{}, S15–S41.
Robins, J. M. and S. Greenland (1989a). Estimability and estimation of excess and etiologic fractions. *Statistics in Medicine* [8]{}, 845–859.
Robins, J. M. and S. Greenland (1989b). The probability of causation under a stochastic model for individual risk. *Biometrics* [45]{}, 1125–1138.
Robins, J. M. and S. Greenland (1992). Identifiability and exchangeability for direct and indirect effects. *Epidemiology* [3]{}, 143–155.
Robins, J. M., M. A. Hern[á]{}n, and A. Rotnitzky (2007). Invited commentary: effect modification by time-varying covariates. *American [J]{}ournal of [E]{}pidemiology* [166]{}(9), 994–1002.
Robins, J. M. and H. Morgenstern (1987). The foundations of confounding in epidemiology. *Comput. Math. Appl.* [14]{}(9-12), 869–916.
Robins, J. M., L. Orellana, and A. Rotnitzky (2008). Estimation and extrapolation of optimal treatment and testing strategies. *Statistics in medicine* [27]{}(23), 4678–4721.
Robins, J. M. and T. S. Richardson (2011). Alternative graphical causal models and the identification of direct effects. In P. Shrout, K. Keyes, and K. Ornstein (Eds.), [*Causality and Psychopathology: Finding the Determinants of Disorders and their Cures*]{}, Chapter 6, pp. 1–52. Oxford University Press.
Robins, J. M. and Y. Ritov (1997). Toward a curse of dimensionality appropriate ([CODA]{}) asymptotic theory for semi-parametric models. *Statistics in Medicine* [16]{}, 285–319.
Robins, J. M. and A. Rotnitzky (1992). Recovery of information and adjustment for dependent censoring using surrogate markers. In N. P. Jewell, K. Dietz, and V. T. Farewell (Eds.), [*AIDS Epidemiology*]{}, pp. 297–331. Birkhäuser Boston.
Robins, J. M. and A. Rotnitzky (2001). Comment on “[I]{}nference for semiparametric models: [S]{}ome questions and an answer”, by [P]{}. [B]{}ickel. *Statistica Sinica* [11]{}(4), 920–936.
Robins, J. M., A. Rotnitzky, and D. O. Scharfstein (2000). Sensitivity analysis for selection bias and unmeasured confounding in missing data and causal inference models. In [*Statistical models in epidemiology, the environment, and clinical trials*]{}, pp. 1–94. Springer.
Robins, J., A. Rotnitzky, and S. Vansteelandt (2007). Discussion of [[*[P]{}rincipal stratification designs to estimate input data missing due to death*]{}]{} by [C]{}.[E]{}. [F]{}rangakis, [D]{}. [B]{}. [R]{}ubin, [M.-W.]{} [A]{}n [&]{} [E]{}. [M]{}ac[K]{}enzie. *Biometrics* [63]{}(3), 650–653.
Robins, J. M., A. Rotnitzky, and L. P. Zhao (1994). Estimation of regression coefficients when some regressors are not always observed. *Journal of the American Statistical Association* [89]{}, 846–866.
Robins, J. M., T. J. VanderWeele, and R. D. Gill (2012). A proof of [B]{}ell’s inequality in quantum mechanics using causal interactions. Available at arXiv:1207.4913.
Robins, J. M., A. van der Vaart, and V. Ventura (2000). Asymptotic distribution of p values in composite null models. *Journal of the American Statistical Association* [ 95]{}(452), 1143–1156.
Robins, J. M. and N. Wang (2000). Inference for imputation estimators. *Biometrika* [87]{}(1), 113–124.
Robins, J. M. and L. Wasserman (1997). Estimation of effects of sequential treatments by reparameterizing directed acyclic graphs. In [*Proceedings of the 13th Conference on Uncertainty in Artificial Intelligence*]{}, pp. 309–420. Morgan Kaufmann.
Robins, J. M. and L. Wasserman (1999). On the impossibility of inferring causation from association without background knowledge. In C. Glymour and G. Cooper (Eds.), [*Computation, Causation, and Discovery*]{}, pp. 305–321. Cambridge, MA: MIT Press.
Robins, J. M. and L. Wasserman (2000). Conditioning, likelihood, and coherence: [A]{} review of some foundational concepts. *Journal of the American Statistical Association* [ 95]{}(452), 1340–1346.
Robins, J. M., D. Blevins, G. Ritter, and M. Wulfsohn (1992). ${G}$-estimation of the effect of prophylaxis therapy for pneumocystis carinii pneumonia on the survival of [AIDS]{} patients. *Epidemiology* [3]{}, 319–336.
Robins, J. M., R. Scheines, P. Spirtes, and L. Wasserman (2003). Uniform consistency in causal inference. *Biometrika* [90]{}(3), 491–515.
Robins, J. M., L. Li, E. Tchetgen, and A. van der Vaart (2008). Higher order influence functions and minimax estimation of nonlinear functionals. In D. Nolan and T. Speed (Eds.), [*Probability and Statistics: Essays in Honor of David A. Freedman*]{}, Volume 2 of [*Collections*]{}, pp. 335–421. Beachwood, Ohio, USA: Institute of Mathematical Statistics.
Rotnitzky, A. and J. M. Robins (1995). Semiparametric regression estimation in the presence of dependent censoring. *Biometrika* [82]{}, 805–820.
Rotnitzky, A. and S. Vansteelandt (2014). Double-robust methods. In G. Fitzmaurice, M. Kenward, G. Molenberghs, A. Tsiatis, and G. Verbeke (Eds.), [*Handbook of Missing Data Methodology*]{}. Chapman & Hall/CRC Press.
Rubin, D. (1974). Estimating causal effects of treatments in randomized and non-randomized studies. *Journal of Educational Psychology* [66]{}, 688–701.
Rubin, D. B. (1978a). Bayesian inference for causal effects: [T]{}he role of randomization. *The Annals of Statistics* [6]{}, 34–58.
Rubin, D. B. (1978b). Multiple imputations in sample surveys: [A]{} phenomenological [B]{}ayesian approach to nonresponse ([C]{}/[R]{}: P29-34). In [*ASA Proceedings of the Section on Survey Research Methods*]{}, pp. 20–28. American Statistical Association.
Rubin, D. B. (1984). Bayesianly justifiable and relevant frequency calculations for the applied statistician. *The Annals of Statistics* [12]{}, 1151–1172.
Rubin, D. B. (2004b). *Multiple imputation for nonresponse in surveys*, Volume 81. John Wiley & Sons.
Rubin, D. B. (1998). More powerful randomization-based $p$-values in double-blind trials with non-compliance. *Statistics in Medicine* [17]{}, 371–385.
Rubin, D. B. (2004a). Direct and indirect causal effects via potential outcomes. *Scandinavian Journal of Statistics* [31]{}(2), 161–170.
Sadeghi, K. and S. Lauritzen (2014). Markov properties for mixed graphs. *Bernoulli* [20]{}(2), 676–696.
Scharfstein, D. O., A. Rotnitzky, and J. M. Robins (1999). Adjusting for nonignorable drop-out using semiparametric nonresponse models (with discussion). *Journal of the American Statistical Association* [94]{}, 1096–1120.
Schulte, P. J., A. A. Tsiatis, E. B. Laber, and M. Davidian (2014). Q- and [A]{}-learning methods for estimating optimal dynamic treatment regimes. *Statistical Science* [29]{}(4), ??–??
Sekhon, J. S. (2008). The [N]{}eyman-[R]{}ubin model of causal inference and estimation via matching methods. In J. M. Box-Steffensmeier, H. E. Brady, and D. Collier (Eds.), [*The [O]{}xford [H]{}andbook of [P]{}olitical [M]{}ethodology*]{}, Chapter 11, pp. 271–299. Oxford Handbooks Online.
Shpitser, I. and J. Pearl (2006). Identification of joint interventional distributions in recursive semi-[M]{}arkovian causal models. In [*Proceedings of the 21st National Conference on Artificial Intelligence*]{}.
Shpitser, I., T. S. Richardson, J. M. Robins, and R. J. Evans (2012). Parameter and structure learning in nested [M]{}arkov models. In [*Causal Structure Learning Workshop of the 28th Conference on Uncertainty in Artificial Intelligence (UAI-12)*]{}.
Shpitser, I., R. J. Evans, T. S. Richardson, and J. M. Robins (2014). Introduction to nested [M]{}arkov models. *Behaviormetrika* [41]{}(1), 3–39.
Spirtes, P., C. Glymour, and R. Scheines (1993). *Causation, [P]{}rediction and [S]{}earch*. Number 81 in Lecture Notes in Statistics. Springer-Verlag.
Spirtes, P. and J. Zhang (2014). A uniformly consistent estimator of causal effects under the $k$-triangle-faithfulness assumption. *Statistical Science* [29]{}(4), ??–??
Tian, J. (2008). Identifying dynamic sequential plans. In [*24th Conference on Uncertainty in Artificial Intelligence (UAI-08)*]{}. AUAI Press.
Tian, J. and J. Pearl (2002a). A general identification condition for causal effects. In [*Eighteenth National Conference on Artificial Intelligence*]{}, pp. 567–573.
Tian, J. and J. Pearl (2002b). On the testable implications of causal models with hidden variables. In [*Proceedings of UAI-02*]{}, pp. 519–527.
Tsiatis, A. A. (2006). *[Semiparametric theory and missing data.]{}* New York, NY: Springer.
Tsiatis, A. A., M. Davidian, M. Zhang, and X. Lu (2008). Covariate adjustment for two-sample treatment comparisons in randomized clinical trials: A principled yet flexible approach. *Statistics in medicine* [27]{}(23), 4658–4677.
Vander[W]{}eele, T. J. (2010a). Epistatic interactions. *Statistical Applications in Genetics and Molecular Biology* [9]{}(1), NA–NA.
Vander[W]{}eele, T. J. (2010b). Sufficient cause interactions for categorical and ordinal exposures with three levels. *Biometrika* [97]{}(3), 647–659.
Vander[W]{}eele, T. J. and T. S. Richardson (2012). General theory for interactions in sufficient cause models with dichotomous exposures. *The Annals of Statistics* [40]{}(4), 2128–2161.
Vander[W]{}eele, T. J. and J. M. Robins (2009). Minimal sufficient causation and directed acyclic graphs. *The Annals of Statistics* [37]{}(3), 1437–1465.
VanderWeele, T. J. and I. Shpitser (2013). On the definition of a confounder. *The Annals of Statistics* [41]{}(1), 196–220.
Vander[W]{}eele, T. J., E. J. [Tchetgen Tchetgen]{}, and M. E. Halloran (2014). Interference and sensitivity analysis. *Statistical Science* [29]{}(4), ??–??
Vander[W]{}eele, T. J., J. P. Vandenbroucke, E. J. T. Tchetgen, and J. M. Robins (2012). A mapping between interactions and interference: implications for vaccine trials. *Epidemiology* [23]{}(2), 285–292.
Vansteelandt, S. and M. Joffe (2014). Structural nested models and [G]{}-estimation: the partially realized promise. *Statistical Science* [29]{}(4), ??–??
van der Laan, M. J., A. E. Hubbard, and J. M. Robins (2002). Locally efficient estimation of a multivariate survival function in longitudinal [S]{}tudies. *Journal of the American Statistical Association* [ 97]{}(458), 494–507.
van der Laan, M. J. and M. L. Petersen (2007). Causal effect models for realistic individualized treatment and intention to treat rules. *The International Journal of Biostatistics* [3]{}(1-A3), 1–53.
van der Laan, M. J. and J. M. [Robins]{} (2003). *[Unified methods for censored longitudinal data and causality.]{}* New York, NY: Springer.
van der Laan, M. J. and S. Rose (2011). *Targeted learning: causal inference for observational and experimental data*. Springer.
van der Laan, M. J. and D. Rubin (2006). Targeted maximum likelihood learning. *The International Journal of Biostatistics* [2]{}(1-A11), 1–39.
van der Vaart, A. (1991). On differentiable functionals. *The Annals of Statistics* [19]{}, 178–204.
van der Vaart, A. (2014). Higher order tangent spaces and influence functions. *Statistical Science* [29]{}(4), ??–??
Verma, T. and J. Pearl (1990). Equivalence and synthesis of causal models. In M. Henrion, R. Shachter, L. Kanal, and J. Lemmer (Eds.), [*Uncertainty in Artificial Intelligence: Proceedings of the $\rm6^{th}$ Conference*]{}, Mountain View, CA, pp. 220–227. Association for Uncertainty in AI.
Wang, N. and J. M. Robins (1998). Large-sample theory for parametric multiple imputation procedures. *Biometrika* [85]{}(4), 935–948.
Wang, L., L. D. Brown, T. T. Cai, and M. Levine (2008). Effect of mean on variance function estimation in nonparametric regression. *The Annals of Statistics* [36]{}(2), 646–664.
Wermuth, N. (2011). Probability distributions with summary graph structure. *Bernoulli* [17]{}(3), 845–879.
[^1]: @Robi:Gree:esti:1989 ([-@Robi:Gree:esti:1989; -@Robi:Gree:prob:1989]) provided a formal definition of the probability of causation and a definitive answer to the question in the following sense. They proved that the probability of causation was not identified from epidemiologic data even in the absence of confounding, but that sharp upper and lower bounds could be obtained. Specifically, under the assumption that a workplace exposure was never beneficial, the probability $P(t)$ that a worker's death occurring $t$ years after exposure was due to that exposure was sharply upper bounded by $1$ and lower bounded by $\max [ 0,\{f_{1}(t)-f_{0}(t)\}/f_{1}(t) ] $, where $f_{1}(t)$ and $f_{0}(t)$ are, respectively, the marginal densities in the exposed and unexposed cohorts of the random variable $T$ encoding time to death.
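As a quick numerical illustration of this bound (using hypothetical exponential time-to-death densities chosen purely for illustration, not data from the paper), the lower bound $\max [ 0,\{f_{1}(t)-f_{0}(t)\}/f_{1}(t) ]$ can be computed directly:

```python
import math

def pc_lower_bound(f1, f0):
    # sharp lower bound max[0, (f1 - f0)/f1] on the probability of causation
    return max(0.0, (f1 - f0) / f1)

# hypothetical exponential densities f(t) = rate * exp(-rate * t), evaluated at t = 1
t = 1.0
rate_exposed, rate_unexposed = 0.30, 0.20
f1 = rate_exposed * math.exp(-rate_exposed * t)
f0 = rate_unexposed * math.exp(-rate_unexposed * t)
lb = pc_lower_bound(f1, f0)   # about 0.26 here; the upper bound is 1
```

As the footnote notes, the data only bracket the probability of causation: here any value between roughly 0.26 and 1 is consistent with the (hypothetical) densities.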
[^2]: The author, Ethel Gilbert, is the mother of Peter Gilbert who is a contributor to this special issue; see ([-@richardson:hudgens:2014]).
[^3]: In the epidemiologic literature, this bestiary is sometimes referred to as the collection of “g-methods.”
[^4]: A complete list of acronyms used is given before the References.
[^5]: See @Freedman01122006 ([-@Freedman01122006]) and @sekhon2008neyman ([-@sekhon2008neyman]) for historical reviews of the counterfactual point treatment model.
[^6]: Robins published an informal, accessible, summary of his main results in the epidemiologic literature ([-@robins:simpleversion:1987]). However, it was not until [-@robins:1992] (and many rejections) that his work on causal inference with time-varying treatments appeared in a major statistical journal.
[^7]: The perhaps more familiar *Non-Parametric Structural Equation Model with Independent Errors* (NPSEM-IE) considered by Pearl may be viewed as a submodel of Robins’ FFRCISTG.
A *Non-Parametric Structural Equation Model* (NPSEM) assumes that all variables ($V$) can be intervened on. In contrast, the FFRCISTG model does not require one to assume this. However, if all variables in $V$ can be intervened on, then the FFRCISTG specifies a set of one-step ahead counterfactuals, $V_{m}(\overline{v}_{m-1}) $ which may equivalently be written as structural equations $V_{m}(\overline{v}_{m-1})=f_{m}(\overline{v}_{m-1},\varepsilon_{m})$ for functions $f_{m}$ and (vector-valued) random errors $\varepsilon_{m}$. Thus, leaving aside notational differences, structural equations and one-step ahead counterfactuals are equivalent. All other counterfactuals, as well as factual variables, are then obtained by recursive substitution.
However, the NPSEM-IE model of Pearl ([-@pearl:2000]) further assumes the errors $\varepsilon_{m}$ are jointly independent. In contrast, though an FFRCISTG model is also an NPSEM, the errors (associated with incompatible counterfactual worlds) may be dependent—though any such dependence could not be detected in a RCT. Hence, Pearl’s model is a strict submodel of an FFRCISTG model.
[^8]: In practice, there will almost always exist baseline covariates measured prior to $A_{1}$. In that case, the analysis in the text is to be understood as being with a given joint stratum of a set of baseline covariates sufficient to adjust for confounding due to baseline factors.
[^9]: Of course, one can never be certain that the epidemiologists were successful, which is the reason RCTs are generally considered the gold standard for establishing causal effects.
[^10]: That is, the trials starting at $t=2$ are on study populations defined by specific $(A_1,L)$-histories.
[^11]: The g-formula density for $Y$ is a generalization of standardization of effect measures to time varying treatments. See @keiding:2014 ([-@keiding:2014]) for a historical review of standardization.
[^12]: Note that the distribution of $L(a_{1})$ is no longer identified under this weaker assumption.
[^13]: More precisely, we obtain the SWIG independence $Y(a_{1},a_{2}){\protect\mathpalette{\protect\independenT}{\perp}}A_{2}(a_{1}) \mid A_{1},L(a_1)$, that implies (\[eq:ind2\]) by the consistency assumption after instantiating $A_{1}$ at $a_{1}$. Note when checking d-separation on a SWIG all paths containing red “fixed” nonrandom vertices, such as $a_1$, are treated as always being blocked (regardless of the conditioning set).
[^14]: Above we have assumed the variables $A_{1}$, $L$, $A_{2}$, $Y$ occurring in the g-formula are temporally ordered. Interestingly, @robins:1986 ([-@robins:1986], Section 11) showed identification by the g-formula can require a nontemporal ordering. In his analysis of the Healthy Worker Survivor Effect, data were available on temporally ordered variables $(A_{1},L_{1},A_{2},L_{2},Y)$ where the $L_{t}$ are indicators of survival until year $t$, $A_{t}$ is the indicator of exposure to a lung carcinogen, and there exists substantive background knowledge that carcinogen exposure at $t$ cannot cause death within a year. Under these assumptions, Robins proved that equation (\[eq:statrand\]) was false if one respected temporal order and chose $L$ to be $L_{1}$, but was true if one chose $L=L_{2}$. Thus, $E[Y(a_{1},a_{2})]$ was identified by the g-formula $f_{a_{1},a_{2}}^{\ast}(y)$ only for $L=L_{2}$. See (@richardson:robins:2013, [-@richardson:robins:2013], page 54) for further details.
[^15]: @pearl:biom ([-@pearl:biom]) introduced an identical notation except that he substituted the word “do” for “$g=$,” thus writing $f(y \mid \operatorname{do}(a_{1},a_{2}))$.
[^16]: If the $L\rightarrow Y$ edge is present, then $A_{1}$ still has an effect on $Y$.
[^17]: The dependence of $f(y \mid a_{1},l,a_{2})$ on $a_{1}$ does not represent causation but rather selection bias due to conditioning on the common effect $L$ of $H_{1}$ and $A_{1}$.
[^18]: But see @cox:wermuth:1999 ([-@cox:wermuth:1999]) for another approach.
[^19]: In the literature, semiparametric estimation of the parameters of a SNM based on such estimating functions is referred to as “g-estimation.”
[^20]: Interestingly, @robins:iv:1989 ([-@robins:iv:1989], page 127 and App. 1), unaware of Bellman’s work, reinvented the method of dynamic programming but remarked that, due to the difficulty of the estimation problem, it would only be of theoretical interest for finding the optimal dynamic regimes from longitudinal epidemiological data.
[^21]: See also @Robi:Gree:prob:1989 ([-@Robi:Gree:esti:1989; -@Robi:Gree:prob:1989]).
[^22]: @balke:pearl:1994 ([-@balke:pearl:1994]) showed that Robins’ bounds were not sharp in the presence of “defiers” (i.e., subjects who would never take the treatment assigned) and derived sharp bounds in that case.
[^23]: A viewpoint recently explored by @mohan:pearl:tian:2013 ([-@mohan:pearl:tian:2013]).
[^24]: IPTW estimators and IPCW estimators are essentially equivalent. For instance, in the censoring example of Section \[sec:censoring\], on the event $A_{2}=0$ of being uncensored, the IPCW denominator $\widehat{pr}(A_{2}=0 \mid L,A_{1}) $ equals $f(A_{2} \mid A_{1},L)$, the IPTW denominator.
[^25]: More formally, recall that under (\[eq:statrand\]), $E[ Y(a_{1},a_{2}) ] =\Phi\{ \beta_{0}^{\ast}+\gamma(a_{1},a_{2};\beta_{1}^{\ast}) \} $ is equal to the g-formula $\int yf_{ a_{1},a_{2} }^{\ast}( y) \,dy$. Now, given the joint density of the data $f( A_{1},L,A_{2},Y) $, define $$\widetilde{f}( A_{1},L,A_{2},Y) =f( Y\mid
A_{1},L,A_{2}) \widetilde{f}_{2}(A_{2})
f( L\mid A_{1}) \widetilde{f}_{1}( A_{1}),$$ where $\widetilde{f}_{1}( A_{1}) \widetilde{f}_{2}( A_{2}) $ are user-supplied densities chosen so that $\widetilde{f}$ is absolutely continuous with respect to $f$. Since the g-formula depends on the joint density of the data only through $f( Y \mid A_{1},L,A_{2}) $ and $f(L \mid A_{1})$, then it is identical under $\widetilde{f}$ and under $f$. Furthermore, for each $a_{1}$, $a_{2}$ the g-formula under $\widetilde{f}$ is just equal to $\widetilde{E}[ Y \mid A_{1}=a_{1},A_{2}=a_{2}] $ since, under $\widetilde{f}$, $A_{2}$ is independent of $ \{L,A_{1} \}$. Consequently, for any $q( A_{1},A_{2})$ $$\begin{aligned}
\everymath{\displaystyle}
\begin{array}{rcl}
0&= & \widetilde{E} \bigl[ q( A_{1},A_{2}) \bigl( Y-\Phi
\bigl\{ \beta _{0}^{\ast}+\gamma \bigl( A_{1},A_{2};
\beta_{1}^{\ast} \bigr) \bigr\} \bigr) \bigr]
\\[6pt]
& =& E \bigl[ q( A_{1},A_{2}) \bigl\{ \widetilde{f}(
A_{1}) \widetilde{f}( A_{2}) / \bigl\{ f( A_{1})
f(A_{2} \mid A_{1},L) \bigr\} \bigr\}
\\[6pt]
&& \hspace*{79pt} {} \cdot
\bigl( Y-\Phi\bigl\{ \beta_{0}^{\ast} + \gamma\bigl(
A_{1},A_{2};\beta_{1}^{\ast}\bigr)\bigr\}
\bigr) \bigr],
\end{array}\end{aligned}$$ where the second equality follows from the Radon–Nikodym theorem. The result then follows by taking $q( A_{1},A_{2}) =v(A_{1},A_{2})/
\{\widetilde{f}( A_{1}) \widetilde{f}( A_{2})\}$.
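The reweighting idea underlying such inverse-probability-weighted estimating functions can be illustrated numerically. The sketch below (a hypothetical sequentially randomized simulation, not the paper's data or notation) checks that the Horvitz–Thompson weighted mean, with weights $1/\{f(A_{1})f(A_{2}\mid A_{1},L)\}$, recovers the g-formula value of $E[Y(a_{1}=1,a_{2}=1)]$ despite the time-varying confounder $L$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# hypothetical sequentially randomized data: A1 -> L -> A2 -> Y
A1 = rng.binomial(1, 0.5, n)                      # f(A1 = 1) = 0.5
L = rng.binomial(1, 0.3 + 0.4 * A1, n)            # confounder affected by A1
p2 = 0.2 + 0.5 * L                                # f(A2 = 1 | A1, L)
A2 = rng.binomial(1, p2, n)
# outcome mean depends on the full history (A1, L, A2)
Y = rng.binomial(1, 0.1 + 0.2 * A1 + 0.3 * L + 0.2 * A2, n)

# IPTW (Horvitz-Thompson) estimate of E[Y(a1 = 1, a2 = 1)]
w = (A1 == 1) / 0.5 * (A2 == 1) / p2
iptw = np.mean(w * Y)

# g-formula benchmark: sum_l E[Y | a1=1, l, a2=1] f(l | a1=1)
g_formula = 0.7 * 0.8 + 0.3 * 0.5                 # = 0.71
```

With this sample size the weighted mean agrees with the g-formula value to within Monte Carlo error, whereas the naive stratum mean `Y[(A1 == 1) & (A2 == 1)].mean()` does not, because $L$ confounds $A_{2}$.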
[^26]: Note that, as observed earlier, in this case identification is achieved through parametric assumptions made by the SNM.
[^27]: See (\[eq:g-formula-for-y\]).
[^28]: The analysis of @Rubi:dire:2004 ([-@Rubi:dire:2004]) was also based on this contrast, with $A_2$ no longer a failure time indicator so that the contrast (\[eq:psde-contrast\]) could be considered as well-defined for any value of $a_{2}$ for which the conditioning event had positive probability.
[^29]: For subjects for whom $A_{2}(a_{1} = 1)\neq A_{2}(a_{1} = 0)$, no principal stratum direct effect (PSDE) is defined.
[^30]: This follows from consistency.
[^31]: This follows by consistency.
[^32]: @Robi:Gree:iden:1992 ([-@Robi:Gree:iden:1992]) also defined the total indirect effect (TIE) of $A_{1}$ on $Y$ through $A_{2}$ to be $$E\bigl[ Y\bigl\{a_{1} = 1,A_{2}(a_{1} = 1)\bigr
\}\bigr] - E\bigl[Y\bigl\{a_{1} = 1,A_{2}(a_{1} =
0)\bigr\}\bigr] .$$ It follows that the total effect $E[ Y\{a_{1} = 1\}]
-E[Y\{a_{1} = 0\}] $ can then be decomposed as the sum of the PDE and the TIE.
[^33]: In more detail, the FFRCISTG associated with Figures \[fig:no-confound\](a) and (b) assumes for all $a_{1}$, $a_{2}$, $$\label{eq:ffrcistgforpde}
\quad Y(a_{1},a_{2}),A_{2}(a_{1})
{\protect\mathpalette{\protect\independenT}{\perp}}A_{1},\quad Y(a_{1},a_{2}) {\protect\mathpalette{\protect\independenT}{\perp}}A_{2}(a_{1})\mid A_{1},$$ which may be read directly from the SWIG shown in Figure \[fig:no-confound\](b); recall that red nodes are always blocked when applying d-separation. In contrast, Pearl’s NPSEM-IE also implies the independence $$\label{eq:npsem-ie} Y(a_{1},a_{2}) {\protect\mathpalette{\protect\independenT}{\perp}}A_{2}
\bigl(a_{1}^{*}\bigr)\mid A_{1},$$ when $a_1\neq a_1^*$. Independence (\[eq:npsem-ie\]), which is needed in order for the PDE to be identified, is a “cross-world” independence since $Y(a_{1},a_{2})$ and $A_{2}(a_{1}^{*})$ could never (even in principle) both be observed in any randomized experiment.
[^34]: A point freely acknowledged by @pearl:myths:2012 ([-@pearl:myths:2012]) who argues that causation should be viewed as more primitive than intervention.
[^35]: This point identification is not a “free lunch”: @robins:mcm:2011 ([-@robins:mcm:2011]) show that it is these additional assumptions that have reduced the FFRCISTG bounds for the PDE to a point. This is a consequence of the fact that these assumptions induce a model *for the original variables $\{A_1, A_2(a_1), Y(a_1,a_2)\}$* that is a strict submodel of the original FFRCISTG model.
Hence to justify applying the mediation formula by this route one must first be able to specify in detail the additional treatment variables and the associated intervention so as to make the relevant potential outcomes well-defined. In addition, one must be able to argue on substantive grounds for the plausibility of the required no direct effect assumptions and deterministic relations.
It should also be noted that even under Pearl’s NPSEM-IE model the PDE is not identified in causal graphs, such as those in Figures \[fig:seq-rand\] and \[fig:seq-rand-variant\] that contain a variable (whether observed or unobserved) that is present both on a directed pathway from $A_1$ to $A_2$ and on a pathway from $A_1$ to $Y$.
[^36]: Note that in a linear structural equation model the PSDE is not defined unless $A_1$ has no effect on $A_2$.
[^37]: Results in @pearl95on ([-@pearl95on]) imply that under the sharp direct effect null the FFRCISTGs associated with the DAGs shown in Figures \[fig:seq-rand\] and \[fig:seq-rand-variant\] also imply inequality restrictions similar to Bell’s inequality in Quantum Mechanics. See @gill:2014 ([-@gill:2014]) for discussion of statistical issues arising from experimental tests of Bell’s inequality.
[^38]: To our knowledge, it is the first such causal null hypothesis considered in Epidemiology for which this is the case.
[^39]: This observation motivated the development of graphical “nested” Markov models that encode constraints such as (\[eq:verma-constraint\]) in addition to ordinary conditional independence relations; see the discussion of “Causal Discovery” in Section \[sec:otherwork\] below.
[^40]: In response @robins:optimal:2004 ([-@robins:optimal:2004], Section 5.2) offered a Bayes–frequentist compromise that combines honest subjective Bayesian decision making under uncertainty with good frequentist behavior even when, as above, the model is so large and the likelihood function so complex that standard (uncompromised) Bayes procedures have poor frequentist performance. The key to the compromise is that the Bayesian decision maker is only allowed to observe a specified vector function of $X$ \[depending on the known $\pi^{\ast}( X) $\] but not $X$ itself.
[^41]: Given complete data $X$, an always observed coarsening variable $R$, and a known coarsening function $x_{(r)}=c(r,x)$, *coarsening at random* (CAR) is said to hold if $\Pr(R=r \mid X)$ depends only on $X_{(r)}$, the observed data part of $X$. @robins:rotnitzky:recovery:1992 ([-@robins:rotnitzky:recovery:1992]), @Gill:van:Robi:coar:1997 ([-@Gill:van:Robi:coar:1997]) and @cator2004 ([-@cator2004]) showed that in certain models assuming CAR places no restrictions on the distribution of the observed data. For such models, we can pretend CAR holds when our goal is estimation of functionals of the observed data distribution. This trick often helps to derive efficient estimators of the functional. In this section, we assume that the distribution of the observables is compatible with CAR, and further, that in the estimation problems that we consider, CAR may be assumed to hold without loss of generality.
In fact, this is the case in the context of our running causal inference example from Section \[sec:tree-graph\]. Specifically, let $X= \{ Y(a_{1},a_{2}),L(a_{1});a_{j}\in \{
0,1 \} ,j=1,2 \} $, $R= ( A_{1},A_{2} ) $, and $X_{ (a_{1},a_{2} ) }= \{ Y(a_{1},a_{2}),L(a_{1}) \} $. Consider a model $M_{X}$ for $X$ that specifies (i) $ \{ Y(1,a_{2}),L(1);a_{2}\in \{ 0,1 \} \} {\protect\mathpalette{\protect\independenT}{\perp}}\{ Y(0,a_{2}),L(0);a_{2}\in \{ 0,1 \} \}$ and (ii) $Y(a_{1},1) {\protect\mathpalette{\protect\independenT}{\perp}}Y(a_{1},0) \mid L(a_{1})$ for $a_{1}\in \{ 0,1
\}$. Results in @gill2001 ([-@gill2001], Section 6) and @robins00marginal ([-@robins00marginal], Sections 2.1 and 4.2) show that (a) model $M_{X}$ places no further restrictions on the distribution of the observed data $ ( A_{1},A_{2},L,Y )
= ( A_{1},A_{2},L( A_{1}),Y(A_{1},A_{2}) )$, (b) given model $M_{X}$, the additional independences $X {\protect\mathpalette{\protect\independenT}{\perp}}A_{1}$ and $X {\protect\mathpalette{\protect\independenT}{\perp}}A_{2} \mid A_{1},L$ together also place no further restrictions on the distribution of the observed data $ ( A_{1},A_{2},L,Y ) $ and are equivalent to assuming CAR. Further, the independences in (b) imply (\[eq:indg\]) so that $f_{Y(g)}(y)$ is identified by the g-formula $f_{g}^{\ast}(y)$.
[^42]: More recently, in the context of a RCT, @tsiatis2008covariate and @moore2009covariate, following the strategy of @robins:rotnitzky:recovery:1992, studied variants of the locally efficient tests and estimators of @Scha:Rotn:Robi:adju:1999 to increase efficiency and power by utilizing data on covariates.
[^43]: A function $b( \cdot) $ lies in the Hölder ball $H(\beta,C)$ with Hölder exponent $\beta>0$ and radius $C>0$, if and only if $b( \cdot) $ is bounded in supremum norm by $C$ and all partial derivatives of $b(x)$ up to order $ \lfloor\beta \rfloor$ exist, and all partial derivatives of order $ \lfloor\beta \rfloor$ are Lipschitz with exponent $ ( \beta- \lfloor\beta \rfloor )$ and constant $C$.
[^44]: If a subcube contains more than two observations, two are selected randomly, without replacement.
[^45]: Observe that $E [ ( Y_{i}-Y_{j} ) ^{2}/2 \mid X_{i},X_{j} ]
=\sigma^{2}+ \{ b( X_{i}) -b( X_{j}) \} ^{2}/2$, ${\vert}b( X_{i}) -b( X_{j}){\vert}=O ({\Vert}X_{i}-X_{j}
{\Vert}^{\beta} )$ as $\beta<1$, and ${\Vert}X_{i}-X_{j}{\Vert}=d^{1/2}O( k^{-1/d})$ when $X_{i}$ and $X_{j}$ are in the same subcube. It follows that the estimator has variance of order $k/n^{2}$ and bias of order $O(k^{-2\beta/d})$. Variance and the squared bias are equated by solving $k/n^{2}=k^{-4\beta/d}$ which gives $k=n^{\frac{2}{1+4\beta/d}}$.
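The bias–variance balance in the preceding footnote can be checked numerically; the values of $\beta$, $d$ and $n$ below are illustrative only:

```python
# balance the variance order k/n^2 against the squared-bias order k^{-4*beta/d}
beta, d = 0.5, 8.0            # hypothetical smoothness and dimension
n = 10_000                    # hypothetical sample size

# solving k/n^2 = k^{-4*beta/d} gives k = n^{2/(1 + 4*beta/d)}
exponent = 2.0 / (1.0 + 4.0 * beta / d)   # = 1.6 for these values
k = n ** exponent

variance_order = k / n ** 2               # order of the estimator's variance
sq_bias_order = k ** (-4.0 * beta / d)    # order of its squared bias
# the two orders coincide at the balancing choice of k
```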
[^46]: Robins has been trying to find an answer to this question without success for a number of years. He suggested that it is now time for some crowd-sourcing.
[^47]: The estimator given above does not attain this rate when $\beta>1$ because it fails to exploit the fact that $b(\cdot)$ is differentiable. In the interest of simplicity, we have posed this as a problem in variance estimation. However, @robins:higher:2008 ([-@robins:higher:2008]) show that the estimation of the variance is mathematically isomorphic to the estimation of $\theta$ in the semi-parametric regression model $E[Y \mid A,X]=\theta A +h(X)$, where $A$ is a binary treatment. In the absence of confounding, $\theta$ encodes the causal effect of the treatment.
[^48]: But still not uniformly consistent!
1911 North East Cork by-election
The North East Cork by-election of 1911 was held on 15 July 1911. The by-election was held due to the resignation of the incumbent All-for-Ireland MP, Moreton Frewen. Frewen resigned in order for Tim Healy, who was prominent in the All-for-Ireland League but who had lost his seat in North Louth in the previous general election, to take his seat. Healy was unopposed and held the seat.
References
Category:1911 in Ireland
Category:1911 elections in the United Kingdom
Category:By-elections to the Parliament of the United Kingdom in County Cork constituencies
Category:Unopposed by-elections to the Parliament of the United Kingdom
Category:July 1911 events
---
abstract: 'We report the growth of Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ superconducting single crystal fibers via a slow-cooling solid state reaction method. The superconducting transition temperature ($T_{c}\sim6.5$K) is confirmed from magnetization and transport measurements. A comparative study is performed for the determination of the superconducting anisotropy, $\Gamma$, via the conventional method (taking the ratio of two superconducting parameters) and the scaling approach. The scaling approach, defined within the framework of the Ginzburg-Landau theory, is applied to the angular dependent resistivity measurements to estimate the anisotropy. The value of $\Gamma$ close to $T_{c}$ from the scaling approach is found to be $\sim2.5$, slightly higher than that from the conventional approach ($\sim2.2$). Further, the variation of the anisotropy with temperature suggests that this compound is a multi-band superconductor.'
address: '$^{\mbox{a}}$ Department of physics, Indian Institute of Technology Bombay, Mumbai-400076 India'
author:
- 'Anil K. Yadav$^{\mbox{a,b}}$'
- 'Himanshu Sharma$^{\mbox{a}}$'
- 'C. V. Tomy$^{\mbox{a}}$'
- 'Ajay D. Thakur$^{\mbox{c}}$'
title: 'Growth and angular dependent resistivity of Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ superconducting single crystal fibers'
---
introduction
============
The ternary chalcogenide of the non-superconducting compound Nb$_{2}$Pd$_{0.81}$Se$_{5}$ turns into a superconductor with a superconducting transition temperature $T_{c}\sim6.5$K when Se is replaced with S [\[]{}1[\]]{}. This superconductor has attracted considerable interest from the research community because it has one of the largest upper critical fields amongst the known Nb-based superconductors and offers the possibility of growing long flexible superconducting fibers [\[]{}1,2[\]]{}. Structurally, this compound crystallizes in the monoclinic structure with $C2/m$ space group symmetry [\[]{}1,2,3[\]]{}. Its structure comprises laminar sheets, stacked along the b-axis, consisting of Pd, Nb and S atoms. Each sheet contains two unique building blocks of NbS$_{6}$ and NbS$_{7}$ units inter-linked by the Pd atoms [\[]{}1,3,4[\]]{}. Yu *et al.* have constructed the superconducting phase diagram of Nb$_{2}$Pd$_{1-x}$S$_{5\pm\delta}$ ($0.6<x<1$) single crystal fibers by varying the composition of Pd and S and found a maximum $T_{c}\sim7.43$K for the Nb$_{2}$Pd$_{1.1}$S$_{6}$ stoichiometry [\[]{}2[\]]{}. One important parameter that needs to be determined precisely for this compound is the anisotropy ($\Gamma$), since the compound shows an extremely large direction-dependent upper critical field [\[]{}1[\]]{}. In the conventional approach, the anisotropy is determined as the ratio of two superconducting parameters (such as band-dependent effective masses, penetration depths, upper critical fields, etc.) in two orientations w.r.t. the crystallographic axes and the applied magnetic field [\[]{}5[\]]{}. Zhang *et al.* [\[]{}1[\]]{} have determined the temperature dependent anisotropy of this compound using the above conventional method by taking the ratio of $H_{c2}(T)$ in two orientations. However, in this case, the estimation of $H_{c2}(0)$ is subject to different criteria and formalisms, which may introduce some uncertainty in the anisotropy ($\Gamma$) calculation [\[]{}6[\]]{}.
Blatter *et al.* have given a simple alternative way to estimate the anisotropy of a superconductor, known as the scaling approach [\[]{}7[\]]{}. In this approach, anisotropic data are mapped onto an isotropic form by a scaling rule in which only one parameter is adjusted until all the curves collapse onto a single curve; that adjusted parameter is the anisotropy of the superconductor. This limits the uncertainty in the determination of $\Gamma$ as compared to the conventional approach. Employing the scaling approach, Wen *et al.* have estimated the anisotropy of several Fe-based superconductors such as NdFeAsO$_{0.82}$F$_{0.18}$ [\[]{}8[\]]{}, Ba$_{1-x}$K$_{x}$Fe$_{2}$As$_{2}$ [\[]{}6[\]]{} and Rb$_{0.8}$Fe$_{2}$Se$_{2}$ [\[]{}9[\]]{}. Shahbazi *et al.* have performed similar studies on Fe$_{1.04}$Se$_{0.6}$Te$_{0.4}$ [\[]{}10[\]]{} and BaFe$_{1.9}$Co$_{0.1}$As$_{2}$ [\[]{}11[\]]{} single crystals. In this paper, we report the estimation of the anisotropy of Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ single crystals near $T_{c}$ via both the conventional method and the scaling approach. We also provide further evidence that the bulk superconducting anisotropy is not a universal constant but is temperature dependent below $T_{c}$.
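The curve-collapse procedure behind the scaling approach can be sketched as follows. This is a minimal illustration on synthetic angular resistivity data (the sigmoidal transition model, field grid, angles and true anisotropy are assumptions of the sketch, not our measurements): one scans the anisotropy $\Gamma$ and keeps the value that best collapses $\rho(H,\theta)$ when plotted against the effective field $\tilde{H}=H\,\varepsilon(\theta)$, with $\varepsilon(\theta)=(\cos^{2}\theta+\Gamma^{-2}\sin^{2}\theta)^{1/2}$:

```python
import numpy as np

def eps(theta_deg, gamma):
    # anisotropic Ginzburg-Landau angular factor (theta measured from the symmetry axis)
    t = np.radians(theta_deg)
    return np.sqrt(np.cos(t) ** 2 + np.sin(t) ** 2 / gamma ** 2)

def rho_model(h_eff, h_c=4.0, width=0.5):
    # broadened resistive transition as a function of the effective field (illustrative)
    return 1.0 / (1.0 + np.exp(-(h_eff - h_c) / width))

gamma_true = 2.5
angles = [0.0, 30.0, 60.0, 90.0]              # hypothetical field orientations (degrees)
H = np.linspace(0.1, 12.0, 200)               # hypothetical applied-field grid (tesla)
curves = [rho_model(H * eps(a, gamma_true)) for a in angles]

def collapse_cost(gamma):
    # spread of the angle-resolved curves on a common scaled-field grid:
    # zero spread means a perfect collapse onto a single curve
    grid = np.linspace(0.5, 4.5, 100)
    interp = [np.interp(grid, H * eps(a, gamma), c) for a, c in zip(angles, curves)]
    return float(np.var(np.stack(interp), axis=0).sum())

gammas = np.linspace(1.5, 4.0, 251)
gamma_est = gammas[np.argmin([collapse_cost(g) for g in gammas])]
```

Only the single parameter $\Gamma$ is varied; the best-collapse value recovers the anisotropy built into the synthetic data, which is the essence of the scaling approach applied to our measured $\rho(H,\theta)$ curves.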
method
======
Single crystal fibers of Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ were synthesized via slow cooling of the charge in the solid state reaction method, as reported in reference [\[]{}1[\]]{}. The starting raw materials (powders) Nb (99.99%), Pd (99.99%) and S (99.999%) were taken in the stoichiometric ratio of 2:1:6 and mixed in an Ar atmosphere inside a glove box. The well-homogenized powder was sealed in a long evacuated quartz tube and heated to 800$^{\circ}$C at a rate of 10$^{\circ}$C/h. After reacting for 24 hours at this temperature, the reactants were cooled down at a rate of 2$^{\circ}$C/h to 360$^{\circ}$C, followed by cooling to room temperature by switching the furnace off. The as-grown samples look like a mesh of small wires when viewed under an optical microscope. Part of the as-grown sample was dipped in dilute HNO$_{3}$ to remove the bulk material and to pick up a few fiber rods for further measurements. X-ray diffraction (XRD) was performed on powdered Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ single crystal fibers for structure determination. Energy-dispersive x-ray analysis (EDAX) was used to identify the chemical elements and the composition. Magnetization measurements were performed using a superconducting quantum interference device - vibrating sample magnetometer (SQUID-VSM, Quantum Design Inc., USA). Angular dependent resistivity measurements were carried out using the resistivity option with a horizontal rotator in a physical property measurement system (PPMS, Quantum Design Inc., USA). Electrical connections were made in the four probe configuration using gold wires bonded to the sample with silver epoxy.
Results
=======
Structure analysis
------------------
Figure \[fig1\](a) shows a scanning electron microscope (SEM) image of the Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ single crystal fibers. It is clear from the image that the fibers grow in different shapes and lengths. Figure \[fig1\](b) shows the XRD patterns of the powdered Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ single crystals. Rietveld refinement was performed on the powder XRD data in the FullProf suite software, using the $C2/m$ monoclinic crystal structure of Nb$_{2}$Pd$_{0.81}$Se$_{5}$ as the reference. The lattice parameters ($a = 12.154(1)$Å, $b = 3.283(7)$Å and $c = 15.09(9)$Å) obtained from the refinement are approximately the same as those reported earlier in references [\[]{}1,3[\]]{}, even though the intensities could not be matched perfectly. The (200) peak is found to be the one with the highest intensity even though the XRD was obtained from a bunch of fibers, indicating a preferred crystal plane orientation along the ($l$00) direction in our powdered samples. A similar preferred orientation was also reported for single crystals in reference [\[]{}2[\]]{}. This may be the reason for the discrepancy in the intensities between the observed and the fitted XRD peaks. Further, to confirm the single crystalline nature of the fibers, we have taken selected area electron diffraction (SAED) patterns of the fibers; a typical pattern is shown in Figure \[fig1\](c). The well-ordered spot diffraction pattern confirms the single crystalline nature of the fibers. Figure \[fig1\](d) shows an optical image of a typical cylindrical fiber of diameter $\sim1.2\,\mbox{\ensuremath{\mu}m}$ and length $\sim1814\,\mbox{\ensuremath{\mu}m}$, which was used for the four probe electrical resistivity measurements (Fig. \[fig1\](e) shows the gold wires and silver paste used for the electrical connections). EDAX analysis shows that all the chemical elements are present in the compound, with a slight variation from the starting composition.
![ \[fig1\] (Color online) (a) SEM image of bunch of single crystal fibers of Nb$_{2}$Pd$_{0.73}$S$_{5.7}$. (b) X-ray diffraction patterns: observed (green), calculated (red) and difference (blue) (c) SAED pattern of a single crystal fiber (d) optical image of a typical cylindrical wire used for transport study (e) Four probe connections on a fiber.](1)
Confirmation of superconducting properties
------------------------------------------
In order to confirm the occurrence of superconductivity in the prepared single crystals, magnetization measurements were performed on a bunch of fibers (a single fiber alone did not give a large enough magnetization signal). Figure \[fig2\] shows a part of the temperature-dependent zero-field-cooled (ZFC) and field-cooled (FC) magnetization measured at H = 20 Oe. The onset superconducting transition temperature ($T_{c}^{{\rm on;M}}$), taken from the bifurcation point of the ZFC and FC curves, is observed to be $\sim6.5$ K. To further confirm the superconducting nature of the grown single-crystal fibers, the resistivity of one fiber removed from the ingot was measured. A part of the resistivity measurement (in zero applied magnetic field) is plotted in Fig. \[fig2\] along with the magnetization curve; the zero-resistivity transition temperature, $T_{c}^{{\rm zero}}$, matches well with the onset transition temperature from magnetization, $T_{c}^{{\rm on;M}}$, as well as the $T_{c}$ reported in references [1,2]. However, the onset transition temperature from resistivity ($T_{c}^{{\rm on}}$: the temperature at which the resistivity drops to 90% of the normal-state resistivity) is found to be $\sim7.8$ K, which is comparable to the optimized maximum $T_{c}^{{\rm on}}$ for this compound reported by Yu *et al.* [2]. The narrow superconducting transition width ($\sim1.3$ K) in resistivity attests to the quality of the single-crystal fibers (see Fig. \[fig2\]). The residual resistivity ratio $(RRR\thickapprox\frac{R(300\,{\rm K)}}{R(8\,{\rm K)}})$, which indicates the metallicity of a material, is found to be $\sim3.4$ for our sample. This value is much smaller than that of good conductors, categorizing the compound as a bad metal.
![ \[fig2\] Zero-field-cooled (ZFC) and field-cooled (FC) magnetization curves at 20 Oe (open circles) and the zero-field resistivity measurement (open triangles). The onset superconducting transition temperature from magnetization, $T_{c}^{{\rm on;M}}$, and the zero-resistivity transition temperature, $T_{c}^{{\rm zero}}$, confirm the $T_{c}$ of the Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ superconductor.](2)
Angular dependent transport properties
--------------------------------------
In order to estimate the superconducting anisotropy, we must assign an orientation axis to the single-crystal fibers. Since the fibers are too fine for a growth direction to be assigned from XRD, we adopt the b-axis along the length of the fibers, as given in reference [1], because the same synthesis method was followed to grow these single-crystal fibers. Figures \[fig3\](a) and \[fig3\](c) show the resistivity as a function of temperature in applied magnetic fields from zero to 90 kOe for H || b-axis and H $\bot$ b-axis, respectively. Three transition temperatures, $T_{c}^{{\rm on}}$, $T_{c}^{{\rm mid}}$ and $T_{c}^{{\rm off}}$, are marked in the figure using the criteria 90%$\rho_{n}$, 50%$\rho_{n}$ and 10%$\rho_{n}$ (where $\rho_{n}$ is the normal-state resistivity at 8 K), respectively. The $T_{c}$ shifts toward lower temperatures with increasing field, at rates of 0.05 K/kOe and 0.02 K/kOe for H || b-axis and H $\bot$ b-axis, respectively. The H–T phase diagrams for the three transition temperatures are plotted in Figs. \[fig3\](b) and \[fig3\](d) for both orientations. In order to find the upper critical fields ($H_{c2}(0)$), these H–T curves are fitted with the empirical formula $H_{c2}(T)=H_{c2}(0)(1-(T/T_{c})^{2})$ [1,2]; the fitted curves are then extrapolated to zero temperature to extract the $H_{c2}(0)$ values, which come out to be $\sim$180 kOe and $\sim$390 kOe at $T_{c}^{{\rm on}}$ for H || b-axis and H $\bot$ b-axis, respectively. Conventionally, the anisotropy is estimated by taking the ratio of the $H_{c2}(0)$ values in the two orientations, which gives $\sim2.2$. In order to corroborate the $\Gamma$ values further, we have measured the angular-dependent resistivity $\rho(\theta)$ at different magnetic fields at several temperatures close to $T_{c}$.
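As a concrete illustration of this extrapolation, the sketch below fits the standard empirical form $H_{c2}(T)=H_{c2}(0)\,(1-(T/T_{c})^{2})$ to a few hypothetical $(T, H_{c2})$ points; the numbers are illustrative placeholders, not our measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Empirical form for the H-T phase boundary: Hc2(T) = Hc2(0) * (1 - (T/Tc)^2)
def hc2(T, hc2_0, Tc):
    return hc2_0 * (1.0 - (T / Tc) ** 2)

# Hypothetical (T, Hc2) points read off an H-T phase diagram (illustrative only)
T_data = np.array([4.0, 5.0, 6.0, 7.0])        # K
H_data = np.array([132.7, 106.0, 73.5, 35.0])  # kOe

popt, _ = curve_fit(hc2, T_data, H_data, p0=(100.0, 8.0))
hc2_0, Tc = popt
print(f"Hc2(0) ~ {hc2_0:.0f} kOe, Tc ~ {Tc:.2f} K")  # ~180 kOe, ~7.8 K for these points
```

The fitted $H_{c2}(0)$ is the zero-temperature intercept of the dashed curves in Fig. \[fig3\](b) and (d).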
![ \[fig3\] Temperature-dependent resistivity at applied fields from 0 kOe to 90 kOe (a) for H || b-axis and (c) for H $\bot$ b-axis. (b) and (d) H–T phase diagrams at the $T_{c}^{{\rm on}}$, $T_{c}^{{\rm mid}}$ and $T_{c}^{{\rm off}}$ transition temperatures. Dashed curves show fits to the empirical formula $H_{c2}(T)=H_{c2}(0)(1-(T/T_{c})^{2})$. ](3)
The insets of Figs. \[fig4\](a), (b), (c) and (d) show the $\rho(\theta)$ curves at 10 kOe, 30 kOe, 50 kOe, 70 kOe and 90 kOe for T = 5.0 K, 5.5 K, 6.0 K and 6.5 K, respectively. All the $\rho(\theta)$ curves show a symmetric dip at $\theta=90$$^{\circ}$ and maxima at 0$^{\circ}$ and 180$^{\circ}$. In all the curves, the center of the dip shifts from zero to non-zero resistivity as the temperature and field increase. The main panels of Fig. \[fig4\] show the $\rho(\theta)$ curves at 10 kOe, 30 kOe, 50 kOe, 70 kOe and 90 kOe rescaled at temperatures (a) 5.0 K, (b) 5.5 K, (c) 6.0 K and (d) 6.5 K using the rescaling function: $$\tilde{H}=H\,\sqrt{{\rm sin^{2}}\theta+\Gamma^{2}{\rm cos^{2}}\theta}$$ where $\Gamma$ is the anisotropy and $\theta$ is the angle between the field and the crystal axis. All rescaled curves at a fixed temperature are now isotropic, i.e., they collapse onto a single curve. In this method only the anisotropy parameter $\Gamma$ is adjusted to bring the data into isotropic form; the value of $\Gamma$ that achieves the collapse is the anisotropy at that temperature.
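The rescaling can be sketched as follows. The toy resistivity model and the value $\Gamma=2.5$ below are assumptions for illustration, not our measured curves; the point is that once the correct $\Gamma$ is used, any two $(H,\theta)$ combinations with equal $\tilde{H}$ give the same resistivity, so all curves fall on one master curve.

```python
import numpy as np

def h_tilde(H, theta_deg, gamma):
    """Rescaled field: H~ = H * sqrt(sin^2(theta) + Gamma^2 * cos^2(theta))."""
    th = np.radians(theta_deg)
    return H * np.sqrt(np.sin(th) ** 2 + (gamma * np.cos(th)) ** 2)

# Toy rho(H, theta) that depends on field and angle only through H~ (Gamma = 2.5),
# standing in for the measured angular resistivity curves.
def rho_model(H, theta_deg):
    return np.tanh(h_tilde(H, theta_deg, 2.5) / 50.0)

# With the correct anisotropy, points with equal H~ coincide:
# (H = 20, theta = 0) and (H = 50, theta = 90) both give H~ = 50.
print(rho_model(20.0, 0.0), rho_model(50.0, 90.0))  # equal by construction
```

In practice $\Gamma$ is scanned until the rescaled $\rho(\tilde{H})$ curves at all fields visually collapse, exactly as in the main panels of Fig. \[fig4\].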
![ \[fig4\] Insets of panels (a) to (d): resistivity ($\rho$) as a function of the angle $\theta$ (between the b-axis and the applied magnetic field) at fields of 10 kOe, 30 kOe, 50 kOe, 70 kOe and 90 kOe for temperatures (a) 5 K, (b) 5.5 K, (c) 6.0 K and (d) 6.5 K. Main panels: resistivity as a function of the scaling field $\tilde{H}$ = $H\,\sqrt{{\rm sin^{2}}\theta+\Gamma^{2}{\rm cos^{2}}\theta}$.](4)
Figure \[fig5\] shows the temperature-dependent anisotropy ($\Gamma(T)$) obtained from the angular resistivity data. The anisotropy decreases slowly as the temperature goes down in the superconducting state. As Zhang *et al.* [1] have explained, this temperature dependence of the anisotropy may be due to the opening of superconducting gaps of different magnitudes on different Fermi-surface sheets, each associated with bands of distinct electronic anisotropy. Li *et al.* have reported similar temperature-dependent anisotropy behavior for Rb$_{0.76}$Fe$_{2}$Se$_{1.6}$, Rb$_{0.8}$Fe$_{1.6}$Se$_{2}$, Ba$_{0.6}$K$_{0.4}$Fe$_{2}$As$_{2}$ and Ba(Fe$_{0.92}$Co$_{0.08})_{2}$As$_{2}$ single crystals and explained that it may be due to a multiband effect or the gradual onset of pair breaking caused by the spin-paramagnetic effect [9]. Shahbazi *et al.* have also reported similar results for Fe$_{1.04}$Te$_{0.6}$Se$_{0.4}$ and BaFe$_{1.9}$Co$_{0.8}$As$_{2}$ single crystals through angular-dependent transport measurements [10,11]. Various theoretical models of the Fermi surface have supported the presence of multiband superconducting gaps in Fe-based superconductors [12,13,14]. Indeed, density functional theory (DFT) calculations have shown that the Nb$_{2}$Pd$_{0.81}$S$_{5}$ superconductor is a multiband superconductor [1]. Compared to MgB$_{2}$ [15,16] and cuprate superconductors [17], the anisotropy of Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ is very small; however, it is comparable with that of some of the iron-based (Fe-122 type) superconductors [9].
![ \[fig5\] Anisotropy variation with temperature, obtained from the angular-dependent resistivity.](5)
Conclusions
===========
In conclusion, we have successfully synthesized Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ single-crystal fibers via a slow-cooling solid-state reaction method. The superconducting properties of the sample were confirmed via magnetic and transport measurements. Upper critical fields were estimated conventionally from the magneto-transport study. The angular dependence of the resistivity was measured in magnetic fields at different temperatures in the superconducting state and rescaled using a scaling function that converts the data to an isotropic form, directly yielding the anisotropy. The anisotropy is found to be $\sim2.5$ near $T_{c}$, compared with $\sim2.2$ obtained from the conventional method. The anisotropy decreases slowly with decreasing temperature, which is attributed to the multiband nature of the superconductor.
AKY would like to thank CSIR, India, for an SRF grant.
[References]{} Q. Zhang, G. Li, D. Rhodes, A. Kiswandhi, T. Besara, B. Zeng, J. Sun, T. Siegrist, M. D. Johannes, L. Balicas, Scientific Reports **3**, 1446 (2013).
H. Yu, M. Zuo, L. Zhang, S. Tan, C. Zhang, Y. Zhang, J. Am. Chem. Soc. **135**, 12987 (2013).
R. Jha, B. Tiwari, P. Rani, V. P. S. Awana, arXiv:1312.0425 (2013).
D. A. Keszler, J. A. Ibers, M. Y. Shang and J. X. Lu, J. Solid State Chem. **57**, 68 (1985).
W. E. Lawrence and S. Doniach, in Proceedings of the 12th International Conference Low Temperature Physics, edited by E. Kanda Keigaku, Tokyo (1971).
Z. S. Wang, H. Q. Luo, C. Ren, H. H. Wen, Phys. Rev. B **78**, 140501(R) (2008).
G. Blatter, V. B. Geshkenbein, and A. I. Larkin, Phys. Rev. Lett. **68**, 875 (1992).
Y. Jia, P. Cheng, L. Fang, H. Yang, C. Ren, L. Shan, C. Z. Gu, H. H. Wen, Supercond. Sci. Technol. **21**, 105018 (2008).
C. H. Li, B. Shen, F. Han, X. Zhu, and H. H. Wen, Phys. Rev. B **83**, 184521 (2011).
M. Shahbazi, X. L. Wang, S. X. Dou, H. Fang, and C. T. Lin, J. Appl. Phys. **113**, 17E115 (2013).
M. Shahbazi, X. L. Wang, S. R. Ghorbani, S. X. Dou, and K. Y. Choi, Appl. Phys. Lett. **100**, 102601 (2012).
Q. Han, Y. Chen and Z. D. Wang, EPL **82**, 37007 (2008).
C. Ren, Z. S. Wang, H. Q. Luo, H. Yang, L. Shan, and H. H. Wen, Phys. Rev. Lett. **101**, 257006 (2008).
V. Cvetkovic, Z. Tesanovic, Europhysics Letters **85**, 37002 (2009).
A. Rydh, U. Welp, A. E. Koshelev, W. K. Kwok, G. W. Crabtree, R. Brusetti, L. Lyard, T. Klein, C. Marcenat, B. Kang, K. H. Kim, K. H. P. Kim, H.-S. Lee, and S.-I. Lee , Phys. Rev. B **70**, 132503 (2004).
K. Takahashi, T. Atsumi, N. Yamamoto, M. Xu, H. Kitazawa, and T. Ishida, Phys. Rev. B **66**, 012501 (2002).
C. P. Poole, Jr., H. A. Farach, R. J. Creswick, Superconductivity (Elsevier, 2007).
Teresa Carlson
Teresa Carlson is the current vice president for Amazon Web Services' worldwide public sector business. Prior to working for Amazon, Carlson served as Microsoft's Vice President of Federal Government business. Carlson was named Executive of the Year in 2016 for companies greater than $300 million by the Greater Washington GovCon Awards, which is administered by the Northern Virginia Chamber of Commerce.
Education
Carlson graduated from Western Kentucky University with a bachelor's degree in communications and a master's in speech and language pathology.
References
Category:Amazon.com people
Category:Living people
Category:Year of birth missing (living people)
--recursive
--require @babel/register
Major League Baseball All-Century Team
In 1999, the Major League Baseball All-Century Team was chosen by popular vote of fans. To select the team, a panel of experts first compiled a list of the 100 greatest Major League Baseball players from the past century. Over two million fans then voted on the players using paper and online ballots.
The top two vote-getters from each position, except outfielders (nine), and the top six pitchers were placed on the team. A select panel then added five legends to create a thirty-man team: Warren Spahn (who finished #10 among pitchers), Christy Mathewson (#14 among pitchers), Lefty Grove (#18 among pitchers), Honus Wagner (#4 among shortstops), and Stan Musial (#11 among outfielders).
The nominees for the All-Century team were presented at the 1999 All-Star Game at Fenway Park. Preceding Game 2 of the 1999 World Series, the members of the All-Century Team were revealed. Every living player named to the team attended.
For the complete list of the 100 players nominated, see The MLB All-Century Team.
Selected players
Pete Rose controversy
There was controversy over the inclusion in the All-Century Team of Pete Rose, who had been banned from baseball for life 10 years earlier. Some questioned Rose's presence on a team officially endorsed by Major League Baseball, but fans at the stadium gave him a standing ovation. During the on-field ceremony, which was emceed by Hall of Fame broadcaster Vin Scully, NBC Sports' Jim Gray questioned Rose about his refusal to admit to gambling on baseball. Gray's interview became controversial, with some arguing that it was good journalism, while others objected that the occasion was an inappropriate setting for Gray's persistence. After initially refusing to do so, Gray apologized a few days later. On January 8, 2004, more than four years later, Rose admitted publicly to betting on baseball games in his autobiography My Prison Without Bars.
See also
Major League Baseball All-Time Team, a similar team chosen by the Baseball Writers' Association of America in
Latino Legends Team
DHL Hometown Heroes (2006): the most outstanding player in the history of each MLB franchise, based on on-field performance, leadership quality and character value
List of MLB awards
Team of the century
National Baseball Hall of Fame and Museum
References
External links
All-Century Team Vote Totals from ESPN.com
All-Century Team DVD from Amazon.com
All-Century Team Information from Baseball Almanac
Category:1999 Major League Baseball season
Category:Major League Baseball trophies and awards
Category:History of Major League Baseball
Category:Awards established in 1999
Michele Orecchia
Michele Orecchia (26 December 1903 – 11 December 1981) was an Italian professional road bicycle racer, who won one stage in the 1932 Tour de France. He also competed in the individual and team road race events at the 1928 Summer Olympics.
Major results
1927
Giro del Sestriere
1929
Giro d'Italia:
9th place overall classification
1932
Tour de France:
Winner stage 8
References
External links
Official Tour de France results for Michele Orecchia
Category:1903 births
Category:1981 deaths
Category:Italian male cyclists
Category:Italian Tour de France stage winners
Category:Sportspeople from Marseille
Category:Olympic cyclists of Italy
Category:Cyclists at the 1928 Summer Olympics
Category:Tour de France cyclists
Category:French male cyclists
---
abstract: 'We numerically investigate the self-diffusion coefficient and the correlation length of rigid clusters (i.e., the typical size of the collective motions) in sheared soft athermal particles. We find that the flow curves of the self-diffusion coefficient collapse when scaled by the proximity to the jamming transition density. This feature is shared with the well-established critical scaling of flow curves of shear stress or viscosity. We furthermore reveal that the divergence of the correlation length governs the critical behavior of the diffusion coefficient: the diffusion coefficient is proportional to the correlation length and the strain rate over a wide range of strain rates and packing fractions across the jamming transition density.'
author:
- Kuniyasu Saitoh
- Takeshi Kawasaki
bibliography:
- 'diffusion\_overdamp.bib'
title: Critical scaling of diffusion coefficients and size of rigid clusters of soft athermal particles under shear
---
Introduction
============
*Transport properties* of soft athermal particles, e.g. emulsions, foams, colloidal suspensions, and granular materials, are important in science and engineering technology [@bird]. In many manufacturing processes, these particles are forced to flow (through pipes, containers, etc.) and the transportation of “flowing particles" is of central importance for industrial applications [@larson]. Therefore, there is a need to understand how the transport properties are affected by rheological flow properties of soft athermal particles.
Recently, the rheological flow properties of soft athermal particles have been extensively studied and it has been revealed that the rheology of such particulate systems depends not only on strain rate but also on packing fraction of the particles [@rheol0; @pdf1; @rheol1; @rheol2; @rheol3; @rheol4; @rheol5; @rheol6; @rheol7; @rheol8; @rheol9; @rheol10; @rheol11; @rheol12; @rheol13]. If the packing fraction $\phi$ is lower than the so-called jamming transition density $\phi_J$, steady state stress is described by either Newtonian [@rheol0; @pdf1] or Bagnoldian rheology [@rheol1; @rheol2; @rheol3; @rheol4; @rheol5] (depending on whether particle inertia is significant or not). If the packing fraction exceeds the jamming point ($\phi>\phi_J$), one observes yield stress at vanishing strain rate [@review-rheol0]. These two trends are solely determined by the proximity to the jamming transition $|\Delta\phi|\equiv|\phi-\phi_J|$ [@rheol0] and rheological flow curves of many types of soft athermal particles have been explained by the critical scaling near jamming [@pdf1; @rheol1; @rheol2; @rheol3; @rheol4; @rheol5; @rheol6; @rheol7; @rheol8; @rheol9; @rheol10; @rheol11].
On the other hand, the mass transport or *self-diffusion* of soft athermal particles seems to be controversial. As with the rheological behavior of shear stress or viscosity, the diffusivity of the particles under shear depends on both the strain rate and the packing fraction. Its dependence on the shear rate $\dot{\gamma}$ weakens as $\dot{\gamma}$ increases, i.e. the diffusivity $D$ exhibits a crossover from a linear scaling $D\sim\dot{\gamma}$ to a sub-linear scaling $D\sim\dot{\gamma}^q$ at a characteristic shear rate $\dot{\gamma}_c$, where the exponent is smaller than unity, $q<1$ [@diff_shear_md7; @diff_shear_md6; @dh_md2; @diff_shear_exp2; @diff_shear_exp1; @diff_shear_md2; @diff_shear_md3; @diff_shear_md4]. For example, in molecular dynamics (MD) simulations of Durian’s bubble model in two dimensions [@diff_shear_md7; @diff_shear_md6] and frictionless granular particles in three dimensions [@dh_md2], the diffusivity varies from $D\sim\dot{\gamma}$ ($\dot{\gamma}<\dot{\gamma}_c$) to $D\sim\dot{\gamma}^{0.8}$ ($\dot{\gamma}>\dot{\gamma}_c$). These results agree with laboratory experiments on colloidal glasses under shear [@diff_shear_exp2; @diff_shear_exp1] and also suggest that the diffusivity does not depend on the spatial dimension. However, another crossover, i.e. from $D\sim\dot{\gamma}$ to $D\sim\dot{\gamma}^{1/2}$, was suggested by studies of amorphous solids (though the scaling $D\sim\dot{\gamma}^{1/2}$ is the asymptotic behavior in rapid flows, $\dot{\gamma}\gg\dot{\gamma}_c$) [@diff_shear_md2; @diff_shear_md3; @diff_shear_md4]. In addition, it was found in MD simulations of soft athermal disks that, in a sufficiently small flow rate range, the diffusivity changes from $D\sim\dot{\gamma}$ ($\phi<\phi_J$) to $\dot{\gamma}^{0.78}$ ($\phi\simeq\phi_J$) [@diff_shear_md0], implying that the crossover shear rate $\dot{\gamma}_c$ vanishes as the system approaches jamming from below, $\phi\rightarrow\phi_J$.
Note that the self-diffusion of soft athermal particles shows a clear difference from diffusion in glasses; *no plateau* is observed in (transverse) mean square displacements (MSDs) [@diff_shear_md0; @diff_shear_md2; @diff_shear_md3; @diff_shear_md4; @diff_shear_md7; @dh_md2]. The absence of sub-diffusion can also be seen in quasi-static simulations ($\dot{\gamma}\rightarrow 0$) of soft athermal disks [@dh_qs1] and MD simulations of granular materials sheared under constant pressure [@diff_shear_md1].
Because the self-diffusion can be associated with collective motions of soft athermal particles, researchers have analyzed spatial correlations of velocity fluctuations [@rheol0] or non-affine displacements [@nafsc2] of the particles under shear. Characteristic sizes of collectively moving regions, i.e. *rigid clusters*, are then extracted as functions of $\dot{\gamma}$ and $\phi$, however, there is a lack of consensus on the scaling of the sizes. For example, the size of rigid clusters $\xi$ diverges as the shear rate goes to zero $\dot{\gamma}\rightarrow 0$ so that the power-law scaling $\xi\sim\dot{\gamma}^{-s}$ was suggested, where the exponent varies from $s=0.23$ to $0.5$ depending on numerical models and flow conditions [@dh_md2; @diff_shear_md1]. The dependence of the rigid cluster size on packing fraction is also controversial. If the system is below jamming, critical scaling of the size is given by $\xi\sim|\Delta\phi|^{-w}$, where different exponents (in the range between $0.5\le w\le 1.0$) have been reported by various simulations [@rheol0; @nafsc2; @rheol16]. In contrast, if the system is above jamming, the size becomes insensitive to the packing fraction (or exceeds the system size $L$) as only $L$ is the relevant length scale, i.e. $\xi\sim L$, in a quasi-static regime [@diff_shear_md2; @diff_shear_md3; @diff_shear_md4; @pdf1]. From a scaling argument, a relation between the diffusivity and size of rigid clusters was proposed as $$D\sim d_0\xi\dot{\gamma}~,
\label{eq:rigid_cluster}$$ where $d_0$ is the particle diameter [@diff_shear_md1]. Previous results above jamming (as $\dot{\gamma}$ is increased, $D/\dot{\gamma}$ changes from a constant to $\dot{\gamma}^{-1/2}$, and the corresponding $\xi$ crosses over from $L$ to $\dot{\gamma}^{-1/2}$) seem to support this argument [@diff_shear_md2; @diff_shear_md3; @diff_shear_md4]. However, the link between the diffusivity and rigid clusters *below jamming* is still not clear.
In this paper, we study the self-diffusion of soft athermal particles and the size of rigid clusters. The particles are driven by simple shear flows, and their fluctuating motions around the mean velocity field are numerically calculated. From the numerical results, we extract the diffusivity of the particles and explain its dependence on the control parameters (i.e. $\dot{\gamma}$ and $\phi$). We investigate wide ranges of the control parameters in order to unify our understanding of the diffusivity in both fast and slow flows, both below and above jamming. Our main result is a critical scaling of the diffusivity $D$, which parallels the critical scaling of the size of rigid clusters $\xi$. We find that the linear relation between the diffusivity and the cluster size \[Eq. (\[eq:rigid\_cluster\])\] holds over the whole range of $\dot{\gamma}$ and $\phi$ as long as finite-size effects are unimportant. In the following, we present our numerical method in Sec. \[sec:method\] and numerical results in Sec. \[sec:result\]. In Sec. \[sec:disc\], we discuss and conclude our results and give an outlook for future work.
Methods {#sec:method}
=======
We perform MD simulations of two-dimensional disks. In order to avoid crystallization of the system, we randomly distribute an equal number of small and large disks (with diameters $d_S$ and $d_L=1.4d_S$) in a $L\times L$ square periodic box [@gn1]. The total number of disks is $N=8192$ and the packing fraction of the disks $\phi$ is controlled around the jamming transition density $\phi_J\simeq0.8433$ [@rheol0]. We introduce an elastic force between the disks, $i$ and $j$, in contact as $\bm{f}_{ij}^\mathrm{e}=k\delta_{ij}\bm{n}_{ij}$, where $k$ is the stiffness and $\bm{n}_{ij}\equiv\bm{r}_{ij}/|\bm{r}_{ij}|$ with the relative position $\bm{r}_{ij}\equiv\bm{r}_i-\bm{r}_j$ is the normal unit vector. The elastic force is linear in the overlap $\delta_{ij}\equiv R_i+R_j-|\bm{r}_{ij}|>0$, where $R_i$ ($R_j$) is the radius of the disk $i$ ($j$). We also add a damping force to every disk as $\bm{f}_i^\mathrm{d}=-\eta\left\{\bm{v}_i-\bm{u}(\bm{r}_i)\right\}$, where $\eta$, $\bm{v}_i$, and $\bm{u}(\bm{r})$ are the damping coefficient, velocity of the disk $i$, and external flow field, respectively. Note that the stiffness and damping coefficient determine a time scale as $t_0\equiv\eta/k$.
To simulate simple shear flows of the system, we impose the external flow field $\bm{u}(\bm{r})=(\dot{\gamma}y,0)$ under the Lees-Edwards boundary condition [@lees], where $\dot{\gamma}$ is the shear rate. We then describe the motion of the disks by overdamped dynamics [@rheol0; @rheol7; @pdf1], i.e. $\sum_{j\neq i}\bm{f}_{ij}^\mathrm{e}+\bm{f}_i^\mathrm{d}=\bm{0}$, where we numerically integrate the disk positions with the velocity $\bm{v}_i=\bm{u}(\bm{r}_i)+\eta^{-1}\sum_{j\neq i}\bm{f}_{ij}^\mathrm{e}$ and a time increment $\Delta t = 0.1t_0$. In the following, we analyze data in a steady state, where the shear strain applied to the system is larger than unity. In addition, we scale every time and length by $t_0$ and the mean disk diameter $d_0\equiv(d_S+d_L)/2$, respectively.
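A minimal sketch of one overdamped integration step is given below, assuming a naive $O(N^2)$ contact search and omitting the Lees-Edwards image offset across the sheared boundary for brevity (a production code would include both a neighbor list and the shifted images).

```python
import numpy as np

def overdamped_step(pos, radii, L, gamma_dot, k=1.0, eta=1.0, dt=0.1):
    """One overdamped step: eta * (v_i - u(r_i)) = sum_j f_ij^e, i.e.
    v_i = u(r_i) + (1/eta) * sum_j k * delta_ij * n_ij.
    Minimal O(N^2) sketch; Lees-Edwards x-offset of the periodic images
    across the sheared y-boundary is omitted for clarity."""
    N = len(pos)
    f = np.zeros_like(pos)
    for i in range(N):
        for j in range(i + 1, N):
            rij = pos[i] - pos[j]
            rij -= L * np.round(rij / L)          # minimum-image convention
            dist = np.hypot(rij[0], rij[1])
            delta = radii[i] + radii[j] - dist    # overlap delta_ij
            if delta > 0.0:
                n = rij / dist                    # normal unit vector n_ij
                f[i] += k * delta * n             # linear repulsive spring
                f[j] -= k * delta * n
    u = np.column_stack([gamma_dot * pos[:, 1], np.zeros(N)])  # u(r) = (gdot*y, 0)
    vel = u + f / eta
    return pos + vel * dt, vel

# Demo: two overlapping disks (radii 0.5 at center distance 0.5) are pushed apart
pos = np.array([[0.0, 1.0], [0.5, 1.0]])
radii = np.array([0.5, 0.5])
new_pos, vel = overdamped_step(pos, radii, L=10.0, gamma_dot=0.0)
```

With $k=\eta=1$ the natural units $t_0=\eta/k$ and $\Delta t=0.1t_0$ of the main text are recovered.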
Results {#sec:result}
=======
In this section, we present our numerical results on the self-diffusion of soft athermal particles (Sec. \[sub:diff\]). We also extract rigid clusters from the numerical data in order to relate their sizes to the diffusivity (Sec. \[sub:rigid\]). Additional data on the rheology and non-affine displacements are given in the Appendixes.
Diffusion {#sub:diff}
---------
We analyze the self-diffusion of soft athermal particles by the transverse component of *mean squared displacement* (MSD) [@diff_shear_md0; @diff_shear_md1; @diff_shear_md3; @diff_shear_md4], $$\Delta(\tau)^2 = \left\langle\frac{1}{N}\sum_{i=1}^N\Delta y_i(\tau)^2\right\rangle~.
\label{eq:MSD}$$ Here, $\Delta y_i(\tau)$ is the $y$-component of the particle displacement and the ensemble average $\langle\dots\rangle$ is taken over different choices of the initial time (see Appendix \[sec:nona\] for details) [^1]. Figure \[fig:msdy\] displays the MSDs \[Eq. (\[eq:MSD\])\] for different values of (a) $\phi$ and (b) $\dot{\gamma}$. The horizontal axes show the time interval scaled by the shear rate, $\gamma\equiv\dot{\gamma}\tau$, i.e. the shear strain applied to the system over the duration $\tau$. As can be seen, every MSD exhibits a crossover to normal diffusive behavior, $\Delta(\tau)^2\sim\dot{\gamma}\tau$ (dashed lines), around a crossover strain $\gamma=\gamma_c\simeq 1$, regardless of $\phi$ and $\dot{\gamma}$. The MSDs below jamming ($\phi<\phi_J$) increase monotonically with packing fraction, while they (almost) stop increasing once the packing fraction exceeds the jamming point ($\phi>\phi_J$) \[Fig. \[fig:msdy\](a)\]. The dependence of the MSDs on the shear rate is monotonic; their heights decrease with increasing $\dot{\gamma}$ \[Fig. \[fig:msdy\](b)\]. These trends correspond well with the fact that non-affine displacements are amplified in slow flows of dense systems, i.e. $\dot{\gamma}t_0\ll 1$ and $\phi>\phi_J$ [@saitoh11]. In addition, unlike in thermal systems under shear [@rheol10; @nafsc5; @th-dh_md1], no plateaus are observed in the MSDs. Therefore, neither “caging" nor “sub-diffusion" of the particles exists in our sheared athermal systems [@dh_md2; @dh_qs1; @dh_md1].
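The transverse MSD of Eq. (\[eq:MSD\]) can be computed with an average over time origins as sketched below; the random-walk trajectories are synthetic, used only to check that the estimator recovers normal diffusion.

```python
import numpy as np

def transverse_msd(y_traj, lags):
    """Transverse MSD: Delta(tau)^2 = <(1/N) sum_i Delta y_i(tau)^2>, with the
    ensemble average taken over all time origins. y_traj has shape (frames, N)."""
    n = y_traj.shape[0]
    return np.array([np.mean((y_traj[lag:] - y_traj[:n - lag]) ** 2)
                     for lag in lags])

# Synthetic check: for N independent random walkers, MSD(lag) = lag * step_variance
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=(1000, 200)), axis=0)   # 1000 frames, 200 walkers
msd = transverse_msd(y, [1, 10, 100])
```

For the sheared system, $y$ would hold the non-affine $y$-coordinates of the disks sampled at fixed strain intervals.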
![ The transverse MSDs $\Delta^2$ \[Eq. (\[eq:MSD\])\] as functions of the shear strain $\gamma\equiv\dot{\gamma}\tau$. (a) The packing fraction $\phi$ increases as indicated by the arrow and listed in the legend, where the shear rate is $\dot{\gamma}=10^{-6}t_0^{-1}$. (b) The shear rate $\dot{\gamma}$ increases as indicated by the arrow and listed in the legend, where the packing fraction is $\phi=0.84$. \[fig:msdy\]](msdy.png){width="\columnwidth"}
To quantify the normal diffusion of the disks, we introduce the diffusivity (or diffusion coefficient) as [^2] $$D=\lim_{\tau\rightarrow\infty}\frac{\Delta(\tau)^2}{2\tau}~.
\label{eq:D}$$ Figure \[fig:diff\](a) shows double-logarithmic plots of the diffusivity \[Eq. (\[eq:D\])\] over the shear rate, $D/\dot{\gamma}$, where the symbols represent the packing fraction $\phi$ (as listed in the legend). The diffusivity over the shear rate increases with $\phi$. If the system is above jamming, $\phi>\phi_J$, it is a monotonically decreasing function of $\dot{\gamma}$. On the other hand, if the system is below jamming, $\phi<\phi_J$, it exhibits a crossover from a plateau to a monotonic decrease around a characteristic shear rate, e.g. $\dot{\gamma}_0t_0\simeq 10^{-3}$ for $\phi=0.80$ [@diff_shear_md3; @diff_shear_md4].
In Appendix \[sec:rheo\], we have demonstrated *scaling collapses* of the rheological flow curves [@rheol0]. Here, we also demonstrate scaling collapses of the diffusivity. As shown in Fig. \[fig:diff\](b), all the data are nicely collapsed [^3] by the scaling exponents $\lambda=1.0$ and $\nu=4.0$. If the shear rate is smaller than a characteristic value, $\dot{\gamma}/|\Delta\phi|^\nu \lesssim 10^4$, i.e. $\dot{\gamma}<\dot{\gamma}_c\simeq 10^4|\Delta\phi|^\nu$, the data below jamming ($\phi<\phi_J$) are constant. However, the data above jamming ($\phi>\phi_J$) show a power-law decay, where the slope is approximately $-0.3$ (solid line). Therefore, we describe the diffusivity in a *quasi-static regime* ($\dot{\gamma}<\dot{\gamma}_c$) as $|\Delta\phi|^\lambda D/\dot{\gamma}\sim\mathcal{G}_\pm(\dot{\gamma}/|\Delta\phi|^\nu)$, where the scaling functions are given by $\mathcal{G}_-(x)\sim\mathrm{const.}$ for $\phi<\phi_J$ and $\mathcal{G}_+(x)\sim x^{-0.3}$ otherwise. On the other hand, if $\dot{\gamma}>\dot{\gamma}_c$, all the data follow a single power law (dotted line). This means that the scaling functions are given by $\mathcal{G}_\pm(x) \sim x^{-z}$ in a *plastic flow regime* ($\dot{\gamma}>\dot{\gamma}_c$), where the diffusivity scales as $D\sim\dot{\gamma}|\Delta\phi|^{-\lambda}\mathcal{G}_\pm(\dot{\gamma}/|\Delta\phi|^\nu)\sim\dot{\gamma}^{1-z}|\Delta\phi|^{\nu z-\lambda}$. Because this scaling should be independent of whether the system is below or above jamming, i.e. independent of $|\Delta\phi|$, the power-law exponent is given by $z=\lambda/\nu=1/4$, as confirmed in Fig. \[fig:diff\](b).
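The collapse itself is mechanical once the exponents are chosen. The sketch below uses a made-up scaling function (not our measured $D$) purely to illustrate the rescaling of the axes by $|\Delta\phi|^{\lambda}$ and $|\Delta\phi|^{\nu}$: data generated at different packing fractions land on one master curve.

```python
import numpy as np

lam, nu, phi_J = 1.0, 4.0, 0.8433

def g_minus(x):
    """Toy scaling function below jamming: plateau crossing over to x^(-lam/nu)."""
    return (1.0 + x) ** (-lam / nu)

def d_toy(gdot, phi):
    """Synthetic diffusivity obeying D = gdot * |dphi|^(-lam) * G(gdot/|dphi|^nu)."""
    dphi = abs(phi - phi_J)
    return gdot * dphi ** (-lam) * g_minus(gdot / dphi ** nu)

# Rescale: |dphi|^lam * D/gdot plotted against gdot/|dphi|^nu collapses all phi.
x0 = 100.0  # a common value of the rescaled shear rate
ys = []
for phi in (0.78, 0.80, 0.82):
    dphi = abs(phi - phi_J)
    gdot = x0 * dphi ** nu
    ys.append(dphi ** lam * d_toy(gdot, phi) / gdot)
```

In practice $\lambda$ and $\nu$ are tuned until the spread between the rescaled curves is minimized, which is how the values $\lambda=1.0$ and $\nu=4.0$ are fixed.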
![ (a) The diffusivity over the shear rate, $D/\dot{\gamma}$, as a function of $\dot{\gamma}$, where $\phi$ increases as indicated by the arrow and listed in the legend. (b) *Scaling collapses* of the diffusivity, where $\Delta\phi\equiv\phi-\phi_J$. The critical exponents are given by $\lambda=1.0$ and $\nu=4.0$, where slopes of the dotted and solid lines are $-\lambda/\nu$ and $-0.3$, respectively. \[fig:diff\]](diff_coeff.png){width="\columnwidth"}
In summary, the diffusivity of the disks scales as $$D \sim
\begin{cases}
|\Delta\phi|^{-\lambda}\dot{\gamma} & (\phi<\phi_J) \\
|\Delta\phi|^{0.3\nu-\lambda}\dot{\gamma}^{0.7} & (\phi>\phi_J)
\end{cases} \label{eq:D1}$$ in the quasi-static regime ($\dot{\gamma}<\dot{\gamma}_c$) and $$D \sim \dot{\gamma}^{1-\lambda/\nu}
\label{eq:D2}$$ in the plastic flow regime ($\dot{\gamma}>\dot{\gamma}_c$), where the critical exponents are estimated as $\lambda=1.0$ and $\nu=4.0$. From Eqs. (\[eq:D1\]) and (\[eq:D2\]), we find that the diffusivity below jamming ($\phi<\phi_J$) is linear in the shear rate, $D\sim\dot{\gamma}$, in slow flows, whereas its dependence on the shear rate is algebraic, $D\sim\dot{\gamma}^{3/4}$, in fast flows. A similar trend has been found in molecular dynamics studies of simple shear flows below jamming [@diff_shear_md0; @dh_md2; @dh_md1] and in experiments on colloidal glasses under shear [@diff_shear_exp1]. In addition, the prefactor of the diffusivity below jamming diverges at the transition as $|\Delta\phi|^{-1}$ \[Eq. (\[eq:D1\])\], which we will relate to a length scale that diverges as the system approaches jamming from below (Sec. \[sub:rigid\]). The diffusivity above jamming ($\phi>\phi_J$) implies a crossover from $D\sim|\Delta\phi|^{0.2}\dot{\gamma}^{0.7}$ to $\dot{\gamma}^{3/4}=\dot{\gamma}^{0.75}$, which reasonably agrees with prior work on soft athermal disks under shear [@diff_shear_md0]. Interestingly, the crossover shear rate vanishes at the transition as $\dot{\gamma}_c\sim|\Delta\phi|^{4.0}$, which is reminiscent of the fact that the crossover from Newtonian or yield-stress behavior to plastic flow vanishes at the onset of jamming (see Appendix \[sec:rheo\]).
Rigid clusters {#sub:rigid}
--------------
We now relate the diffusivity to rigid clusters of soft athermal particles under shear. The rigid clusters represent collective motions of particles which tend to move in the same direction [@saitoh11]. Following the jamming literature [@rheol0; @pdf1; @corl3], we quantify the collective motions by a spatial correlation function $C(x)=\langle v_y(x_i,y_i)v_y(x_i+x,y_i)\rangle$, where $v_y(x,y)$ is the transverse velocity field and the ensemble average $\langle\dots\rangle$ is taken over disk positions and time (in a steady state). Figure \[fig:corl\] shows the normalized correlation function $C(x)/C(0)$, where the horizontal axis ($x$-axis) is scaled by the mean disk diameter $d_0$. As can be seen, the correlation function exhibits a well-defined minimum at a characteristic length scale $x=\xi$ (as indicated by the vertical arrow for the case of $\phi=0.84$ in Fig. \[fig:corl\](a)). Because the minimum is negative, $C(\xi)<0$, the transverse velocities are most “anti-correlated" at $x=\xi$. Therefore, if we assume that the rigid clusters are circular, their mean diameter is comparable in size to $\xi$ [@diff_shear_md1]. The length scale $\xi$ increases with $\phi$ \[Fig. \[fig:corl\](a)\] but decreases with $\dot{\gamma}$ \[Fig. \[fig:corl\](b)\]. These results are consistent with the fact that collective behavior is most enhanced in slow flows of dense systems [@saitoh11].
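A minimal sketch of how $C(x)$ and the cluster size $\xi$ might be extracted from a transverse-velocity field is given below; sampling the field on a regular grid with periodicity in the flow direction is an assumption of this sketch, not a detail taken from the simulations.

```python
import numpy as np

def transverse_correlation(vy, dx=1.0):
    """Estimate C(x) = <v_y(x_i, y_i) v_y(x_i + x, y_i)> for a
    transverse-velocity field vy on a regular (nx, ny) grid,
    assuming periodicity in the flow (x) direction."""
    nx = vy.shape[0]
    shifts = np.arange(nx // 2 + 1)
    C = np.array([np.mean(vy * np.roll(vy, -s, axis=0)) for s in shifts])
    return shifts * dx, C

def cluster_size(x, C):
    """Length scale xi: position of the (negative) minimum of C(x)."""
    return x[np.argmin(C)]

# Synthetic anti-correlated field: vy ~ cos(2*pi*x/L) is maximally
# anti-correlated at half a wavelength, so xi = L/2.
nx, ny, L = 64, 64, 64.0
xgrid = np.arange(nx)[:, None] * np.ones((1, ny))
vy = np.cos(2 * np.pi * xgrid / L)
xs, C = transverse_correlation(vy)
assert np.isclose(cluster_size(xs, C), L / 2)
```

In an actual analysis the average would additionally run over snapshots in the steady state, and $\xi$ would be read off for each $(\phi,\dot{\gamma})$ as in Fig. \[fig:corl\].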
![ Normalized spatial correlation functions of the transverse velocities $C(x)/C(0)$, where symbols are as in Fig. \[fig:msdy\]. (a) The packing fraction $\phi$ increases as indicated by the arrow and listed in the legend, where $\dot{\gamma}=10^{-6}t_0^{-1}$. The minimum of the data for $\phi=0.84$ is indicated by the vertical (gray) arrow. (b) The shear rate $\dot{\gamma}$ increases as indicated by the arrow and listed in the legend, where $\phi=0.84$. \[fig:corl\]](corl.png){width="\columnwidth"}
As reported in Ref. [@rheol0], we examine critical scaling of the length scale. Figure \[fig:xi\](a) displays scaling collapses of the data of $\xi$, where the critical exponents, $\lambda=1.0$ and $\nu=4.0$, are the same as those in Fig. \[fig:diff\](b). If the shear rate is smaller than the characteristic value, i.e. $\dot{\gamma}<\dot{\gamma}_c\simeq 10^4|\Delta\phi|^\nu$, the data below jamming ($\phi<\phi_J$) exhibit a plateau, whereas those above jamming ($\phi>\phi_J$) diverge with the *decrease* of shear rate. Therefore, if we assume that the data above jamming follow a power law with slope $-0.4$ (solid line), the length scale in the quasi-static regime ($\dot{\gamma}<\dot{\gamma}_c$) can be described as $|\Delta\phi|^\lambda\xi\sim\mathcal{J}_\pm(\dot{\gamma}/|\Delta\phi|^\nu)$ with the scaling functions, $\mathcal{J}_-(x)\sim\mathrm{const.}$ for $\phi<\phi_J$ and $\mathcal{J}_+(x)\sim x^{-0.4}$ otherwise. Note, however, that the length scale is limited by the system size $L$ \[shaded region in Fig. \[fig:xi\](a)\] and should scale as $\xi\sim L$ above jamming in the quasi-static limit $\dot{\gamma}\rightarrow 0$ [@pdf1; @diff_shear_md3; @nafsc2]. This means that the system size is the only relevant length scale [@nafsc0], and thus we conclude $\xi\sim L$ in slow flows of jammed systems. On the other hand, if $\dot{\gamma}>\dot{\gamma}_c$, all the data collapse onto a single power law \[dotted line in Fig. \[fig:xi\](a)\]. Therefore, the scaling functions are given by $\mathcal{J}_\pm(x)\sim x^{-z}$ such that the length scale scales as $\xi\sim\dot{\gamma}^{-z}|\Delta\phi|^{\nu z-\lambda}$. Because this relation must be independent of $|\Delta\phi|$, the exponent should be $z=\lambda/\nu$, as confirmed in Fig. \[fig:xi\](a).
![ (a) Scaling collapses of the length scale $\xi$, where slopes of the dotted and solid lines are given by $-\lambda/\nu$ and $-0.4$, respectively. The shaded region exceeds the system size $|\Delta\phi|^\lambda L/2$ for the case of $\phi=0.90$. (b) Scatter plots of the diffusivity over the shear rate $D/\dot{\gamma}$ and the length scale $\xi$, where $\phi$ increases as listed in the legend. The dotted line represents a linear relation $D/\dot{\gamma}\sim\xi$ and the shaded region exceeds the system size $L/2\simeq 44d_0$. \[fig:xi\]](xi.png){width="\columnwidth"}
In summary, the length scale, or the mean size of rigid clusters, scales as $$\xi \sim
\begin{cases}
|\Delta\phi|^{-\lambda} & (\phi<\phi_J) \\
L & (\phi>\phi_J)
\end{cases} \label{eq:xi1}$$ in the quasi-static regime ($\dot{\gamma}<\dot{\gamma}_c$) and $$\xi \sim \dot{\gamma}^{-\lambda/\nu}
\label{eq:xi2}$$ in the plastic flow regime ($\dot{\gamma}>\dot{\gamma}_c$), where the critical exponents, $\lambda$ and $\nu$, are the same as those for the diffusivity \[Eqs. (\[eq:D1\]) and (\[eq:D2\])\]. The critical divergence below jamming in the quasi-static regime, i.e. $\xi\sim|\Delta\phi|^{-1}$ \[Eq. (\[eq:xi1\])\], is consistent with the result of quasi-static simulations ($\dot{\gamma}\rightarrow 0$) of sheared athermal disks [@nafsc2]. In addition, the scaling $\xi\sim\dot{\gamma}^{-1/4}$ in the plastic flow regime \[Eq. (\[eq:xi2\])\] is very close to that found in prior work on athermal particles under shear [@dh_md2].
From the results for the diffusivity \[Eqs. (\[eq:D1\]) and (\[eq:D2\])\] and the length scale \[Eqs. (\[eq:xi1\]) and (\[eq:xi2\])\], we discuss how the rigid clusters contribute to the diffusion of the particles. The linear relation $D\sim d_0\xi\dot{\gamma}$ \[Eq. (\[eq:rigid\_cluster\])\] holds below jamming (regardless of $\dot{\gamma}$) and in the plastic flow regime (regardless of $\phi$). We stress that the divergence of the diffusivity over the shear rate in the quasi-static regime, i.e. $D/\dot{\gamma}\sim|\Delta\phi|^{-1}$ \[Eq. (\[eq:D1\])\], is caused by the diverging length scale below jamming, i.e. $\xi\sim|\Delta\phi|^{-1}$ \[Eq. (\[eq:xi1\])\]. As shown in Fig. \[fig:xi\](b), the linear relation (dotted line) explains our results well as long as the length scale $\xi$ is smaller than $10d_0$. Above jamming, the length scale grows beyond $10d_0$ with increasing $\phi$; the diffusivity over the shear rate $D/\dot{\gamma}$ then starts to deviate from the linear relation (dotted line) as the length scale approaches the system size $L/2\simeq 44d_0$ (shaded region). We attribute this deviation to finite-size effects; further studies with different system sizes are necessary (as in Refs. [@diff_shear_md3; @diff_shear_md4]) to clarify the relation between $D/\dot{\gamma}$ and $\xi$ in this regime, which we postpone to future work.
Discussions {#sec:disc}
===========
In this study, we have numerically investigated rheological and transport properties of soft athermal particles under shear. Employing MD simulations of two-dimensional disks, we have clarified how the rheology, self-diffusion, and size of rigid clusters vary with the control parameters, i.e. the externally imposed shear rate $\dot{\gamma}$ and the packing fraction of the disks $\phi$. Our main result is the critical scaling of the diffusivity (Sec. \[sub:diff\]) and of the size of rigid clusters (Sec. \[sub:rigid\]), where their dependence on both $\dot{\gamma}$ and $\phi$ is reported \[Eqs. (\[eq:D1\]), (\[eq:D2\]), (\[eq:xi1\]), and (\[eq:xi2\])\]. The diffusivity has been calculated on both sides of jamming (by a single numerical protocol) to unify the understanding of self-diffusion in soft particulate systems: We found that (i) the diffusivity below jamming exhibits a crossover from the linear scaling $D\sim\dot{\gamma}$ to the power law $D\sim\dot{\gamma}^{3/4}$. Such a crossover can also be seen in previous simulations [@diff_shear_md7; @diff_shear_md6; @dh_md2] and experiments [@diff_shear_exp2; @diff_shear_exp1]. In addition, (ii) the diffusivity below jamming diverges as $D\sim|\Delta\phi|^{-1}$ if the system is in the quasi-static regime ($\dot{\gamma}<\dot{\gamma}_c$), whereas (iii) the diffusivity (both below and above jamming) is insensitive to $\phi$ if the system is in the plastic flow regime ($\dot{\gamma}>\dot{\gamma}_c$). Note that (iv) the crossover shear rate vanishes at the onset of jamming as $\dot{\gamma}_c\sim|\Delta\phi|^{4.0}$. Results (ii)-(iv) are new findings of this study. On the other hand, we found that (v) the diffusivity above jamming is weakly dependent on $\phi$ (as $D\sim|\Delta\phi|^{0.2}$) in the quasi-static regime and (vi) shows a crossover from $D\sim\dot{\gamma}^{0.7}$ to $\dot{\gamma}^{3/4}$.
Though result (v) is also a new finding, result (vi) contrasts with prior studies of sheared amorphous solids and granular materials under constant pressure, where the diffusivity exhibits a crossover from $D\sim\dot{\gamma}$ to $\dot{\gamma}^{1/2}$ [@diff_shear_md1; @diff_shear_md3; @diff_shear_md4]. Because our scaling $D\sim\dot{\gamma}^{0.7}$ in the quasi-static regime is consistent with Ref. [@diff_shear_md0], where the same overdamped dynamics are used, we suppose that the discrepancy is caused by differences in numerical models or flow conditions.
We have also examined the relation between the diffusivity and the typical size of rigid clusters $\xi$ (Sec. \[sub:rigid\]). Below jamming, we found the critical divergence $\xi\sim|\Delta\phi|^{-1}$ in the quasi-static regime, as previously observed in quasi-static simulations ($\dot{\gamma}\rightarrow 0$) of sheared athermal disks [@nafsc2]. In the plastic flow regime, the size becomes independent of $\phi$ and scales as $\xi\sim\dot{\gamma}^{-1/4}$. This is consistent with the previous result for sheared athermal particles [@dh_md2] (and is also close to the result for thermal glasses under shear [@th-dh_md1]). Above jamming, however, the size exhibits a crossover from $\xi\sim L$ to $\dot{\gamma}^{-1/4}$, which contrasts with the crossover from $\xi\sim\mathrm{const.}$ to $\dot{\gamma}^{-1/2}$ previously reported in simulations of amorphous solids [@diff_shear_md1; @diff_shear_md3; @diff_shear_md4]. From our scaling analyses, we found that the linear relation $D\sim d_0\xi\dot{\gamma}$ \[Eq. (\[eq:rigid\_cluster\])\] holds below jamming (for all $\dot{\gamma}$) and in the plastic flow regime (for all $\phi$), indicating that the self-diffusion is enhanced by the rotation of rigid clusters [@rheol0; @diff_shear_md1].
In our MD simulations, we fixed the system size to $L\simeq 88d_0$. However, systematic studies of different system sizes are needed to clarify the relation between $D$ and $\xi\sim L$ above jamming, especially in the quasi-static limit $\dot{\gamma}\rightarrow 0$ [@diff_shear_md3; @diff_shear_md4]. In addition, our analyses are limited to two dimensions. Though previous studies suggest that the diffusivity is independent of the dimensionality [@diff_shear_md7; @diff_shear_md6; @dh_md2], a recent study of soft athermal particles reported that the critical scaling of shear viscosity depends on dimensions [@rheol15]. Therefore, it is important to check whether the critical scaling \[Eqs. (\[eq:D1\]) and (\[eq:D2\])\] carries over to three-dimensional systems. Because we observed qualitative differences from the results for sheared amorphous solids and granular materials under constant pressure [@diff_shear_md1; @diff_shear_md3; @diff_shear_md4], further studies of different numerical models and flow conditions are necessary to complete our understanding of self-diffusion of soft athermal particles. Moreover, the relation between the diffusivity and the shear viscosity may be interesting, as it would give a Stokes-Einstein-like relation for the non-equilibrium systems studied here.
We thank H. Hayakawa, M. Otsuki, and S. Takada for fruitful discussions. K.S. thanks F. Radjai and W. Kob for fruitful discussions and warm hospitality in Montpellier. This work was supported by KAKENHI Grant No. 16H04025, No. 18K13464 and No. 19K03767 from JSPS. Some computations were performed at the Yukawa Institute Computer Facility, Kyoto, Japan.
Rheology {#sec:rheo}
========
The rheology of soft athermal particles is dependent on both the shear rate $\dot{\gamma}$ and area fraction $\phi$ [@rheol0; @pdf1; @rheol7]. Figure \[fig:rheo\] displays our numerical results of *flow curves*, i.e. (a) the pressure $p$ and (b) shear stress $\sigma$ as functions of the shear rate $\dot{\gamma}$. Here, different symbols represent different values of $\phi$ (as listed in the legend of (a)). The pressure and shear stress are defined as $p=(\tau_{xx}+\tau_{yy})/2$ and $\sigma=-\tau_{xy}$, respectively, where the stress tensor is given by the virial expression $$\tau_{\alpha\beta}=\frac{1}{L^2}\sum_i\sum_{j~(>i)}f_{ij\alpha}^\mathrm{e}r_{ij\beta}
\label{eq:stress}$$ ($\alpha,\beta=x,y$) with the $\alpha$-component of elastic force $f_{ij\alpha}^\mathrm{e}$ and the $\beta$-component of relative position $r_{ij\beta}$. As shown in Fig. \[fig:rheo\], both the pressure and shear stress exhibit the Newtonian behavior, i.e. they are proportional to the shear rate, $p\sim\dot{\gamma}$ and $\sigma\sim\dot{\gamma}$ (dotted lines), only if the area fraction is lower than the jamming transition density ($\phi<\phi_J$) and the shear rate is small enough ($\dot{\gamma}t_0\lesssim 10^{-4}$). However, a finite yield stress, $p_Y>0$ and $\sigma_Y>0$, emerges in the zero-shear limit $\dot{\gamma}\rightarrow 0$ if the system is above jamming ($\phi>\phi_J$).
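The virial expression above is straightforward to implement; the following sketch computes the stress tensor, pressure, and shear stress from a list of pairwise contacts. The single-contact example at the end is hypothetical, for illustration only.

```python
import numpy as np

def virial_stress(pairs, L):
    """Stress tensor tau_{ab} = (1/L^2) sum_{i<j} f_{ij,a} r_{ij,b}
    for a 2D system of linear size L; `pairs` is an iterable of
    (f_ij, r_ij) arrays with the elastic force and relative position."""
    tau = np.zeros((2, 2))
    for f, r in pairs:
        tau += np.outer(f, r)
    return tau / L**2

def pressure_and_shear(tau):
    p = 0.5 * (tau[0, 0] + tau[1, 1])   # p = (tau_xx + tau_yy) / 2
    sigma = -tau[0, 1]                  # sigma = -tau_xy
    return p, sigma

# Single contact: unit repulsive force along x at separation (1, 0)
# in a box of side L = 10 gives p = 0.005 and sigma = 0.
pairs = [(np.array([1.0, 0.0]), np.array([1.0, 0.0]))]
p, sigma = pressure_and_shear(virial_stress(pairs, 10.0))
assert np.isclose(p, 0.005) and np.isclose(sigma, 0.0)
```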
![ *Flow curves*, i.e. (a) the pressure $p$ and (b) shear stress $\sigma$ as functions of the shear rate $\dot{\gamma}$. The area fraction $\phi$ increases as indicated by the arrow (listed in the legend) in (a). The dotted lines represent the Newtonian behavior, i.e. (a) $p\sim\dot{\gamma}$ and (b) $\sigma\sim\dot{\gamma}$, for low area fractions, $\phi<\phi_J$, where $\phi_J\simeq 0.8433$ is the jamming transition density. \[fig:rheo\]](flow_curves.png){width="\columnwidth"}
In the literature of jamming [@rheol0; @pdf1; @rheol7], rheological flow curves are collapsed by critical scaling. This means that the crossover from the Newtonian behavior ($p\sim\dot{\gamma}$ and $\sigma\sim\dot{\gamma}$) or the yield stress ($p\sim p_Y$ and $\sigma\sim\sigma_Y$) to plastic flow regime vanishes as the system approaches jamming $\phi\rightarrow\phi_J$. To confirm this trend, we collapse the data in Fig. \[fig:rheo\] by the proximity to jamming $|\Delta\phi|\equiv|\phi-\phi_J|$ as in Fig. \[fig:rheo-clp\]. Though the critical exponents are slightly different, i.e. $\kappa_p=1.1$ and $\mu_p=3.5$ for the pressure \[Fig. \[fig:rheo-clp\](a)\] and $\kappa_\sigma=1.2$ and $\mu_\sigma=3.3$ for the shear stress \[Fig. \[fig:rheo-clp\](b)\], all the data are nicely collapsed on top of each other. If the shear rate is small enough, the data below jamming ($\phi<\phi_J$) follow the lower branch, whereas the data above jamming ($\phi>\phi_J$) are almost constant. Therefore, the pressure and shear stress can be described as $p/|\Delta\phi|^{\kappa_p}\sim\mathcal{F}_\pm(\dot{\gamma}/|\Delta\phi|^{\mu_p})$ and $\sigma/|\Delta\phi|^{\kappa_\sigma}\sim\mathcal{F}_\pm(\dot{\gamma}/|\Delta\phi|^{\mu_\sigma})$ with the scaling functions, $\mathcal{F}_-(x)\sim x$ for $\phi<\phi_J$ and $\mathcal{F}_+(x)\sim\mathrm{const.}$ for $\phi>\phi_J$.
On the other hand, if the shear rate is large enough, the system is in plastic flow regime, where all the data (both below and above jamming) follow a single power law (dotted lines in Fig. \[fig:rheo-clp\]). This implies that the scaling functions are given by $\mathcal{F}_\pm(x)\sim x^z$ (for both $\phi<\phi_J$ and $\phi>\phi_J$) with a power-law exponent $z$. Then, the pressure and shear stress scale as $p\sim|\Delta\phi|^{\kappa_p}\mathcal{F}_\pm(\dot{\gamma}/|\Delta\phi|^{\mu_p})\sim\dot{\gamma}^z|\Delta\phi|^{\kappa_p-\mu_p z}$ and $\sigma\sim|\Delta\phi|^{\kappa_\sigma}\mathcal{F}_\pm(\dot{\gamma}/|\Delta\phi|^{\mu_\sigma})\sim\dot{\gamma}^z|\Delta\phi|^{\kappa_\sigma-\mu_\sigma z}$, respectively. These scaling relations should be independent of whether the system is below or above jamming, i.e. independent of $|\Delta\phi|$. Thus, the power-law exponent is $z=\kappa_p/\mu_p\simeq 0.31$ for the pressure and $z=\kappa_\sigma/\mu_\sigma\simeq 0.36$ for the shear stress as confirmed in Fig. \[fig:rheo-clp\] (dotted lines). Note that the scaling collapses in Fig. \[fig:rheo-clp\] also confirm that the jamming transition density $\phi_J\simeq 0.8433$ is correct in our sheared systems [@rheol0].
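The exponent argument above reduces to simple arithmetic, which the following check makes explicit using the exponent values quoted in the text: requiring the $|\Delta\phi|$-dependence of the plastic-flow branch to vanish fixes $z=\kappa/\mu$.

```python
# In the plastic-flow branch F_pm(x) ~ x^z, the stress scales as
# p ~ gdot^z * |dphi|**(kappa_p - mu_p * z); independence of dphi
# requires the dphi-exponent to vanish, i.e. z = kappa / mu.
kappa_p, mu_p = 1.1, 3.5   # pressure exponents (from the text)
kappa_s, mu_s = 1.2, 3.3   # shear-stress exponents (from the text)

z_p = kappa_p / mu_p
z_s = kappa_s / mu_s

# The residual dphi-exponent vanishes at z = kappa / mu:
assert abs(kappa_p - mu_p * z_p) < 1e-12
assert abs(kappa_s - mu_s * z_s) < 1e-12

print(round(z_p, 2), round(z_s, 2))   # 0.31 0.36
```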
![ Scaling collapses of (a) the pressure and (b) shear stress, where $\Delta\phi\equiv\phi-\phi_J$ is the proximity to jamming. See the text for critical exponents, $\kappa_p$, $\mu_p$, $\kappa_\sigma$, and $\mu_\sigma$, where the dotted lines have the slopes (a) $\kappa_p/\mu_p$ and (b) $\kappa_\sigma/\mu_\sigma$. \[fig:rheo-clp\]](flow_curves_clp.png){width="\columnwidth"}
In summary, the rheological flow properties of the disks are described as $$\begin{aligned}
p &\sim&
\begin{cases}
|\Delta\phi|^{\kappa_p-\mu_p}\dot{\gamma} & (\phi<\phi_J) \\
|\Delta\phi|^{\kappa_p} & (\phi>\phi_J)
\end{cases}~, \label{eq:pressure1} \\
\sigma &\sim&
\begin{cases}
|\Delta\phi|^{\kappa_\sigma-\mu_\sigma}\dot{\gamma} & (\phi<\phi_J) \\
|\Delta\phi|^{\kappa_\sigma} & (\phi>\phi_J)
\end{cases}~, \label{eq:shear_stress1}\end{aligned}$$ in the quasi-static regime and $$\begin{aligned}
p &\sim& \dot{\gamma}^{\kappa_p/\mu_p}~, \label{eq:pressure2} \\
\sigma &\sim& \dot{\gamma}^{\kappa_\sigma/\mu_\sigma}~, \label{eq:shear_stress2}\end{aligned}$$ in the plastic flow regime. The critical exponents are estimated as $\kappa_p=1.1$, $\mu_p=3.5$, $\kappa_\sigma=1.2$, and $\mu_\sigma=3.3$. In Eqs. (\[eq:pressure1\]) and (\[eq:shear\_stress1\]), the Newtonian behavior is given by $p\sim|\Delta\phi|^{-2.4}\dot{\gamma}$ and $\sigma\sim|\Delta\phi|^{-2.1}\dot{\gamma}$, where the exponents are comparable to those for viscosity divergence below jamming [@rheol7]. The yield stress vanishes as $p_Y\sim|\Delta\phi|^{1.1}$ and $\sigma_Y\sim|\Delta\phi|^{1.2}$ when the system approaches jamming from above \[Eqs. (\[eq:pressure1\]) and (\[eq:shear\_stress1\])\], which is consistent with the previous study of two-dimensional bubbles under shear [@pdf1]. The scaling in the plastic flow regime, $p\sim\dot{\gamma}^{0.31}$ and $\sigma\sim\dot{\gamma}^{0.36}$ \[Eqs. (\[eq:pressure2\]) and (\[eq:shear\_stress2\])\], is close to the prior work on sheared athermal disks [@rheol14], indicating *shear thinning* as typical for particulate systems under shear [@larson].
Non-affine displacements {#sec:nona}
========================
The self-diffusion of soft athermal particles is also sensitive to both $\dot{\gamma}$ and $\phi$. Because our system is homogeneously sheared (along the $x$-direction), the self-diffusion is represented by fluctuating motions of the disks around a mean flow. In our MD simulations, the mean velocity field is determined by the affine deformation as $\dot{\gamma}y\bm{e}_x$, where $\bm{e}_x$ is a unit vector parallel to the $x$-axis. Therefore, subtracting the mean velocity field from each disk velocity $\bm{u}_i(t)$, we introduce non-affine velocities as $\Delta\bm{u}_i(t)=\bm{u}_i(t)-\dot{\gamma}y_i\bm{e}_x$ ($i=1,\dots,N$). *Non-affine displacements* are then defined as the time integrals $$\Delta\bm{r}_i(\tau) = \int_{t_a}^{t_a+\tau}\Delta\bm{u}_i(t)dt~,
\label{eq:non-affine}$$ where $\tau$ is the time interval. Note that the initial time $t_a$ can be arbitrarily chosen during a steady state.
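Equation (\[eq:non-affine\]) can be sketched numerically as follows; the trajectory arrays and the rectangle-rule time integration are assumptions of this illustration, not details of the actual MD implementation.

```python
import numpy as np

def nonaffine_displacements(traj_u, traj_y, gdot, dt):
    """Time-integrate Delta u_i(t) = u_i(t) - gdot * y_i(t) * e_x
    over a trajectory sampled every dt (rectangle rule).

    traj_u : (nt, N, 2) array of disk velocities
    traj_y : (nt, N) array of disk y-coordinates
    Returns Delta r_i(tau) with shape (N, 2).
    """
    du = traj_u.copy()
    du[:, :, 0] -= gdot * traj_y    # subtract the affine flow along x
    return (du * dt).sum(axis=0)    # time integral of Delta u_i(t)

# Sanity check: a disk moving exactly with the affine flow,
# u = (gdot * y, 0) at fixed y, has zero non-affine displacement.
gdot, dt, nt = 1e-4, 0.1, 11
y = np.full((nt, 1), 5.0)
u = np.zeros((nt, 1, 2))
u[:, :, 0] = gdot * y
dr = nonaffine_displacements(u, y, gdot, dt)
assert np.allclose(dr, 0.0)
```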
It is known that the non-affine displacements \[Eq. (\[eq:non-affine\])\] are sensitive to the rheological flow properties (Sec. \[sec:rheo\]) [@saitoh11]. Their magnitude significantly increases if the packing fraction exceeds the jamming point. In addition, their spatial distributions become more “collective" (they tend to align in the same direction as their neighbors) as the shear rate decreases. This means that the self-diffusion is also strongly dependent on both the shear rate and the density. In particular, the collective behavior of the non-affine displacements implies the growth of rigid clusters in slow flows $\dot{\gamma}t_0\ll 1$ of jammed systems $\phi>\phi_J$, where the yield stress $\sigma\sim\sigma_Y$ is observed in the flow curves (Fig. \[fig:rheo\]).
[^1]: The MSDs defined by the *total* non-affine displacements show quantitatively the same results (data are not shown).
[^2]: We define the diffusivity \[Eq. (\[eq:D\])\] as the slope of the MSD \[Eq. (\[eq:MSD\])\] in the normal diffusive regime $\gamma=\dot{\gamma}\tau>1$, where we take sample averages of $\Delta(\tau)^2/2\tau$ as $D/\dot{\gamma}\equiv\langle\Delta(\gamma)^2/2\gamma\rangle$ in the range $1<\gamma<10^2$.
[^3]: The data for the highest shear rate, $\dot{\gamma}=10^{-1}t_0^{-1}$, is removed from the scaling collapses in Figs. \[fig:diff\](b) and \[fig:xi\](a).
---
abstract: 'We present here, for the first time, a 2D study of the overshoot convective mechanism in nova outbursts for a wide range of possible compositions of the layer underlying the accreted envelope. Previous surveys studied this mechanism only for solar composition matter accreted on top of carbon oxygen (C-O) white dwarfs. Since, during the runaway, mixing with carbon enhances the hydrogen burning rates dramatically, one should question whether significant enrichment of the ejecta is possible also for other underlying compositions (He, O, Ne, Mg), predicted by stellar evolution models. We simulated several non-carbon cases and found significant amounts of those underlying materials in the ejected hydrogen layer. Despite large differences in rates, time scales and energetics between the cases, our results show that the convective dredge up mechanism predicts significant enrichment in all our non-carbon cases, including helium enrichment in recurrent novae. The results are consistent with observations.'
date: Released 2012 June 28
title: 'Convective overshoot mixing in Nova outbursts - The dependence on the composition of the underlying white dwarf'
---
\[firstpage\]
convection-hydrodynamics,binaries:close-novae,stars:abundances
Introduction {#intro}
============
Almost all classical and recurrent novae for which reliable abundance determinations exist show enrichment (relative to solar composition) in heavy elements and/or helium. It is now widely accepted that the source of such enrichment is dredge up of matter from the underlying white dwarf into the accreted envelope. A few mechanisms for such mixing were proposed to explain the observations: mixing by a diffusion layer, in which diffusion during the accretion phase builds a layer of mixed abundances ([@pk84; @kp85; @IfMc91; @IfMc92; @FuI92]); mixing by shear instability induced by differential rotation during the accretion phase [@Du77; @kt78; @Macd83; @lt87; @ks87; @ks89]; mixing by shear gravity waves breaking on the white dwarf surface, in which a resonant interaction between large-scale shear flows in the accreted envelope and gravity waves in the white dwarf’s core can induce mixing of heavy elements into the envelope [@Rosner01; @alex02; @Alex2D]; and, finally, mixing by overshoot of the convective flow during the runaway itself [@woo86; @saf92; @sha94; @gl95; @glt97; @Ker2D; @Ker3D; @glt2007; @Cas2010; @Cas2011a; @Cas2011b].
In this work we focus on the last of these mechanisms, which proved efficient for C-O white dwarfs. Mixing of carbon from the underlying layer significantly enhances the hydrogen burning rate. The enhanced burning rate drives higher convective fluxes, inducing more mixing [@gl95; @glt97; @glt2007; @Cas2010; @Cas2011a; @Cas2011b]. Therefore, the fact that the underlying layer is rich in C is a critical feature of all the overshoot convective models analyzed prior to this work. According to the theory of stellar evolution for single stars, we expect the composition of the underlying white dwarf to be C-O for masses in the range $0.5-1.1 M\sun$ and ONe(Mg) for more massive white dwarfs [@GpGb01; @dom02; @Gb02]. Observations show enrichment in helium, CNO, Ne, Mg and heavier elements [@sst86; @Gehrz98; @Gehrz08; @Ili02]. For recurrent novae, helium enrichment can reach levels of $40-50\%$ [@web1987; @anu2000; @diaz2010]. High helium abundances can simply be explained as the ashes of hydrogen burning during the runaway [@Hernanz2008], but one cannot exclude the possibility that the source of He enrichment is dredge up from an underlying helium layer. We therefore found it essential to study nova outbursts for which the composition of the underlying layer differs from C-O. The models studied here extend our previously published work. As a first step, we study the runaway of the accreted hydrogen layer on top of a single white dwarf, changing only its composition. With fixed mass and radius we can compare the timescales, convective flow, energetics, and dredge up between the different cases. A more comprehensive study that varies the white dwarf's mass along with its composition (CO, ONe(Mg), or He rich) is left to future work.
The present study is limited to 2D axially symmetric configurations. The well known differences between 2D and 3D unstable flows can yield uncertainties of a few percent in our results, but cannot change the general trends, as previous studies showed reasonable agreement between 2D and 3D simulations with regard to integral quantities, although larger differences persist in the local structure [@Cas2011b]. We therefore regard our present results as a good starting point for more elaborate 3D simulations. In the next section we describe the tools and the setup of the simulations. In subsequent sections we describe the results for each initial composition and then summarize our conclusions.
Tools and initial configurations {#tools}
================================
All the 2D simulations presented in this work start from 1D hydrostatic configurations, consisting of a $1.147 M\sun$ CO core with an outer layer of $\sim10^{-4}\,{\ensuremath{M_{\odot}}}$ composed of CO, ONe(Mg) or helium, according to the studied case, as explained in the introduction. The original core was built as a hydrostatic CO polytrope that cooled by evolution to the desired central temperature ($2\times10^7 {\rm K}$). The 1D model evolves using Lagrangian coordinates and does not include any prescription for mixing at the bottom of the envelope. The core accretes matter with solar abundance, and the accreted matter is compressed and heated. Once the maximal temperature at the base of the accreted envelope reaches $9\times10^7 {\rm K}$, the whole accreted envelope ($3.4\times10^{-5}\,{\ensuremath{M_{\odot}}}$) and most of the underlying zone ($4.7\times10^{-5}\,{\ensuremath{M_{\odot}}}$) are mapped onto a 2D grid, and the simulations continue through the runaway and beyond using the 2D hydro code VULCAN-2D [@liv93]. This total mass of $8.1\times10^{-5}\,{\ensuremath{M_{\odot}}}$ is referred to as the total computed envelope mass. Using the same radial zoning in the 1D grid and in its 2D counterpart, the models preserve hydrostatic equilibrium to better than one part in ten thousand. Since the configurations are unstable, non-radial velocity components develop very quickly from the small round-off errors of the code, without introducing any artificial initial perturbation.
Further computational details of the 2D simulations are as follows. The inner boundary is fixed, with assumed zero inward luminosity. The outer boundary follows the expansion of the envelope, taking advantage of the arbitrary Lagrangian-Eulerian (ALE) semi-Lagrangian option of the VULCAN code, whereas the burning regions of the grid at the base of the hydrogen rich envelope are purely Eulerian. More details are presented in Glasner et al. (2005, 2007). The flexibility of the ALE grid enables us to model the burning zones at the bottom of the hydrogen rich envelope with very fine zones in spite of the post-runaway radial expansion of the outer layers. The typical computational cell at the base of the envelope, where most of the burning takes place, is a rectangular cell with dimensions of about $1.4\,km \times 1.4\,km$. Reflecting boundary conditions are imposed at the lateral boundaries of the grid. Gravity is taken into account as a point source with the mass of the core, and the self-gravity of the envelope is ignored. The reaction network includes 15 isotopes essential for hydrogen burning in the CNO cycle: $^{1}$H, $^{3}$He, $^{4}$He, $^{7}$Be, $^{8}$B, $^{12}$C, $^{13}$C, $^{13}$N, $^{14}$N, $^{15}$N, $^{14}$O, $^{15}$O, $^{16}$O, $^{17}$O and $^{17}$F.
Results
=======
In order to compare with 1D models and with previous studies we present here five basic configurations:
1\) The outburst of the 1D original model without any overshoot mixing.
2\) An up to date model with an underlying C-O layer.
3\) A model with a Helium underlying layer.
4\) A model with an underlying O(Ne) layer.
5\) A toy model with underlying Mg.
This model demonstrates the effects of possible mixing of hydrogen with $^{24}$Mg on the runaway. (In a realistic model the amounts of Mg in the core are expected to be a few percent [@GpGb01; @siess06], but higher amounts can be found in the very outer layers of the core [@berro94].)
The computed models are listed in Table \[tab:models\]. In the next sections we present the energetics and mixing results for each of the models in the present survey.
Model $Underlying$ $T_{max}$ $Q_{max}$ $remarks$
------- -------------- ----------- ------------ ----------------------------------------
m12 - $2.05 $ $ 4.0 $ 1D
m12ad CO $2.45 $ $ 1000.0 $ -
m12ag He $ - $ $ - $ -
m12dg He $ - $ $ - $ $T_{base}=1.5\times10^8 K$
m12al O $ - $ $ - $ -
m12bl O $ - $ $ - $ $T_{base}=1.05\times10^8 K$
m12cl O $ - $ $ - $ $T_{base}=1.125\times10^8 K$
m12dl O $2.15 $ $ 82.4 $ $T_{base}=1.22\times10^8 K$
m12kk O $ - $ $ - $ Mg rates
m12jj O $2.46 $ $1000.0 $ Mg rates+ $T_{base}=1.125\times10^8 K$
: Parameters of the Simulated Initial Configurations[]{data-label="tab:models"}
$T_{max}$; maximal achieved temperature \[$10^8$ Kelvin\]. $Q_{max}$; maximal achieved energy generation rate \[$ 10^{42} erg/sec$\]
The 1D original model
---------------------
The initial (original) model, composed of a $1.147 M\sun$ degenerate core, accretes a hydrogen rich envelope at a rate of $1.0 \times 10^{-10} {\ensuremath{M_{\odot}}}/year$. When the temperature at the base of the accreted envelope reaches $9\times10^7 {\rm K}$, the accreted mass is $3.4\times10^{-5}\,{\ensuremath{M_{\odot}}}$. We define this time as t=0; exceptional test cases are commented in Table \[tab:models\]. Convection sets in for the original model a few days earlier, at $t=-10^6 sec$, when the base temperature is $3\times10^7 {\rm K}$. The 1D convective model assumes no overshoot mixing; therefore, convection affects only the heat transport and abundances within the convective zone, and burning rates are not enhanced by overshoot convective mixing of CO-rich matter. As a reference for the 2D simulations, we evolved the 1D model all through the runaway phase. The time to reach the maximal energy production rate is about $2400 sec$, the maximal achieved temperature is $2.05\times10^8 K$, the maximal achieved burning rate is $4.0\times10^{42} erg/sec$, and the total nuclear energy generated up to the maximum production rate (integrated over time) is $0.77\times10^{46} erg$.
The C-O underlying layer
------------------------
We summarize here the main results of the underlying C-O case (computed already in [@glt2007] and repeated here), which is the most energetic case. A comparison of the history of the burning rate (Fig. \[co\]) with Figure 3 in [@glt2007] confirms that our current numerical results agree with those of our earlier publication. In this figure (Fig. \[co\]) we also present the amount of mixing at various stages of the runaway. The main effects of the convective underlying dredge up are:
- The convective cells are small at early stages with moderate velocities of a few times $ 10^{6} $ cm/sec. As the energy generation rate increases during the runaway, the convective cells merge and become almost circular. The size of the cells is comparable to the height of the entire envelope, i.e. a few pressure scale heights. The velocity magnitude within these cells, when the burning approaches the peak of the runaway, is a few times $ 10^{7} $ cm/sec.
- The shear convective flow is followed by efficient mixing of C-O matter from the core to the accreted solar abundant envelope. The amount of C-O enrichment increases as the burning becomes more violent and the total amount of mixing is above $30\%$ (Fig. \[co\]).
- Mixing induces a burning rate enhanced, relative to the non-mixing 1D case, by more than an order of magnitude. The maximum rate grows from $ 4.0\times10^{42} erg/sec $ to $ 1000.0\times 10^{42} erg/sec $. The enhanced rates raise the burning temperature and shorten the time required to reach the maximal burning rate. The maximal achieved temperature increases from $ 2.05\times 10^8 K$ to $ 2.45\times 10^8 K$ and the rise time to maximum burning decreases from $ 2440 sec $ to $ 140 sec $. The total energy production rates of the 1D and the 2D simulations are given in Fig. \[co\]. The enhanced burning rate in the 2D case will give rise, at later stages of the outburst, to an increase in the kinetic energy of the ejecta. Unfortunately, since the hydro solver time step is restricted by the Courant condition, we cannot run the 2D models through to the coasting phase. A consideration of the integrated released energy at the moment of maximal burning rate reveals that the burning energy grows from $ 0.77\times 10^{46} erg$ in the 1D model to $ 1.45\times 10^{46} erg$ in the 2D model, a factor of 2.
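The Courant restriction mentioned above can be made concrete with a short sketch. This is an illustrative calculation only: the zone size, convective speed, sound speed, and CFL factor below are assumed round numbers, not values taken from the simulations in the text.

```python
import numpy as np

# Hedged sketch: the explicit hydro time step is limited by the Courant
# condition, dt <= C_cfl * dx / (|v| + c_s).  All numbers are
# illustrative assumptions, not simulation values.
def courant_dt(dx, v, c_s, c_cfl=0.5):
    """Return the largest stable explicit time step on the grid."""
    signal_speed = np.abs(v) + c_s
    return c_cfl * np.min(dx / signal_speed)

# A thin envelope zone of ~1e5 cm with convective velocities of a few
# 1e7 cm/s forces sub-millisecond time steps, which is why an explicit
# 2D run cannot be carried through to the coasting phase.
dx = np.full(100, 1.0e5)        # cm, illustrative zone size
v = np.full(100, 3.0e7)         # cm/s, convective speed near peak
c_s = np.full(100, 5.0e7)       # cm/s, illustrative sound speed
print(courant_dt(dx, v, c_s))   # 6.25e-4 s per step
```

With steps this small, following thousands of seconds of evolution costs millions of hydro steps, consistent with the step counts quoted later in the text.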
Another interesting feature of the 2D C-O simulations is the appearance of fluctuations, observed during the initial stages of the runaway (Fig. \[co\]). Such fluctuations are not observed in the 1D model. The fluctuations are a consequence of the mixing of the hot burning envelope matter with the cold underlying white dwarf matter. The mixing has two effects. The first is cooling as we mix hot matter with cold matter. The second is heating by the enhancement of the reaction rate. It is apparent that in this case, after a small transient, the heating by enhanced reaction rates becomes dominant and the runaway takes place on a short timescale. For other underlying compositions the effect is a bit more complicated. We discuss this issue in the next subsections.
The Helium underlying layer
---------------------------
{width="84mm"}
\[Qhelium\]
The helium enrichment observed in recurrent novae (without enrichment by heavier isotopes), and in some classical novae, was mentioned in the past as an obstacle to the underlying convective overshoot mechanism ([@lt87]). Helium is the most abundant end product of the hydrogen burning in nova outbursts and it does not enhance the hydrogen burning in any way. In recurrent novae, helium may accumulate upon the surface of the white dwarf. If so, can the dredge-up mechanism lead to the observed helium enrichment? We examine this question using case m12ag in Table \[tab:models\]. Energetically, as expected, the model follows exactly the 1D model (Fig. \[Qhelium\]). The slow rise of the burning rate in this case makes the 2D simulation too expensive. To overcome this problem we artificially 'jump in time' to another helium model at a later stage of the runaway, in which the 1D temperature is $1.22\times10^8 {\rm K}$. The rise time of this model is much shorter, and by adjusting its time axis to that of the 1D model the two curves in Fig. \[Qhelium\] may be seen to coincide. The fluctuations of the 2D curve are absent in its 1D counterpart, both because the latter has no convection and because the 1D simulation is performed using an implicit algorithm with much larger time steps. The close agreement between the 2D and the 1D evolution increases our confidence in the validity of the 2D simulations.
The convective flow is indeed moderate but overshoot mixing is observed at a certain level (Fig. \[Xhelium\]). In the first (slower) phase (m12ag in Table \[tab:models\]) the level of mixing is small, converging to about $10\%$. In the later (faster) phase (m12dg in Table \[tab:models\]) the mixing rate increases in step with the increasing burning rate and the higher velocities in the convective cells. However, as the rates are still low relative to the C-O case, the added amount of mixing is again about $10\%$, summing up to a total amount of about $20\%$. Since we begin with matter of solar composition, the total He mass fraction at the end of the second phase exceeds $40 \%$.
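The helium budget quoted above can be checked with one line of arithmetic. The solar helium mass fraction used here ($\approx 0.28$) is an assumed round value, and the dredged-up matter is taken to be pure helium; both are assumptions for this sketch.

```python
# Hedged arithmetic check of the helium budget quoted above.  The solar
# helium mass fraction (0.28) is an assumed round value; the ~20% total
# dredge-up fraction comes from the text and is taken to be pure helium.
y_solar = 0.28
f_mix = 0.20
y_total = (1.0 - f_mix) * y_solar + f_mix * 1.0
print(round(y_total, 3))  # 0.424, i.e. above 40%
```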
The color maps of the absolute value of the velocity in the 2D models at different times along the development of the runaway are presented in Fig. \[fig:flow-he\]. In the first slower phase (model m12ag Table \[tab:models\]), the burning rate is low and grows mildly with time. The convective velocities are converging to a value of a few $ 10^{6} $ cm/sec and the cell size is only a bit larger than a scale height. In the second (faster) phase (model m12dg Table \[tab:models\]), the burning rate is somewhat higher and is seen to grow with time. The convective velocities are increasing with time up to a value of about $1.5\times10^{7} $ cm/sec. The convective cells in the radial direction converge with time to an extended structure of a few scale heights.
The ONe(Mg) underlying layer
----------------------------
The rate of proton capture by oxygen is much slower and less energetic than capture by carbon, but it still has an enhancing effect relative to the 1D model without mixing (Fig. \[Qoxygen\]). This can be well understood once we notice that for the initial temperature of the 2D model, i.e. $ 9.0\times 10^7 K$, the energy generation rate by proton capture on oxygen is more than three orders of magnitude lower than that by capture on carbon (Fig. \[fig:Qcapture\]). We can also observe that the energy generation rate for capture by oxygen stays much smaller than the energy generation rate for capture by carbon for the entire range of temperatures relevant to nova outbursts. For this range, we also notice that the capture rate by Ne is lower by an order of magnitude than the capture rate by O. Being interested only in the energetics, convective flow and mixing by dredge up, we choose to separate variables and study the case of a pure O underlying layer as a test case (ignoring any possible Mg that is predicted by evolutionary codes).
We computed the model for an integrated time of 300 seconds (about one million time steps). The trend is very clear and we can extrapolate and predict a runaway somewhat earlier and more energetic than the 1D case. Again, we could not continue this 2D simulation further due to the very low burning rates and the small hydrodynamical time step. As before, we computed three different phases, where each of them starts from a different 1D model along its evolution. The maximal 1D temperatures at the base of the burning shell, when mapped to the 2D grid, are: $ 1.05\times 10^8 K$, $ 1.125\times 10^8 K$ and $ 1.22\times 10^8 K$ respectively (Table \[tab:models\]). All three phases start with a transient related to the buildup of the convective flow. In Fig. \[Qoxygen\] we shifted the curves of burning rates in time in a way that permits a continuous line to be drawn. Along this continuous line, evolution proceeds faster towards a runaway than the 1D burning rate, also shown in Fig. \[Qoxygen\].
![Log of the total energy production rate for all the oxygen models compared to the 1D model and to the CO model. The models with initial temperature higher than the default temperature of $ 9\times10^7 K $ were shifted in time in order to produce a smooth continuous line.[]{data-label="Qoxygen"}](v-q-oxygen-new.eps){width="84mm"}
![Log of the energy generation rate for proton capture on: C, O, Ne, and Mg, the rate is calculated for $\rho=1000.0$ gr/cc.[]{data-label="fig:Qcapture"}](Qcapture.eps){width="84mm"}
We computed the last phase, with an initial base temperature of $ 1.22\times 10^8 K$ , for 350 sec until it reached a maximum. The maximal achieved temperature is $ 2.15\times 10^8 K$ and the maximal achieved burning rate is $ 82.4\times 10^{42} erg/sec $. As expected, this case lies somewhere between the 1D case and the C-O model. The convective flow resembles the C-O case, a significant feature being the strong correlation between the burning rate and the convective velocities. Most importantly for the case of an underlying oxygen layer, dredge up of substantial amounts of matter from the core into the envelope occurs in all our simulations. The trends are about the same as for the C-O models (Fig. \[Xoxygen\]). The correlation of the amount of mixing with the intensity of the burning is easily observable.
Approximate models for Mg underlying layer {#mgreac}
------------------------------------------
Nova outbursts on massive ONe(Mg) white dwarfs are expected to be energetic fast novae [@sst86; @Gehrz98; @Gehrz08; @Ili02]. The problem we face is whether, in the absence of the enhancement by C, the overshoot mixing mechanism can generate such energetic outbursts by mixing the solar abundance accreted matter with the underlying ONe(Mg) core. An examination of the energy generation rates of the proton capture reactions $(p,\gamma)$ on C, O, Ne, and Mg, shown in Fig. \[fig:Qcapture\], makes it evident that only Mg can compete with C in the range of temperatures relevant to nova runaways. Therefore, in spite of the fact that the abundance of Mg in the core sums up to only a few percent ([@GpGb01; @siess06]), the high capture rate might compensate for the low abundance and play an important role in the runaway. Furthermore, previous studies show that in the outer parts of the core, the parts important for our study, Mg is more abundant and can represent up to about $25 \%$ ([@berro94]). Restricting ourselves to the reaction network that includes only 15 elements, we assume, as a demonstration, an artificial case of a homogeneous underlying layer with only one isotope (Mg). For this homogeneous layer model we replaced the energy generation rate of proton capture by O with the values of proton capture by Mg (Fig. \[fig:Qcapture\]). To check our simplified network, we present in Fig. \[fig:Mg-rates\] the rates computed by a 216-element network and our modified 15-element rates, both for a mixture of $90\%$ solar matter with $10\%$ Mg. The difference is much smaller than the difference, within the big network, between this mixture and a mixture of $90\%$ solar matter with $10\%$ C-O core matter. Therefore our simplified network is a good approximation with regard to energy production rates.
![Log of the total energy production rate at $\rho=1000.0$ gr/cc for a mixture that contains $90\%$ solar matter mixed with $10\%$ Mg. Red: the 15-element net used for the 2D model; Blue: the rates given by a full net of 216 elements. The black line gives the rates for $90\%$ solar matter mixed with $10\%$ CO core matter (see text). []{data-label="fig:Mg-rates"}](Mg-rates.eps){width="84mm"}
The crossing of the curves in Fig. \[fig:Mg-rates\] reveals a striking and very important feature: at temperatures below $1.3\times10^{8}$K a mixture of $10\%$ carbon and $90\%$ solar composition burns roughly ten times faster than a mixture of $10\%$ magnesium with the same solar composition gas. Above that temperature the rates exchange places and magnesium enhancement dominates C-O enhancement. This emphasizes the importance of a proper treatment of the effects of the $^{24}$Mg abundance in explosive burning on ONe(Mg) white dwarfs. In a future work, we intend to study more realistic models with the inclusion of a detailed reaction network.
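A crossover temperature of this kind can be extracted numerically from two tabulated rate curves. The sketch below uses synthetic power-law stand-ins for the two rates (not real nuclear data), so only the method carries over: bracket the sign change of the log-rate difference and interpolate.

```python
import numpy as np

# Hedged sketch of locating the crossover temperature at which one
# tabulated burning-rate curve overtakes another.  The two power-law
# rate tables below are synthetic stand-ins, not real nuclear rates.
def crossover_temperature(T, rate_a, rate_b):
    """Interpolate log-rates and find where their difference changes sign."""
    diff = np.log10(rate_a) - np.log10(rate_b)
    i = np.where(np.diff(np.sign(diff)) != 0)[0][0]   # bracketing index
    t0, t1, d0, d1 = T[i], T[i + 1], diff[i], diff[i + 1]
    return t0 - d0 * (t1 - t0) / (d1 - d0)            # linear interpolation

T = np.linspace(0.9e8, 2.0e8, 12)                     # K
rate_a = 1.0e40 * (T / 1.0e8) ** 10                   # shallower synthetic curve
rate_b = 1.0e38 * (T / 1.0e8) ** 30                   # steeper synthetic curve
print(crossover_temperature(T, rate_a, rate_b))       # near 1.26e8 K
```

For these synthetic curves the exact crossing is at $T = 10^{0.1}\times10^{8}\,$K; the tabulated estimate lands within a fraction of a percent of it.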
In Fig. \[Qmagnesium\] we present the total burning rates in our toy model, together with the rates of previous models.
As can be expected, the enhancement of the burning in this toy model, relative to the 1D model, is indeed observed. However, the development of the runaway, although faster than in the underlying O model, is still much slower than the rise time of the C-O model. This result is easily understood via the discussion above, as the initial burning temperature in the model is only $9\times10^7 {\rm K}$. At that temperature, the energy generation rate for proton capture on Mg is lower by almost three orders of magnitude relative to that for proton capture on C. The rates are about the same when the temperature is $1.3\times10^8 {\rm K}$, and from there on the magnesium capture rate is much higher. In order to demonstrate that this is indeed the case we calculated another 2D magnesium model in which the initial maximal 1D temperature was $1.125\times10^8 {\rm K}$. The rise time of this model is very short, even shorter than the rise time of the C-O model. One should regard these two simulations as two phases of one process: slow and fast. The maximal achieved temperature is $ 2.45\times 10^8$K and the maximal achieved burning rate is $ 1000.0\times 10^{42} erg/sec $, similar to the C-O case.
![Log of the total energy production rate for all the magnesium models compared to the CO model, the 1D model, and the oxygen model. The model with initial temperature higher than the default temperature of $ 9\times10^7 K $ is not shifted in time (see text). []{data-label="Qmagnesium"}](v-q-magnesium.eps){width="84mm"}
In order to better understand the convective flow for the case with underlying Mg we generated color maps of the absolute value of the velocity (speed) in the 2D models at different times along the development of the runaway (Fig. \[fig:flow-mg\]). The two Mg cases show extremely different behavior. In the first case (model m12kk Table \[tab:models\]), the burning rate is low and it grows mildly with time. The convective velocities are converging to a value of a few $ 10^{6} $ cm/sec and the cell size is only a bit bigger than a scale height. In the second case (model m12jj Table \[tab:models\]), the burning rate is high and it grows rapidly with time. The convective velocities are increasing with time up to a value of a few $10^{7} $ cm/sec. The convective cells in the radial direction converge to a structure of a few scale heights.
In accordance with our previous cases, the magnesium toy model dredges up substantial amounts of matter from the core to the envelope. There is a one-to-one correlation with the convective velocities. The amount of mixing in the slow initial stages (model m12kk) is small and tends to converge to a few percent. The amount of mixing in the late fast stages grows rapidly with time (Fig. \[Xmagnesium\]). We present here only the general trend; detailed results will be presented in a forthcoming study.
Conclusions {#conclu}
===========
We present here, for the first time, detailed 2D modeling of nova eruptions for a range of possible compositions beneath the accreted hydrogen layer. The main conclusion to be drawn from this study is that **significant enrichment (around $30 \%$) of the ejected layer, by the convective dredge-up mechanism, is a common feature of the entire set of models, regardless of the composition of the accreting white dwarf**. On the other hand, the burning rates and therefore the time scales of the runaway depend strongly on the composition of the underlying layers. There is also a one-to-one correlation between the burning rate, the velocities in the convective flow, and the amount of temporal mixing. Therefore, second-order differences in the final enrichment are expected to depend on the underlying composition. Specific results for each case are as follows:
a\) Since the energy generation rate for the capture of protons by C is high for the entire temperature range prevailing both in the ignition of the runaway and during the runaway itself, the underlying carbon layer accelerates the ignition and gives rise to C-O enrichment in the range of the observed amounts.
b\) For the densities and temperatures prevailing in nova outbursts helium is an inert isotope. Therefore, it does not play any role in the enhancement of the runaway. Nevertheless, we demonstrate that once the bottom of the envelope is convective, the shear flow induces substantial amounts of mixing with the underlying helium. The eruption in those cases is milder, with a lower burning rate. For recurrent novae, where the timescales are too short for the diffusion process to play a significant role, the observed helium enrichment favors the underlying convection mechanism as the dominant mixing mechanism. Future work dealing with more realistic core masses (1.35-1.4 solar masses) for recurrent novae will give better quantitative predictions that will enable us to confront our results with observational data.
c\) The energy generation rate for the capture of a proton by O is much lower than that for capture by C over the entire temperature range prevailing in the ignition of the runaway and during the runaway itself. Underlying oxygen, whenever it is present, is thus expected to make only a minor contribution to the enhancement of the runaway. As a result, the time scale of the runaway in this case is much longer than that of the C-O case. Still, the final enrichment of the ejecta is above $40 \%$ (Fig. \[Xoxygen\]). The energy generation rate for the capture of a proton by Ne is even lower than that for capture by O. We therefore expect Ne again to make only a minor contribution to the enhancement of the runaway, but with substantial mixing.
d\) Nova outbursts on massive ONe(Mg) white dwarfs are expected to be energetic fast novae. In this survey we show that, for the range of temperatures relevant to the nova runaway, the only isotope that can compete with C as a source of burning enhancement by overshoot mixing is Mg. From our demonstrative toy model, we can speculate that even small amounts of Mg present at the hotter stages of the runaway can substantially enhance the burning rate, leading to a faster runaway with a significant amount of mixing. The relationship between the amount of Mg in the ONe(Mg) core, the steepness of the runaway, and the amount of mixing in this case is left to future studies.
Acknowledgments
===============
We thank the referee for his comments which helped us in clarifying our arguments in the revised version of the paper. Ami Glasner wants to thank the Department of Astronomy and Astrophysics at the University of Chicago for the kind hospitality during his visit to Chicago, where part of this work was done. This work is supported in part at the University of Chicago by the National Science Foundation under Grant PHY 02-16783 for the Frontier Center “Joint Institute for Nuclear Astrophysics” (JINA), and in part at the Argonne National Laboratory by the U.S. Department of Energy, Office of Nuclear Physics, under contract DE-AC02-06CH11357.
Alexakis,A., Young,Y.-N. and Rosner,R. 2002,[*Phys.Rev.E*]{} ,65,026313.
Alexakis,A., Calder,A.C., Heger,A., Brown,E.F., Dursi,L.J., Truran,J.W., Rosner,R., Lamb,D.Q., Timmes,F.X., Fryxell,B., Zingale,M., Ricker,P.M., & Olson,K. 2004, ApJ, 602, 931
Anders,E. & Grevesse,N. 1989, Geochim. Cosmochim. Acta,53,197
Anupama,G.C. & Dewagan,G.C. 2000, ApJ, 119,1359
Calder,A.C., Alexakis,A., Dursi,L.J., Rosner,R., Truran,J.W., Fryxell,B., Ricker,P., Zingale,M., Olson,K., Timmes,F.X. & MacNeice 2002, in [*Classical Nova Explosions*]{}, ed. M. Hernanz & J. Jose (AIP Conference Proceedings; Melville, New York), p.134.
Casanova, J.,Jose, J.,Garcia-Berro,E.,Calder,A. & Shore,S.N. 2010, A&A,513,L5
Casanova, J.,Jose, J.,Garcia-Berro,E.,Calder,A. & Shore,S.N. 2011a, A&A,527,A5
Casanova, J., Jose, J., Garcia-Berro,E., Shore,S.N. & Calder,A. 2011b, Nature, 478, 490
Diaz,M.,P., Williams,R.,E., Luna,G.,J., Moraes,M. & Takeda, L. 2010, ApJ, 140,1860
Domingues,I., Staniero,O., Isern,J., & Tornambe,A. 2002, in [*Classical Nova Explosions*]{}, ed. M. Hernanz & J. Jose (AIP Conference Proceedings; Melville, New York), p.57.
Durisen,R.,H. 1977, ApJ, 213,145
Eggleton,P.P. 1971, , 151, 351
Fujimoto,M.Y. 1988, A&A, 198,163.
Fujimoto,M.Y.,Iben, I.Jr. 1992, ApJ, 399,646
Garcia-Berro,E. & Iben,I. 1994, ApJ,434,306
Garcia-Berro,E., Gil-Pons,P., & Truran,J.W. 2002, in [*Classical Nova Explosions*]{}, ed. M. Hernanz & J. Jose (AIP Conference Proceedings; Melville, New York), p.62.
Gehrz, R.D.,Truran,J.W., Williams,R.E. & Starrfield, S. 1998,PASP,110,3
Gehrz, R.D.,Woodward,C.E., Helton, L.A.,Polomski,E.F., Hayward,T.L., Houck,J.R.,Evans,A., Krauter, J.,Shore, S.N., Starrfield, S., Truran,J.W.,Schwarz, G.J. & Wagner, R.M. 2008, ApJ, 672,1167
Gil-Pons,P. & Garcia-berro,E. 2001, A&A, 375,87.
Glasner,S.A. & Livne,E. 1995, , 445,L149
Glasner,S.A., Livne,E., & Truran,J.W. 1997, ApJ, 475,754
Glasner,S.A., Livne,E., & Truran,J.W. 2005,, 625,347
Glasner,S.A., Livne,E., & Truran,J.W. 2007,, 665,1321
Hernanz,M. & Jose,J. 2008, New Astronomy Reviews,52,386.
Iben, I.Jr., Fujimoto,M.Y., & MacDonald, J. 1991, ApJ, 375,L27
Iben, I.Jr., Fujimoto,M.Y., & MacDonald, J. 1992, ApJ, 388,521
Iliadis,C., Champagne, A.,Jose, J.,Starrfield, S., & Tupper,P.. 2002, ApJS, 142,105
Kercek,A., Hillebrandt,W. & Truran,J.W. 1998, , 337, 379
Kercek,A., Hillebrandt,W. & Truran,J.W. 1999, , 345, 831
Kippenhahn,R., & Thomas,H.-C., 1978, A&A, 63,625
Kovetz,A., & Prialnik, D. 1985 ApJ, 291,812.
Kutter,G.S. & Sparks,W.M. 1987, ApJ, 321, 386
Kutter,G.S. & Sparks,W.M. 1989, ApJ, 340, 985
Livio, M. & Truran, J. W. 1987, ApJ, 318, 316
Livne, E. 1993, ApJ, 412, 634
MacDonald, J. 1983, ApJ, 273, 289
Meakin,C.A., & Arnett,D. 2007, ApJ, 667, 448
Prialnik, D. & Kovetz,A. 1984 ApJ, 281,367
Rosner,R., Alexakis,A., ,Young,Y.-N.,Truran,J.W., and Hillebrandt,W. 2001, ApJ, 562,L177.
Shankar,A., Arnett,D., & Fryxell,B.A. 1992, ApJ, 394,L13
Shankar,A. & Arnett,D. 1994, ApJ, 433,216
Shore,S.,N. 1992, [*Astrophysical Hydrodynamics*]{}, (Wiley:Darmstadt).
Siess, L. 2006, , 448, 231
Sparks,W.M. & Kutter,G.S. 1987, ApJ, 321, 394
Starrfield,S. Sparks,W.M. & Truran,J.W., 1986, , 303,L5
Truran,J.W. & Livio,M. 1986, ApJ, 308, 721.
Truran, J. W. 1990, in [*Physics of Classical Novae*]{}, ed. A. Cassatella & R. Viotti (Berlin: Springer), 373
Webbink,R.,F., Livio,M., Truran,J.W. & Orio,M. 1987, ApJ, 314,653
Woosley,S.E. 1986 [*Nucleosynthesis and Chemical Evolution*]{}, ed. B.Hauck, A.Maeder & G.Magnet, (Geneva Observatory. Sauverny Switzerland)
\[lastpage\]
---
abstract: 'Multiplicity distributions of charged particles for pp collisions at LHC Run 1 energies, from $\sqrt{s}=$ 0.9 to 8 TeV are measured over a wide pseudorapidity range ($-3.4<\eta<5.0$) for the first time. The results are obtained using the Forward Multiplicity Detector and the Silicon Pixel Detector within ALICE. The results are compared to Monte Carlo simulations, and to the IP-Glasma model.'
address: 'Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark'
author:
- 'Valentina Zaccolo, on behalf of the ALICE Collaboration'
bibliography:
- 'biblio\_QM.bib'
title: 'Charged-Particle Multiplicity Distributions over a Wide Pseudorapidity Range in Proton-Proton Collisions with ALICE'
---
charged-particle multiplicity distributions ,pp collisions ,saturation models ,forward rapidity
Introduction {#Intro}
============
The multiplicity distribution of charged particles ($ N_{\text{ch}}$) produced in high energy pp collisions, $\text{P}(N_{\text{ch}})$, is sensitive to the number of collisions between quarks and gluons contained in the colliding protons and, in general, to the mechanisms underlying particle production. In particular, $\text{P}(N_{\text{ch}})$ is a good probe for the saturation density of the gluon distribution in the colliding hadrons. The pp charged-particle multiplicity distributions are measured for five gradually larger pseudorapidity ranges. The full description of the ALICE detector is given in [@Aamodt:2008zz]. In this analysis, only three subdetectors are used, namely, the V0 detector [@Abbas:2013taa], the Silicon Pixel Detector (SPD) [@Aamodt:2008zz] and the Forward Multiplicity Detector (FMD) [@Christensen:2007yc] to achieve the maximum possible pseudorapidity coverage ($-3.4<\eta<5.0$).
Analysis Procedure {#Analysis}
==================
Three different collision energies (0.9, 7, and 8 TeV) are analyzed here. Pile-up events produce artificially large multiplicities that enhance the tail of the multiplicity distribution; therefore, special care was taken to avoid runs with high pile-up. For the measurements presented here, the pile-up probability is $\sim2\%$. The fast timing of the V0 and SPD is used to select events in which an interaction occurred, and events are divided into two trigger classes. The first class includes all inelastic events (INEL), selected with the same condition used to determine that an interaction occurred (this is called the MB$_{\text{OR}}$ trigger condition). The second class of events requires a particle to be detected in both the V0A and the V0C (MB$_{\text{AND}}$ trigger condition). This class is called the Non-Single-Diffractive (NSD) event class, in which the majority of single-diffractive events are removed.
The FMD has nearly 100$\%$ azimuthal acceptance, but the SPD has significant dead regions that must be accounted for. On the other hand, interactions in detector material increase the detected number of charged particles and have to be taken into account. The main ingredients necessary to evaluate the primary multiplicity distributions are the raw (detected) multiplicity distributions and a matrix which converts the raw distribution to the true primary one. The raw multiplicity distributions are determined by counting the number of clusters in the SPD acceptance, the number of energy loss signals in the FMD [@Abbas:2013bpa], or the average of the two where the acceptances of the SPD and FMD overlap. The response of the detector is described by the matrix $R_{\text{mt}}$ which, when normalized, is the probability that an event with true multiplicity t is measured with multiplicity m. This matrix is obtained using Monte Carlo simulations, in this case the PYTHIA ATLAS-CSC flat tune [@d'Enterria:2011kw], where the generated particles are propagated through the detector simulation code (in this case GEANT [@Brun:1994aa]) and then through the same reconstruction steps as the actual data. The unfolded distribution is obtained from an iterative application of Bayes’ unfolding [@2010arXiv1010.0632D].
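As an illustration of the unfolding step, the following is a minimal sketch of iterative Bayesian (D'Agostini-style) unfolding on a toy 5-bin problem. The response matrix, truth vector, and iteration count are all invented for the example; the actual ALICE response matrices and regularization choices are not reproduced here.

```python
import numpy as np

def bayes_unfold(measured, response, iterations=10):
    """Iterative Bayesian unfolding.

    response[m, t]: probability of measuring multiplicity m given
    true multiplicity t (column sums give the efficiency).
    """
    n_true = response.shape[1]
    prior = np.full(n_true, 1.0 / n_true)       # flat starting prior
    efficiency = response.sum(axis=0)
    for _ in range(iterations):
        joint = response * prior                 # proportional to P(m, t)
        posterior = joint / joint.sum(axis=1, keepdims=True)  # P(t | m)
        unfolded = (measured @ posterior) / efficiency
        prior = unfolded / unfolded.sum()        # updated prior
    return unfolded

# Toy closure test: smear a known "truth" with a tridiagonal response
# (80% diagonal, 10% spill to each neighbor) and unfold the result.
n = 5
response = 0.8 * np.eye(n) + 0.1 * np.eye(n, k=1) + 0.1 * np.eye(n, k=-1)
response[0, 0] = response[-1, -1] = 0.9          # keep columns normalized
truth = np.array([100.0, 80.0, 60.0, 40.0, 20.0])
measured = response @ truth
unfolded = bayes_unfold(measured, response, iterations=20)
print(np.round(unfolded, 1))
```

For a fully efficient response the procedure conserves the total number of events at every iteration, and on noise-free toy input it converges toward the true spectrum.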
[0.39]{} ![Charged-particle multiplicity distributions for NSD pp collisions at $\sqrt{s}=0.9$ and 8 TeV. The lines show fits to the data using double NBDs (eq. \[eq2\]). Ratios of the data to the fits are also shown.[]{data-label="V0AND900"}](2015-Sep-21-results_0_1.pdf "fig:"){width="\textwidth"}
[0.39]{} ![Charged-particle multiplicity distributions for NSD pp collisions at $\sqrt{s}=0.9$ and 8 TeV. The lines show fits to the data using double NBDs (eq. \[eq2\]). Ratios of the data to the fits are also shown.[]{data-label="V0AND900"}](2015-Sep-21-results_2_1.pdf "fig:"){width="\textwidth"}
The probability that an event is triggered at all depends on the multiplicity of produced charged particles. At low multiplicities large trigger inefficiencies exist and must be corrected for. The event selection efficiency, $\epsilon_{\text{TRIG}}$, is defined by dividing the number of reconstructed events satisfying the selected hardware trigger condition, with a reconstructed vertex less than 4 cm from the nominal IP, by the same quantity for the true interaction classification: $\epsilon_{\text{TRIG}}=N_{\text{ch,reco}}/N_{\text{ch,gen}}$. The unfolded distribution is corrected for the vertex and trigger inefficiency by dividing each multiplicity bin by its $\epsilon_{\text{TRIG}}$ value. Diffraction was implemented using the Kaidalov-Poghosyan model [@Kaidalov:2009aw] to tune the cross sections for diffractive processes (the measured diffractive cross sections at the LHC and the shapes of the diffractive masses $M_{\text{X}}$ are implemented in the Monte Carlo models used for the $\epsilon_{\text{TRIG}}$ computation).
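The bin-by-bin correction described above amounts to a single division per multiplicity bin. In this sketch the efficiency curve is an assumed illustrative parametrization (rising toward unity at high multiplicity), not the measured ALICE efficiency.

```python
import numpy as np

# Per-bin correction for the event-selection inefficiency: divide each
# unfolded multiplicity bin by its efficiency.  The efficiency shape
# below is an assumption for illustration, not the ALICE values.
def correct_for_trigger(unfolded, efficiency):
    return unfolded / efficiency

n_ch = np.arange(1, 11)
efficiency = 1.0 - 0.5 * np.exp(-n_ch)       # assumed: lowest at low N_ch
unfolded = 1000.0 * np.exp(-0.3 * n_ch)      # toy unfolded spectrum
corrected = correct_for_trigger(unfolded, efficiency)
# The correction is largest in the first bins, as stated in the text.
print(np.round(corrected / unfolded, 3))
```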
Results {#Results}
=======
The multiplicity distributions have been measured for the two event classes (INEL and NSD) for pp collisions at $\sqrt{s}=$ 0.9, 7, and 8 TeV. Fits to the sum of two Negative Binomial Distributions (NBDs) have been performed and are plotted together with the results in Figs. \[V0AND900\] and \[V0AND7000\]. The distributions have been fitted using the function $$\label{eq2}
\text{P}(n)=\lambda[\alpha \text{P}_{NBD}(n,\langle n\rangle_{\text{1}},k_{\text{1}})+(1-\alpha)\text{P}_{NBD}(n,\langle n\rangle_{\text{2}},k_{\text{2}})].$$ To account for the NBDs not describing the 0-bin and, for the wider rapidities, the first bins (which are therefore removed from the fit), a normalization factor $\lambda$ is introduced. The $\alpha$ parameter gives the fraction of soft events. It is lower for higher energies and for wider pseudorapidity ranges, where the percentage of semi-hard events included is higher: $\alpha\backsim65\%$ for $\vert\eta\vert<2.0$ at $\sqrt{s}=$ 0.9 TeV and $\alpha\backsim35\%$ for $-3.4<\eta<5.0$ at 7 and 8 TeV. $\langle n\rangle_{\text{1}}$ is the average multiplicity of the soft (first) component, while $\langle n\rangle_{\text{2}}$ is the average for the semi-hard (second) component. The parameters $k_{\text{1,2}}$ describe the shape of the two components of the distribution.
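For concreteness, the double-NBD form of eq. (\[eq2\]) can be written down directly. Each NBD component is parametrized by its mean $\langle n\rangle$ and shape $k$; the parameter values in the example below are illustrative placeholders, not the fitted ALICE values.

```python
import math

# Double-NBD parametrization of eq. (2).  Parameter values below are
# illustrative placeholders, not fitted values.
def nbd(n, n_mean, k):
    """Negative binomial P(n) with mean n_mean and shape parameter k."""
    log_comb = math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
    return (math.exp(log_comb)
            * (n_mean / (n_mean + k)) ** n
            * (k / (n_mean + k)) ** k)

def double_nbd(n, lam, alpha, n1, k1, n2, k2):
    return lam * (alpha * nbd(n, n1, k1) + (1.0 - alpha) * nbd(n, n2, k2))

# With lam = 1 the mixture is normalized, and its mean is
# alpha*n1 + (1-alpha)*n2.
total = sum(double_nbd(n, 1.0, 0.65, 8.0, 1.5, 25.0, 2.5) for n in range(2000))
print(round(total, 6))  # 1.0
```

In a fit, $\lambda$ absorbs the probability of the bins excluded from the fit range, which is why it is left free rather than fixed to unity.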
In Fig. \[V0AND900\], the obtained multiplicity distributions at 0.9 TeV and 8 TeV for the NSD event class are shown for five pseudorapidity ranges, $\vert\eta\vert<2.0$, $\vert\eta\vert<2.4$, $\vert\eta\vert<3.0$, $\vert\eta\vert<3.4$ and $-3.4<\eta<5.0$. The distributions are multiplied by factors of 10 to allow all distributions to fit in the same figure without overlapping. Figure \[V0AND7000\] shows the results for the INEL event class for collisions at 7 TeV (left plot). Comparisons with distributions obtained with the PYTHIA 6 Perugia 0 tune [@Skands:2010ak], PYTHIA 8 Monash tune [@Sjostrand:2014zea], PHOJET [@Bopp:1998rc] and EPOS LHC [@Pierog:2013ria] Monte Carlo generators are shown for INEL events at 7 TeV (right plot). Both PHOJET and PYTHIA 6 strongly underestimate the multiplicity distributions. PYTHIA 8 reproduces the tails well for the wider pseudorapidity ranges, but shows an enhancement in the peak region. EPOS with the LHC tune models the distributions well, both in the first bins, which are dominated by diffractive events, and in the tails.
[0.39]{} ![Left: charged-particle multiplicity distributions for INEL pp collisions at $\sqrt{s}=7$ TeV. The lines show fits to the data using double NBDs (eq. \[eq2\]). Ratios of the data to the fits are also shown. Right: comparison of multiplicity distributions for INEL events to PYTHIA 6 Perugia 0, PYTHIA 8 Monash, PHOJET and EPOS LHC at 7 TeV.[]{data-label="V0AND7000"}](2015-Sep-21-results_1_0.pdf "fig:"){width="\textwidth"}
[0.39]{} ![Left: charged-particle multiplicity distributions for INEL pp collisions at $\sqrt{s}=7$ TeV. The lines show fits to the data using double NBDs (eq. \[eq2\]). Ratios of the data to the fits are also shown. Right: comparison of multiplicity distributions for INEL events to PYTHIA 6 Perugia 0, PYTHIA 8 Monash, PHOJET and EPOS LHC at 7 TeV.[]{data-label="V0AND7000"}](2015-Sep-21-MCresults_1_0.pdf "fig:"){width="\textwidth"}
The multiplicity distributions are compared to those from the IP-Glasma model [@Schenke:2013dpa]. This model is based on the Color Glass Condensate (CGC) [@Iancu:2003xm], and it has been shown that NBDs are generated within the CGC framework [@Gelis:2009wh; @McLerran:2008es]. In Fig. \[IPGlasmaMineNSD7000\], the distribution for $\vert\eta\vert<2.0$ is shown together with the IP-Glasma model distributions as a function of the KNO variable $N_{\text{ch}}/\langle N_{\text{ch}}\rangle$. The IP-Glasma distribution shown in green is generated with a fixed ratio between $Q_{s}$ (the gluon saturation scale) and the density of color charge, which introduces no fluctuations. The blue distribution, instead, is generated with fluctuations of the color charge density around the mean, following a Gaussian distribution with width $\sigma=0.09$. The black distribution includes an additional source of fluctuations, dominantly of non-perturbative origin, from the stochastic splitting of dipoles, which is not accounted for in the conventional frameworks of the CGC [@McLerran:2015qxa]. In this model, the evolution of color charges in the rapidity direction still needs to be implemented and, therefore, in the present model the low multiplicity bins are not reproduced for the wide pseudorapidity range presented here.
![Charged-particle multiplicity distributions for pp collisions at $\sqrt{s}=7$ TeV compared to distributions from the IP–Glasma model with the ratio between $Q_{s}$ and the color charge density either fixed (green), allowed to fluctuate with a Gaussian (blue) [@Schenke:2013dpa] or with additional fluctuations of proton saturation scale (black) [@McLerran:2015qxa].[]{data-label="IPGlasmaMineNSD7000"}](2015-Sep-21-comparison.pdf){width="79.00000%"}
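For reference, the negative binomial distribution and the double-NBD form used for the fits of Fig. \[V0AND7000\] (eq. \[eq2\]) can be sketched as below. This is a minimal illustration, not the analysis code; the parameter values are hypothetical, not the fitted ones.

```python
from math import exp, lgamma, log

def nbd(n, n_mean, k):
    """Negative binomial P(n) with mean n_mean and shape k (log form avoids overflow)."""
    r = n_mean / k
    return exp(lgamma(n + k) - lgamma(k) - lgamma(n + 1)
               + n * log(r / (1.0 + r)) - k * log(1.0 + r))

def double_nbd(n, w, n1, k1, n2, k2):
    """Weighted sum of two NBDs, the functional form fitted to P(N_ch)."""
    return w * nbd(n, n1, k1) + (1.0 - w) * nbd(n, n2, k2)

# each component is normalized over n = 0, 1, 2, ..., so the weighted sum is too
norm = sum(double_nbd(n, 0.7, 8.0, 1.2, 30.0, 2.0) for n in range(2000))
```

In a fit, the weight $w$ and the two $(\langle n\rangle, k)$ pairs are the free parameters, with the second component accounting for the high-multiplicity tail.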
Conclusions {#Conclusions}
===========
Data from the Silicon Pixel Detector (SPD) and the Forward Multiplicity Detector (FMD) in ALICE were used to access a uniquely wide pseudorapidity coverage at the LHC of more than eight $\eta$ units, $-3.4<\eta<5.0$. The charged-particle multiplicity distributions were presented for two event classes, INEL and NSD. They extend the pseudorapidity coverage of the earlier results published by ALICE [@Adam:2015gka] and CMS [@Khachatryan:2010nk] around midrapidity and, consequently, the high-multiplicity reach. PYTHIA 6 and PHOJET produce distributions which strongly underestimate the fraction of high multiplicity events. PYTHIA 8 slightly underestimates the tails of the distributions, while EPOS reproduces both the low and the high multiplicity events. The Color Glass Condensate based IP–Glasma models produce distributions which underestimate the fraction of high multiplicity events, but introducing fluctuations of the saturation momentum improves the description of the high multiplicity events.
---
abstract: 'The discovery of dynamic memory effects in the magnetization decays of spin glasses in 1983 marked a turning point in the study of the highly disordered spin glass state. Detailed studies of the memory effects have led to much progress in understanding the qualitative features of the phase space. Even so, the exact nature of the magnetization decay functions has remained elusive, causing confusion. In this letter, we report strong evidence that the Thermoremanent Magnetization (TRM) decays scale with the waiting time, $t_{w}$. By employing a series of cooling protocols, we demonstrate that the rate at which the sample is cooled to the measuring temperature plays a major role in the determination of scaling. As the effective cooling time, $t_{c}^{eff}$, decreases, $\frac {t}{t_{w}}$ scaling improves, and for $t_{c}^{eff}<20s$ we find almost perfect $\frac{t}{t_{w}}$ scaling, i.e., full aging.'
author:
- 'G. F. Rodriguez'
- 'G. G. Kenning'
- 'R. Orbach'
title: Full Aging in Spin Glasses
---
Since the discovery of aging effects in spin glasses approximately twenty years ago[@Cham83; @Lund83], much effort has gone into determining the exact time dependence of the memory decay functions. In particular, memory effects show up in the Thermoremanent Magnetization (TRM) (or complementary Zero-Field Cooled (ZFC) magnetization) where the sample is cooled through its spin glass transition temperature in a small magnetic field (zero field) and held in that particular field and temperature configuration for a waiting time, $t_{w}$. At time $t_{w}$, a change in the magnetic field produces a very long time decay in the magnetization. The decay is dependent on the waiting time. Hence, the system has a memory of the time it spent in the magnetic field. A rather persuasive argument[@Bou92] suggests that for systems with infinite equilibration times, the decays must scale with the only relevant time scale in the experiment, $t_{w}$. This would imply that plotting the magnetization on a $t/t_{w}$ axis would collapse the different waiting time curves onto each other. This effect has not been observed.
What has been observed[@Alba86] is that the experimentally determined magnetization decays scale with a modified waiting time, $(t_{w})^\mu$, where $\mu$ is a fitting parameter. For $\mu<1$ the system is said to have subaged, $\mu>1$ is called superaging, and $\mu=1$ corresponds to full aging. For TRM experiments a $\mu$ of approximately 0.9 is found for different types of spin glasses[@Alba86; @Ocio85] over a wide range of reduced temperatures, indicating subaging. At very low temperatures and at temperatures approaching the transition temperature, $\mu$ is observed to decrease from the usual 0.9 value. Superaging has been observed in Monte Carlo simulations of spin glasses[@Sibani]. This has led to confusion as to the exact nature of the scaling.
Zotev et al.[@Zov02] have suggested that the departures from full $\frac{t}{t_{w}}$ scaling observed in aging experiments are mainly due to cooling effects. In a real experimental environment, the situation is complicated by the time it takes for the sample to cool to its measuring temperature. An effect of the rate at which the sample temperature approaches the measuring temperature has been known for some time[@Nord87; @Nord00]. This effect is not trivial: it does not simply contribute a constant time to $t_{w}$.
Another possible explanation for the deviation from full aging comes from the widely held belief that the magnetization decay is an additive combination of a stationary term, $M_{Stat} = A(\tau_{0}/t)^{\alpha}$, and an aging term, $M = f(\frac{t}{t_{w}})$ [@Cou95; @Bou95; @Vin96]. Subtraction of a stationary term, where $\tau_{0}$ is a microscopic spin flipping time, $A$ is a dimensionless constant and $\alpha$ is a parameter determined from $\chi''$ measurements, was shown to increase $\mu$ from 0.9 to 0.97[@Vin96].
In this letter we analyze the effects of the cooling time through a series of different cooling protocols, and we present the first clear and unambiguous experimental evidence that the TRM decays scale as $\frac{t}{t_{w}}$ (i.e., full aging).
Three different methods have been regularly employed to understand the scaling of the TRM decays. The first and simplest is to scale the time axis of the magnetization decay with the time the sample has spent in a magnetic field (i.e., $\frac{t}{t_{w}}$)[@Bou92]. If the decays scale as a function of waiting time, the decay curves would be expected to overlap. This has not yet been observed.
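This first method can be illustrated with a toy decay that, by construction, depends only on $t/t_{w}$; the stretched-exponential form below is hypothetical, chosen purely to show what a perfect collapse looks like.

```python
import numpy as np

def toy_decay(t, t_w, beta=0.3):
    """Hypothetical decay depending only on t/t_w, i.e., exhibiting full aging."""
    return np.exp(-(t / t_w) ** beta)

x = np.logspace(-2, 1, 50)               # common t/t_w grid
m_short = toy_decay(x * 100.0, 100.0)    # t_w = 100 s
m_long = toy_decay(x * 1.0e4, 1.0e4)     # t_w = 10,000 s
spread = float(np.max(np.abs(m_short - m_long)))  # vanishes for full aging
```

Real TRM curves plotted this way retain a waiting-time-dependent spread unless the cooling time is short compared to $t_{w}$, which is the central experimental point of this letter.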
A second more sophisticated method was initially developed by Struik[@Stu79] for scaling the dynamic mechanical response in glassy polymers and first applied to spin glasses by Ocio et al.[@Ocio85]. This method plots the log of the reduced magnetization $M/M_{fc}$ ($M_{fc}$ is the field cooled magnetization), against an effective waiting time $\xi=\frac{\lambda}{t_{w}^{\mu}}$ where
$$\begin{aligned}
\lambda = \frac{t_{w}}{1-\mu}[(1+\frac{t}{t_{w}})^{1-\mu} -
1];~~~~\mu < 1 \label{eq:one}\end{aligned}$$
or $$\begin{aligned}
\lambda = t_{w} \log[1+\frac{t}{t_{w}}];~~~~~~~~~~~~~~~~\mu = 1
\label{eq:two}\end{aligned}$$ A value of $\mu$ = 1 would correspond to perfect $t/t_{w}$ scaling. Previous values of $\mu$ obtained on the decays using this method have varied from .7 to .94[@Alba86] for a temperature range $.2<T_{r}<.95$. A value of $\mu<1$ is called subaging.
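The effective time of Eqs. \[eq:one\] and \[eq:two\] is easy to evaluate numerically; note that Eq. \[eq:one\] reduces continuously to Eq. \[eq:two\] as $\mu \to 1$. The sketch below is for illustration only and is not the analysis code used for the data.

```python
import numpy as np

def effective_time(t, t_w, mu):
    """lambda of Eqs. (1)-(2); the scaling variable is xi = lambda / t_w**mu."""
    if mu == 1.0:
        return t_w * np.log1p(t / t_w)                                   # Eq. (2)
    return t_w / (1.0 - mu) * ((1.0 + t / t_w) ** (1.0 - mu) - 1.0)      # Eq. (1)

t, t_w = 5000.0, 1000.0
lam_sub = effective_time(t, t_w, 0.9)          # subaging, mu < 1
lam_full = effective_time(t, t_w, 1.0)         # full aging, mu = 1
lam_near = effective_time(t, t_w, 1.0 - 1e-8)  # Eq. (1) approaching Eq. (2)
```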
Finally a peak in the function S(t)
$$\begin{aligned}
S(t)=-\frac{1}{H}\frac{dM(t)}{d[Log_{10}(t)]} \label{eq:three}\end{aligned}$$
as a function of time has been shown to be an approximately linear function of the waiting time[@Lund83]. This peak occurs at a time slightly larger than the waiting time, again suggesting possible subaging.
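Numerically, $S(t)$ of Eq. \[eq:three\] can be evaluated from a measured decay by finite differences on a logarithmic time grid. The decay function below is hypothetical, chosen only so that the peak of $S(t)$ falls near $t_{w}$.

```python
import numpy as np

def relaxation_rate(t, m, h=1.0):
    """S(t) = -(1/H) dM/d(log10 t), Eq. (3), via finite differences."""
    return -np.gradient(m, np.log10(t)) / h

t_w = 1000.0
t = np.logspace(0, 5, 400)
m = 1.0 / (1.0 + (t / t_w) ** 0.8)  # illustrative decay, not measured data
s = relaxation_rate(t, m)
t_peak = float(t[np.argmax(s)])     # close to t_w for this toy form
```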
In this study we use all three of the above scaling procedures to analyze the data we have produced with different cooling protocols. All measurements in this letter were performed on our homebuilt DC SQUID magnetometer with a $Cu_{.94}Mn_{.06}$ sample. The sample is well documented[@Ken91] and has been used in many other studies. The measurements described in this letter were performed in the following manner: the sample was cooled, in a magnetic field of 20 G, from 35 K through its transition temperature of 31.5 K to a measuring temperature of 26 K. This corresponds to a reduced temperature of .83 $T_{g}$. The sample was held at this temperature for a waiting time $t_{w}$, after which the magnetic field was rapidly decreased to 0 G. The resulting magnetization decay is measured from 1 s after field cutoff to a time greater than or equal to 5$t_{w}$. The only parameters we have varied in this study are $t_{w}$ and the rate and profile at which we cool the sample through the transition temperature to the measuring temperature. The sample is located on the end of a sapphire rod and sits in the upper coil of a second order gradiometer configuration. The thermometer is located 12.5 cm above the sample. Heat is applied to the sample through a heater coil located on the same sapphire rod 17 cm above the sample. Sample cooling occurs by heat transfer with the He bath via a constant amount of He exchange gas, which was previously introduced into each chamber of the double vacuum jacket. We have measured the decay time of our field coil and find that we can quench the field in less than 0.1 ms. We have also determined that, without a sample, our system has a small reproducible exponential decay that relaxes to a constant value less than the system noise within 400 seconds. In order to accurately describe our data we subtract this system decay from all of the data.
In this paper we present TRM data for eight waiting times ($t_{w}$= 50s, 100s, 300s, 630s, 1000s, 3600s, 6310s, and 10000s). The same TRM experiments were performed for six different cooling protocols, four of which are used here. Figure 1 (top row) is a plot of temperature vs. time for these four cooling protocols. The different cooling protocols were achieved by varying applications of heat and by varying the amount of exchange gas in the vacuum jackets. A more detailed description of the cooling protocols will be given in a followup publication. In Figure 1 (bottom row) we plot S(t) (Eq. $\ref{eq:three}$) of the ZTRM protocol (i.e., a zero waiting time TRM) in order to characterize a time associated with each cooling protocol, $t_{c}^{eff}$. As observed in Figure 1, we have achieved effective cooling times ranging from 406 s down to 19 s. These times can be compared with commercial magnetometers, which have cooling times in the range of 100-400 s.
In Figure 2, we plot the data for the TRM decays (first column) with the four cooling protocols. It should be noted that the magnetization (y-axis) is scaled by the field cooled magnetization.
The second column shows the same data as column one, with the time axis (x-axis) normalized by $t_{w}$. For $\frac{t}{t_{w}}$ scaling (column 2), it can be observed that as the effective cooling time decreases, the spread in the decays decreases, giving almost perfect $\frac{t}{t_{w}}$ scaling for the 19 second cooling protocol.
The last column in Figure 2 shows the data scaled using the $\mu$ scaling described above. It has long been known that the rate of cooling affects $\mu$ scaling and that $\mu$ scaling is only valid in the limit $t_{w}>>t_{c}^{eff}$. We find this to be true, and that the limit is much more stringent than previously believed. To determine $\mu$ scaling, we focused on applying this scaling to the longest waiting time data (i.e., $t_{w}$ = 3600s, 6310s and 10,000s). For the largest effective cooling time, $t_{c}^{eff}=406s$, we find that we can fit the longest waiting time data with a $\mu$ value of .88. This is consistent with previously reported values of $\mu$[@Alba86]. We do find, however, that TRM data with waiting times less than 3600s do not fit on the scaling curve. We find that scaling of the three longest waiting time decays produces $\mu$ values which increase as $t_{c}^{eff}$ decreases. We also find that as $t_{c}^{eff}$ decreases, the data with shorter $t_{w}$ begin to fit the scaling better. At $t_{c}^{eff}$= 19s we obtain almost perfect scaling for all of the data with a value of $\mu=.999$, though we can reasonably fit the data with $\mu$ in the range .989-1.001. The fit for the large $t_{w}$ decay curves, $t_{w}$ = 3600s, 6310s and 10,000s, is very good. Small systematic deviations, as a function of $t_{w}$, occur for $t_{w}<3600s$, with the largest deviations for $t_{w}$= 50s. Even with an effective cooling time two orders of magnitude smaller than the waiting time, one sees deviations from perfect scaling. We have also scaled the data using Eq. 2. We find no noticeable difference between the quality of this fit and that of the $\mu=.999$ fit for $t_{c}^{eff}=19s$ shown in Figure 2. Data with longer cooling times cannot be fit with Eq. 2. We therefore conclude that full aging is observed for the long $t_{w}$ data using the $t_{c}^{eff}=19s$ protocol.
It is clear from Figure 1 (bottom row), that the effect of the cooling time has implications for the decay all the way up to the longest time measured, 10,000 seconds. The form of the S(t) of the ZTRM is very broad. The S(t) function is often thought of as corresponding to a distribution of time scales (or barrier heights) within a system which has infinite equilibration times (barriers). The peak in S(t) is generally associated with the time scale (barrier height) probed in time $t_{w}$. In Figure 1, we observe that for the larger effective cooling times, the waiting times correspond to points on or near the peak of S(t) for the ZTRM. We therefore believe that for the larger effective cooling times there is significant contamination from the cooling protocol over the entire region of $t_{w}$s used in this paper. Only for $t_{c}^{eff}=19s$ cooling protocol do we find that the majority of $t_{w}s$ occur far away from the peak in the S(t).
All the data in Figure 3 used the cooling protocol with $t_{c}^{eff} = 19s$. In Figure 3a we plot S(t) of the ZTRM for $t_{c}^{eff}=19s$, with arrows indicating the waiting times of the TRM measurements. It can be observed that after approximately 1000 seconds the slope of the S(t) function decreases, possibly approaching a horizontal curve, which would correspond to a pure logarithmic decay in M(t). If, on the other hand, the slope is continuously changing, this part of the decay may be described by a weak power law. Either way, this region would correspond to aging within a purely non-equilibrated state. We believe that the long waiting time data occur outside the time regime that has been corrupted by the cooling time and that this is the reason that we have, for the first time, observed full aging.
It has been suggested that subtraction of a stationary component of the magnetization decay will improve scaling[@Vin96]. The very long time magnetization decay is believed to consist of a stationary term that decays as a power law. We fit a power law, $M(t)=A(\tau_o/t)^\alpha$, to the long time decay (1000s-5000s) of the ZTRM for $t_{c}^{eff}$=19s. Using $\tau_{o}=10^{-12}s$, we find $\alpha=.07$ and $A=.27$. Subtracting this power law form from the magnetization decay destroys scaling. We find that the subtraction of a much smaller power law term, with A=.06 and $\alpha$=.02, slightly improves scaling at both short and long times. While the $\alpha$ values for the two power law terms are quite different, both fall within the range determined from the decay of $\chi''$[@Vin96]. In Figure 3(c-d) and 3(e-f), we plot the two types of scaling we have performed, with and without the subtraction of the weaker power law term.
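The stationary-term subtraction can be sketched with the parameter values quoted above; the time grid below is illustrative, and a measured $M(t)/M_{fc}$ array would simply have the stationary term subtracted elementwise.

```python
import numpy as np

def stationary_term(t, a, alpha, tau0=1.0e-12):
    """Stationary component M_stat = A * (tau0 / t)**alpha."""
    return a * (tau0 / t) ** alpha

t = np.logspace(0, 4, 100)                 # 1 s - 10,000 s, illustrative grid
m_strong = stationary_term(t, 0.27, 0.07)  # fit to the 1000-5000 s ZTRM decay
m_weak = stationary_term(t, 0.06, 0.02)    # smaller term that slightly improves scaling
# for a measured decay:  m_aging = m_measured - m_weak
```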
We find that even for $t_{c}^{eff}=19s$ the peaks in S(t) for $t_{w}>1000s$ occur at times larger than $t_{w}$ (Fig. 3b). We find that we can fit the effective time associated with the peak in S(t) to $t_{w}^{eff}=t_w^{1.1}$.
In summary, we have performed TRM decays over a wide range of waiting times (50s - 10,000s) for six different cooling protocols. We find that as the effective cooling time decreases, scaling of the TRM curves improves in both the $\frac{t}{t_{w}}$ scaling regime and the $\mu$ scaling regime. In $\mu$ scaling we find that as the effective cooling time decreases, $\mu$ increases, approaching a value of .999 for $t_{c}^{eff}=19s$. For the $t_{c}^{eff}=19s$ TRM decays, we find that subtraction of a small power law term (A=.06, $\alpha$=.02) slightly improves the scaling. It is, however, likely that the small systematic deviations of the $t_{c}^{eff}=19s$ data as a function of $t_{w}$ are associated with the small but finite cooling time.
The authors would like to thank V. S. Zotev, E. Vincent and J. M. Hammann for very helpful discussions.
[99]{} R. V. Chamberlin, M. Hardiman and R. Orbach, J. Appl. Phys. **52**, 1771 (1983). L. Lundgren, P. Svedlindh, P. Nordblad and O. Beckman, Phys. Rev. Lett. **51**, 911 (1983); L. Lundgren, P. Svedlindh, P. Nordblad and O. Beckman, J. Appl. Phys. **57**, 3371 (1985). J. P. Bouchaud, J. Phys. I (Paris) **2**, 1705 (1992). M. Alba, M. Ocio and J. Hammann, Europhys. Lett. **2**, 45 (1986); M. Alba, J. Hammann, M. Ocio, Ph. Refregier and H. Bouchiat, J. Appl. Phys. **61**, 3683 (1987). M. Ocio, M. Alba, J. Hammann, J. Phys. (Paris) Lett. **46**, 1101 (1985). P. Sibani, private communication. V. S. Zotev, G. F. Rodriguez, G. G. Kenning, R. Orbach, E. Vincent and J. Hammann, cond-mat/0202269, to be published in Phys. Rev. B. P. Nordblad, P. Svedlindh, L. Sandlund and L. Lundgren, Phys. Lett. A **120**, 475 (1987). K. Jonason, P. Nordblad, Physica B **279**, 334 (2000). L. F. Cugliandolo and J. Kurchan, J. Phys., 5749 (1994). J. P. Bouchaud and D. S. Dean, J. Phys. I (Paris) **5**, 265 (1995). E. Vincent, J. Hammann, M. Ocio, J. P. Bouchaud and L. F. Cugliandolo, *Slow dynamics and aging in spin glasses. Complex behaviour of glassy systems*, ed. M. Rubi, Sitges conference; can be retrieved as cond-mat/9607224. L. C. E. Struik, *Physical Ageing in Amorphous Polymers and Other Materials*, Elsevier Sci. Pub. Co., Amsterdam, 1978. G. G. Kenning, D. Chu, R. Orbach, Phys. Rev. Lett. **66**, 2933 (1991).
---
abstract: 'Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances and two also consisting of a large number of features. The results show that our algorithms were superior in terms of both time-efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.'
address:
- |
Systems Engineering Department,\
National Autonomous University of Honduras. Blvd. Suyapa, Tegucigalpa, Honduras
- |
Department of Computer Science, University of Alcalá\
Alcalá de Henares, 28871 Madrid, Spain
- |
Department of Computer Science, University of A Coruña\
Campus de Elviña s/n 15071 - A Coruña, Spain
author:
- 'Raul-Jose Palma-Mendoza'
- 'Luis de-Marcos'
- Daniel Rodriguez
- 'Amparo Alonso-Betanzos'
title: 'Distributed Correlation-Based Feature Selection in Spark'
---
feature selection ,scalability ,big data ,apache spark ,cfs ,correlation
Introduction {#sec:intro}
============
In recent years, the advent of big data has raised unprecedented challenges for all types of organizations and researchers in many fields. Wu et al. [@XindongWu2014], however, state that the big data revolution has come to us not only with many challenges but also with plenty of opportunities for those organizations and researchers willing to embrace them. Data mining is one field where the opportunities offered by big data can be embraced, and, as indicated by Leskovec et al. [@Leskovec2014mining], the main challenge is to extract useful information or knowledge from these huge data volumes that enables us to predict or better understand the phenomena involved in the generation of the data.
Feature selection (FS) is a dimensionality reduction technique that has emerged as an important step in data mining. According to Guyon and Eliseeff [@Guyon2003] its purpose is twofold: to select relevant attributes and simultaneously to discard redundant attributes. This purpose has become even more important nowadays, as vast quantities of data need to be processed in all kinds of disciplines. Practitioners also face the challenge of not having enough computational resources. In a review of the most widely used FS methods, Bolón-Canedo et al. [@Bolon-Canedo2015b] conclude that there is a growing need for scalable and efficient FS methods, given that the existing methods are likely to prove inadequate for handling the increasing number of features encountered in big data.
Depending on their relationship with the classification process, FS methods are commonly classified into one of three main categories: (i) filter methods, (ii) wrapper methods, or (iii) embedded methods. *Filters* rely solely on the characteristics of the data and, since they are independent of any learning scheme, they require less computational effort. They have been shown to be important preprocessing techniques, with many applications such as churn prediction [@Idris2012; @Idris2013] and microarray data classification, where filters obtain results better than, or at least comparable in accuracy to, those of wrappers [@Bolon-Canedo2015a]. In *wrapper* methods, the final subset selection is based on a learning algorithm that is repeatedly trained with the data. Although wrappers tend to increase the final accuracy of the learning scheme, they are usually more computationally expensive than the other two approaches. Finally, in *embedded* methods, FS is part of the classification process, e.g., as happens with decision trees.
Another important classification of FS methods is, according to their results, as (i) ranker algorithms or (ii) subset selector algorithms. With *rankers*, the result is a sorted set of the original features. The order of this returned set is defined according to the quality that the FS method determines for each feature. Some rankers also assign a weight to each feature that provides more information about its quality. *Subset selectors* return a non-ordered subset of features from the original set so that together they yield the highest possible quality according to some given measure. Subset selectors, therefore, consist of a search procedure and an evaluation measure. This can be considered an advantage in many cases, as rankers usually evaluate features individually and leave it to the user to select the number of top features in a ranking.
One filter-based subset selector method is the Correlation-Based Feature Selection (CFS) algorithm [@Hall2000], traditionally considered useful due to its ability not only to reduce dimensionality but also to improve classification algorithm performance. However, the CFS algorithm, like many other multivariate FS algorithms, has an execution time complexity of $\mathcal{O}(m^2 \cdot n)$, where $m$ is the number of features and $n$ is the number of instances. This quadratic complexity in the number of features makes CFS very sensitive to the *curse of dimensionality* [@bellman1957dynamic]. Therefore, a scalable adaptation of the original algorithm is required to be able to apply the CFS algorithm to datasets that are large in both number of instances and dimensions.
As a response to the big data phenomenon, many technologies and programming frameworks have appeared with the aim of helping data mining practitioners design new strategies and algorithms that can tackle the challenge of distributing work over clusters of computers. One such tool that has recently received much attention is Apache Spark [@Zaharia2010], which represents a new programming model that is a superset of the MapReduce model introduced by Google [@Dean2004a; @Dean2008]. One of Spark’s strongest advantages over the traditional MapReduce model is its ability to efficiently handle the iterative algorithms that frequently appear in the data mining and machine learning fields.
We describe two distributed and parallel versions of the original CFS algorithm for classification problems using the Apache Spark programming model. The main difference between them is how the data is distributed across the cluster: using a horizontal partitioning scheme (hp) or a vertical partitioning scheme (vp). We compare the two versions – DiCFS-hp and DiCFS-vp, respectively – and also compare them with a baseline, represented by the classical non-distributed implementation of CFS in WEKA [@Hall2009a]. Finally, their benefits in terms of reduced execution time are compared with those of the CFS version developed by Eiras-Franco et al. [@Eiras-Franco2016] for regression problems. The results show that the time-efficiency and scalability of our two versions are an improvement on those of the original version of the CFS; furthermore, similar or improved execution times are obtained with respect to the Eiras-Franco et al. [@Eiras-Franco2016] regression version. In the interest of reproducibility, our software and sources are available as a Spark package[^1] called DiCFS, with a corresponding mirror in Github.[^2]
The rest of this paper is organized as follows. Section \[sec:stateofart\] summarizes the most important contributions in the area of distributed and parallel FS and proposes a classification according to how parallelization is carried out. Section \[sec:cFS\] describes the original CFS algorithm, including its theoretical foundations. Section \[sec:spark\] presents the main aspects of the Apache Spark computing framework, focusing on those relevant to the design and implementation of our proposed algorithms. Section \[sec:diCFS\] describes and discusses our DiCFS-hp and DiCFS-vp versions of the CFS algorithm. Section \[sec:experiments\] describes our experiments to compare results for DiCFS-hp and DiCFS-vp, the WEKA approach and the Eiras-Franco et al. [@Eiras-Franco2016] approach. Finally, conclusions and future work are outlined in Section \[sec:conclusions\].
Background and Related Work {#sec:stateofart}
===========================
As might be expected, filter-based FS algorithms have asymptotic complexities that depend on the number of features and/or instances in a dataset. Many algorithms, such as the CFS, have quadratic complexities, while the most frequently used algorithms have at least linear complexities [@Bolon-Canedo2015b]. This is why, in recent years, many attempts have been made to achieve more scalable FS methods. In what follows, we analyse recent work on the design of new scalable FS methods according to parallelization approaches: (i) search-oriented, (ii) dataset-split-oriented, or (iii) filter-oriented.
*Search-oriented* parallelizations account for most approaches, in that the main aspects to be parallelized are (i) the search guided by a classifier and (ii) the corresponding evaluation of the resulting models. We classify the following studies in this category:
- Kubica et al. [@Kubica2011] developed parallel versions of three forward-search-based FS algorithms, where a wrapper with a logistic regression classifier is used to guide a search parallelized using the MapReduce model.
- García et al. [@Garcia_aparallel] presented a simple approach for parallel FS, based on selecting random feature subsets and evaluating them in parallel using a classifier. In their experiments they used a support vector machine (SVM) classifier and, in comparing their results with those for a traditional wrapper approach, found lower accuracies but also much shorter computation times.
- Wang et al. [@Wang2016] used the Spark computing model to implement an FS strategy for classifying network traffic. They first implemented an initial FS using the Fisher score filter [@duda2012pattern] and then performed, using a wrapper approach, a distributed forward search over the best $m$ features selected. Since the Fisher filter was used, however, only numerical features could be handled.
- Silva et al. [@Silva2017] addressed the FS scaling problem using an asynchronous search approach, given that synchronous search, as commonly performed, can lead to efficiency losses due to the inactivity of some processors waiting for other processors to end their tasks. In their tests, they first obtained an initial reduction using a mutual information (MI) [@Peng2005] filter and then evaluated subsets using a random forest (RF) [@Ho1995] classifier. However, as stated by those authors, any other approach could be used for subset evaluation.
*Dataset-split-oriented* approaches have the main characteristic that parallelization is performed by splitting the dataset vertically or horizontally, then applying existing algorithms to the parts and finally merging the results following certain criteria. We classify the following studies in this category:
- Peralta et al. [@Peralta2015] used the MapReduce model to implement a wrapper-based evolutionary search FS method. The dataset was split by instances and the FS method was applied to each resulting subset. Simple majority voting was used as a reduction step for the selected features, and the final subset of features was selected according to a user-defined threshold. All tests were carried out using the EPSILON dataset, which we also use here (see Section \[sec:experiments\]).
- Bolón-Canedo et al. [@Bolon-Canedo2015a] proposed a framework to deal with high dimensionality data by first optionally ranking features using an FS filter, then partitioning vertically by dividing the data according to features (columns) rather than, as commonly done, according to instances (rows). After partitioning, another FS filter is applied to each partition, and finally, a merging procedure guided by a classifier obtains a single set of features. The authors experiment with five commonly used FS filters for the partitions, namely, CFS [@Hall2000], Consistency [@Dash2003], INTERACT [@Zhao2007], Information Gain [@Quinlan1986] and ReliefF [@Kononenko1994], and with four classifiers for the final merging, namely, C4.5 [@Quinlan1992], Naive Bayes [@rish2001empirical], $k$-Nearest Neighbors [@Aha1991] and SVM [@vapnik1995nature]. They show that their approach significantly reduces execution times while maintaining and, in some cases, even improving accuracy.
Finally, *filter-oriented* methods include redesigned or new filter methods that are, or become, inherently parallel. Unlike the methods in the other categories, parallelization in this category of methods can be viewed as an internal, rather than external, element of the algorithm. We classify the following studies in this category:
- Zhao et al. [@Zhao2013a] described a distributed parallel FS method based on a variance preservation criterion using the proprietary software SAS High-Performance Analytics. [^3] One remarkable characteristic of the method is its support not only for supervised FS, but also for unsupervised FS where no label information is available. Their experiments were carried out with datasets with both high dimensionality and a high number of instances.
- Ramírez-Gallego et al. [@Ramirez-Gallego2017] described scalable versions of the popular mRMR [@Peng2005] FS filter that included a distributed version using Spark. The authors showed that their version that leveraged the power of a cluster of computers could perform much faster than the original and processed much larger datasets.
- In a previous work [@Palma-Mendoza2018], using the Spark computing model we designed a distributed version of the ReliefF [@Kononenko1994] filter, called DiReliefF. In testing using datasets with large numbers of features and instances, it was much more efficient and scalable than the original filter.
- Finally, Eiras-Franco et al. [@Eiras-Franco2016] used four distributed FS algorithms, three of them filters, namely, InfoGain [@Quinlan1986], ReliefF [@Kononenko1994] and CFS [@Hall2000], to reduce execution times with respect to the original versions. In the CFS case, however, their version focuses on regression problems, where all the features, including the class label, are numerical and correlations are calculated using the Pearson coefficient. A completely different approach is required to design a parallel version for classification problems, where correlations are based on information theory.
The approach described here can be categorized as a *filter-oriented* approach that builds on the works described elsewhere [@Ramirez-Gallego2017; @Palma-Mendoza2018; @Eiras-Franco2016]. As in those works, our focus is not only on designing an efficient and scalable FS algorithm, but also on preserving the original behaviour (and obtaining the same final results) of the traditional filter, so that existing research on that filter remains valid for the adapted version. Another important consideration is that, since filters are generally more efficient than wrappers, they are often the only feasible option given the abundance of data. It is worth mentioning that scalable filters could feasibly be included in any of the methods mentioned in the *search-oriented* and *dataset-split-oriented* categories, where an initial filtering step is implemented to improve performance.
Correlation-Based Feature Selection (CFS) {#sec:cFS}
=========================================
The CFS method, originally developed by Hall [@Hall2000], is categorized as a subset selector because it evaluates subsets rather than individual features. For this reason, the CFS needs to perform a search over candidate subsets, but since a full search over all possible subsets is prohibitive (due to the exponential complexity of the problem), a heuristic has to be used to guide a partial search. This heuristic is the main concept behind the CFS algorithm and, being a filter method, it is not derived from a classifier; rather, it applies a principle from Ghiselli’s test theory [@ghiselli1964theory]: *good feature subsets contain features highly correlated with the class, yet uncorrelated with each other*.
This principle is formalized in Equation (\[eq:heuristic\]) where $M_s$ represents the merit assigned by the heuristic to a subset $s$ that contains $k$ features, $\overline{r_{cf}}$ represents the average of the correlations between each feature in $s$ and the class attribute, and $\overline{r_{ff}}$ is the average correlation between each of the $\begin{psmallmatrix}k\\2\end{psmallmatrix}$ possible feature pairs in $s$. The numerator can be interpreted as an indicator of how predictive the feature set is and the denominator can be interpreted as an indicator of how redundant features in $s$ are.
$$\label{eq:heuristic}
M_s = \frac { k\cdot \overline { r_{cf} } }{ \sqrt { k + k (k - 1) \cdot \overline{ r_{ff}} } }$$
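As an illustration (not part of the original algorithm), the merit can be computed directly from the two averages:

```python
from math import sqrt

def subset_merit(k, avg_rcf, avg_rff):
    """Merit M_s of a k-feature subset, Equation (eq:heuristic).

    avg_rcf: mean feature-class correlation over the k features.
    avg_rff: mean feature-feature correlation over the k*(k-1)/2 pairs.
    """
    return (k * avg_rcf) / sqrt(k + k * (k - 1) * avg_rff)

# Higher redundancy (avg_rff) lowers the merit for the same predictiveness:
print(subset_merit(3, 0.6, 0.1))  # ~0.949
print(subset_merit(3, 0.6, 0.9))  # ~0.621
```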
Equation (\[eq:heuristic\]) also posits the second important concept underlying the CFS, which is the computation of correlations to obtain the required averages. In classification problems, the CFS uses the symmetrical uncertainty (SU) measure [@press1982numerical] shown in Equation (\[eq:su\]), where $H$ represents the entropy function of a single or conditioned random variable, as shown in Equation (\[eq:entropy\]). This calculation adds a requirement for the dataset before processing, which is that all non-discrete features must be discretized. By default, this process is performed using the discretization algorithm proposed by Fayyad and Irani [@Fayyad1993].
$$\label{eq:su}
SU = 2 \cdot \left[ \frac { H(X) - H(X|Y) }{ H(Y) + H(X) } \right]$$
$$\begin{aligned}
\label{eq:entropy}
H(X) &=-\sum _{ x\in X }{ p(x)\log _{2}{p(x)} } \nonumber \\
H(X | Y) &=-\sum _{ y\in Y }{ p(y) } \sum_{x \in X}{p(x |y) \log _{ 2 }{ p(x | y) } } \end{aligned}$$
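A minimal Python rendering of these formulas for discrete variables (helper names are ours):

```python
from collections import Counter
from math import log2

def entropy(xs):
    """H(X) over a sequence of discrete values."""
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def cond_entropy(xs, ys):
    """H(X|Y) = sum over y of p(y) * H(X | Y=y)."""
    n = len(ys)
    h = 0.0
    for y, cy in Counter(ys).items():
        xs_given_y = [x for x, yy in zip(xs, ys) if yy == y]
        h += cy / n * entropy(xs_given_y)
    return h

def symmetrical_uncertainty(xs, ys):
    """SU(X, Y) per Equation (eq:su); both variables must be discrete."""
    hx, hy = entropy(xs), entropy(ys)
    if hx + hy == 0:
        return 0.0
    return 2.0 * (hx - cond_entropy(xs, ys)) / (hx + hy)

x = [0, 0, 1, 1]
print(symmetrical_uncertainty(x, x))             # identical variables -> 1.0
print(symmetrical_uncertainty(x, [0, 1, 0, 1]))  # independent here   -> 0.0
```

SU is bounded in $[0, 1]$, so averaged values from very different feature pairs remain comparable.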
The third core CFS concept is its search strategy. By default, the CFS algorithm uses a best-first search to explore the search space. The algorithm starts with an empty set of features and at each step of the search all possible single feature expansions are generated. The new subsets are evaluated using Equation (\[eq:heuristic\]) and are then added to a priority queue according to merit. In the subsequent iteration, the best subset from the queue is selected for expansion in the same way as was done for the first empty subset. If expanding the best subset fails to produce an improvement in the overall merit, this counts as a *fail* and the next best subset from the queue is selected. By default, the CFS uses five consecutive fails as a stopping criterion and as a limit on queue length.
The final CFS element is an optional post-processing step. As stated before, the CFS tends to select feature subsets with low redundancy and high correlation with the class. However, in some cases, extra features that are *locally predictive* in a small area of the instance space may exist that can be leveraged by certain classifiers [@Hall1999]. To include these features in the subset after the search, the CFS can optionally use a heuristic that enables inclusion of all features whose correlation with the class is higher than the correlation between the features themselves and with features already selected. Algorithm \[alg:cFS\] summarizes the main aspects of the CFS.
    Corrs := correlations between all features and the class          [lin:allCorrs]
    BestSubset := ∅
    Queue.setCapacity(5)
    Queue.add(BestSubset)
    NFails := 0
    while NFails < 5 do
        HeadState := Queue.dequeue
        NewSubsets := evaluate(expand(HeadState), Corrs)              [lin:expand]
        Queue.add(NewSubsets)
        LocalBest := Queue.head
        if merit(LocalBest) > merit(BestSubset) then
            BestSubset := LocalBest
            NFails := 0
        else
            NFails := NFails + 1
    return BestSubset
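For illustration, a simplified Python sketch of the best-first loop of Algorithm \[alg:cFS\] (it pops the queue head each iteration and omits the locally-predictive post-processing, so it only approximates the full CFS search):

```python
import heapq
from itertools import combinations
from math import sqrt

def merit(subset, corr, cls):
    """Equation (eq:heuristic) from a pairwise correlation function."""
    k = len(subset)
    rcf = sum(corr(f, cls) for f in subset) / k
    rff = (sum(corr(a, b) for a, b in combinations(subset, 2))
           / (k * (k - 1) / 2)) if k > 1 else 0.0
    return k * rcf / sqrt(k + k * (k - 1) * rff)

def best_first_cfs(features, corr, cls, max_fails=5):
    best, best_merit = frozenset(), 0.0
    queue = [(0.0, best)]                 # max-queue via negated merits
    fails = 0
    while queue and fails < max_fails:
        _, head = heapq.heappop(queue)
        for f in set(features) - head:    # all single-feature expansions
            cand = head | {f}
            heapq.heappush(queue, (-merit(cand, corr, cls), cand))
        if queue and -queue[0][0] > best_merit:
            best_merit, best = -queue[0][0], queue[0][1]
            fails = 0
        else:
            fails += 1
    return best

# toy correlations: a and b predict the class but are mutually redundant
C = {frozenset(p): v for p, v in [
    (("a", "cls"), 0.8), (("b", "cls"), 0.7), (("c", "cls"), 0.1),
    (("a", "b"), 0.9), (("a", "c"), 0.0), (("b", "c"), 0.0)]}
corr = lambda x, y: C[frozenset((x, y))]
print(best_first_cfs(["a", "b", "c"], corr, "cls"))  # frozenset({'a'})
```

On this toy problem the search keeps only feature `a`: adding the redundant `b` lowers the merit below that of the singleton subset.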
The Spark Cluster Computing Model {#sec:spark}
=================================
The following short description of the main concepts behind the Spark computing model focuses exclusively on aspects that complete the conceptual basis for our DiCFS proposal in Section \[sec:diCFS\].
The main concept behind the Spark model is what is known as the resilient distributed dataset (RDD). Zaharia et al. [@Zaharia2010; @Zaharia2012] defined an RDD as a read-only collection of objects, i.e., a dataset partitioned and distributed across the nodes of a cluster. The RDD has the ability to automatically recover lost partitions through a lineage record that knows the origin of the data and possible calculations done. Even more relevant for our purposes is the fact that operations run for an RDD are automatically parallelized by the Spark engine; this abstraction frees the programmer from having to deal with threads, locks and all other complexities of traditional parallel programming.
With respect to the cluster architecture, Spark follows the master-slave model. Through a cluster manager (master), a driver program can access the cluster and coordinate the execution of a user application by assigning tasks to the executors, i.e., programs that run in worker nodes (slaves). By default, only one executor is run per worker. Regarding the data, RDD partitions are distributed across the worker nodes, and the number of tasks launched by the driver for each executor is set according to the number of RDD partitions residing in the worker.
Two types of operations can be executed on an RDD, namely, actions and transformations. Of the *actions*, which allow results to be obtained from a Spark cluster, perhaps the most important is $collect$, which returns an array with all the elements in the RDD. This operation has to be done with care, to avoid exceeding the maximum memory available to the driver. Other important actions include $reduce$, $sum$, $aggregate$ and $sample$, but as they are not used by us here, we will not explain them. *Transformations* are mechanisms for creating an RDD from another RDD. Since RDDs are read-only, a transformation creating a new RDD does not affect the original RDD. A basic transformation is $mapPartitions$, which receives, as a parameter, a function that can handle all the elements of a partition and return another collection of elements to conform a new partition. The $mapPartitions$ transformation is applied to all partitions in the RDD to obtain a new transformed RDD. Since received and returned partitions do not need to match in size, $mapPartitions$ can thus reduce or increase the overall size of an RDD. Another interesting transformation is $reduceByKey$; this can only be applied to what is known as a $PairRDD$, which is an RDD whose elements are key-value pairs, where the keys do not have to be unique. The $reduceByKey$ transformation is used to aggregate the elements of an RDD, which it does by applying a commutative and associative function that receives two values of the PairRDD as arguments and returns one element of the same type. This reduction is applied by key, i.e., elements with the same key are reduced such that the final result is a PairRDD with unique keys, whose corresponding values are the result of the reduction. Other important transformations (which we do not explain here) are $map$, $flatMap$ and $filter$.
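To make these semantics concrete, the following pure-Python sketch (no Spark dependency; the function names are ours) mimics how $mapPartitions$ and $reduceByKey$ behave on a list-of-lists stand-in for a partitioned RDD:

```python
# A partitioned "RDD" stand-in: one inner list per partition.
rdd = [[1, 2, 3], [4, 5], [6]]

def map_partitions(partitions, fn):
    """Apply fn to each whole partition; returned partitions may
    differ in size from the received ones."""
    return [list(fn(iter(part))) for part in partitions]

def reduce_by_key(pair_partitions, fn):
    """Aggregate (key, value) pairs across partitions with a
    commutative and associative function, one result per key."""
    merged = {}
    for part in pair_partitions:
        for k, v in part:
            merged[k] = fn(merged[k], v) if k in merged else v
    return merged

# Emit (parity, value) pairs per partition, then sum the values by key:
pairs = map_partitions(rdd, lambda it: ((x % 2, x) for x in it))
print(reduce_by_key(pairs, lambda a, b: a + b))  # {1: 9, 0: 12}
```

In real Spark the merge inside `reduce_by_key` is what triggers shuffling, since pairs sharing a key may live on different nodes.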
Another key concept in Spark is *shuffling*, which refers to the data communication required for certain types of transformations, such as the above-mentioned $reduceByKey$. Shuffling is a costly operation because it requires redistribution of the data in the partitions, and therefore, data read and write across all nodes in the cluster. For this reason, shuffling operations are minimized as much as possible.
The final concept underpinning our proposal is *broadcasting*, which is a useful mechanism for efficiently sharing read-only data between all worker nodes in a cluster. Broadcast data is dispatched from the driver throughout the network and is thus made available to all workers in a deserialized fast-to-access form.
Distributed Correlation-Based Feature Selection (DiCFS) {#sec:diCFS}
=======================================================
We now describe the two algorithms that conform our proposal. They represent alternative distributed versions that use different partitioning strategies to process the data. We start with some considerations common to both approaches.
As stated previously, the CFS has an execution time complexity of $\mathcal{O}(m^2 \cdot n)$, where $m$ is the number of features and $n$ is the number of instances. This complexity derives from the first step shown in Algorithm \[alg:cFS\]: the calculation of the $\begin{psmallmatrix}m+ 1\\2\end{psmallmatrix}$ correlations between all pairs of features including the class, where for each pair $\mathcal{O}(n)$ operations are needed to calculate the entropies. Thus, to develop a scalable version, our main parallelization effort must focus on the calculation of correlations.
Another important issue is that, although the original study by Hall [@Hall2000] stated that all correlations had to be calculated before the search, this is only a true requisite when a backward best-first search is performed. In the case of the search shown in Algorithm \[alg:cFS\], correlations can be calculated on demand, i.e., on each occasion a new non-evaluated pair of features appears during the search. In fact, trying to calculate all correlations in any dataset with a high number of features and instances is prohibitive; the tests performed on the datasets described in Section \[sec:experiments\] show that a very low percentage of correlations is actually used during the search and also that on-demand correlation calculation is around $100$ times faster when the default number of five maximum fails is used.
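A minimal sketch of this on-demand scheme, with a hypothetical memoizing wrapper around any symmetrical-uncertainty function:

```python
class CorrelationCache:
    """Memoize a symmetric correlation measure so that only the pairs
    actually visited during the search are ever computed."""
    def __init__(self, su_fn):          # su_fn: assumed SU implementation
        self._su = su_fn
        self._cache = {}

    def __call__(self, i, j):
        key = (i, j) if i <= j else (j, i)   # SU(X, Y) == SU(Y, X)
        if key not in self._cache:
            self._cache[key] = self._su(*key)
        return self._cache[key]

    @property
    def computed(self):
        return len(self._cache)

# Count how many distinct correlations a dummy measure is asked for:
calls = []
cache = CorrelationCache(lambda i, j: calls.append((i, j)) or 0.5)
cache(3, 1); cache(1, 3); cache(1, 2)
print(cache.computed, len(calls))  # 2 2 -- the (1, 3) pair was reused
```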
Below we describe our two alternative methods for calculating these correlations in a distributed manner depending on the type of partitioning used.
Horizontal Partitioning {#subsec:horizontalPart}
-----------------------
Horizontal partitioning of the data may be the most natural way to distribute work between the nodes of a cluster. If we consider the default layout where the data is represented as a matrix $D$ in which the columns represent the different features and the rows represent the instances, then it is natural to distribute the matrix by assigning different groups of rows to nodes in the cluster. If we represent this matrix as an RDD, this is exactly what Spark will automatically do.
Once the data is partitioned, Algorithm \[alg:cFS\] (omitting line \[lin:allCorrs\]) can be started on the driver. The distributed work will be performed on line \[lin:expand\], where the best subset in the queue is expanded and, depending on this subset and the state of the search, a number $nc$ of new pairs of correlations will be required to evaluate the resulting subsets. Thus, the most complex step is the calculation of the corresponding $nc$ contingency tables that will allow us to obtain the entropies and conditional entropies that conform the symmetrical uncertainty correlation (see Equation (\[eq:su\])). These $nc$ contingency tables are partially calculated locally by the workers following Algorithm \[alg:localCTables\]. As can be observed, the algorithm loops through all the local rows, counting the values of the features contained in *pairs* (declared in line \[lin:pairs\]) and storing the results in a map holding the feature pairs as keys and the contingency tables as their matching values.
The next step is to merge the contingency tables from all the workers to obtain global results. Since these tables hold simple value counts, they can easily be aggregated by performing an element-wise sum of the corresponding tables. These steps are summarized in Equation (\[eq:cTables\]), where $CTables$ is an RDD of keys and values, and where each key corresponds to a feature pair and each value to a contingency table.
    pairs ← nc pairs of features                                      [lin:pairs]
    rows ← local rows of partition
    m ← number of columns (features in D)
    ctables ← a map from each pair to an empty contingency table
    for each row r in rows do
        for each pair (x, y) in pairs do
            ctables(x, y)(r(x), r(y)) += 1
    return ctables
$$\begin{aligned}
\label{eq:cTables}
pairs &= \left \{ (feat_a, feat_b), \cdots, (feat_x, feat_y) \right \} \nonumber \\
nc &= \left | pairs \right | \nonumber \\
CTables &= D.mapPartitions(localCTables(pairs)).reduceByKey(sum) \nonumber \\
CTables &=
\begin{bmatrix}
((feat_a, feat_b), ctable_{a,b})\\
\vdots \\
((feat_x, feat_y), ctable_{x,y})\\
\end{bmatrix}_{nc \times 1} \nonumber \\\end{aligned}$$
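The per-partition counting and the element-wise merge of Equation (\[eq:cTables\]) can be simulated in plain Python (partition layout and function names are ours, not the Spark API):

```python
from collections import Counter

def local_ctables(partition_rows, pairs):
    """Per-partition step of Algorithm alg:localCTables: count the
    co-occurring value pairs for each requested pair of features."""
    tables = {pair: Counter() for pair in pairs}
    for row in partition_rows:
        for (x, y) in pairs:
            tables[(x, y)][(row[x], row[y])] += 1
    return tables

def merge_ctables(per_partition):
    """The reduceByKey(sum) step: element-wise sum of partial tables."""
    merged = {}
    for tables in per_partition:
        for pair, table in tables.items():
            merged.setdefault(pair, Counter()).update(table)
    return merged

# Two partitions of a toy dataset; columns 0 and 1 are features, 2 the class.
partitions = [[(0, 1, 0), (1, 1, 1)], [(0, 0, 0), (1, 0, 1)]]
ct = merge_ctables(local_ctables(p, [(0, 2)]) for p in partitions)
print(ct[(0, 2)])  # Counter({(0, 0): 2, (1, 1): 2})
```

Because the partial tables hold plain counts, the merge function is commutative and associative, exactly what `reduceByKey` requires.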
Once the contingency tables have been obtained, the calculation of the entropies and conditional entropies is straightforward since all the information necessary for each calculation is contained in a single row of the $CTables$ RDD. This calculation can therefore be performed in parallel by processing the local rows of this RDD.
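As a sketch of this per-row computation, the following assumed helper derives the symmetrical uncertainty of Equation (\[eq:su\]) from a single contingency table, using the identity $H(X|Y) = H(X,Y) - H(Y)$:

```python
from math import log2

def su_from_ctable(ctable):
    """Symmetrical uncertainty (Equation eq:su) from one contingency
    table mapping (x, y) value pairs to their counts."""
    n = sum(ctable.values())
    px, py = {}, {}
    for (x, y), c in ctable.items():          # marginal counts
        px[x] = px.get(x, 0) + c
        py[y] = py.get(y, 0) + c
    h = lambda counts: -sum(c / n * log2(c / n) for c in counts)
    hx, hy = h(px.values()), h(py.values())
    hxy = h(ctable.values())                  # joint entropy H(X, Y)
    # H(X|Y) = H(X, Y) - H(Y)
    return 2.0 * (hx - (hxy - hy)) / (hx + hy) if hx + hy else 0.0

perfect = {(0, 0): 5, (1, 1): 5}                       # X determined by Y
indep = {(0, 0): 5, (0, 1): 5, (1, 0): 5, (1, 1): 5}   # X independent of Y
print(su_from_ctable(perfect), su_from_ctable(indep))  # 1.0 0.0
```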
Once the distributed calculation of the correlations is complete, control returns to the driver, which continues execution of line \[lin:expand\] in Algorithm \[alg:cFS\]. As can be observed, the distributed work only happens when new correlations are needed, and this occurs in only two cases: (i) when new pairs of features need to be evaluated during the search, and (ii) at the end of the execution if the user requests the addition of locally predictive features.
To sum up, every iteration in Algorithm \[alg:cFS\] expands the current best subset and obtains a group of subsets for evaluation. This evaluation requires a merit, and the merit for each subset is obtained according to Figure \[fig:horizontalPartResume\], which illustrates the most important steps in the horizontal partitioning scheme using a case where correlations between features f2 and f1 and between f2 and f3 are calculated in order to evaluate a subset.
![Horizontal partitioning steps for a small dataset D to obtain the correlations needed to evaluate a features subset[]{data-label="fig:horizontalPartResume"}](fig01.eps){width="100.00000%"}
Vertical Partitioning {#subsec:vecticalPart}
---------------------
Vertical partitioning has already been proposed in Spark by Ramírez-Gallego et al. [@Ramirez-Gallego2017], using another important FS filter, mRMR. Although mRMR is a ranking algorithm (it does not select subsets), it also requires the calculation of information theory measures such as entropies and conditional entropies between features. Since data is distributed horizontally by Spark, those authors propose two main operations to perform the vertical distribution:
- *Columnar transformation*. Rather than use the traditional format whereby the dataset is viewed as a matrix whose columns represent features and rows represent instances, a transposed version is used in which the data represented as an RDD is distributed by features and not by instances, in such a way that the data for a specific feature will in most cases be stored and processed by the same node. Figure \[fig:columnarTrans\], based on Ramírez-Gallego et al. [@Ramirez-Gallego2017], explains the process using an example based on a dataset with two partitions, seven instances and four features.
- *Feature broadcasting*. Because features must be processed in pairs to calculate conditional entropies and because different features can be stored in different nodes, some features are broadcast over the cluster so all nodes can access and evaluate them along with the other stored features.
![Example of a columnar transformation of a small dataset with two partitions, seven instances and four features (from [@Ramirez-Gallego2017])[]{data-label="fig:columnarTrans"}](fig02.eps){width="100.00000%"}
In the case of the adapted mRMR [@Ramirez-Gallego2017], since every step in the search requires the comparison of a single feature with a group of remaining features, it proves efficient, at each step, to broadcast this single feature (rather than multiple features). In the case of the CFS, the core issue is that, at any point in the search when expansion is performed, if the size of subset being expanded is $k$, then the correlations between the $m-k$ remaining features and $k-1$ features in the subset being expanded have already been calculated in previous steps; consequently, only the correlations between the most recently added feature and the $m-k$ remaining features are missing. Therefore, the proposed operations can be applied efficiently in the CFS just by broadcasting the most recently added feature.
The disadvantages of vertical partitioning are that (i) it requires an extra processing step to change the original layout of the data and this requires shuffling, (ii) it needs data transmission to broadcast a single feature in each search step, and (iii) the fact that, by default, the dataset is divided into a number of partitions equal to the number of features $m$ in the dataset may not be optimal for all cases (while this parameter can be tuned, it can never exceed $m$). The main advantage of vertical partitioning is that the data layout and the broadcasting of the compared feature move all the information needed to calculate the contingency table to the same node, which means that this information can be more efficiently processed locally. Another advantage is that the whole dataset does not need to be read every time a new set of features has to be compared, since the dataset can be filtered by rows to process only the required features.
Due to the nature of the search strategy (best-first) used in the CFS, the first search step will always involve all features, so no filtering can be performed. For each subsequent step, only one more feature per step can be filtered out. This is especially important with high dimensionality datasets: the fact that the number of features is much higher than the number of search steps means that the percentage of features that can be filtered out is reduced.
We performed a number of experiments to quantify the effects of the advantages and disadvantages of each approach and to check the conditions in which one approach was better than the other.
Experiments {#sec:experiments}
===========
The experiments tested and compared time-efficiency and scalability for the horizontal and vertical DiCFS approaches so as to check whether they improved on the original non-distributed version of the CFS. We also tested and compared execution times with that reported in the recently published research by Eiras-Franco et al. [@Eiras-Franco2016] into a distributed version of CFS for regression problems.
Note that no experiments were needed to compare the quality of the results for the distributed and non-distributed CFS versions as the distributed versions were designed to return the same results as the original algorithm.
For our experiments, we used a single master node and up to ten slave nodes from the big data platform of the Galician Supercomputing Technological Centre (CESGA). [^4] The nodes have the following configuration:
- CPU: 2 X Intel Xeon E5-2620 v3 @ 2.40GHz
- CPU Cores: 12 (2X6)
- Total Memory: 64 GB
- Network: 10GbE
- Master Node Disks: 8 X 480GB SSD SATA 2.5" MLC G3HS
- Slave Node Disks: 12 X 2TB NL SATA 6Gbps 3.5" G2HS
- Java version: OpenJDK 1.8
- Spark version: 1.6
- Hadoop (HDFS) version: 2.7.1
- WEKA version: 3.8.1
The experiments were run on four large-scale publicly available datasets. The ECBDL14 [@Bacardit2012] dataset, from the protein structure prediction field, was used in the ECBDL14 Big Data Competition included in the GECCO’2014 international conference. This dataset has approximately 33.6 million instances, 631 attributes and 2 classes, consists of about 98% negative examples and occupies about 56GB of disk space. HIGGS [@Sadowski2014], from the UCI Machine Learning Repository [@Lichman2013], is a recent dataset representing a classification problem that distinguishes between a signal process which produces Higgs bosons and a background process which does not. KDDCUP99 [@Ma2009] represents data from network connections and classifies them as normal connections or different types of attacks (a multi-class problem). Finally, EPSILON is an artificial dataset built for the Pascal Large Scale Learning Challenge in 2008.[^5] Table \[tbl:datasets\] summarizes the main characteristics of the datasets.
  Dataset                   No. of Samples ($\times 10^{6}$)   No. of Features   Feature Types            Problem Type
  ------------------------- ---------------------------------- ----------------- ------------------------ --------------
  ECBDL14 [@Bacardit2012]   $\sim$33.6                         632               Numerical, Categorical   Binary
  HIGGS [@Sadowski2014]     11                                 28                Numerical                Binary
  KDDCUP99 [@Ma2009]        $\sim$5                            42                Numerical, Categorical   Multiclass
  EPSILON                   0.5                                2,000             Numerical                Binary
  ------------------------- ---------------------------------- ----------------- ------------------------ --------------

  : Main characteristics of the datasets[]{data-label="tbl:datasets"}
With respect to algorithm parameter configuration, two defaults were used in all the experiments: the inclusion of locally predictive features and the use of five consecutive fails as a stopping criterion. These defaults apply to both distributed and non-distributed versions. Moreover, for the vertical partitioning version, the number of partitions was equal to the number of features, as set by default in Ramírez-Gallego et al. [@Ramirez-Gallego2017]. The horizontally and vertically distributed versions of the CFS are labelled DiCFS-hp and DiCFS-vp, respectively.
We first compared execution times for the four algorithms in the datasets using ten slave nodes with all their cores available. For the case of the non-distributed version of the CFS, we used the implementation provided in the WEKA platform [@Hall2009a]. The results are shown in Figure \[fig:execTimeVsNInsta\].
![Execution time with respect to percentages of instances in four datasets, for DiCFS-hp and DiCFS-vp using ten nodes and for a non-distributed implementation in WEKA using a single node[]{data-label="fig:execTimeVsNInsta"}](fig03.eps){width="100.00000%"}
Note that, with the aim of offering a comprehensive view of execution time behaviour, Figure \[fig:execTimeVsNInsta\] shows results for sizes larger than the 100% of the datasets. To achieve these sizes, the instances in each dataset were duplicated as many times as necessary. Note also that, since ECBDL14 is a very large dataset, its temporal scale is different from that of the other datasets.
Regarding the non-distributed version of the CFS, Figure \[fig:execTimeVsNInsta\] does not show results for WEKA in the experiments on the ECBDL14 dataset because it was impossible to execute that version in the CESGA platform, its memory requirements exceeding the available limits. The same occurred with the larger samples from the EPSILON dataset for both DiCFS-vp and DiCFS-hp. Although it was possible to execute the WEKA version on the two smallest samples from the EPSILON dataset, those results are not shown because the execution times were too high (19 and 69 minutes, respectively). Figure \[fig:execTimeVsNInsta\] shows successful results for the smaller HIGGS and KDDCUP99 datasets, which could still be processed in a single node of the cluster, as required by the non-distributed version. However, even for these smaller datasets, the execution times of the WEKA version were worse than those of the distributed versions.
Regarding the distributed versions, DiCFS-vp was unable to process the oversized versions of the ECBDL14 dataset due to the large amounts of memory required to perform shuffling. The HIGGS and KDDCUP99 datasets, however, showed an increasing difference in favor of DiCFS-hp, because these datasets have much smaller feature sizes than ECBDL14 and EPSILON. As mentioned earlier, DiCFS-vp ties parallelization to the number of features in the dataset, so datasets with small numbers of features were not able to fully leverage the cluster nodes. Another view of the same issue is given by the results for the EPSILON dataset; in this case, DiCFS-vp obtained the best execution times for the 300% sized and larger datasets. This was because there were too many partitions (2,000) for the number of instances available in the smaller than 300% sized datasets; further experiments showed that adjusting the number of partitions to 100 reduced the execution time of DiCFS-vp for the 100% EPSILON dataset from about 2 minutes to 1.4 minutes (faster than DiCFS-hp). Reducing the number of partitions further, however, caused the execution time to start increasing again.
Figure \[fig:execTimeVsNFeats\] shows the results for similar experiments, except that this time the percentage of features in the datasets was varied and the features were copied to obtain oversized versions of the datasets. It can be observed that the number of features had a greater impact on the memory requirements of DiCFS-vp. This caused problems not only in processing the ECBDL14 dataset but also the EPSILON dataset. We can also see quadratic time complexity in the number of features and how the temporal scale in the EPSILON dataset (with the highest number of dimensions) matches that of the ECBDL14 dataset. As for the KDDCUP99 dataset, the results show that increasing the number of features obtained a better level of parallelization and a slightly improved execution time of DiCFS-vp compared to DiCFS-hp for the 400% dataset version and above.
![Execution times with respect to different percentages of features in four datasets for DiCFS-hp and DiCFS-vp[]{data-label="fig:execTimeVsNFeats"}](fig04.eps){width="100.00000%"}
An important measure of the scalability of an algorithm is *speed-up*, which indicates how capable an algorithm is of leveraging a growing number of nodes to reduce execution times. We used the speed-up definition shown in Equation (\[eq:speedup\]) and used all the available cores for each node (i.e., 12). The experimental results are shown in Figure \[fig:speedup\], where it can be observed that, for all four datasets, DiCFS-hp scales better than DiCFS-vp. It can also be observed that the HIGGS and KDDCUP99 datasets are too small to take advantage of more than two nodes, with practically no speed-up gained from adding further nodes.
To summarize, our experiments show that even when vertical partitioning results in shorter execution times (the case in certain circumstances, e.g., when the dataset has an adequate number of features and instances for optimal parallelization according to the cluster resources), the benefits are not significant and may even be eclipsed by the effort invested in determining whether this approach is indeed the most efficient approach for a particular dataset or a particular hardware configuration or in fine-tuning the number of partitions. Horizontal partitioning should therefore be considered as the best option in the general case.
$$\label{eq:speedup}
speedup(m)=\frac{\text{execution time on 2 nodes}}{\text{execution time on $m$ nodes}}$$
![Speed-up for four datasets for DiCFS-hp and DiCFS-vp[]{data-label="fig:speedup"}](fig05.eps){width="100.00000%"}
We also compared the DiCFS-hp approach with that of Eiras-Franco et al. [@Eiras-Franco2016], who described a Spark-based distributed version of the CFS for regression problems. The comparison was based on their experiments with the HIGGS and EPSILON datasets but using our current hardware. Those datasets were selected as only having numerical features and so could naturally be treated as regression problems. Table \[tbl:speedUp\] shows execution time and speed-up values obtained for different sizes of both datasets for both distributed and non-distributed versions and considering them to be classification and regression problems. Regression-oriented versions for the Spark and WEKA versions are labelled RegCFS and RegWEKA, respectively, the number after the dataset name represents the sample size and the letter indicates whether the sample had removed or added instances (*i*) or removed or added features (*f*). In the case of oversized samples, the method used was the same as described above, i.e., features or instances were copied as necessary. The experiments were performed using ten cluster nodes for the distributed versions and a single node for the WEKA version. The resulting speed-up was calculated as the WEKA execution time divided by the corresponding Spark execution time.
The original experiments in [@Eiras-Franco2016] were performed using only EPSILON\_50i and HIGGS\_100i. It can be observed that a much better speed-up was obtained by the DiCFS-hp version for EPSILON\_50i, but that for HIGGS\_100i the speed-up of the classification version was lower than that of the regression version. To enable a fuller comparison, two more versions of each dataset were considered; Table \[tbl:speedUp\] shows that the DiCFS-hp version achieved better speed-up in all cases except the HIGGS\_100i dataset mentioned above.
  -------------- ---------- ------------- -------------- ------------ ----------------- -------------------
  Dataset        WEKA (s)   RegWEKA (s)   DiCFS-hp (s)   RegCFS (s)   RegCFS speed-up   DiCFS-hp speed-up
  EPSILON\_25i   1011.42    655.56        58.85          63.61        10.31             17.19
  EPSILON\_25f   393.91     703.95        25.83          55.08        12.78             15.25
  EPSILON\_50i   4103.35    2228.64       76.98          110.13       20.24             53.30
  HIGGS\_100i    182.86     327.61        21.34          23.70        13.82             8.57
  HIGGS\_200i    2079.58    475.98        28.89          26.77        17.78             71.99
  HIGGS\_200f    934.07     720.32        21.42          34.35        20.97             43.61
  -------------- ---------- ------------- -------------- ------------ ----------------- -------------------
: Execution time and speed-up values for different CFS versions for regression and classification[]{data-label="tbl:speedUp"}
Conclusions and Future Work {#sec:conclusions}
===========================
We describe two parallel and distributed versions of the CFS filter-based FS algorithm using the Apache Spark programming model: DiCFS-vp and DiCFS-hp. These two versions essentially differ in how the dataset is distributed across the nodes of the cluster. The first version distributes the data by splitting rows (instances) and the second version, following Ramírez-Gallego et al. [@Ramirez-Gallego2017], distributes the data by splitting columns (features). As the outcome of a four-way comparison of DiCFS-vp and DiCFS-hp, a non-distributed implementation in WEKA and a distributed regression version in Spark, we can conclude as follows:
- As was expected, both DiCFS-vp and DiCFS-hp were able to handle larger datasets in a much more time-efficient manner than the classical WEKA implementation. Moreover, in many cases they were the only feasible way to process certain types of datasets because of prohibitive WEKA memory requirements.
- Of the horizontal and vertical partitioning schemes, the horizontal version (DiCFS-hp) proved to be the better option in the general case due to its better scalability and its natural partitioning mode that enables the Spark framework to make better use of cluster resources.
- For classification problems, the benefits obtained from distribution compared to the non-distributed version can be considered equal to or even better than the benefits already demonstrated for the regression domain [@Eiras-Franco2016].
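The distinction between the two partitioning schemes can be sketched in a few lines of plain Python. This is a toy illustration of row-wise versus column-wise splitting (the names and the tiny dataset are ours), not the actual Spark implementation:

```python
# Hypothetical illustration: a dataset as a list of instances, each a list
# of feature values plus a class label.
dataset = [
    [1.0, 2.0, 3.0, 0],   # instance 0: features f0..f2 + class label
    [4.0, 5.0, 6.0, 1],
    [7.0, 8.0, 9.0, 0],
    [1.5, 2.5, 3.5, 1],
]

def horizontal_partitions(rows, n_parts):
    """Split instances across partitions, as DiCFS-hp distributes rows."""
    return [rows[i::n_parts] for i in range(n_parts)]

def vertical_partitions(rows, n_parts):
    """Split features across partitions, as DiCFS-vp distributes columns."""
    columns = list(zip(*rows))          # transpose: one tuple per feature
    return [columns[i::n_parts] for i in range(n_parts)]

h = horizontal_partitions(dataset, 2)   # each part holds whole instances
v = vertical_partitions(dataset, 2)     # each part holds whole feature columns
print(len(h[0]), len(v[0]))
```

Horizontal partitioning keeps every instance intact within one partition, which matches Spark's natural row-oriented data layout; vertical partitioning keeps every feature column intact instead, at the cost of a transpose step.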
Regarding future research, an especially interesting line of work is whether it is necessary for this kind of algorithm to process all the available data, or whether it would be possible to design automatic sampling procedures that could guarantee that, under certain circumstances, equivalent results can be obtained. In the case of the CFS, this question becomes more pertinent in view of Hall's study of symmetrical uncertainty in datasets of up to 20,000 samples [@Hall1999], where tests showed that symmetrical uncertainty decreased exponentially with the number of instances and then stabilized at a certain point. Another line of future work could be research into different data partitioning schemes that could, for instance, improve data locality while overcoming the disadvantages of vertical partitioning.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors thank CESGA for use of their supercomputing resources. This research has been partially supported by the Spanish Ministerio de Economía y Competitividad (research projects TIN 2015-65069-C2-1R, TIN2016-76956-C3-3-R), the Xunta de Galicia (Grants GRC2014/035 and ED431G/01) and the European Union Regional Development Funds. R. Palma-Mendoza holds a scholarship from the Spanish Fundación Carolina and the National Autonomous University of Honduras.
D. W. Aha, D. Kibler, M. K. Albert, [Instance-Based Learning Algorithms]{}, Machine Learning 6 (1) (1991) 37–66. [](http://dx.doi.org/10.1023/A:1022689900470).
J. Bacardit, P. Widera, A. M[á]{}rquez-chamorro, F. Divina, J. S. Aguilar-Ruiz, N. Krasnogor, [Contact map prediction using a large-scale ensemble of rule sets and the fusion of multiple predicted structural features]{}, Bioinformatics 28 (19) (2012) 2441–2448. [](http://dx.doi.org/10.1093/bioinformatics/bts472).
R. Bellman, [[Dynamic Programming]{}]{}, Rand Corporation research study, Princeton University Press, 1957.
V. Bol[ó]{}n-Canedo, N. S[á]{}nchez-Maro[ñ]{}o, A. Alonso-Betanzos, [[Distributed feature selection: An application to microarray data classification]{}]{}, Applied Soft Computing 30 (2015) 136–150. [](http://dx.doi.org/10.1016/j.asoc.2015.01.035).
V. Bol[ó]{}n-Canedo, N. S[á]{}nchez-Maro[ñ]{}o, A. Alonso-Betanzos, [Recent advances and emerging challenges of feature selection in the context of big data]{}, Knowledge-Based Systems 86 (2015) 33–45. [](http://dx.doi.org/10.1016/j.knosys.2015.05.014).
M. Dash, H. Liu, [[Consistency-based search in feature selection]{}]{}, Artificial Intelligence 151 (1-2) (2003) 155–176. [](http://dx.doi.org/10.1016/S0004-3702(03)00079-1). <http://linkinghub.elsevier.com/retrieve/pii/S0004370203000791>
J. Dean, S. Ghemawat, [MapReduce: Simplified Data Processing on Large Clusters]{}, Proceedings of 6th Symposium on Operating Systems Design and Implementation (2004) 137–149[](http://arxiv.org/abs/10.1.1.163.5292), [](http://dx.doi.org/10.1145/1327452.1327492).
J. Dean, S. Ghemawat, [[MapReduce: Simplified Data Processing on Large Clusters]{}]{}, Communications of the ACM 51 (1) (2008) 107. <http://dl.acm.org/citation.cfm?id=1327452.1327492>
R. O. Duda, P. E. Hart, D. G. Stork, [[Pattern Classification]{}]{}, John Wiley [&]{} Sons, 2001.
C. Eiras-Franco, V. Bol[ó]{}n-Canedo, S. Ramos, J. Gonz[á]{}lez-Dom[í]{}nguez, A. Alonso-Betanzos, J. Touri[ñ]{}o, [[Multithreaded and Spark parallelization of feature selection filters]{}]{}, Journal of Computational Science 17 (2016) 609–619. [](http://dx.doi.org/10.1016/j.jocs.2016.07.002).
U. M. Fayyad, K. B. Irani, [[Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning]{}]{} (1993). <http://trs-new.jpl.nasa.gov/dspace/handle/2014/35171>
D. J. Garcia, L. O. Hall, D. B. Goldgof, K. Kramer, [A Parallel Feature Selection Algorithm from Random Subsets]{} (2004).
E. E. Ghiselli, [[Theory of Psychological Measurement]{}]{}, McGraw-Hill series in psychology, McGraw-Hill, 1964. <https://books.google.es/books?id=mmh9AAAAMAAJ>
I. Guyon, A. Elisseeff, [An Introduction to Variable and Feature Selection]{}, Journal of Machine Learning Research (JMLR) 3 (3) (2003) 1157–1182. [](http://arxiv.org/abs/1111.6189v1), [](http://dx.doi.org/10.1016/j.aca.2011.07.027).
M. A. Hall, [Correlation-based feature selection for machine learning]{}, PhD Thesis., Department of Computer Science, Waikato University, New Zealand (1999). [](http://dx.doi.org/10.1.1.37.4643).
M. A. Hall, [[Correlation-based Feature Selection for Discrete and Numeric Class Machine Learning]{}]{} (2000) 359–366. <http://dl.acm.org/citation.cfm?id=645529.657793>
M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, I. Witten, [The WEKA data mining software: An update]{}, SIGKDD Explorations 11 (1) (2009) 10–18. [](http://dx.doi.org/10.1145/1656274.1656278).
T. K. Ho, [[Random Decision Forests]{}]{}, in: Proceedings of the Third International Conference on Document Analysis and Recognition (Volume 1) - Volume 1, ICDAR ’95, IEEE Computer Society, Washington, DC, USA, 1995, pp. 278—-. <http://dl.acm.org/citation.cfm?id=844379.844681>
A. Idris, A. Khan, Y. S. Lee, [Intelligent churn prediction in telecom: Employing mRMR feature selection and RotBoost based ensemble classification]{}, Applied Intelligence 39 (3) (2013) 659–672. [](http://dx.doi.org/10.1007/s10489-013-0440-x).
A. Idris, M. Rizwan, A. Khan, [Churn prediction in telecom using Random Forest and PSO based data balancing in combination with various feature selection strategies]{}, Computers and Electrical Engineering 38 (6) (2012) 1808–1819. [](http://dx.doi.org/10.1016/j.compeleceng.2012.09.001).
I. Kononenko, [[Estimating attributes: Analysis and extensions of RELIEF]{}]{}, Machine Learning: ECML-94 784 (1994) 171–182. [](http://dx.doi.org/10.1007/3-540-57868-4). <http://www.springerlink.com/index/10.1007/3-540-57868-4>
J. Kubica, S. Singh, D. Sorokina, [[Parallel Large-Scale Feature Selection]{}]{}, in: Scaling Up Machine Learning, no. February, 2011, pp. 352–370. [](http://dx.doi.org/10.1017/CBO9781139042918.018). <http://ebooks.cambridge.org/ref/id/CBO9781139042918A143>
J. Leskovec, A. Rajaraman, J. D. Ullman, [[Mining of Massive Datasets]{}]{}, 2014. [](http://dx.doi.org/10.1017/CBO9781139924801). <http://ebooks.cambridge.org/ref/id/CBO9781139924801>
M. Lichman, [[UCI Machine Learning Repository]{}](http://archive.ics.uci.edu/ml) (2013). <http://archive.ics.uci.edu/ml>
J. Ma, L. K. Saul, S. Savage, G. M. Voelker, [Identifying Suspicious URLs : An Application of Large-Scale Online Learning]{}, in: Proceedings of the International Conference on Machine Learning (ICML), Montreal, Quebec, 2009.
R. J. Palma-Mendoza, D. Rodriguez, L. De-Marcos, [[Distributed ReliefF-based feature selection in Spark]{}](http://link.springer.com/10.1007/s10115-017-1145-y), Knowledge and Information Systems (2018) 1–20[](http://dx.doi.org/10.1007/s10115-017-1145-y). <http://link.springer.com/10.1007/s10115-017-1145-y>
H. Peng, F. Long, C. Ding, [[Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy.]{}]{}, IEEE transactions on pattern analysis and machine intelligence 27 (8) (2005) 1226–38. [](http://dx.doi.org/10.1109/TPAMI.2005.159). <http://www.ncbi.nlm.nih.gov/pubmed/16119262>
D. Peralta, S. del R[í]{}o, S. Ram[í]{}rez-Gallego, I. Triguero, J. M. Benitez, F. Herrera, [[Evolutionary Feature Selection for Big Data Classification: A MapReduce Approach]{}]{}, Mathematical Problems in Engineering 2015 (JANUARY). [](http://dx.doi.org/10.1155/2015/246139).
W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, [Numerical recipes in C]{}, Vol. 2, Cambridge Univ Press, 1982.
J. R. Quinlan, [[Induction of Decision Trees]{}](http://dx.doi.org/10.1023/A:1022643204877), Mach. Learn. 1 (1) (1986) 81–106. [](http://dx.doi.org/10.1023/A:1022643204877). <http://dx.doi.org/10.1023/A:1022643204877>
J. R. Quinlan, [[C4.5: Programs for Machine Learning]{}](http://portal.acm.org/citation.cfm?id=152181), Vol. 1, 1992. [](http://dx.doi.org/10.1016/S0019-9958(62)90649-6). <http://portal.acm.org/citation.cfm?id=152181>
S. Ram[í]{}rez-Gallego, I. Lastra, D. Mart[í]{}nez-Rego, V. Bol[ó]{}n-Canedo, J. M. Ben[í]{}tez, F. Herrera, A. Alonso-Betanzos, [[Fast-mRMR: Fast Minimum Redundancy Maximum Relevance Algorithm for High-Dimensional Big Data]{}]{}, International Journal of Intelligent Systems 32 (2) (2017) 134–152. [](http://dx.doi.org/10.1002/int.21833). <http://doi.wiley.com/10.1002/int.21833>
I. Rish, [An empirical study of the naive Bayes classifier]{}, in: IJCAI 2001 workshop on empirical methods in artificial intelligence, Vol. 3, IBM, 2001, pp. 41–46.
P. Sadowski, P. Baldi, D. Whiteson, [Searching for Higgs Boson Decay Modes with Deep Learning]{}, Advances in Neural Information Processing Systems 27 (Proceedings of NIPS) (2014) 1–9.
J. Silva, A. Aguiar, F. Silva, [[Parallel Asynchronous Strategies for the Execution of Feature Selection Algorithms]{}]{}, International Journal of Parallel Programming (2017) 1–32[](http://dx.doi.org/10.1007/s10766-017-0493-2). <http://link.springer.com/10.1007/s10766-017-0493-2>
V. Vapnik, [The Nature of Statistical Learning Theory]{} (1995).
Y. Wang, W. Ke, X. Tao, [[A Feature Selection Method for Large-Scale Network Traffic Classification Based on Spark]{}]{}, Information 7 (1) (2016) 6. [](http://dx.doi.org/10.3390/info7010006). <http://www.mdpi.com/2078-2489/7/1/6>
[Xindong Wu]{}, [Xingquan Zhu]{}, [Gong-Qing Wu]{}, [Wei Ding]{}, [[Data mining with big data]{}](http://ieeexplore.ieee.org/document/6547630/), IEEE Transactions on Knowledge and Data Engineering 26 (1) (2014) 97–107. [](http://dx.doi.org/10.1109/TKDE.2013.109). <http://ieeexplore.ieee.org/document/6547630/>
M. Zaharia, M. Chowdhury, T. Das, A. Dave, [[Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing]{}]{}, NSDI’12 Proceedings of the 9th USENIX conference on Networked Systems Design and Implementation (2012) 2[](http://arxiv.org/abs/EECS-2011-82), [](http://dx.doi.org/10.1111/j.1095-8649.2005.00662.x).
M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, I. Stoica, [Spark : Cluster Computing with Working Sets]{}, HotCloud’10 Proceedings of the 2nd USENIX conference on Hot topics in cloud computing (2010) 10[](http://dx.doi.org/10.1007/s00256-009-0861-0).
Z. Zhao, H. Liu, [Searching for interacting features]{}, IJCAI International Joint Conference on Artificial Intelligence (2007) 1156–1161[](http://dx.doi.org/10.3233/IDA-2009-0364).
Z. Zhao, R. Zhang, J. Cox, D. Duling, W. Sarle, [[Massively parallel feature selection: an approach based on variance preservation]{}]{}, Machine Learning 92 (1) (2013) 195–220. [](http://dx.doi.org/10.1007/s10994-013-5373-4). <http://link.springer.com/10.1007/s10994-013-5373-4>
[^1]: <https://spark-packages.org>
[^2]: <https://github.com/rauljosepalma/DiCFS>
[^3]: <http://www.sas.com/en_us/software/high-performance-analytics.html>
[^4]: <http://bigdata.cesga.es/>
[^5]: <http://largescale.ml.tu-berlin.de/about/>
|
{
"pile_set_name": "arxiv"
}
|
package volumes
// Compile-time check that *LinuxResizeService implements the ResizeService
// interface; assigning a typed nil to the blank identifier costs nothing at
// runtime but fails to compile if the interface is not satisfied.
var _ ResizeService = (*LinuxResizeService)(nil)
|
{
"pile_set_name": "github"
}
|
Marie C. Couvent Elementary School
Marie C. Couvent Elementary was a historic elementary school in New Orleans, Louisiana, named for Marie Couvent, an African American former slave who married the successful African American businessman Bernard Couvent and deeded property for a school for orphans in her community (the Institute Catholique). The school was built in 1940 in Faubourg Marigny and was originally named Marigny Elementary School. It was renamed for Marie Couvent and renamed again in 1997 to A. P. Tureaud Elementary School because the Couvents had owned slaves.
History
In 1989 Sun Ra performed in front of the school. In the 1990s a campaign was launched to rename city public schools that venerated slaveholders. Because Couvent and her husband owned slaves, the school was renamed A. P. Tureaud Elementary School in 1997, for A. P. Tureaud. Manumission may have been illegal during the era in which they lived, so their options for freeing enslaved people may have been limited. The school closed in 2013.
References
Category:Public elementary schools in Louisiana
Category:Defunct elementary schools in New Orleans
Category:1940 establishments in Louisiana
|
{
"pile_set_name": "wikipedia_en"
}
|
# coding=utf-8
import typing
from pyramid.config import Configurator
import transaction
from tracim_backend.app_models.contents import FOLDER_TYPE
from tracim_backend.app_models.contents import content_type_list
from tracim_backend.config import CFG
from tracim_backend.exceptions import ContentFilenameAlreadyUsedInFolder
from tracim_backend.exceptions import EmptyLabelNotAllowed
from tracim_backend.extensions import hapic
from tracim_backend.lib.core.content import ContentApi
from tracim_backend.lib.utils.authorization import ContentTypeChecker
from tracim_backend.lib.utils.authorization import check_right
from tracim_backend.lib.utils.authorization import is_contributor
from tracim_backend.lib.utils.authorization import is_reader
from tracim_backend.lib.utils.request import TracimRequest
from tracim_backend.lib.utils.utils import generate_documentation_swagger_tag
from tracim_backend.models.context_models import ContentInContext
from tracim_backend.models.context_models import RevisionInContext
from tracim_backend.models.revision_protection import new_revision
from tracim_backend.views.controllers import Controller
from tracim_backend.views.core_api.schemas import FolderContentModifySchema
from tracim_backend.views.core_api.schemas import NoContentSchema
from tracim_backend.views.core_api.schemas import SetContentStatusSchema
from tracim_backend.views.core_api.schemas import TextBasedContentSchema
from tracim_backend.views.core_api.schemas import TextBasedRevisionSchema
from tracim_backend.views.core_api.schemas import WorkspaceAndContentIdPathSchema
from tracim_backend.views.swagger_generic_section import SWAGGER_TAG__CONTENT_ENDPOINTS
try: # Python 3.5+
from http import HTTPStatus
except ImportError:
from http import client as HTTPStatus
SWAGGER_TAG__CONTENT_FOLDER_SECTION = "Folders"
SWAGGER_TAG__CONTENT_FOLDER_ENDPOINTS = generate_documentation_swagger_tag(
    SWAGGER_TAG__CONTENT_ENDPOINTS, SWAGGER_TAG__CONTENT_FOLDER_SECTION
)

is_folder_content = ContentTypeChecker([FOLDER_TYPE])


class FolderController(Controller):
@hapic.with_api_doc(tags=[SWAGGER_TAG__CONTENT_FOLDER_ENDPOINTS])
@check_right(is_reader)
@check_right(is_folder_content)
@hapic.input_path(WorkspaceAndContentIdPathSchema())
@hapic.output_body(TextBasedContentSchema())
def get_folder(self, context, request: TracimRequest, hapic_data=None) -> ContentInContext:
"""
Get folder info
"""
app_config = request.registry.settings["CFG"] # type: CFG
api = ContentApi(
show_archived=True,
show_deleted=True,
current_user=request.current_user,
session=request.dbsession,
config=app_config,
)
content = api.get_one(hapic_data.path.content_id, content_type=content_type_list.Any_SLUG)
return api.get_content_in_context(content)
@hapic.with_api_doc(tags=[SWAGGER_TAG__CONTENT_FOLDER_ENDPOINTS])
@hapic.handle_exception(EmptyLabelNotAllowed, HTTPStatus.BAD_REQUEST)
@hapic.handle_exception(ContentFilenameAlreadyUsedInFolder, HTTPStatus.BAD_REQUEST)
@check_right(is_contributor)
@check_right(is_folder_content)
@hapic.input_path(WorkspaceAndContentIdPathSchema())
@hapic.input_body(FolderContentModifySchema())
@hapic.output_body(TextBasedContentSchema())
def update_folder(self, context, request: TracimRequest, hapic_data=None) -> ContentInContext:
"""
update folder
"""
app_config = request.registry.settings["CFG"] # type: CFG
api = ContentApi(
show_archived=True,
show_deleted=True,
current_user=request.current_user,
session=request.dbsession,
config=app_config,
)
content = api.get_one(hapic_data.path.content_id, content_type=content_type_list.Any_SLUG)
with new_revision(session=request.dbsession, tm=transaction.manager, content=content):
api.update_container_content(
item=content,
new_label=hapic_data.body.label,
new_content=hapic_data.body.raw_content,
allowed_content_type_slug_list=hapic_data.body.sub_content_types,
)
api.save(content)
api.execute_update_content_actions(content)
return api.get_content_in_context(content)
@hapic.with_api_doc(tags=[SWAGGER_TAG__CONTENT_FOLDER_ENDPOINTS])
@check_right(is_reader)
@check_right(is_folder_content)
@hapic.input_path(WorkspaceAndContentIdPathSchema())
@hapic.output_body(TextBasedRevisionSchema(many=True))
def get_folder_revisions(
self, context, request: TracimRequest, hapic_data=None
) -> typing.List[RevisionInContext]:
"""
get folder revisions
"""
app_config = request.registry.settings["CFG"] # type: CFG
api = ContentApi(
show_archived=True,
show_deleted=True,
current_user=request.current_user,
session=request.dbsession,
config=app_config,
)
content = api.get_one(hapic_data.path.content_id, content_type=content_type_list.Any_SLUG)
revisions = content.revisions
return [api.get_revision_in_context(revision) for revision in revisions]
@hapic.with_api_doc(tags=[SWAGGER_TAG__CONTENT_FOLDER_ENDPOINTS])
@check_right(is_contributor)
@check_right(is_folder_content)
@hapic.input_path(WorkspaceAndContentIdPathSchema())
@hapic.input_body(SetContentStatusSchema())
@hapic.output_body(NoContentSchema(), default_http_code=HTTPStatus.NO_CONTENT)
def set_folder_status(self, context, request: TracimRequest, hapic_data=None) -> None:
"""
set folder status
"""
app_config = request.registry.settings["CFG"] # type: CFG
api = ContentApi(
show_archived=True,
show_deleted=True,
current_user=request.current_user,
session=request.dbsession,
config=app_config,
)
content = api.get_one(hapic_data.path.content_id, content_type=content_type_list.Any_SLUG)
with new_revision(session=request.dbsession, tm=transaction.manager, content=content):
api.set_status(content, hapic_data.body.status)
api.save(content)
api.execute_update_content_actions(content)
return
def bind(self, configurator: Configurator) -> None:
# Get folder
configurator.add_route(
"folder", "/workspaces/{workspace_id}/folders/{content_id}", request_method="GET"
)
configurator.add_view(self.get_folder, route_name="folder")
# update folder
configurator.add_route(
"update_folder", "/workspaces/{workspace_id}/folders/{content_id}", request_method="PUT"
)
configurator.add_view(self.update_folder, route_name="update_folder")
# get folder revisions
configurator.add_route(
"folder_revisions",
"/workspaces/{workspace_id}/folders/{content_id}/revisions",
request_method="GET",
)
configurator.add_view(self.get_folder_revisions, route_name="folder_revisions")
        # set folder status
configurator.add_route(
"set_folder_status",
"/workspaces/{workspace_id}/folders/{content_id}/status",
request_method="PUT",
)
configurator.add_view(self.set_folder_status, route_name="set_folder_status")
|
{
"pile_set_name": "github"
}
|
Size:
- 0.1
- 0.1
- 0.1
Color:
- 0.66
- 0.70220774
- 0.94
- 1
Body: Animated
Pose:
- - -0.41426134
- 0.9058533
- -8.841649e-2
- 1.6415431
- - 0.6057532
- 0.34691048
- 0.71604204
- 4.429285
- - 0.6793016
- 0.24306992
- -0.69243515
- 8.778018
- - 0.0
- 0.0
- 0.0
- 1
Shape: Cube
|
{
"pile_set_name": "github"
}
|
Music from McLeod's Daughters
McLeod's Daughters has had many different songs for its closing credits, written by Posie Graeme-Evans and Chris Harriot and performed by singer Rebecca Lavelle, who also had a guest role in series 6 as Bindi Martin.
Song List
Other
Hey You by Abi Tucker, who played Grace McLeod from 2007 to 2008; the song featured in Episode 196, My Enemy, My Friend.
List of Released Songs
Rebecca Lavelle
Understand Me
Common Ground
Never Enough
Don't Judge
Love You, Hate You
Heat
Am I Crazy?
We Got It Wrong
The Siren's Song
Hopeless Case
Just A Child
My Heart Is Like A River
Theme Song - Version 1
Hey Girl (You Got A New Life)
Take The Rain Away
The Stranger
Sometimes
Too Young
The First Touch
In His Eyes
By My Side
Did I Tell You?
Don't Give Up
Gentle Gentle (Life of Your Life)
Theme Song - Version 2
You Believed
Had To Happen
It Comes To This
Charlotte's Song
One True Thing
I Wish The Past Was Different
Locked Away Inside My Heart
Our Home, Our Place
Strip Jack Naked
Broken Dreams
This Perfect Day
Trust The Night
The Man I Loved (We Had No Time)
Time Turn Over
Drover's Run (My Heart's Home)
Abi Tucker
Hey You
Speak My Angel
List of Unreleased Songs
Feet on The Ground by Rebecca Lavelle
Room To Move by Rebecca Lavelle
A Matter of Time by Rebecca Lavelle
All I Ever Wanted was Love by Rebecca Lavelle
Alone & Afraid by Rebecca Lavelle
Belonging by Rebecca Lavelle
I Reach Out by Naomi Starr
Life Makes A Fool of Us by Rebecca Lavelle
Love is Endless by Rebecca Lavelle
Something So Strong by Rebecca Lavelle
Sorrow by Rebecca Lavelle
Stay by Rebecca Lavelle
Tears on My Pillow by Rebecca Lavelle & Glenda Linscott
Kate's Lullaby by Michala Banas
Wake Up Gungellan by Doris Younane (Abi Tucker & Gillian Alexy Short Clip)
Truckstop Woman by Doris Younane, Simmone Jade Mackinnon, Luke Jacobz, Gillian Alexy & Chorus
Forever by Doris Younane, Peter Hardy, Abi Tucker & Matt Passmore
References
External links
McLeod's Daughters Official Website
Dutch McLeod's Daughters Website
Category:McLeod's Daughters
|
{
"pile_set_name": "wikipedia_en"
}
|
SET UTF-8
LANG tr
|
{
"pile_set_name": "github"
}
|
<?xml version="1.0" ?>
<component id="root" name="root">
<component id="system" name="system">
<!--McPAT will skip the components if number is set to 0 -->
<param name="number_of_cores" value="64"/>
<param name="number_of_L1Directories" value="0"/>
<param name="number_of_L2Directories" value="0"/>
<param name="number_of_L2s" value="64"/> <!-- This number means how many L2 clusters; within each cluster there can be multiple banks/ports -->
<param name="number_of_L3s" value="0"/> <!-- This number means how many L3 clusters -->
<param name="number_of_NoCs" value="1"/>
<param name="homogeneous_cores" value="1"/><!--1 means homo -->
<param name="homogeneous_L2s" value="1"/>
<param name="homogeneous_L1Directorys" value="1"/>
<param name="homogeneous_L2Directorys" value="1"/>
<param name="homogeneous_L3s" value="1"/>
<param name="homogeneous_ccs" value="1"/><!--cache coherence hardware -->
<param name="homogeneous_NoCs" value="1"/>
<param name="core_tech_node" value="22"/><!-- nm -->
<param name="target_core_clockrate" value="3500"/><!--MHz -->
<param name="temperature" value="360"/> <!-- Kelvin -->
<param name="number_cache_levels" value="2"/>
<param name="interconnect_projection_type" value="0"/><!--0: aggressive wire technology; 1: conservative wire technology -->
<param name="device_type" value="0"/><!--0: HP (high performance); 1: LSTP (low standby power); 2: LOP (low operating power) -->
<param name="longer_channel_device" value="1"/><!-- 0 no use; 1 use when possible -->
<param name="machine_bits" value="64"/>
<param name="virtual_address_width" value="64"/>
<param name="physical_address_width" value="52"/>
<param name="virtual_memory_page_size" value="4096"/>
<stat name="total_cycles" value="100000"/>
<stat name="idle_cycles" value="0"/>
<stat name="busy_cycles" value="100000"/>
<!--This page size (B) is completely different from the page size in the main memory section: this page size is the size of a
virtual memory page from the OS/architecture perspective; the page size in the main memory section is the actual physical line in a DRAM bank -->
<!-- *********************** cores ******************* -->
<component id="system.core0" name="core0">
<!-- Core property -->
<param name="clock_rate" value="3500"/>
<param name="instruction_length" value="32"/>
<param name="opcode_width" value="9"/>
<!-- address width determines the tag_width in the cache, LSQ and buffers in the cache controller;
default value is machine_bits, if not set -->
<param name="machine_type" value="1"/><!-- 1 inorder; 0 OOO-->
<!-- inorder/OoO -->
<param name="number_hardware_threads" value="4"/>
<!-- number_instruction_fetch_ports (icache ports) is always 1 in a single-thread processor;
it may only be more than one in SMT processors. BTB ports always equal fetch ports, since
branch information for consecutive branch instructions in the same fetch group can be read out from the BTB at once.-->
<param name="fetch_width" value="1"/>
<!-- fetch_width determines the size of the cache lines of the L1 cache block -->
<param name="number_instruction_fetch_ports" value="1"/>
<param name="decode_width" value="1"/>
<!-- decode_width determines the number of ports of the
renaming table (both RAM and CAM schemes) -->
<param name="issue_width" value="1"/>
<!-- issue_width determines the number of ports of the issue window and other logic,
as in the complexity-effective processors paper; issue_width == dispatch_width -->
<param name="commit_width" value="1"/>
<!-- commit_width determines the number of ports of the register files -->
<param name="fp_issue_width" value="1"/>
<param name="prediction_width" value="0"/>
<!-- number of branch instructions that can be predicted simultaneously-->
<!-- The current version of McPAT does not distinguish between integer and floating-point pipelines.
These parameters are reserved for future use.-->
<param name="pipelines_per_core" value="1,1"/>
<!--integer_pipeline and floating_pipelines, if the floating_pipelines is 0, then the pipeline is shared-->
<param name="pipeline_depth" value="6,6"/>
<!-- pipeline depth of int and fp, if pipeline is shared, the second number is the average cycles of fp ops -->
<!-- issue and exe unit-->
<param name="ALU_per_core" value="1"/>
<!-- contains an adder, a shifter, and a logical unit -->
<param name="MUL_per_core" value="1"/>
<!-- For MUL and Div -->
<param name="FPU_per_core" value="0.125"/>
<!-- buffer between IF and ID stage -->
<param name="instruction_buffer_size" value="16"/>
<!-- buffer between ID and sche/exe stage -->
<param name="decoded_stream_buffer_size" value="16"/>
<param name="instruction_window_scheme" value="0"/><!-- 0 PHYREG based, 1 RSBASED-->
<!-- McPAT supports 2 types of OoO cores, RS-based and physical-register-based-->
<param name="instruction_window_size" value="16"/>
<param name="fp_instruction_window_size" value="16"/>
<!-- the instruction issue Q as in Alpha 21264; The RS as in Intel P6 -->
<param name="ROB_size" value="80"/>
<!-- each in-flight instruction has an entry in ROB -->
<!-- registers -->
<param name="archi_Regs_IRF_size" value="32"/>
<param name="archi_Regs_FRF_size" value="32"/>
<!-- in an OoO processor, the physical register number is needed for the renaming logic;
the renaming logic covers both integer and floating point instructions. -->
<param name="phy_Regs_IRF_size" value="80"/>
<param name="phy_Regs_FRF_size" value="80"/>
<!-- rename logic -->
<param name="rename_scheme" value="0"/>
<!-- can be a RAM-based (0) or CAM-based (1) rename scheme.
The RAM-based scheme has a free list and a status table;
the CAM-based scheme has the valid bit in the data field of the CAM.
Both RAM and CAM need a RAM-based checkpoint table, with checkpoint_depth = # of in-flight instructions;
for the detailed RAT implementation, see the TR -->
<param name="register_windows_size" value="8"/>
<!-- how many windows in the windowed register file (Sun processors);
no register windowing is used when this number is 0 -->
<!-- In OoO cores, loads and stores can be issued either in order (Pentium Pro) or out of order (Alpha).
They will always try to execute out of order, though. -->
<param name="LSU_order" value="inorder"/>
<param name="store_buffer_size" value="32"/>
<!-- By default, in-order cores do not have load buffers -->
<param name="load_buffer_size" value="32"/>
<!-- number of ports refer to sustainable concurrent memory accesses -->
<param name="memory_ports" value="1"/>
<!-- max_allowed_in_flight_memo_instructions determines the # of ports of the load and store buffers,
as well as the ports of the Dcache, which is connected to the LSU -->
<!-- dual-pumped Dcache can be used to save the extra read/write ports -->
<param name="RAS_size" value="32"/>
<!-- general stats; defines simulation periods; requires total, idle, and busy cycles for a sanity check -->
<!-- please note: if the target architecture is X86, then all the instructions refer to (fused) micro-ops -->
<stat name="total_instructions" value="800000"/>
<stat name="int_instructions" value="600000"/>
<stat name="fp_instructions" value="20000"/>
<stat name="branch_instructions" value="0"/>
<stat name="branch_mispredictions" value="0"/>
<stat name="load_instructions" value="100000"/>
<stat name="store_instructions" value="100000"/>
<stat name="committed_instructions" value="800000"/>
<stat name="committed_int_instructions" value="600000"/>
<stat name="committed_fp_instructions" value="20000"/>
<stat name="pipeline_duty_cycle" value="0.6"/><!--<=1, runtime_ipc/peak_ipc; averaged for all cores if homogeneous -->
<!-- the following cycle stats are used for heterogeneous cores only;
please ignore them if the cores are homogeneous -->
<stat name="total_cycles" value="100000"/>
<stat name="idle_cycles" value="0"/>
<stat name="busy_cycles" value="100000"/>
<!-- instruction buffer stats -->
<!-- ROB stats; both RS-based and physical-register-based OoO cores have a ROB.
The performance simulator should capture the difference in accesses;
otherwise, McPAT has to guess based on the number of committed instructions. -->
<stat name="ROB_reads" value="263886"/>
<stat name="ROB_writes" value="263886"/>
<!-- RAT accesses -->
<stat name="rename_accesses" value="263886"/>
<stat name="fp_rename_accesses" value="263886"/>
<!-- decode and rename stage use this, should be total ic - nop -->
<!-- Inst window stats -->
<stat name="inst_window_reads" value="263886"/>
<stat name="inst_window_writes" value="263886"/>
<stat name="inst_window_wakeup_accesses" value="263886"/>
<stat name="fp_inst_window_reads" value="263886"/>
<stat name="fp_inst_window_writes" value="263886"/>
<stat name="fp_inst_window_wakeup_accesses" value="263886"/>
<!-- RF accesses -->
<stat name="int_regfile_reads" value="1600000"/>
<stat name="float_regfile_reads" value="40000"/>
<stat name="int_regfile_writes" value="800000"/>
<stat name="float_regfile_writes" value="20000"/>
<!-- accesses to the working reg -->
<stat name="function_calls" value="5"/>
<stat name="context_switches" value="260343"/>
<!-- Number of window switches (number of function calls and returns)-->
<!-- ALU stats. By default, the processor has one FPU that includes the divider and
multiplier. The fpu accesses should include accesses to the multiplier and divider -->
<stat name="ialu_accesses" value="800000"/>
<stat name="fpu_accesses" value="10000"/>
<stat name="mul_accesses" value="100000"/>
<stat name="cdb_alu_accesses" value="1000000"/>
<stat name="cdb_mul_accesses" value="0"/>
<stat name="cdb_fpu_accesses" value="0"/>
<!-- multiple-cycle accesses should be counted multiple times;
otherwise, McPAT can use internal counters for different floating point instructions
to get the final accesses. But that needs detailed info on the floating point instruction mix -->
<!-- currently the performance simulator should
make sure all the numbers are final numbers,
including the explicit read/write accesses
and the implicit accesses such as replacements, etc.
Future versions of McPAT may be able to reason about the implicit accesses
based on the params and stats of the last-level cache.
The same rule applies to all cache access stats too! -->
<!-- following is AF for max power computation.
Do not change them, unless you understand them-->
<stat name="IFU_duty_cycle" value="0.25"/>
<stat name="LSU_duty_cycle" value="0.25"/>
<stat name="MemManU_I_duty_cycle" value="1"/>
<stat name="MemManU_D_duty_cycle" value="0.25"/>
<stat name="ALU_duty_cycle" value="0.9"/>
<stat name="MUL_duty_cycle" value="0.5"/>
<stat name="FPU_duty_cycle" value="0.4"/>
<stat name="ALU_cdb_duty_cycle" value="0.9"/>
<stat name="MUL_cdb_duty_cycle" value="0.5"/>
<stat name="FPU_cdb_duty_cycle" value="0.4"/>
<component id="system.core0.predictor" name="PBT">
<!-- branch predictor; tournament predictor see Alpha implementation -->
<param name="local_predictor_size" value="10,3"/>
<param name="local_predictor_entries" value="1024"/>
<param name="global_predictor_entries" value="4096"/>
<param name="global_predictor_bits" value="2"/>
<param name="chooser_predictor_entries" value="4096"/>
<param name="chooser_predictor_bits" value="2"/>
<!-- These parameters can be combined like below in next version
<param name="load_predictor" value="10,3,1024"/>
<param name="global_predictor" value="4096,2"/>
<param name="predictor_chooser" value="4096,2"/>
-->
</component>
<component id="system.core0.itlb" name="itlb">
<param name="number_entries" value="64"/>
<stat name="total_accesses" value="800000"/>
<stat name="total_misses" value="4"/>
<stat name="conflicts" value="0"/>
<!-- there are no write requests to the itlb, although writes happen to it after a miss,
which is actually a replacement -->
</component>
<component id="system.core0.icache" name="icache">
<!-- there are no write requests to the icache, although writes happen to it after a miss,
which is actually a replacement -->
<param name="icache_config" value="16384,32,4,1,1,3,8,0"/>
<!-- the parameters are capacity,block_width, associativity, bank, throughput w.r.t. core clock, latency w.r.t. core clock,output_width, cache policy -->
<!-- cache_policy;//0 no write or write-though with non-write allocate;1 write-back with write-allocate -->
<param name="buffer_sizes" value="16, 16, 16,0"/>
<!-- cache controller buffer sizes: miss_buffer_size(MSHR),fill_buffer_size,prefetch_buffer_size,wb_buffer_size-->
<stat name="read_accesses" value="200000"/>
<stat name="read_misses" value="0"/>
<stat name="conflicts" value="0"/>
</component>
<component id="system.core0.dtlb" name="dtlb">
<param name="number_entries" value="64"/>
<stat name="total_accesses" value="200000"/>
<stat name="total_misses" value="4"/>
<stat name="conflicts" value="0"/>
</component>
<component id="system.core0.dcache" name="dcache">
<!-- all the buffer related are optional -->
<param name="dcache_config" value="8192,16,4,1,1,3,16,0"/>
<param name="buffer_sizes" value="16, 16, 16, 16"/>
<!-- cache controller buffer sizes: miss_buffer_size(MSHR),fill_buffer_size,prefetch_buffer_size,wb_buffer_size-->
<stat name="read_accesses" value="200000"/>
<stat name="write_accesses" value="27276"/>
<stat name="read_misses" value="1632"/>
<stat name="write_misses" value="183"/>
<stat name="conflicts" value="0"/>
</component>
<component id="system.core0.BTB" name="BTB">
<!-- all the buffer related are optional -->
<param name="BTB_config" value="8192,4,2,1, 1,3"/>
<!-- the parameters are capacity,block_width,associativity,bank, throughput w.r.t. core clock, latency w.r.t. core clock,-->
</component>
</component>
<component id="system.L1Directory0" name="L1Directory0">
<param name="Directory_type" value="0"/>
<!--0 cam based shadowed tag. 1 directory cache -->
<param name="Dir_config" value="2048,1,0,1, 4, 4,8"/>
<!-- the parameters are capacity,block_width, associativity,bank, throughput w.r.t. core clock, latency w.r.t. core clock,-->
<param name="buffer_sizes" value="8, 8, 8, 8"/>
<!-- all the buffer related are optional -->
<param name="clockrate" value="3500"/>
<param name="ports" value="1,1,1"/>
<!-- number of r, w, and rw search ports -->
<param name="device_type" value="0"/>
<!-- although there are multiple access types,
the performance simulator needs to cast them into reads or writes,
e.g. the invalidates can be considered as writes -->
<stat name="read_accesses" value="800000"/>
<stat name="write_accesses" value="27276"/>
<stat name="read_misses" value="1632"/>
<stat name="write_misses" value="183"/>
<stat name="conflicts" value="20"/>
<stat name="duty_cycle" value="0.45"/>
</component>
<component id="system.L2Directory0" name="L2Directory0">
<param name="Directory_type" value="1"/>
<!--0 cam based shadowed tag. 1 directory cache -->
<param name="Dir_config" value="1048576,16,16,1,2, 100"/>
<!-- the parameters are capacity,block_width, associativity,bank, throughput w.r.t. core clock, latency w.r.t. core clock,-->
<param name="buffer_sizes" value="8, 8, 8, 8"/>
<!-- all the buffer related are optional -->
<param name="clockrate" value="3500"/>
<param name="ports" value="1,1,1"/>
<!-- number of r, w, and rw search ports -->
<param name="device_type" value="0"/>
<!-- although there are multiple access types,
the performance simulator needs to cast them into reads or writes,
e.g. the invalidates can be considered as writes -->
<stat name="read_accesses" value="58824"/>
<stat name="write_accesses" value="27276"/>
<stat name="read_misses" value="1632"/>
<stat name="write_misses" value="183"/>
<stat name="conflicts" value="100"/>
<stat name="duty_cycle" value="0.45"/>
</component>
<component id="system.L20" name="L20">
<!-- all the buffer related are optional -->
<param name="L2_config" value="1048576,64,16,1, 4,23, 64, 1"/>
<!-- consider 4-way bank interleaving for Niagara 1 -->
<!-- the parameters are capacity,block_width, associativity, bank, throughput w.r.t. core clock, latency w.r.t. core clock,output_width, cache policy -->
<param name="buffer_sizes" value="16, 16, 16, 16"/>
<!-- cache controller buffer sizes: miss_buffer_size(MSHR),fill_buffer_size,prefetch_buffer_size,wb_buffer_size-->
<param name="clockrate" value="3500"/>
<param name="ports" value="1,1,1"/>
<!-- number of r, w, and rw ports -->
<param name="device_type" value="0"/>
<stat name="read_accesses" value="200000"/>
<stat name="write_accesses" value="0"/>
<stat name="read_misses" value="0"/>
<stat name="write_misses" value="0"/>
<stat name="conflicts" value="0"/>
<stat name="duty_cycle" value="0.5"/>
</component>
<!--**********************************************************************-->
<component id="system.L30" name="L30">
<param name="L3_config" value="1048576,64,16,1, 2,100, 64,1"/>
<!-- the parameters are capacity,block_width, associativity, bank, throughput w.r.t. core clock, latency w.r.t. core clock,output_width, cache policy -->
<param name="clockrate" value="3500"/>
<param name="ports" value="1,1,1"/>
<!-- number of r, w, and rw ports -->
<param name="device_type" value="0"/>
<param name="buffer_sizes" value="16, 16, 16, 16"/>
<!-- cache controller buffer sizes: miss_buffer_size(MSHR),fill_buffer_size,prefetch_buffer_size,wb_buffer_size-->
<stat name="read_accesses" value="58824"/>
<stat name="write_accesses" value="27276"/>
<stat name="read_misses" value="1632"/>
<stat name="write_misses" value="183"/>
<stat name="conflicts" value="0"/>
<stat name="duty_cycle" value="0.35"/>
</component>
<!--**********************************************************************-->
<component id="system.NoC0" name="noc0">
<param name="clockrate" value="3500"/>
<param name="type" value="1"/>
<!-- 1 NoC, 0 bus -->
<param name="horizontal_nodes" value="8"/>
<param name="vertical_nodes" value="8"/>
<param name="has_global_link" value="1"/>
<!-- 1 has global link, 0 does not have global link -->
<param name="link_throughput" value="1"/><!--w.r.t clock -->
<param name="link_latency" value="1"/><!--w.r.t clock -->
<!-- throughput >= latency -->
<!-- Router architecture -->
<param name="input_ports" value="5"/>
<param name="output_ports" value="5"/>
<param name="virtual_channel_per_port" value="1"/>
<!-- input buffer; in classic routers only input ports need buffers -->
<param name="flit_bits" value="256"/>
<param name="input_buffer_entries_per_vc" value="4"/><!--VCs within the same port share input buffers whose size is proportional to the number of VCs-->
<param name="chip_coverage" value="1"/>
<!-- When multiple NOC present, one NOC will cover part of the whole chip. chip_coverage <=1 -->
<stat name="total_accesses" value="360000"/>
<!-- This is the number of total accesses within the whole network not for each router -->
<stat name="duty_cycle" value="0.1"/>
</component>
<!--**********************************************************************-->
<component id="system.mem" name="mem">
<!-- Main memory property -->
<param name="mem_tech_node" value="32"/>
<param name="device_clock" value="200"/><!--MHz, this is clock rate of the actual memory device, not the FSB -->
<param name="peak_transfer_rate" value="3200"/><!--MB/S-->
<param name="internal_prefetch_of_DRAM_chip" value="4"/>
<!-- 2 for DDR, 4 for DDR2, 8 for DDR3...-->
<!-- the device clock, peak_transfer_rate, and the internal prefetch decide the DIMM property -->
<!-- above numbers can be easily found from Wikipedia -->
<param name="capacity_per_channel" value="4096"/> <!-- MB -->
<!-- capacity_per_Dram_chip=capacity_per_channel/number_of_dimms/number_ranks/Dram_chips_per_rank
Current McPAT assumes single DIMMs are used.-->
<param name="number_ranks" value="2"/>
<param name="num_banks_of_DRAM_chip" value="8"/>
<param name="Block_width_of_DRAM_chip" value="64"/> <!-- B -->
<param name="output_width_of_DRAM_chip" value="8"/>
<!--number of Dram_chips_per_rank=" 72/output_width_of_DRAM_chip-->
<param name="page_size_of_DRAM_chip" value="8"/> <!-- 8 or 16 -->
<param name="burstlength_of_DRAM_chip" value="8"/>
<stat name="memory_accesses" value="1052"/>
<stat name="memory_reads" value="1052"/>
<stat name="memory_writes" value="1052"/>
</component>
<component id="system.mc" name="mc">
<!-- Memory controllers are for DDR(2,3...) DIMMs -->
<!-- the current version of McPAT uses published values for the base parameters of the memory controller;
improvements on the MC will be added in later versions. -->
<param name="mc_clock" value="200"/><!--DIMM IO bus clock rate MHz DDR2-400 for Niagara 1-->
<param name="peak_transfer_rate" value="3200"/><!--MB/S-->
<param name="llc_line_length" value="64"/><!--B-->
<param name="number_mcs" value="4"/>
<!-- current McPAT only supports homogeneous memory controllers -->
<param name="memory_channels_per_mc" value="1"/>
<param name="number_ranks" value="2"/>
<!-- # of ranks of each channel-->
<param name="req_window_size_per_channel" value="32"/>
<param name="IO_buffer_size_per_channel" value="32"/>
<param name="databus_width" value="128"/>
<param name="addressbus_width" value="51"/>
<!-- McPAT will add the control bus width to the addressbus width automatically -->
<stat name="memory_accesses" value="33333"/>
<stat name="memory_reads" value="16667"/>
<stat name="memory_writes" value="16667"/>
<!-- McPAT does not track individual MCs; instead, it takes the total accesses and calculates
the average power per MC or per channel. This is sufficient for most applications.
Further breakdown can easily be added in later versions. -->
</component>
<!--**********************************************************************-->
</component>
</component>
|
{
"pile_set_name": "github"
}
|
**MARKET DEPTH AND PRICE DYNAMICS: A NOTE**
FRANK H. WESTERHOFF
*University of Osnabrueck, Department of Economics*
*Rolandstrasse 8, D-49069 Osnabrueck, Germany*
*e-mail: fwesterho@oec.uni-osnabrueck.de*
Abstract: This note explores the consequences of nonlinear price impact functions on price dynamics within the chartist-fundamentalist framework. Price impact functions may be nonlinear with respect to trading volume. As indicated by recent empirical studies, a given transaction may cause a large (small) price change if market depth is low (high). Simulations reveal that such a relationship may create endogenous complex price fluctuations even if the trading behavior of chartists and fundamentalists is linear.
Keywords: Econophysics; Market Depth; Price Dynamics; Nonlinearities; Technical and Fundamental Analysis.
Introduction
============
Interactions between heterogeneous agents, so-called chartists and fundamentalists, may generate endogenous price dynamics either due to nonlinear trading rules or due to a switching between simple linear trading rules.$^{1,2}$ Overall, multi-agent models appear to be quite successful in replicating financial market dynamics.$^{3,4}$ In addition, this research direction has important applications. On the one hand, understanding the working of financial markets may help to design better investment strategies.$^{5}$ On the other hand, it may facilitate the regulation of disorderly markets. For instance, Ehrenstein shows that the imposition of a low transaction tax may stabilize asset price fluctuations.$^{6}$
Within these models, the orders of the traders typically drive the price via a log linear price impact function: Buying orders shift the price proportionally up and selling orders shift the price proportionally down. Recent empirical evidence suggests, however, that the relationship between orders and price adjustment may be nonlinear. Moreover, as reported by Farmer et al., large price fluctuations occur when market depth is low.$^{3,7}$ Following this observation, our goal is to illustrate a novel mechanism for endogenous price dynamics.
We investigate – within an otherwise linear chartist-fundamentalist setup – a price impact function which depends nonlinearly on market depth. To be precise, a given transaction yields a larger price change when market depth is low than when it is high. Simulations indicate that such a relationship may lead to complex price movements. The dynamics may be sketched as follows. The market switches back and forth between two regimes. When liquidity is high, the market is relatively stable. But small price fluctuations generate only weak trading signals, and thus the transactions of speculators decline. As liquidity decreases, the price responsiveness of a trade increases. The market becomes unstable and price fluctuations increase again.
The remainder of this note is organized as follows. Section 2 sketches the empirical evidence on price impact functions. In section 3, we present our model, and in section 4, we discuss the main results. The final section concludes.
Empirical Evidence
==================
Financial prices are obviously driven by the orders of heterogeneous agents. However, it is not clear what the true functional form of price impact is. For instance, Farmer proposes a log linear price impact function for theoretical analysis while Zhang develops a model with nonlinear price impact.$^{8,9}$ His approach is backed up by empirical research that documents a concave price impact function. According to Hasbrouck, the larger the order size, the smaller the price impact per trade unit.$^{10}$ Also Kempf and Korn, using data on DAX futures, and Plerou et al., using data on the 116 most frequently traded US stocks, find that the price impact function displays a concave curvature, rising with order size but flattening out at larger values.$^{11,12}$ Weber and Rosenow fitted a concave function in the form of a power law and obtained an impressive correlation coefficient of 0.977.$^{13}$ For a further theoretical and empirical debate on the possible shape of the price impact function with respect to the order size see Gabaix et al., Farmer and Lillo, and Plerou et al.$^{14-16}$
But these results are currently challenged by an empirical study which is crucial for this note. Farmer et al. present evidence that price fluctuations caused by individual market orders are essentially independent of the volume of the orders.$^{7}$ Instead, large price fluctuations are driven by fluctuations in liquidity, i.e. variations in the market’s ability to absorb new orders. The reason is that even for the most liquid stocks there can be substantial gaps in the order book. When such a gap exists next to the best price – due to low liquidity – even a small new order can remove the best quote and trigger a large price change. These results are supported by Chordia, Roll and Subrahmanyam, who also document that there is considerable time variation in market-wide liquidity, and by Lillo, Farmer and Mantegna, who find that higher capitalization stocks tend to have smaller price responses for the same normalized transaction size.$^{17,18}$
Note that the relation between liquidity and price impact is of direct importance to investors developing trading strategies and to regulators attempting to stabilize financial markets. Farmer et al. argue, for instance, that agents who are trying to transact large amounts should split their orders and execute them a little at a time, watching the order book, and taking whatever liquidity is available as it enters.$^{7}$ Hence, when there is a lot of volume in the market, they should submit large orders. Assuming a concave price impact function would obviously lead to quite different investment decisions. Ehrenstein, Westerhoff and Stauffer demonstrate, for instance, that the success of a Tobin tax depends on its impact on market depth.$^{19}$ Depending on the degree of the nonlinearity of the price impact function, a transaction tax may stabilize or destabilize the markets.
The Model
=========
Following Simon, agents are boundedly rational and display a rule-governed behavior.$^{20}$ Moreover, survey studies reveal that financial market participants rely strongly on technical and fundamental analysis to predict prices.$^{21,22}$ Chartists typically extrapolate past price movements into the future. Let $P$ be the log of the price. Then, their orders may be expressed as
$$D^C_t = a(P_t -P_{t-1}),$$
where $a$ is a positive reaction coefficient denoting the strength of the trading. Accordingly, technical traders submit buying orders if prices go up and vice versa. In contrast, fundamentalists expect the price to track its fundamental value. Orders from this type of agent may be written as
$$D^F_t = b(F-P_t).$$
Again, $b$ is a positive reaction coefficient, and $F$ stands for the log of the fundamental value. For instance, if the asset is overvalued, fundamentalists submit selling orders.
As usual, excess buying drives the price up and excess selling drives it down so that the price adjustment process may be formalized as
$$P_{t+1} = P_t + A_t(wD^C_t + (1-w)D^F_t),$$
where $w$ indicates the fraction of chartists and $(1-w)$ the fraction of fundamentalists. The novel idea is to base the degree of price adjustment $A$ on a nonlinear function of the market depth.$^{23}$ Exploiting that a given excess demand has a larger (smaller) impact on the price if the trading volume is low (high), one may write
$$A_t = \frac{c}{(|wD^C_t|+|(1-w) D^F_t|)^d}.$$
The curvature of $A$ is captured by $d\geq 0$, while $c>0$ is a shift parameter.
For $d=0$, the price adjustment function is log-linear.$^{1,3}$ In that case, the law of motion of the price, derived from combining (1) to (4), is a second-order linear difference equation which has a unique steady state at
$$P_{t+1} = P_t = P_{t-1} = F.$$
Rewriting Schur’s stability conditions, the fixed point is stable for
$$0<c<\left\{\begin{array}{ll}
\displaystyle\frac{1}{aw} & \mbox{for~~} w> \displaystyle \frac{b}{4a +b}\\
\displaystyle \frac{2}{b(1-w)-2aw}\qquad & \mbox{else}
\end{array}.\right.$$
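The stability bound in (6) can be checked numerically in the linear case $d=0$. The following sketch is illustrative and not part of the original note; the parameter values $a=b=1$, $w=0.5$ and the starting prices are arbitrary example choices.

```python
# Illustrative check of stability condition (6) in the linear case d = 0.
# For w > b/(4a+b) the bound is c < 1/(a*w): below it prices converge to F,
# above it they diverge. All parameter values are example choices only.

def simulate_linear(a, b, w, c, F=0.0, p0=0.5, p1=0.5, steps=2000):
    """Iterate P_{t+1} = P_t + c*(w*a*(P_t - P_{t-1}) + (1-w)*b*(F - P_t))."""
    prev, cur = p0, p1
    for _ in range(steps):
        prev, cur = cur, cur + c * (w * a * (cur - prev) + (1 - w) * b * (F - cur))
    return cur

a = b = 1.0
w = 0.5                      # w > b/(4a+b) = 0.2, so the first branch of (6) applies
bound = 1.0 / (a * w)        # = 2
p_stable = simulate_linear(a, b, w, c=0.95 * bound)    # just below the bound
p_unstable = simulate_linear(a, b, w, c=1.05 * bound)  # just above the bound
print(abs(p_stable) < 1e-6, abs(p_unstable) > 1e3)     # True True
```

Just below the bound the characteristic roots have modulus $\sqrt{0.95}<1$ and the price decays to the fundamental value; just above it the modulus exceeds one and the oscillations explode.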
However, we are interested in the case where $d>0$. Combining (1)-(4) and solving for $P$ yields
$$P_{t+1} = P_t + c \frac{wa(P_t - P_{t-1}) + (1-w)
b(F-P_t)}{(|wa(P_t-P_{t-1})|+|(1-w)b(F-P_t)|)^d},$$
which is a two-dimensional nonlinear difference equation. Since (7) precludes a closed-form analysis, we simulate the dynamics to demonstrate that the underlying structure gives rise to endogenous deterministic motion.
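A minimal simulation of (7) illustrates this. The parameter values below are those quoted for figure 3 later in the text ($a=0.85$, $b=c=1$, $d=w=0.5$, $F=0$); the starting prices are arbitrary illustrative choices away from the fixed point $P=F$, where the denominator of (7) vanishes.

```python
# Minimal simulation of the nonlinear law of motion (7). Parameters follow the
# figure-3 example in the text; initial prices are arbitrary illustrative
# choices away from P = F, where the denominator of (7) is zero.

def simulate_nonlinear(a, b, c, d, w, F=0.0, p0=0.10, p1=0.15, steps=3000):
    prev, cur = p0, p1
    path = []
    for _ in range(steps):
        chart = w * a * (cur - prev)           # chartist orders, eq. (1)
        fund = (1 - w) * b * (F - cur)         # fundamentalist orders, eq. (2)
        depth = (abs(chart) + abs(fund)) ** d  # denominator of eq. (4)
        if depth == 0.0:                       # exact fixed point reached
            break
        prev, cur = cur, cur + c * (chart + fund) / depth
        path.append(cur)
    return path

path = simulate_nonlinear(a=0.85, b=1.0, c=1.0, d=0.5, w=0.5)
tail = path[-500:]
# Prices keep circling the fundamental value: bounded but non-convergent.
print(1e-6 < max(abs(p) for p in tail) < 10)
```

The fluctuations stay bounded because a step of (7) is at most of order $(\text{volume})^{1-d}$, while near the fundamental value the price adjustment $A$ blows up and pushes the price away again.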
Some Results
============
Figure 1 contains three bifurcation diagrams for $0<d<1$ and $w=0.7$ (top), $w=0.5$ (central) and $w=0.3$ (bottom). The other parameters are fixed at $a=b=c=1$ and the log of the fundamental value is $F=0$. We increase $d$ in 500 steps. In each step, $P$ is plotted from $t=1001$ to $t=1100$. Note that bifurcation diagrams are frequently used to illustrate the dynamic properties of nonlinear systems.
Figure 1 suggests that if $d$ is small, there may exist a stable equilibrium. For instance, for $w=0.5$, prices converge towards the fundamental value as long as $d$ is smaller than around 0.1. If $d$ is increased further, the fixed point becomes unstable. In addition, the range in which the fluctuations take place increases too. Note also that many different types of bifurcation occur. Our model generates the full range of possible dynamic outcomes: fixed points, limit cycles, quasi periodic motion and chaotic fluctuations. For some parameter combinations coexisting attractors emerge. Comparing the three panels indicates that the higher the fraction of chartists, the less stable the market seems to be.
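The procedure behind figure 1 can be sketched in a few lines. The grid size (500 steps in $d$) and the sampling window $t=1001$ to $1100$ follow the text; the initial prices are illustrative choices.

```python
# Sketch of the bifurcation-diagram computation: sweep d over a 500-point grid,
# discard the first 1000 iterations as transient, and record the prices for
# t = 1001..1100. Initial prices are arbitrary; a = b = c = 1 and F = 0 as in
# the text.

def bifurcation_points(w, a=1.0, b=1.0, c=1.0, F=0.0, n_grid=500):
    points = []  # (d, P_t) pairs; plotted as a scatter they form the diagram
    for i in range(1, n_grid + 1):
        d = i / n_grid  # 0 < d <= 1
        prev, cur = 0.10, 0.15
        for t in range(1100):
            chart = w * a * (cur - prev)
            fund = (1 - w) * b * (F - cur)
            depth = (abs(chart) + abs(fund)) ** d
            if depth == 0.0:
                break
            prev, cur = cur, cur + c * (chart + fund) / depth
            if t >= 1000:  # keep t = 1001..1100 only
                points.append((d, cur))
    return points

pts = bifurcation_points(w=0.5)  # corresponds to the central panel
```

Plotting the recorded $(d,P)$ pairs as a scatter reproduces the qualitative picture: convergence for small $d$ and increasingly wide, complex fluctuations as $d$ grows.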
To check the robustness of endogenous motion, figure 2 presents bifurcation diagrams for $0 < a <2$ (top), $0< b < 2$ (central) and $0< c
< 2$ (bottom), with the remaining parameters fixed at $a=b=c=1$ and $d=w=0.5$. Again, complicated movements arise. While chartism seems to destabilize the market, fundamentalism is apparently stabilizing. Naturally, a higher price adjustment destabilizes the market as well. Overall, many parameter combinations exist which trigger complicated motion.[^1]
Let us finally explore what drives the dynamics. Figure 3 shows the dynamics in the time domain for $a=0.85$, $b=c=1$, and $d=w=0.5$. The first, second and third panel present the log of the price $P$, the price adjustment $A$ and the trading volume $V$ for 150 observations, respectively. Visual inspection reveals that the price circles around its fundamental value without any tendency to converge. Nonlinear price adjustment may thus be an endogenous engine for volatility and trading volume. Note that when trading volume drops the price adjustment increases and price movements are amplified. However, the dynamics does not explode since a higher trading volume leads again to a decrease in the price adjustment.
Finally, figure 4 displays the price (top panel) and the trading volume (bottom panel) for 5000 observations $(a = 0.25,~b = 1,~c = 50,~d = 2$ and $w=0.5)$. As can be seen, the dynamics may become quite complex. Remember that trading volume increases with increasing price changes (orders of chartists) and/or increasing deviations from fundamentals (orders of fundamentalists). In a stylized way, the dynamics may thus be sketched as follows: Suppose that trading volume is relatively low. Since the price adjustment $A$ is strong, the system is unstable. As the trading becomes increasingly hectic, prices start to diverge from the fundamental value. At some point, however, the trading activity has become so strong that, due to the reduction of the price adjustment $A$, the system becomes stable. Afterwards, a period of convergence begins until the system jumps back to the unstable regime. This process continually repeats itself but in an intricate way.
Conclusions
===========
When switching between simple linear trading rules and/or relying on nonlinear strategies, interactions between heterogeneous agents may cause irregular dynamics. This note shows that changes in market depth also stimulate price changes. The reason is that if market liquidity goes down, a given order obtains a larger price impact. For a broad range of parameter combinations, erratic yet deterministic trajectories emerge since the system switches back and forth between stable and unstable regimes.
[**References**]{}
1. D. Farmer and S. Joshi, [*Journal of Economic Behavior and Organizations*]{} [**49**]{}, 149 (2002).
2. T. Lux and M. Marchesi, [*International Journal of Theoretical and Applied Finance*]{} [**3**]{}, 675 (2000).
3. R. Cont and J.-P. Bouchaud, [*Macroeconomic Dynamics*]{} [**4**]{}, 170 (2000).
4. D. Stauffer, [*Advances in Complex Systems*]{} [**4**]{}, 19 (2001).
5. D. Sornette and W. Zhou, [*Quantitative Finance*]{} [**2**]{}, 468 (2002).
6. G. Ehrenstein, [*International Journal of Modern Physics C*]{} [**13**]{}, 1323 (2002).
7. D. Farmer, L. Gillemot, F. Lillo, S. Mike and A. Sen, What Really Causes Large Price Changes?, SFI Working Paper, 04-02-006, 2004.
8. D. Farmer, [*Industrial and Corporate Change*]{} [**11**]{}, 895 (2002).
9. Y.-C. Zhang, [*Physica A*]{} [**269**]{}, 30 (1999).
10. J. Hasbrouck, [*Journal of Finance*]{} [**46**]{}, 179 (1991).
11. A. Kempf and O. Korn, [*Journal of Financial Markets*]{} [**2**]{}, 29 (1999).
12. V. Plerou, P. Gopikrishnan, X. Gabaix and E. Stanley, [*Physical Review E*]{} [**66**]{}, 027104, 1 (2002).
13. P. Weber and B. Rosenow, Order Book Approach to Price Impact, Preprint cond-mat/0311457, 2003.
14. X. Gabaix, P. Gopikrishnan, V. Plerou and E. Stanley, [*Nature*]{} [**423,**]{} 267 (2003).
15. D. Farmer and F. Lillo, [*Quantitative Finance*]{} [**4**]{}, C7 (2004).
16. V. Plerou, P. Gopikrishnan, X. Gabaix and E. Stanley, [*Quantitative Finance*]{} [**4**]{}, C11 (2004).
17. T. Chordia, R. Roll and A. Subrahmanyam, [*Journal of Finance*]{} [**56**]{}, 501 (2001).
18. F. Lillo, D. Farmer and R. Mantegna, [*Nature*]{} [**421**]{}, 129 (2003).
19. G. Ehrenstein, F. Westerhoff and D. Stauffer, Tobin Tax and Market Depth, Preprint cond-mat/0311581, 2003.
20. H. Simon, [*Quarterly Journal of Economics*]{} [**69**]{}, 99 (1955).
21. M. Taylor and H. Allen, [*Journal of International Money and Finance*]{} [**11**]{}, 304 (1992).
22. Y.-H. Lui and D. Mole, [*Journal of International Money and Finance*]{} [**17**]{}, 535 (1998).
23. D. Sornette and K. Ide, [*International Journal of Modern Physics C*]{} [**14**]{}, 267 (2003).
[^1]: To observe permanent fluctuations only small variations in $A$ are needed. Suppose that $A$ takes two values centered around the upper bound of the stability condition $X$, say $X-Y$ and $X+Y$, depending on whether trading volume is above or below a certain level $Z$. Such a system obviously produces nonconvergent but also nonexplosive fluctuations for arbitrary values of $Y$ and $Z$.
|
{
"pile_set_name": "arxiv"
}
|
package de.peeeq.wurstscript.utils;
import de.peeeq.wurstscript.WLogger;
public class ExecutiontimeMeasure implements AutoCloseable {
private String message;
private long startTime;
public ExecutiontimeMeasure(String message) {
this.message = message;
this.startTime = System.currentTimeMillis();
}
@Override
public void close() {
long time = System.currentTimeMillis() - startTime;
WLogger.info("Executed " + message + " in " + time + "ms.");
}
}
|
{
"pile_set_name": "github"
}
|
Raigam Tele'es Best Teledrama Art Director Award
The Raigam Tele'es Best Teledrama Art Director Award is a Raigam Tele'es award presented annually in Sri Lanka by the Kingdom of Raigam companies for the best Sri Lankan art director of the year in television.
The award was first given in 2005.
Award list in each year
References
Category:Performing arts awards
Category:Raigam Tele'es
|
{
"pile_set_name": "wikipedia_en"
}
|
AxisControlBus
ControlBus
PathPlanning1
PathPlanning6
PathToAxisControlBus
GearType1
GearType2
Motor
Controller
AxisType1
AxisType2
MechanicalStructure
|
{
"pile_set_name": "github"
}
|
/***********************************************************************
!!!!!! DO NOT MODIFY !!!!!!
GacGen.exe Resource.xml
This file is generated by Workflow compiler
https://github.com/vczh-libraries
***********************************************************************/
#ifndef VCZH_WORKFLOW_COMPILER_GENERATED_DEMOREFLECTION
#define VCZH_WORKFLOW_COMPILER_GENERATED_DEMOREFLECTION
#include "Demo.h"
#ifndef VCZH_DEBUG_NO_REFLECTION
#include "GacUIReflection.h"
#endif
#if defined( _MSC_VER)
#pragma warning(push)
#pragma warning(disable:4250)
#elif defined(__GNUC__)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wparentheses-equality"
#elif defined(__clang__)
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wparentheses-equality"
#endif
/***********************************************************************
Reflection
***********************************************************************/
namespace vl
{
namespace reflection
{
namespace description
{
#ifndef VCZH_DEBUG_NO_REFLECTION
DECL_TYPE_INFO(::demo::MainWindow)
DECL_TYPE_INFO(::demo::MainWindowConstructor)
#endif
extern bool LoadDemoTypes();
}
}
}
#if defined( _MSC_VER)
#pragma warning(pop)
#elif defined(__GNUC__)
#pragma GCC diagnostic pop
#elif defined(__clang__)
#pragma clang diagnostic pop
#endif
#endif
|
{
"pile_set_name": "github"
}
|
package tk.woppo.sunday.model;
import android.database.Cursor;
import com.google.gson.Gson;
import com.google.gson.annotations.SerializedName;
import java.util.HashMap;
import tk.woppo.sunday.dao.WeatherDataHelper;
import tk.woppo.sunday.dao.WeatherTodayDataHelper;
/**
* Created by Ho on 2014/7/4.
*/
public class WeatherTodayModel extends BaseModel {
private static final HashMap<String, WeatherTodayModel> CACHE = new HashMap<String, WeatherTodayModel>();
/** City ID */
@SerializedName("cityid")
public String id;
/** City name */
@SerializedName("city")
public String cityName;
/** Temperature */
public String temp;
/** Weather */
public String weather;
/** Wind direction */
@SerializedName("WD")
public String wind;
/** Wind force */
@SerializedName("WS")
public String ws;
/** Humidity */
@SerializedName("SD")
public String sd;
/** Release time */
public String time;
private static void addToCache(WeatherTodayModel model) {
CACHE.put(model.id, model);
}
private static WeatherTodayModel getFromCache(String id) {
return CACHE.get(id);
}
public static WeatherTodayModel fromJson(String json) {
return new Gson().fromJson(json, WeatherTodayModel.class);
}
public static WeatherTodayModel fromCursor(Cursor cursor) {
String id = cursor.getString(cursor.getColumnIndex(WeatherDataHelper.WeatherDBInfo.ID));
WeatherTodayModel model = getFromCache(id);
if (model != null) {
return model;
}
model = new Gson().fromJson(cursor.getString(cursor.getColumnIndex(WeatherTodayDataHelper.WeatherTodayDBInfo.JSON)), WeatherTodayModel.class);
addToCache(model);
return model;
}
public static class WeatherTodayRequestData {
public WeatherTodayModel weatherinfo;
}
}
|
{
"pile_set_name": "github"
}
|
<html>
<body>
<h1>Directory listing</h1>
<hr/>
<pre>
<a href="management-core-3.0.4-javadoc.jar">management-core-3.0.4-javadoc.jar</a>
<a href="management-core-3.0.4-javadoc.jar.md5">management-core-3.0.4-javadoc.jar.md5</a>
<a href="management-core-3.0.4-javadoc.jar.sha1">management-core-3.0.4-javadoc.jar.sha1</a>
<a href="management-core-3.0.4-sources.jar">management-core-3.0.4-sources.jar</a>
<a href="management-core-3.0.4-sources.jar.md5">management-core-3.0.4-sources.jar.md5</a>
<a href="management-core-3.0.4-sources.jar.sha1">management-core-3.0.4-sources.jar.sha1</a>
<a href="management-core-3.0.4.jar">management-core-3.0.4.jar</a>
<a href="management-core-3.0.4.jar.md5">management-core-3.0.4.jar.md5</a>
<a href="management-core-3.0.4.jar.sha1">management-core-3.0.4.jar.sha1</a>
<a href="management-core-3.0.4.pom">management-core-3.0.4.pom</a>
<a href="management-core-3.0.4.pom.md5">management-core-3.0.4.pom.md5</a>
<a href="management-core-3.0.4.pom.sha1">management-core-3.0.4.pom.sha1</a>
</pre>
</body>
</html>
|
{
"pile_set_name": "github"
}
|
---
bibliography:
- 'references-Forstneric.bib'
---
[**Holomorphic embeddings and immersions\
of Stein manifolds: a survey**]{}
[**Franc Forstnerič**]{}
> [**Abstract**]{} In this paper we survey results on the existence of holomorphic embeddings and immersions of Stein manifolds into complex manifolds. Most of them pertain to proper maps into Stein manifolds. We include a new result saying that every continuous map $X\to Y$ between Stein manifolds is homotopic to a proper holomorphic embedding provided that $\dim Y>2\dim X$ and we allow a homotopic deformation of the Stein structure on $X$.
>
> [**Keywords**]{} Stein manifold, embedding, density property, Oka manifold
>
> [**MSC (2010):**]{}
>
> 32E10, 32F10, 32H02, 32M17, 32Q28, 58E20, 14C30
Introduction {#sec:intro}
============
In this paper we review what we know about the existence of holomorphic embeddings and immersions of Stein manifolds into other complex manifolds. The emphasis is on recent results, but we also include some classical ones for the sake of completeness and historical perspective. Recall that Stein manifolds are precisely the closed complex submanifolds of Euclidean spaces ${\mathbb{C}}^N$ (see Remmert [@Remmert1956], Bishop [@Bishop1961AJM], and Narasimhan [@Narasimhan1960AJM]; cf. Theorem \[th:classical\]). Stein manifolds of dimension $1$ are open Riemann surfaces (see Behnke and Stein [@BehnkeStein1949]). A domain in ${\mathbb{C}}^n$ is Stein if and only if it is a domain of holomorphy (see Cartan and Thullen [@CartanThullen1932]). For more information, see the monographs [@Forstneric2017E; @GrauertRemmert1979; @GunningRossi2009; @HormanderSCV].
In §\[sec:Euclidean\] we survey results on the existence of proper holomorphic immersions and embeddings of Stein manifolds into Euclidean spaces. Of special interest are the minimal embedding and immersion dimensions. Theorem \[th:EGS\], due to Eliashberg and Gromov [@EliashbergGromov1992AM] (1992) and Schürmann [@Schurmann1997] (1997), settles this question for Stein manifolds of dimension $>1$. It remains an open problem whether every open Riemann surface embeds holomorphically into ${\mathbb{C}}^2$; we describe its current status in §\[ss:RS\]. We also discuss the use of holomorphic automorphisms of Euclidean spaces in the construction of wild holomorphic embeddings (see §\[ss:wild\] and §\[ss:complete\]).
It has recently been discovered by Andrist et al. [@AndristFRW2016; @AndristWold2014; @Forstneric-immersions] that there is a big class of Stein manifolds $Y$ which contain every Stein manifold $X$ with $2\dim X< \dim Y$ as a closed complex submanifold (see Theorem \[th:density\]). In fact, this holds for every Stein manifold $Y$ enjoying Varolin’s [*density property*]{} [@Varolin2000; @Varolin2001]: the Lie algebra of all holomorphic vector fields on $Y$ is spanned by the ${\mathbb{C}}$-complete vector fields, i.e., those whose flow is an action of the additive group $({\mathbb{C}},+)$ by holomorphic automorphisms of $Y$ (see Definition \[def:density\]). Since the domain $({\mathbb{C}}^*)^n$ enjoys the volume density property, we infer that every Stein manifold $X$ of dimension $n$ admits a proper holomorphic immersion to $({\mathbb{C}}^*)^{2n}$ and a proper pluriharmonic map into ${\mathbb{R}}^{2n}$ (see Corollary \[cor:harmonic\]). This provides a counterexample to the Schoen-Yau conjecture [@SchoenYau1997] for any Stein source manifold (see §\[ss:S-Y\]).
The class of Stein manifolds (in particular, of affine algebraic manifolds) with the density property is quite big and contains most complex Lie groups and homogeneous spaces, as well as many nonhomogeneous manifolds. This class has been the focus of intensive research during the last decade; we refer the reader to the recent surveys [@KalimanKutzschebauch2015] and [@Forstneric2017E §4.10]. An open problem posed by Varolin [@Varolin2000; @Varolin2001] is whether every contractible Stein manifold with the density property is biholomorphic to a Euclidean space.
In §\[sec:PSC\] we recall a result of Drinovec Drnovšek and the author [@DrinovecForstneric2007DMJ; @DrinovecForstneric2010AJM] to the effect that every smoothly bounded, strongly pseudoconvex Stein domain $X$ embeds properly holomorphically into an arbitrary Stein manifold $Y$ with $\dim Y>2\dim X$. More precisely, every continuous map $\overline X\to Y$ which is holomorphic on $X$ is homotopic to a proper holomorphic embedding $X{\hookrightarrow}Y$ (see Theorem \[th:BDF2010\]). The analogous result holds for immersions if $\dim Y\ge 2\dim X$, and also for every $q$-complete manifold $Y$ with $q\in \{1,\ldots,\dim Y-2\dim X+1\}$, where the Stein case corresponds to $q=1$. This summarizes a long line of previous results. In §\[ss:Hodge\] we mention a recent application of these techniques to the [*Hodge conjecture*]{} for the highest dimensional a priori nontrivial cohomology group of a $q$-complete manifold [@FSS2016]. In §\[ss:complete\] we survey recent results on the existence of [*complete*]{} proper holomorphic embeddings and immersions of strongly pseudoconvex domains into balls. Recall that a submanifold of ${\mathbb{C}}^N$ is said to be [*complete*]{} if every divergent curve in it has infinite Euclidean length.
In §\[sec:soft\] we show how the combination of the techniques from [@DrinovecForstneric2007DMJ; @DrinovecForstneric2010AJM] with those of Slapar and the author [@ForstnericSlapar2007MRL; @ForstnericSlapar2007MZ] can be used to prove that, if $X$ and $Y$ are Stein manifolds and $\dim Y>2\dim X$, then every continuous map $X\to Y$ is homotopic to a proper holomorphic embedding up to a homotopic deformation of the Stein structure on $X$ (see Theorem \[th:soft\]). The analogous result holds for immersions if $\dim Y\ge 2\dim X$, and for $q$-complete manifolds $Y$ with $q\le \dim Y-2\dim X+1$. A result in a similar vein, concerning proper holomorphic embeddings of open Riemann surfaces into ${\mathbb{C}}^2$ up to a deformation of their conformal structures, is due to Alarcón and L[ó]{}pez [@AlarconLopez2013] (a special case was proved in [@CerneForstneric2002]); see also Ritter [@Ritter2014] for embeddings into $({\mathbb{C}}^*)^2$.
I have not included any topics from Cauchy-Riemann geometry since it would be impossible to properly discuss this major subject in the present survey of limited size and with a rather different focus. The reader may wish to consult the recent survey by Pinchuk et al. [@Pinchuk2017], the monograph by Baouendi et al. [@Baouendi1999] from 1999, and my survey [@Forstneric1993MN] from 1993. For a new direction in this field, see the papers by Bracci and Gaussier [@BracciGaussier2016X; @BracciGaussier2017X].
We shall be using the following notation and terminology. Let ${\mathbb{N}}=\{1,2,3,\ldots\}$. We denote by ${\mathbb D}=\{z\in {\mathbb{C}}:|z|<1\}$ the unit disc in ${\mathbb{C}}$, by ${\mathbb D}^n\subset{\mathbb{C}}^n$ the Cartesian product of $n$ copies of ${\mathbb D}$ (the unit polydisc in ${\mathbb{C}}^n$), and by ${\mathbb{B}}^n=\{z=(z_1,\ldots,z_n)\in{\mathbb{C}}^n : |z|^2 = |z_1|^2+\cdots +|z_n|^2<1\}$ the unit ball in ${\mathbb{C}}^n$. By ${\mathcal{O}}(X)$ we denote the algebra of all holomorphic functions on a complex manifold $X$, and by ${\mathcal{O}}(X,Y)$ the space of all holomorphic maps $X\to Y$ between a pair of complex manifolds; thus ${\mathcal{O}}(X)={\mathcal{O}}(X,{\mathbb{C}})$. These spaces carry the compact-open topology. This topology can be defined by a complete metric which renders them Baire spaces; in particular, ${\mathcal{O}}(X)$ is a Fréchet algebra. (See [@Forstneric2017E p. 5] for more details.) A compact set $K$ in a complex manifold $X$ is said to be [*${\mathcal{O}}(X)$-convex*]{} if $K={\widehat}K:= \{p\in X : |f(p)|\le \sup_K |f|\ \text{for every} \ f\in {\mathcal{O}}(X)\}$.
Embeddings and immersions of Stein manifolds into Euclidean spaces {#sec:Euclidean}
==================================================================
In this section we survey results on proper holomorphic immersions and embeddings of Stein manifolds into Euclidean spaces.
Classical results {#ss:classical}
-----------------
We begin by recalling the results of Remmert [@Remmert1956], Bishop [@Bishop1961AJM], and Narasimhan [@Narasimhan1960AJM] from the period 1956–1961.
\[th:classical\] Assume that $X$ is a Stein manifold of dimension $n$.
- If $N>2n$ then the set of proper embeddings $X{\hookrightarrow}{\mathbb{C}}^N$ is dense in ${\mathcal{O}}(X,{\mathbb{C}}^N)$.
- If $N\ge 2n$ then the set of proper immersions $X{\hookrightarrow}{\mathbb{C}}^N$ is dense in ${\mathcal{O}}(X,{\mathbb{C}}^N)$.
- If $N>n$ then the set of proper maps $X\to {\mathbb{C}}^N$ is dense in ${\mathcal{O}}(X,{\mathbb{C}}^N)$.
- If $N\ge n$ then the set of almost proper maps $X\to {\mathbb{C}}^N$ is residual in ${\mathcal{O}}(X,{\mathbb{C}}^N)$.
A proof of these results can also be found in the monograph by Gunning and Rossi [@GunningRossi2009].
Recall that a set in a Baire space (such as ${\mathcal{O}}(X,{\mathbb{C}}^N)$) is said to be [*residual*]{}, or a set [*of second category*]{}, if it is the intersection of at most countably many open everywhere dense sets. Every residual set is dense. A property of elements in a Baire space is said to be [*generic*]{} if it holds for all elements in a residual set.
The density statement for embeddings and immersions is an easy consequence of the following result which follows from the jet transversality theorem for holomorphic maps. (See Forster [@Forster1970] for maps to Euclidean spaces and Kaliman and Zaidenberg [@KalimanZaidenberg1996TAMS] for the general case. A more complete discussion of this topic can be found in [@Forstneric2017E §8.8].) Note also that maps which are immersions or embeddings on a given compact set constitute an open set in the corresponding mapping space.
\[prop:generic\] Assume that $X$ is a Stein manifold, $K$ is a compact set in $X$, and $U\Subset X$ is an open relatively compact set containing $K$. If $Y$ is a complex manifold such that $\dim Y>2\dim X$, then every holomorphic map $f\colon X\to Y$ can be approximated uniformly on $K$ by holomorphic embeddings $U{\hookrightarrow}Y$. If $2\dim X \le \dim Y$ then $f$ can be approximated by holomorphic immersions $U\to Y$.
Proposition \[prop:generic\] fails in general without shrinking the domain of the map, for otherwise it would yield nonconstant holomorphic maps of ${\mathbb{C}}$ to any complex manifold of dimension $>1$ which is clearly false. On the other hand, it holds without shrinking the domain of the map if the target manifold $Y$ satisfies a suitable holomorphic flexibility property, in particular, if it is an [*Oka manifold*]{}. See [@Forstneric2017E Chap. 5] for the definition of this class of complex manifolds and [@Forstneric2017E Corollary 8.8.7] for the mentioned result.
In the proof of Theorem \[th:classical\], parts (a)–(c), we exhaust $X$ by a sequence $K_1\subset K_2\subset \cdots$ of compact ${\mathcal{O}}(X)$-convex sets and approximate the holomorphic map $f_j\colon X\to{\mathbb{C}}^N$ in the inductive step, uniformly on $K_j$, by a holomorphic map $f_{j+1}\colon X\to{\mathbb{C}}^N$ whose norm $|f_{j+1}|$ is not too small on $K_{j+1}\setminus K_j$ and such that $|f_{j+1}(x)|>1+\sup_{K_j} |f_j|$ holds for all $x\in bK_{j+1}$. If the approximation is close enough at every step then the sequence $f_j$ converges to a proper holomorphic map $f=\lim_{j\to\infty} f_j\colon X\to{\mathbb{C}}^N$. If $N>2n$ then every map $f_j$ in the sequence can be made an embedding on $K_{j}$ (an immersion if $N\ge 2n$) by Proposition \[prop:generic\], and hence so is the limit map $f$.
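A schematic quantitative version of this limiting argument (the specific rate $2^{-j}$ is an illustrative assumption, not part of the original construction):

```latex
% Assume  \sup_{K_j} |f_{j+1}-f_j| < 2^{-j}  for every j, and that
% |f_{j+1}| \ge c_j  on  K_{j+1}\setminus K_j  with  c_j \to \infty.
% Then for  x \in K_{j+1}\setminus K_j  the limit map  f = \lim f_j  satisfies
|f(x)| \;\ge\; |f_{j+1}(x)| \;-\; \sum_{k\ge j+1} \sup_{K_k} |f_{k+1}-f_k|
       \;\ge\; c_j - 2^{-j},
% so |f(x)| \to \infty as x leaves every compact set, i.e., f is proper.
```

The geometric series converges because $x\in K_{j+1}\subset K_k$ for every $k\ge j+1$, so each tail term is controlled by the approximation at that stage.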
A more efficient way of constructing proper maps, immersions and embeddings of Stein manifolds into Euclidean space was introduced by Bishop [@Bishop1961AJM]. He showed that any holomorphic map $X\to{\mathbb{C}}^n$ from an $n$-dimensional Stein manifold $X$ can be approximated uniformly on compacts by [*almost proper*]{} holomorphic maps $h\colon X\to {\mathbb{C}}^n$; see Theorem \[th:classical\](d). More precisely, there is an increasing sequence $P_1\subset P_2\subset \cdots\subset X$ of relatively compact open sets exhausting $X$ such that every $P_j$ is a union of finitely many special analytic polyhedra and $h$ maps $P_j$ properly onto a polydisc $a_j {\mathbb D}^n\subset {\mathbb{C}}^n$, where $0<a_1<a_2<\ldots$ and $\lim_{j\to\infty} a_j=+\infty$. We then obtain a proper map $(h,g)\colon X\to {\mathbb{C}}^{n+1}$ by choosing $g\in {\mathcal{O}}(X)$ such that for every $j\in{\mathbb{N}}$ we have $|g|>j$ on the compact set $L_j=\{x\in \overline P_{j+1}\setminus P_j : |h(x)|\le a_{j-1}\}$; since $\overline P_{j-1}\cup L_j$ is ${\mathcal{O}}(X)$-convex, this is possible by inductively using the Oka-Weil theorem. One can then find proper immersions and embeddings by adding a suitable number of additional components to $(h,g)$ (any such map is clearly proper) and using Proposition \[prop:generic\] and the Oka-Weil theorem inductively.
The first of the above mentioned procedures easily adapts to give a proof of the following interpolation theorem due to Acquistapace et al. [@Acquistapace1975 Theorem 1]. Their result also pertains to Stein spaces of bounded embedding dimension.
\[th:ABT\] [[@Acquistapace1975 Theorem 1]]{} Assume that $X$ is an $n$-dimensional Stein manifold, $X'$ is a closed complex subvariety of $X$, and $\phi\colon X'{\hookrightarrow}{\mathbb{C}}^N$ is a proper holomorphic embedding for some $N > 2n$. Then the set of all proper holomorphic embeddings $X{\hookrightarrow}{\mathbb{C}}^N$ that extend $\phi$ is dense in the space of all holomorphic maps $X \to{\mathbb{C}}^N$ extending $\phi$. The analogous result holds for proper holomorphic immersions $X\to {\mathbb{C}}^N$ when $N\ge 2n$.
This interpolation theorem fails when $N<2n$. Indeed, for every $n>1$ there exists a proper holomorphic embedding $\phi \colon {\mathbb{C}}^{n-1} {\hookrightarrow}{\mathbb{C}}^{2n-1}$ such that ${\mathbb{C}}^{2n-1}\setminus \phi({\mathbb{C}}^{n-1})$ is Eisenman $n$-hyperbolic, so $\phi$ does not extend to an injective holomorphic map $f\colon {\mathbb{C}}^n\to {\mathbb{C}}^{2n-1}$ (see [@Forstneric2017E Proposition 9.5.6]; this topic is discussed in §\[ss:wild\]). The answer to the interpolation problem for embeddings seems unknown in the borderline case $N=2n$.
Embeddings and immersions into spaces of minimal dimension {#ss:minimal}
----------------------------------------------------------
After Theorem \[th:classical\] was proved in the early 1960s, one of the main questions driving this theory during the next decades was to find the smallest number $N=N(n)$ such that every Stein manifold $X$ of dimension $n$ embeds or immerses properly holomorphically into ${\mathbb{C}}^N$. The belief that a Stein manifold of complex dimension $n$ admits proper holomorphic embeddings to Euclidean spaces of dimension smaller than $2n+1$ was based on the observation that such a manifold is homotopy equivalent to a CW complex of dimension at most $n$; this follows from Morse theory (see Milnor [@Milnor1963]) and the existence of strongly plurisubharmonic Morse exhaustion functions on $X$ (see Hamm [@Hamm1983] and [@Forstneric2017E §3.12]). This problem, which was investigated by Forster [@Forster1970], Eliashberg and Gromov [@EliashbergGromov1971] and others, gave rise to major new methods in Stein geometry. Except in the case $n=1$ when $X$ is an open Riemann surface, the following optimal answer was given by Eliashberg and Gromov [@EliashbergGromov1992AM] in 1992, with an improvement by one for odd values of $n$ due to Sch[ü]{}rmann [@Schurmann1997].
\[th:EGS\] [*[@EliashbergGromov1992AM; @Schurmann1997]*]{} Every Stein manifold $X$ of dimension $n$ immerses properly holomorphically into ${\mathbb{C}}^M$ with $M = \left[\frac{3n+1}{2}\right]$, and if $n>1$ then $X$ embeds properly holomorphically into ${\mathbb{C}}^N$ with $N = \left[\frac{3n}{2}\right]+ 1$.
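For concreteness, evaluating the two bounds of the theorem for small $n$ (these values follow directly from the stated formulas; $M$ is the immersion dimension and $N$ the embedding dimension):

```latex
\begin{array}{c|cc}
n & M=\left[\frac{3n+1}{2}\right] & N=\left[\frac{3n}{2}\right]+1 \\ \hline
2 & 3 & 4 \\
3 & 5 & 5 \\
4 & 6 & 7 \\
5 & 8 & 8
\end{array}
```

Note that the two dimensions coincide for odd $n$, which matches the remark below that the immersion dimension is known to be minimal only for even $n$.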
Schürmann [@Schurmann1997] also found optimal embedding dimensions for Stein spaces with singularities and with bounded embedding dimension.
The key ingredient in the proof of Theorem \[th:EGS\] is a certain major extension of the Oka-Grauert theory, due to Gromov, whose 1989 paper [@Gromov1989] marks the beginning of [*modern Oka theory*]{}. (See [@ForstnericLarusson2011] for an introduction to Oka theory and [@Forstneric2017E] for a complete account.)
Forster showed in [@Forster1970 Proposition 3] that the embedding dimension $N=\left[\frac{3n}{2}\right]+ 1$ is the minimal possible for every $n>1$, and the immersion dimension $M=\left[\frac{3n+1}{2}\right]$ is minimal for every even $n$, while for odd $n$ there could be two possible values. (See also [@Forstneric2017E Proposition 9.3.3].) In 2012, Ho et al. [@HoJacobowitzLandweber2012] found new examples showing that these dimensions are optimal already for Grauert tubes around compact totally real submanifolds, except perhaps for immersions with odd $n$. A more complete discussion of this topic and a self-contained proof of Theorem \[th:EGS\] can be found in [@Forstneric2017E Sects. 9.2–9.5]. Here we only give a brief outline of the main ideas used in the proof.
One begins by choosing a sufficiently generic almost proper map $h\colon X\to {\mathbb{C}}^{n}$ (see Theorem \[th:classical\](d)) and then tries to find the smallest possible number of functions $g_1,\ldots, g_q\in {\mathcal{O}}(X)$ such that the map $$\label{eq:f}
f=(h,g_1,\ldots,g_q)\colon X\to {\mathbb{C}}^{n+q}$$ is a proper embedding or immersion. Starting with a big number of functions $\tilde g_1,\ldots, \tilde g_{\tilde q}\in {\mathcal{O}}(X)$ which do the job, we try to reduce their number by applying a suitable fibrewise linear projection onto a smaller dimensional subspace, where the projection depends holomorphically on the base point. Explicitly, we look for functions $$\label{eq:ajk}
g_j=\sum_{k=1}^{\tilde q} a_{j,k}\tilde g_k, \qquad a_{j,k}\in {\mathcal{O}}(X),\ \ j=1,\ldots,q$$ such that the map \[eq:f\] is a proper embedding or immersion. In order to separate those pairs of points in $X$ which are not separated by the base map $h\colon X\to {\mathbb{C}}^{n}$, we consider coefficient functions of the form $a_{j,k}=b_{j,k}\circ h$ where $b_{j,k}\in{\mathcal{O}}({\mathbb{C}}^n)$ and $h\colon X\to{\mathbb{C}}^n$ is the chosen base almost proper map.
This outline cannot be applied directly since the base map $h\colon X\to{\mathbb{C}}^n$ may have overly complicated behavior. Instead, one proceeds by induction on strata in a suitably chosen complex analytic stratification of $X$ which is equisingular with respect to $h$. The induction steps are of two kinds. In a step of the first kind we find a map $g=(g_1,\ldots,g_q)$ which separates points on the (finite) fibres of $h$ over the next bigger stratum and matches the map from the previous step on the union of the previous strata (the latter is a closed complex subvariety of $X$). A step of the second kind amounts to removing the kernel of the differential $dh_x$ for all points in the next stratum, thereby ensuring that $df_x=dh_x\oplus dg_x$ is injective there. Analysis of the immersion condition shows that the graph of the map $\alpha=(a_{j,k}) \colon X\to {\mathbb{C}}^{q{\tilde q}}$ in \[eq:ajk\] over the given stratum must avoid a certain complex subvariety $\Sigma$ of $E=X\times {\mathbb{C}}^{q{\tilde q}}$ with algebraic fibres. Similarly, analysis of the point separation condition leads to the problem of finding a map $\beta=(b_{j,k})\colon {\mathbb{C}}^n\to {\mathbb{C}}^{q{\tilde q}}$ avoiding a certain complex subvariety of $E={\mathbb{C}}^n\times {\mathbb{C}}^{q{\tilde q}}$ with algebraic fibres. In both cases the projection $\pi\colon E\setminus \Sigma\to X$ is a stratified holomorphic fibre bundle all of whose fibres are Oka manifolds. More precisely, if $q\ge \left[\frac{n}{2}\right]+1$ then each fibre $\Sigma_x= \Sigma\cap\, E_x$ is either empty or a union of finitely many affine linear subspaces of $E_x$ of complex codimension $>1$. The same lower bound on $q$ guarantees the existence of a continuous section $\alpha\colon X\to E\setminus \Sigma$ avoiding $\Sigma$. Gromov’s Oka principle [@Gromov1989] then furnishes a holomorphic section $X\to E\setminus \Sigma$.
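As a consistency check (an observation added here, not from the original text), the lower bound $q\ge \left[\frac{n}{2}\right]+1$ in this argument recovers exactly the embedding dimension of Theorem \[th:EGS\]:

```latex
N \;=\; n + q \;=\; n + \left[\tfrac{n}{2}\right] + 1 \;=\; \left[\tfrac{3n}{2}\right] + 1 .
```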
A general Oka principle for sections of stratified holomorphic fiber bundles with Oka fibres is given by [@Forstneric2017E Theorem 5.4.4]. We refer the reader to the original papers or to [@Forstneric2017E §9.3–9.4] for further details.
The classical constructions of proper holomorphic embeddings of Stein manifolds into Euclidean spaces are coordinate dependent and hence do not generalize to more general target manifolds. A conceptually new method has been found recently by Ritter and the author [@ForstnericRitter2014MZ]. It is based on a method of separating certain pairs of compact polynomially convex sets in ${\mathbb{C}}^N$ by Fatou-Bieberbach domains which contain one of the sets and avoid the other one. Another recently developed method, which also depends on holomorphic automorphisms and applies to a much bigger class of target manifolds, is discussed in §\[sec:density\].
Embedding open Riemann surfaces into ${\mathbb{C}}^2$ {#ss:RS}
-----------------------------------------------------
The constructions described so far fail to embed open Riemann surfaces into ${\mathbb{C}}^2$. The problem is that the subvarieties $\Sigma$ in the proof of Theorem \[th:EGS\] may contain hypersurfaces, and hence the Oka principle for sections of $E\setminus \Sigma\to X$ fails in general due to hyperbolicity of its complement. It is still an open problem whether every open Riemann surface embeds as a smooth closed complex curve in ${\mathbb{C}}^2$. (By Theorem \[th:classical\] it embeds properly holomorphically into ${\mathbb{C}}^3$ and immerses with normal crossings into ${\mathbb{C}}^2$. Every compact Riemann surface embeds holomorphically into ${\mathbb{CP}}^3$ and immerses into ${\mathbb{CP}}^2$, but very few of them embed into ${\mathbb{CP}}^2$; see [@GriffithsHarris1994].) There are no topological obstructions to this problem — it was shown by Alarcón and L[ó]{}pez [@AlarconLopez2013] that every open orientable surface $S$ carries a complex structure $J$ such that the Riemann surface $X=(S,J)$ admits a proper holomorphic embedding into ${\mathbb{C}}^2$.
There is a variety of results in the literature concerning the existence of proper holomorphic embeddings of certain special open Riemann surfaces into ${\mathbb{C}}^2$; the reader may wish to consult the survey in [@Forstneric2017E §9.10–9.11]. Here we mention only a few of the currently most general known results on the subject. The first one from 2009, due to Wold and the author, concerns bordered Riemann surfaces.
\[th:FW1\] [[@ForstnericWold2009 Corollary 1.2]]{} Assume that $X$ is a compact bordered Riemann surface with boundary of class ${\mathscr{C}}^r$ for some $r>1$. If $f \colon X {\hookrightarrow}{\mathbb{C}}^2$ is a ${\mathscr{C}}^1$ embedding that is holomorphic in the interior $\mathring X=X\setminus bX$, then $f$ can be approximated uniformly on compacts in $\mathring X$ by proper holomorphic embeddings $\mathring X{\hookrightarrow}{\mathbb{C}}^2$.
The proof relies on techniques introduced mainly by Wold [@Wold2006; @Wold2006-2; @Wold2007]. One of them concerns exposing boundary points of an embedded bordered Riemann surface in ${\mathbb{C}}^2$. This technique was improved in [@ForstnericWold2009]; see also the exposition in [@Forstneric2017E §9.9]. The second one depends on methods of Andersén-Lempert theory concerning holomorphic automorphisms of Euclidean spaces; see §\[ss:wild\]. A proper holomorphic embedding $\mathring X {\hookrightarrow}{\mathbb{C}}^2$ is obtained by first exposing a boundary point in each of the boundary curves of $f(X)\subset {\mathbb{C}}^2$, sending these points to infinity by a rational shear on ${\mathbb{C}}^2$ without other poles on $f(X)$, and then using a carefully constructed sequence of holomorphic automorphisms of ${\mathbb{C}}^2$ whose domain of convergence is a Fatou-Bieberbach domain $\Omega\subset {\mathbb{C}}^2$ which contains the embedded complex curve $f(X)\subset {\mathbb{C}}^2$, but does not contain any of its boundary points. If $\phi\colon \Omega\to {\mathbb{C}}^2$ is a Fatou-Bieberbach map then $\phi\circ f \colon X{\hookrightarrow}{\mathbb{C}}^2$ is a proper holomorphic embedding. A complete exposition of this proof can also be found in [@Forstneric2017E §9.10].
The second result due to Wold and the author [@ForstnericWold2013] (2013) concerns domains with infinitely many boundary components. A domain $X$ in the Riemann sphere $\mathbb P^1$ is a *generalized circled domain* if every connected component of ${\mathbb{P}}^1 \setminus X$ is a round disc or a point. Note that ${\mathbb{P}}^1 \setminus X$ contains at most countably many discs. By the uniformization theorem of He and Schramm [@HeSchramm1993; @HeSchramm1995], every domain in $\mathbb P^1$ with at most countably many complementary components is conformally equivalent to a generalized circled domain.
\[th:FW2\] [[@ForstnericWold2013 Theorem 5.1]]{} Let $X$ be a generalized circled domain in ${\mathbb{P}}^1$. If all but finitely many punctures in $\mathbb P^1\setminus X$ are limit points of discs in $\mathbb P^1\setminus X$, then $X$ embeds properly holomorphically into $\mathbb C^2$.
The paper [@ForstnericWold2013] contains several other more precise results on this subject.
The special case of Theorem \[th:FW2\] for plane domains $X\subset {\mathbb{C}}$ bounded by finitely many Jordan curves (and without punctures) is due to Globevnik and Stens[ø]{}nes [@GlobevnikStensones1995]. Results on embedding certain Riemann surfaces with countably many boundary components into ${\mathbb{C}}^2$ were also proved by Majcen [@Majcen2009]; an exposition can be found in [@Forstneric2017E §9.11]. The proof of Theorem \[th:FW2\] relies on similar techniques as that of Theorem \[th:FW1\], but it uses a considerably more involved induction scheme for dealing with infinitely many boundary components, clustering them together into suitable subsets to which the available analytic methods can be applied. The same technique gives the analogous result for domains in tori.
There are a few other recent results concerning embeddings of open Riemann surfaces into ${\mathbb{C}}\times {\mathbb{C}}^*$ and $({\mathbb{C}}^*)^2$, where ${\mathbb{C}}^*={\mathbb{C}}\setminus \{0\}$. Ritter showed in [@Ritter2013JGEA] that, for every circular domain $X\subset {\mathbb D}$ with finitely many boundary components, each homotopy class of continuous maps $X\to {\mathbb{C}}\times {\mathbb{C}}^*$ contains a proper holomorphic map. If ${\mathbb D}\setminus X$ contains finitely many punctures, then every continuous map $X\to {\mathbb{C}}\times {\mathbb{C}}^*$ is homotopic to a proper holomorphic immersion that identifies at most finitely many pairs of points in $X$ (L[á]{}russon and Ritter [@LarussonRitter2014]). Ritter [@Ritter2014] also gave an analogue of Theorem \[th:FW1\] for proper holomorphic embeddings of certain open Riemann surfaces into $({\mathbb{C}}^*)^2$.
Automorphisms of Euclidean spaces and wild embeddings {#ss:wild}
-----------------------------------------------------
There is another line of investigations that we wish to touch upon. It concerns the question how many proper holomorphic embeddings $X{\hookrightarrow}{\mathbb{C}}^N$ of a given Stein manifold $X$ are there up to automorphisms of ${\mathbb{C}}^N$, and possibly also of $X$. This question was motivated by certain famous results from algebraic geometry, such as the one of Abhyankar and Moh [@AbhyankarMoh1975] and Suzuki [@Suzuki1974] to the effect that every polynomial embedding ${\mathbb{C}}{\hookrightarrow}{\mathbb{C}}^2$ is equivalent to the linear embedding $z\mapsto (z,0)$ by a polynomial automorphism of ${\mathbb{C}}^2$.
It is a basic fact that for any $N>1$ the holomorphic automorphism group ${\mathrm{Aut}}({\mathbb{C}}^N)$ is very big and complicated. This is in stark contrast to the situation for bounded or, more generally, hyperbolic domains in ${\mathbb{C}}^N$ which have few automorphisms; see Greene et al. [@Greene2011] for a survey of the latter topic. Major early work on understanding the group ${\mathrm{Aut}}({\mathbb{C}}^N)$ was done by Rosay and Rudin [@RosayRudin1988]. This theory became very useful with the papers of Anders[é]{}n and Lempert [@AndersenLempert1992] and Rosay and the author [@ForstnericRosay1993] in 1992–93. The central result is that every map in a smooth isotopy of biholomorphic mappings $\Phi_t\colon \Omega=\Omega_0 \to \Omega_t$ $(t\in [0,1])$ between Runge domains in ${\mathbb{C}}^N$, with $\Phi_0$ the identity on $\Omega$, can be approximated uniformly on compacts in $\Omega$ by holomorphic automorphisms of ${\mathbb{C}}^N$ (see [@ForstnericRosay1993 Theorem 1.1] or [@Forstneric2017E Theorem 4.9.2]). The analogous result holds on any Stein manifold with the density property; see §\[sec:density\]. A comprehensive survey of this subject can be found in [@Forstneric2017E Chap. 4].
By twisting a given submanifold of ${\mathbb{C}}^N$ with a sequence of holomorphic automorphisms, one can show that for any pair of integers $1\le n<N$ the set of all equivalence classes of proper holomorphic embeddings ${\mathbb{C}}^n{\hookrightarrow}{\mathbb{C}}^N$, modulo automorphisms of both spaces, is uncountable (see [@DerksenKutzschebauchWinkelmann1999]). In particular, the Abhyankar-Moh theorem fails in the holomorphic category since there exist proper holomorphic embeddings $\phi \colon {\mathbb{C}}{\hookrightarrow}{\mathbb{C}}^2$ that are nonstraightenable by automorphisms of ${\mathbb{C}}^2$ [@ForstnericGlobevnikRosay1996], as well as embeddings whose complement ${\mathbb{C}}^2\setminus \phi({\mathbb{C}})$ is Kobayashi hyperbolic [@BuzzardFornaess1996]. More generally, for any pair of integers $1\le n<N$ there exists a proper holomorphic embedding $\phi\colon {\mathbb{C}}^n{\hookrightarrow}{\mathbb{C}}^N$ such that every nondegenerate holomorphic map ${\mathbb{C}}^{N-n}\to {\mathbb{C}}^N$ intersects $\phi({\mathbb{C}}^n)$ at infinitely many points [@Forstneric1999JGA]. It is also possible to arrange that ${\mathbb{C}}^N\setminus \phi({\mathbb{C}}^n)$ is Eisenman $(N-n)$-hyperbolic [@BorellKutzschebauch2006]. A more comprehensive discussion of this subject can be found in [@Forstneric2017E §4.18].
By using nonlinearizable proper holomorphic embeddings ${\mathbb{C}}{\hookrightarrow}{\mathbb{C}}^2$, Derksen and Kutzschebauch gave the first known examples of nonlinearizable periodic automorphisms of ${\mathbb{C}}^n$ [@DerksenKutzschebauch1998]. For instance, there is a nonlinearizable holomorphic involution on ${\mathbb{C}}^4$.
In another direction, Baader et al. [@Baaderall2010] constructed an example of a properly embedded disc in ${\mathbb{C}}^2$ whose image is topologically knotted, thereby answering a question of Kirby. It is unknown whether there exists a knotted proper holomorphic embedding ${\mathbb{C}}{\hookrightarrow}{\mathbb{C}}^2$, or an unknotted proper holomorphic embedding ${\mathbb D}{\hookrightarrow}{\mathbb{C}}^2$ of the disc.
Automorphisms of ${\mathbb{C}}^2$ and ${\mathbb{C}}^*\times {\mathbb{C}}$ were used in a very clever way by Wold in his landmark construction of non-Runge Fatou-Bieberbach domains in ${\mathbb{C}}^2$ [@Wold2008] and of non-Stein long ${\mathbb{C}}^2$’s [@Wold2010]. Each of these results solved a long-standing open problem. More recently, Wold’s construction was developed further by Boc Thaler and the author [@BocThalerForstneric2016] who showed that there is a continuum of pairwise nonequivalent long ${\mathbb{C}}^n$’s for any $n>1$ which do not admit any nonconstant holomorphic or plurisubharmonic functions. (See also [@Forstneric2017E §4.21].)
Embeddings into Stein manifolds with the density property {#sec:density}
=========================================================
Universal Stein manifolds {#ss:universal}
-------------------------
It is natural to ask which Stein manifolds, besides the Euclidean spaces, contain all Stein manifolds of suitably low dimension as closed complex submanifolds. To facilitate the discussion, we introduce the following notions.
\[def:universal\] Let $Y$ be a Stein manifold.
1. $Y$ is [*universal for proper holomorphic embeddings*]{} if every Stein manifold $X$ with $2\dim X<\dim Y$ admits a proper holomorphic embedding $X{\hookrightarrow}Y$.
2. $Y$ is [*strongly universal for proper holomorphic embeddings*]{} if, under the assumptions in (1), every continuous map $f_0\colon X\to Y$ which is holomorphic in a neighborhood of a compact ${\mathcal{O}}(X)$-convex set $K\subset X$ is homotopic to a proper holomorphic embedding $f_1\colon X{\hookrightarrow}Y$ by a homotopy $f_t\colon X\to Y$ $(t\in[0,1])$ such that $f_t$ is holomorphic and arbitrarily close to $f_0$ on $K$ for every $t\in [0,1]$.
3. $Y$ is (strongly) [*universal for proper holomorphic immersions*]{} if condition (1) (resp. (2)) holds for proper holomorphic immersions $X\to Y$ from any Stein manifold $X$ satisfying $2\dim X\le \dim Y$.
In the terminology of Oka theory (cf. [@Forstneric2017E Chap. 5]), a complex manifold $Y$ is (strongly) universal for proper holomorphic embeddings if it satisfies the basic Oka property (with approximation) for proper holomorphic embeddings $X\to Y$ from Stein manifolds of dimension $2\dim X<\dim Y$. The dimension hypotheses in the above definition are justified by Proposition \[prop:generic\]. The main goal is to find good sufficient conditions for a Stein manifold to be universal. If a manifold $Y$ is Brody hyperbolic [@Brody1978] (i.e., it admits no nonconstant holomorphic map ${\mathbb{C}}\to Y$), then clearly no complex manifold containing a nontrivial holomorphic image of ${\mathbb{C}}$ can be embedded into $Y$. In order to get positive results, one must therefore assume that $Y$ enjoys a suitable holomorphic flexibility (anti-hyperbolicity) property.
\[prob:Oka\] Is every Stein Oka manifold (strongly) universal for proper holomorphic embeddings and immersions?
Recall (see e.g. [@Forstneric2017E Theorem 5.5.1]) that every Oka manifold is strongly universal for not necessarily proper holomorphic maps, embeddings and immersions. Indeed, the cited theorem asserts that a generic holomorphic map $X\to Y$ from a Stein manifold $X$ into an Oka manifold $Y$ is an immersion if $\dim Y\ge 2\dim X$, and is an injective immersion if $\dim Y > 2\dim X$. However, the Oka condition does not imply universality for [*proper*]{} holomorphic maps since there are examples of (compact or noncompact) Oka manifolds without any closed complex subvarieties of positive dimension (see [@Forstneric2017E Example 9.8.3]).
Manifolds with the (volume) density property
--------------------------------------------
The following condition was introduced in 2000 by Varolin [@Varolin2000; @Varolin2001].
\[def:density\] A complex manifold $Y$ enjoys the (holomorphic) [*density property*]{} if the Lie algebra generated by the ${\mathbb{C}}$-complete holomorphic vector fields on $Y$ is dense in the Lie algebra of all holomorphic vector fields in the compact-open topology.
A complex manifold $Y$ endowed with a holomorphic volume form $\omega$ enjoys the [*volume density property*]{} if the analogous density condition holds in the Lie algebra of all holomorphic vector fields on $Y$ with vanishing $\omega$-divergence.
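To illustrate the definition, recall the standard example (a sketch added here, not part of Varolin’s text): on ${\mathbb{C}}^n$ with $n\ge 2$, a [*shear*]{} vector field $$V = f(z_2,\ldots,z_n)\, \frac{\partial}{\partial z_1},$$ with $f$ an entire function, is ${\mathbb{C}}$-complete, since its flow $$\phi_t(z_1,z_2,\ldots,z_n) = \bigl(z_1+t\, f(z_2,\ldots,z_n),\, z_2,\ldots,z_n\bigr)$$ exists for all $t\in{\mathbb{C}}$. By the Andersén-Lempert theorem, the Lie algebra generated by such shears (and the analogous overshears) is dense in the Lie algebra of all holomorphic vector fields on ${\mathbb{C}}^n$; hence ${\mathbb{C}}^n$ for $n\ge 2$ enjoys the density property.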
The algebraic density and volume density properties were introduced by Kaliman and Kutzschebauch [@KalimanKutzschebauch2010IM]. The class of Stein manifolds with the (volume) density property includes most complex Lie groups and homogeneous spaces, as well as many nonhomogeneous manifolds. We refer to [@Forstneric2017E §4.10] for a more complete discussion and an up-to-date collection of references on this subject. Another recent survey is the paper by Kaliman and Kutzschebauch [@KalimanKutzschebauch2015]. Every complex manifold with the density property is an Oka manifold, and a Stein manifold with the density property is elliptic in the sense of Gromov (see [@Forstneric2017E Proposition 5.6.23]). It is an open problem whether a contractible Stein manifold with the density property is biholomorphic to a complex Euclidean space.
The following result is due to Andrist and Wold [@AndristWold2014] in the special case when $X$ is an open Riemann surface, to Andrist et al. [@AndristFRW2016 Theorems 1.1, 1.2] for embeddings, and to the author [@Forstneric-immersions Theorem 1.1] for immersions in the double dimension.
\[th:density\] [[@AndristFRW2016; @AndristWold2014; @Forstneric-immersions]]{} Every Stein manifold with the density or the volume density property is strongly universal for proper holomorphic embeddings and immersions.
To prove Theorem \[th:density\], one follows the scheme of proof of the Oka principle for maps from Stein manifolds to Oka manifolds (see [@Forstneric2017E Chapter 5]), but with a crucial addition which we now briefly describe.
Assume that $D\Subset X$ is a relatively compact strongly pseudoconvex domain with smooth boundary and $f\colon \overline D{\hookrightarrow}Y$ is a holomorphic embedding such that $f(bD) \subset Y\setminus L$, where $L$ is a given compact ${\mathcal{O}}(Y)$-convex set in $Y$. We wish to approximate $f$ uniformly on $\overline D$ by a holomorphic embedding $f'\colon \overline{D'}{\hookrightarrow}Y$ of a certain bigger strongly pseudoconvex domain $\overline{D'} \Subset X$ to $Y$, where $D'$ is either a union of $D$ with a small convex bump $B$ chosen such that $f(\overline {D\cap B})\subset Y\setminus L$, or a thin handlebody whose core is the union of $D$ and a suitable smoothly embedded totally real disc in $X\setminus D$. (The second case amounts to a change of topology of the domain, and it typically occurs when passing a critical point of a strongly plurisubharmonic exhaustion function on $X$.) In view of Proposition \[prop:generic\], we only need to approximate $f$ by a holomorphic map $f'\colon \overline{D'} \to Y$ since a small generic perturbation of $f'$ then yields an embedding. It turns out that the second case involving a handlebody easily reduces to the first one by applying a Mergelyan type approximation theorem; see [@Forstneric2017E §5.11] for this reduction. The attachment of a bump is handled by using the density property of $Y$. This property allows us to find a holomorphic map $g\colon \overline B\to Y\setminus L$ approximating $f$ as closely as desired on a neighborhood of the attaching set $\overline{B\cap D}$ and satisfying $g(\overline B)\subset Y\setminus L$. (More precisely, we use that isotopies of biholomorphic maps between pseudoconvex Runge domains in $Y$ can be approximated by holomorphic automorphisms of $Y$; see [@ForstnericRosay1993 Theorem 1.1] and also [@Forstneric2017E Theorem 4.10.5] for the version pertaining to Stein manifolds with the density property.)
Assuming that $g$ is sufficiently close to $f$ on $\overline{B\cap D}$, we can glue them into a holomorphic map $f'\colon \overline{D'}\to Y$ which approximates $f$ on $\overline D$ and satisfies $f'(\overline{B})\subset Y\setminus L$. The proof is completed by an induction procedure in which every induction step is of the type described above. The inclusion $f'(\overline{B})\subset Y\setminus L$ satisfied by the next map in the induction step guarantees properness of the limit embedding $X{\hookrightarrow}Y$. Of course the sets $L\subset Y$ also increase and form an exhaustion of $Y$.
The case of immersions in double dimension requires a more precise analysis. In the induction step described above, we must ensure that the immersion $f \colon \overline D\to Y$ is injective (an embedding) on the attaching set $\overline{B\cap D}$ of the bump $B$. This can be arranged by general position provided that $\overline{B\cap D}$ is very thin. It is shown in [@Forstneric-immersions] that it suffices to work with convex bumps such that, in suitably chosen holomorphic coordinates on a neighborhood of $\overline B$, the set $B$ is a convex polyhedron and $\overline{B\cap D}$ is a very thin neighborhood of one of its faces. This means that $\overline{B\cap D}$ is a small thickening of a $(2n-1)$-dimensional object in $X$, and hence we can easily arrange that $f$ is injective on it. The remainder of the proof proceeds exactly as before, completing our sketch of proof of Theorem \[th:density\].
On the Schoen-Yau conjecture {#ss:S-Y}
----------------------------
The following corollary to Theorem \[th:density\] is related to a conjecture of Schoen and Yau [@SchoenYau1997] that the disc ${\mathbb D}=\{\zeta \in{\mathbb{C}}:|\zeta|<1\}$ does not admit any proper harmonic maps to ${\mathbb{R}}^2$.
\[cor:harmonic\] Every Stein manifold $X$ of complex dimension $n$ admits a proper holomorphic immersion to $({\mathbb{C}}^*)^{2n}$, and a proper pluriharmonic map into ${\mathbb{R}}^{2n}$.
The space $({\mathbb{C}}^*)^n$ with coordinates $z=(z_1,\ldots,z_n)$ (where $z_j\in {\mathbb{C}}^*$ for $j=1,\ldots,n$) enjoys the volume density property with respect to the volume form $$\omega= \frac{dz_1\wedge\cdots\wedge dz_n}{z_1\cdots z_n}.$$ (See Varolin [@Varolin2001] or [@Forstneric2017E Theorem 4.10.9(c)].) Hence, [@Forstneric-immersions Theorem 1.2] (the part of Theorem \[th:density\] above concerning immersions into the double dimension) furnishes a proper holomorphic immersion $f=(f_1,\ldots,f_{2n})\colon X\to ({\mathbb{C}}^*)^{2n}$. It follows that the map $$\label{eq:log}
u=(u_1,\ldots,u_{2n})\colon X\to {\mathbb{R}}^{2n}\quad \text{with}\ \ u_j=\log|f_j|\ \ \text{for}\ \ j=1,\ldots, 2n$$ is a proper map of $X$ to ${\mathbb{R}}^{2n}$ whose components are pluriharmonic functions.
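To make the last assertion explicit (a short verification added here): each $f_j$ is a nonvanishing holomorphic function, so every point of $X$ has a simply connected neighborhood on which $f_j=e^{g_j}$ for some holomorphic function $g_j$, whence $u_j=\log|f_j|=\Re g_j$ is pluriharmonic. Properness of $u$ follows from properness of $f$: for every $C>0$ the set $$L_C = \bigl\{z\in({\mathbb{C}}^*)^{2n} : e^{-C}\le |z_j|\le e^{C}\ \ \text{for}\ \ j=1,\ldots,2n\bigr\}$$ is compact in $({\mathbb{C}}^*)^{2n}$, and $u^{-1}\bigl([-C,C]^{2n}\bigr)=f^{-1}(L_C)$ is therefore compact.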
Corollary \[cor:harmonic\] gives a counterexample to the Schoen-Yau conjecture in every dimension and for any Stein source manifold. The first and very explicit counterexample was given by Bo[ž]{}in [@Bozin1999IMRN] in 1999. In 2001, Globevnik and the author [@ForstnericGlobevnik2001MRL] constructed a proper holomorphic map $f=(f_1,f_2)\colon{\mathbb D}\to{\mathbb{C}}^2$ whose image is contained in $({\mathbb{C}}^*)^2$, i.e., it avoids both coordinate axes. The associated harmonic map $u=(u_1,u_2)\colon {\mathbb D}\to{\mathbb{R}}^2$ then satisfies $\lim_{|\zeta|\to 1} \max\{u_1(\zeta),u_2(\zeta)\} = +\infty$ which implies properness. Next, Alarc[ó]{}n and L[ó]{}pez [@AlarconLopez2012JDG] showed in 2012 that every open Riemann surface $X$ admits a conformal minimal immersion $u=(u_1,u_2,u_3)\colon X\to{\mathbb{R}}^3$ with a proper (harmonic) projection $(u_1,u_2)\colon X\to {\mathbb{R}}^2$. In 2014, Andrist and Wold [@AndristWold2014 Theorem 5.6] proved Corollary \[cor:harmonic\] in the case $n=1$.
Comparing Corollary \[cor:harmonic\] with the above mentioned result of Globevnik and the author [@ForstnericGlobevnik2001MRL], one is led to the following question.
Let $X$ be a Stein manifold of dimension $n>1$. Does there exist a proper holomorphic immersion $f\colon X\to {\mathbb{C}}^{2n}$ such that $f(X)\subset ({\mathbb{C}}^*)^{2n}$?
More generally, one can ask which type of sets in Stein manifolds can be avoided by proper holomorphic maps from Stein manifolds of sufficiently low dimension. In this direction, Drinovec Drnovšek showed in [@Drinovec2004MRL] that any closed complete pluripolar set can be avoided by proper holomorphic discs; see also Borell et al. [@Borell2008MRL] for embedded discs in ${\mathbb{C}}^n$. Note that every closed complex subvariety is a complete pluripolar set.
Embeddings of strongly pseudoconvex Stein domains {#sec:PSC}
=================================================
The Oka principle for embeddings of strongly pseudoconvex domains
-----------------------------------------------------------------
What can be said about proper holomorphic embeddings and immersions of Stein manifolds $X$ into arbitrary (Stein) manifolds $Y$? If $Y$ is Brody hyperbolic [@Brody1978], then no complex manifold containing a nontrivial holomorphic image of ${\mathbb{C}}$ embeds into $Y$. However, if $\dim Y>1$ and $Y$ is Stein then $Y$ still admits proper holomorphic images of any bordered Riemann surface [@DrinovecForstneric2007DMJ; @Globevnik2000]. For domains in Euclidean spaces, this line of investigation was started in 1976 by Forn[æ]{}ss [@Fornaess1976] and continued in 1985 by L[ø]{}w [@Low1985MZ] and the author [@Forstneric1986TAMS] who proved that every bounded strongly pseudoconvex domain $X\subset {\mathbb{C}}^n$ admits a proper holomorphic embedding into a high dimensional polydisc and ball. The long line of subsequent developments culminated in the following result of Drinovec Drnovšek and the author [@DrinovecForstneric2007DMJ; @DrinovecForstneric2010AJM].
\[th:BDF2010\] [[@DrinovecForstneric2010AJM Corollary 1.2]]{} Let $X$ be a relatively compact, smoothly bounded, strongly pseudoconvex domain in a Stein manifold ${\widetilde}X$ of dimension $n$, and let $Y$ be a Stein manifold of dimension $N$. If $N>2n$ then every continuous map $f\colon \overline X \to Y$ which is holomorphic on $X$ can be approximated uniformly on compacts in $X$ by proper holomorphic embeddings $X{\hookrightarrow}Y$. If $N\ge 2n$ then the analogous result holds for immersions. The same conclusions hold if the manifold $Y$ is strongly $q$-complete for some $q\in \{1,2,\ldots, N-2n+1\}$, where the case $q=1$ corresponds to Stein manifolds.
In the special case when $Y$ is a domain in a Euclidean space, this is due to Dor [@Dor1995]. The papers [@DrinovecForstneric2007DMJ; @DrinovecForstneric2010AJM] include several more precise results in this direction and references to numerous previous works. Note that a continuous map $f\colon \overline X \to Y$ from a compact strongly pseudoconvex domain which is holomorphic on the open domain $X$, with values in an arbitrary complex manifold $Y$, can be approximated uniformly on $\overline X$ by holomorphic maps from small open neighborhoods of $\overline X$ in the ambient manifold ${\widetilde}X$, where the neighborhood depends on the map (see [@DrinovecForstneric2008FM Theorem 1.2] or [@Forstneric2017E Theorem 8.11.4]). However, unless $Y$ is an Oka manifold, it is impossible to approximate $f$ uniformly on $\overline X$ by holomorphic maps from a fixed bigger domain $X_1\subset {\widetilde}X$ independent of the map. For this reason, it is imperative that the initial map $f$ in Theorem \[th:BDF2010\] be defined on all of $\overline X$.
One of the main techniques used in the proof of Theorem \[th:BDF2010\] are special holomorphic peaking functions on $X$. The second tool is the method of holomorphic sprays developed in the context of Oka theory; this is essentially a nonlinear version of the ${\overline\partial}$-method.
Here is the main idea of the proof of Theorem \[th:BDF2010\]. Choose a strongly $q$-convex Morse exhaustion function $\rho\colon Y\to{\mathbb{R}}_+$. (When $q=1$, $\rho$ is strongly plurisubharmonic.) By using the mentioned tools, one can approximate any given holomorphic map $f\colon \overline X\to Y$ uniformly on compacts in $X$ by another holomorphic map $\tilde f\colon \overline X\to Y$ such that $\rho\circ \tilde f > \rho\circ f + c$ holds on $bX$ for some constant $c>0$ depending only on the geometry of $\rho$ on a given compact set $L\subset Y$ containing $f(\overline X)$. Geometrically speaking, this means that we lift the image of the boundary of $X$ in $Y$ to a higher level of the function $\rho$ by a prescribed amount. At the same time, we can ensure that $\rho\circ \tilde f > \rho \circ f -\delta$ on $X$ for any given $\delta>0$, and that $\tilde f$ approximates $f$ as closely as desired on a given compact ${\mathcal{O}}(X)$-convex subset $K\subset X$. By Proposition \[prop:generic\] we can ensure that our maps are embeddings. An inductive application of this technique yields a sequence of holomorphic embeddings $f_k\colon\overline X{\hookrightarrow}Y$ converging to a proper holomorphic embedding $X{\hookrightarrow}Y$. The same construction gives proper holomorphic immersions when $N\ge 2n$.
On the Hodge Conjecture for $q$-complete manifolds {#ss:Hodge}
--------------------------------------------------
A more precise analysis of the proof of Theorem \[th:BDF2010\] was used by Smrekar, Sukhov and the author [@FSS2016] to show the following result along the lines of the Hodge conjecture.
If $Y$ is a $q$-complete complex manifold of dimension $N$ and of finite topology such that $q<N$ and the number $N+q-1=2p$ is even, then every cohomology class in $H^{N+q-1}(Y;{\mathbb{Z}})$ is Poincaré dual to an analytic cycle in $Y$ consisting of proper holomorphic images of the ball ${\mathbb{B}}^p\subset {\mathbb{C}}^p$. If the manifold $Y$ has infinite topology, the same result holds for elements of the group ${\mathscr{H}}^{N+q-1}(Y;{\mathbb{Z}}) = \lim_j H^{N+q-1}(M_j;{\mathbb{Z}})$ where $\{M_j\}_{j\in {\mathbb{N}}}$ is an exhaustion of $Y$ by compact smoothly bounded domains.
Note that $H^{N+q-1}(Y;{\mathbb{Z}})$ is the highest dimensional a priori nontrivial cohomology group of a $q$-complete manifold $Y$ of dimension $N$. We do not know whether a similar result holds for lower dimensional cohomology groups of a $q$-complete manifold. In the special case when $Y$ is a Stein manifold, the situation is better understood thanks to the Oka-Grauert principle, and the reader can find appropriate references in the paper [@FSS2016].
Complete bounded complex submanifolds {#ss:complete}
-------------------------------------
There are interesting recent constructions of properly embedded complex submanifolds $X\subset {\mathbb{B}}^N$ of the unit ball in ${\mathbb{C}}^N$ (or of pseudoconvex domains in ${\mathbb{C}}^N$) which are [*complete*]{} in the sense that every curve in $X$ terminating on the sphere $b{\mathbb{B}}^N$ has infinite length. Equivalently, the metric on $X$, induced from the Euclidean metric on ${\mathbb{C}}^N$ by the embedding $X{\hookrightarrow}{\mathbb{C}}^N$, is a complete metric.
The question whether there exist complete bounded complex submanifolds in Euclidean spaces was asked by Paul Yang in 1977. The first such examples were provided by Jones [@Jones1979PAMS] in 1979. Recent results on this subject are due to Alarc[ó]{}n and the author [@AlarconForstneric2013MA], Alarc[ó]{}n and L[ópez]{} [@AlarconLopez2016], Drinovec Drnov[š]{}ek [@Drinovec2015JMAA], Globevnik [@Globevnik2015AM; @Globevnik2016JMAA; @Globevnik2016MA], and Alarc[ó]{}n et al. [@AlarconGlobevnik2017; @AlarconGlobevnikLopez2016Crelle]. In [@AlarconForstneric2013MA] it was shown that any bordered Riemann surface admits a proper complete holomorphic immersion into ${\mathbb{B}}^2$ and embedding into ${\mathbb{B}}^3$ (no change of the complex structure on the surface is necessary). In [@AlarconGlobevnik2017] the authors showed that properly embedded complete complex curves in the ball ${\mathbb{B}}^2$ can have any topology, but their method (using holomorphic automorphisms) does not allow one to control the complex structure of the examples. Drinovec Drnov[š]{}ek [@Drinovec2015JMAA] proved that every strongly pseudoconvex domain embeds as a complete complex submanifold of a high dimensional ball. Globevnik proved [@Globevnik2015AM; @Globevnik2016MA] that any pseudoconvex domain in ${\mathbb{C}}^N$ for $N>1$ can be foliated by complete complex hypersurfaces given as level sets of a holomorphic function, and Alarc[ó]{}n showed [@Alarcon2018] that there are nonsingular foliations of this type given as level sets of a holomorphic function without critical points. Furthermore, there is a complete proper holomorphic embedding ${\mathbb D}{\hookrightarrow}{\mathbb{B}}^2$ whose image contains any given discrete subset of ${\mathbb{B}}^2$ [@Globevnik2016JMAA], and there exist complex curves of arbitrary topology in ${\mathbb{B}}^2$ satisfying this property [@AlarconGlobevnik2017].
The constructions in these papers, except those in [@Alarcon2018; @Globevnik2015AM; @Globevnik2016MA], rely on one of the following two methods:
- the Riemann-Hilbert boundary value problem (or holomorphic peaking functions in the case of higher dimensional domains considered in [@Drinovec2015JMAA]);
- holomorphic automorphisms of the ambient space ${\mathbb{C}}^N$.
Each of these methods can be used to increase the intrinsic boundary distance in an embedded or immersed submanifold. The first method has the advantage of preserving the complex structure, and the disadvantage of introducing self-intersections in the double dimension or below. The second method is precisely the opposite — it keeps embeddedness, but does not provide any control of the complex structure since one must cut away pieces of the image manifold to keep it suitably bounded. The first of these methods has recently been applied in the theory of minimal surfaces in ${\mathbb{R}}^n$; we refer to the papers [@AlarconDrinovecForstnericLopez2015PLMS; @AlarconDrinovecForstnericLopez2017TAMS; @AlarconForstneric2015MA] and the references therein. On the other hand, ambient automorphisms cannot be applied in minimal surface theory since the only class of self-maps of ${\mathbb{R}}^n$ $(n>2)$ mapping minimal surfaces to minimal surfaces are the rigid affine linear maps.
Globevnik’s method in [@Globevnik2015AM; @Globevnik2016MA] is different from both of the above. He showed that for every integer $N>1$ there is a holomorphic function $f$ on the ball ${\mathbb{B}}^N$ whose real part $\Re f$ is unbounded on every path of finite length that ends on $b{\mathbb{B}}^N$. It follows that every level set $M_c=\{f=c\}$ is a closed complete complex hypersurface in ${\mathbb{B}}^N$, and $M_c$ is smooth for most values of $c$ in view of Sard’s lemma. The function $f$ is constructed such that its real part grows sufficiently fast on a certain labyrinth $\Lambda\subset {\mathbb{B}}^N$, consisting of pairwise disjoint closed polygonal domains in real affine hyperplanes, such that every curve in ${\mathbb{B}}^N\setminus \Lambda$ which terminates on $b{\mathbb{B}}^N$ has infinite length. The advantage of his method is that it gives an affirmative answer to Yang’s question in all dimensions and codimensions. The disadvantage is that one cannot control the topology or the complex structure of the level sets. By using instead holomorphic automorphisms in order to push a submanifold off the labyrinth $\Lambda$, Alarc[ó]{}n et al. [@AlarconGlobevnikLopez2016Crelle] succeeded in obtaining partial control of the topology of the embedded submanifold, and complete control in the case of complex curves [@AlarconGlobevnik2017]. Finally, by using the method of constructing noncritical holomorphic functions due to Forstnerič [@Forstneric2003AM], Alarc[ó]{}n [@Alarcon2018] improved Globevnik’s main result from [@Globevnik2015AM] by showing that every closed complete complex hypersurface in the ball ${\mathbb{B}}^n$ $(n>1)$ is a leaf in a nonsingular holomorphic foliation of ${\mathbb{B}}^n$ by closed complete complex hypersurfaces.
By using the labyrinths constructed in [@AlarconGlobevnikLopez2016Crelle; @Globevnik2015AM] and methods of Andersén-Lempert theory, Alarc[ó]{}n and the author showed in [@AlarconForstneric2018PAMS] that there exists a complete injective holomorphic immersion $\mathbb{C}\to\mathbb{C}^2$ whose image is everywhere dense in $\mathbb{C}^2$ [@AlarconForstneric2018PAMS Corollary 1.2]. The analogous result holds for any closed complex submanifold $X\subsetneqq \mathbb{C}^n$ for $n>1$ (see [@AlarconForstneric2018PAMS Theorem 1.1]). Furthermore, if $X$ intersects the ball $\mathbb{B}^n$ and $K$ is a connected compact subset of $X\cap\mathbb{B}^n$, then there is a Runge domain $\Omega\subset X$ containing $K$ which admits a complete injective holomorphic immersion $\Omega\to\mathbb{B}^n$ whose image is dense in $\mathbb{B}^n$.
Submanifolds with exotic boundary behaviour {#ss:exotic}
-------------------------------------------
The boundary behavior of proper holomorphic maps between bounded domains with smooth boundaries in complex Euclidean spaces has been studied extensively; see the recent survey by Pinchuk et al. [@Pinchuk2017]. It is generally believed, and has been proved under a variety of additional conditions, that proper holomorphic maps between relatively compact smoothly bounded domains of the same dimension always extend smoothly up to the boundary. In dimension $1$ this is the classical theorem of Carath[é]{}odory (see [@Caratheodory1913MA] or [@Pommerenke1992 Theorem 2.7]). On the other hand, proper holomorphic maps into higher dimensional domains may have rather wild boundary behavior. For example, Globevnik [@Globevnik1987MZ] proved in 1987 that, given $n\in {\mathbb{N}}$, if $N\in{\mathbb{N}}$ is sufficiently large then there exists a continuous map $f\colon \overline {\mathbb{B}}^n \to \overline {\mathbb{B}}^N$ which is holomorphic in ${\mathbb{B}}^n$ and satisfies $f(b{\mathbb{B}}^n)=b{\mathbb{B}}^N$. Recently, the author [@Forstneric2017Sept] constructed a properly embedded holomorphic disc ${\mathbb D}{\hookrightarrow}{\mathbb{B}}^2$ in the $2$-ball with arbitrarily small area (hence it is the zero set of a bounded holomorphic function on $\mathbb{B}^2$ according to Berndtsson [@Berndtsson1980]) which extends holomorphically across the boundary of the disc, with the exception of one boundary point, such that its boundary curve is injectively immersed and everywhere dense in the sphere $b\mathbb{B}^2$. Examples of proper (not necessarily embedded) discs with similar behavior were found earlier by Globevnik and Stout [@GlobevnikStout1986].
The soft Oka principle for proper holomorphic embeddings {#sec:soft}
========================================================
By combining the technique in the proof of Theorem \[th:BDF2010\] with methods from the papers by Slapar and the author [@ForstnericSlapar2007MRL; @ForstnericSlapar2007MZ] one can prove the following seemingly new result.
\[th:soft\] Let $(X,J)$ and $Y$ be Stein manifolds, where $J\colon TX\to TX$ denotes the complex structure operator on $X$. If $\dim Y > 2\dim X$ then for every continuous map $f\colon X\to Y$ there exists a Stein structure $J'$ on $X$, homotopic to $J$, and a proper holomorphic embedding $f'\colon (X,J'){\hookrightarrow}Y$ homotopic to $f$. If $\dim Y\ge 2\dim X$ then $f'$ can be chosen to be a proper holomorphic immersion having only simple double points. The same holds if the manifold $Y$ is $q$-complete for some $q\in \{1,2,\ldots, \dim Y-2\dim X+1\}$, where $q=1$ corresponds to Stein manifolds.
Intuitively speaking, every Stein manifold $X$ embeds properly holomorphically into any other Stein manifold $Y$ of dimension $\dim Y >2\dim X$ up to a change of the Stein structure on $X$. The main result of [@ForstnericSlapar2007MZ] amounts to the same statement for holomorphic maps (instead of proper embeddings), but without any hypothesis on the target complex manifold $Y$. In order to obtain [*proper*]{} holomorphic maps $X\to Y$, we need a suitable geometric hypothesis on $Y$ in view of the examples of noncompact (even Oka) manifolds without any closed complex subvarieties (see [@Forstneric2017E Example 9.8.3]).
The results from [@ForstnericSlapar2007MRL; @ForstnericSlapar2007MZ] were extended by Prezelj and Slapar [@PrezeljSlapar2011] to $1$-convex source manifolds. For Stein manifolds $X$ of complex dimension $2$, these results also stipulate a change of the underlying ${\mathscr{C}}^\infty$ structure on $X$. It was later shown by Cieliebak and Eliashberg that such a change is not necessary if one begins with an integrable Stein structure; see [@CieliebakEliashberg2012 Theorem 8.43 and Remark 8.44]. For the constructions of exotic Stein structures on smooth orientable $4$-manifolds, in particular on ${\mathbb{R}}^4$, see Gompf [@Gompf1998; @Gompf2005; @Gompf2017GT].
In order to fully understand the proof, the reader should be familiar with [@ForstnericSlapar2007MZ proof of Theorem 1.1]. (Theorem 1.2 in the same paper gives an equivalent formulation where one does not change the Stein structure on $X$, but instead finds a desired holomorphic map on a Stein Runge domain $\Omega\subset X$ which is diffeotopic to $X$. An exposition is also available in [@CieliebakEliashberg2012 Theorem 8.43 and Remark 8.44] and [@Forstneric2017E §10.9].)
We explain the main step in the case $\dim Y > 2\dim X$; the theorem follows by using it inductively as in [@ForstnericSlapar2007MZ]. An interested reader is invited to provide the details.
Assume that $X_0\subset X_1$ is a pair of relatively compact, smoothly bounded, strongly pseudoconvex domains in $X$ such that there exists a strongly plurisubharmonic Morse function $\rho$ on an open set $U\supset \overline {X_1\setminus X_0}$ in $X$ satisfying $$X_0 \cap U = \{x\in U\colon \rho(x)<a\},\quad
X_1 \cap U = \{x\in U \colon \rho(x)<b\},$$ for a pair of constants $a<b$ and $d\rho\ne 0$ on $bX_0\cup bX_1$. Let $L_0\subset L_1$ be a pair of compact sets in $Y$. (In the induction, $L_0$ and $L_1$ are sublevel sets of a strongly $q$-convex exhaustion function on $Y$.) Assume that $f_0\colon X\to Y$ is a continuous map whose restriction to a neighborhood of $\overline X_0$ is a $J$-holomorphic embedding satisfying $f_0(bX_0) \subset Y\setminus L_0$. The goal is to find a new Stein structure $J_1$ on $X$, homotopic to $J$ by a smooth homotopy that is fixed in a neighborhood of $\overline X_0$, such that $f_0$ can be deformed to a map $f_1\colon X\to Y$ whose restriction to a neighborhood of $\overline X_1$ is a $J_1$-holomorphic embedding which approximates $f_0$ uniformly on $\overline X_0$ as closely as desired and satisfies $$\label{eq:lifting}
f_1(\overline {X_1\setminus X_0})\subset Y\setminus L_0, \qquad
f_1(bX_1) \subset Y\setminus L_1.$$ An inductive application of this result proves Theorem \[th:soft\] as in [@ForstnericSlapar2007MZ]. (For the case $\dim X=2$, see [@CieliebakEliashberg2012 Theorem 8.43 and Remark 8.44].)
By subdividing the problem into finitely many steps of the same kind, it suffices to consider the following two basic cases:
- [*The noncritical case:*]{} $d\rho\ne 0$ on $\overline{X_1\setminus X_0}$. In this case we say that $X_1$ is a [*noncritical strongly pseudoconvex extension*]{} of $X_0$.
- [*The critical case:*]{} $\rho$ has exactly one critical point $p$ in $\overline{X_1\setminus X_0}$.
Let $U_0 \subset U'_0 \subset X$ be a pair of small open neighborhoods of $\overline X_0$ such that $f_0$ is an embedding on $U'_0$. Also, let $U_1\subset U'_1\subset X$ be small open neighborhoods of $\overline X_1$.
In case (a), there exists a smooth diffeomorphism $\phi\colon X\to X$ which is diffeotopic to the identity map on $X$ by a diffeotopy which is fixed on $U_0\cup (X\setminus U'_1)$ such that $\phi(U_1)\subset U'_0$. The map $\tilde f_0=f_0\circ \phi \colon X \to Y$ is then a holomorphic embedding on the set $U_1$ with respect to the Stein structure $J_1=\phi^*(J)$ on $X$ (the pullback of $J$ by $\phi$). Applying the lifting procedure in the proof of Theorem \[th:BDF2010\] and up to shrinking $U_1$ around $\overline X_1$, we can homotopically deform $\tilde f_0$ to a continuous map $f_1\colon X\to Y$ whose restriction to $U_1$ is a $J_1$-holomorphic embedding $U_1{\hookrightarrow}Y$ satisfying conditions \[eq:lifting\].
In case (b), the change of topology of the sublevel sets of $\rho$ at the critical point $p$ is described by attaching to the strongly pseudoconvex domain $\overline X_0$ a smoothly embedded totally real disc $M\subset X_1\setminus X_0$, with $p\in M$ and $bM\subset bX_0$, whose dimension equals the Morse index of $\rho$ at $p$. As shown in [@Eliashberg1990; @CieliebakEliashberg2012; @ForstnericSlapar2007MZ], $M$ can be chosen such that $\overline X_0\cup M$ has a basis of smooth strongly pseudoconvex neighborhoods (handlebodies) $H$ which deformation retract onto $\overline X_0\cup M$ such that $X_1$ is a noncritical strongly pseudoconvex extension of $H$. Furthermore, as explained in [@ForstnericSlapar2007MZ], we can homotopically deform the map $f_0\colon X\to Y$, keeping it fixed in some neighborhood of $\overline X_0$, to a map that is holomorphic on $H$ and maps $H\setminus \overline X_0$ to $L_1\setminus L_0$. By Proposition \[prop:generic\] we can assume that the new map is a holomorphic embedding on $H$. This reduces case (b) to case (a).
In the inductive construction, we alternate the application of cases (a) and (b). If $\dim Y\ge 2\dim X$ then the same procedure applies to immersions.
A version of this construction, for embedding open Riemann surfaces into ${\mathbb{C}}^2$ or $({\mathbb{C}}^*)^2$ up to a deformation of their complex structure, can be found in the papers by Alarcón and L[ó]{}pez [@AlarconLopez2013] and Ritter [@Ritter2014]. However, they use holomorphic automorphisms in order to push the boundary curves to infinity without introducing self-intersections of the image complex curve. The technique in the proof of Theorem \[th:BDF2010\] will in general introduce self-intersections in double dimension.
Acknowledgements {#acknowledgements .unnumbered}
----------------
The author is supported in part by the research program P1-0291 and grants J1-7256 and J1-9104 from ARRS, Republic of Slovenia. I wish to thank Antonio Alarc[ó]{}n and Rafael Andrist for a helpful discussion concerning Corollary \[cor:harmonic\] and the Schoen-Yau conjecture, Barbara Drinovec Drnov[š]{}ek for her remarks on the exposition, Josip Globevnik for the reference to the paper of Bo[ž]{}in [@Bozin1999IMRN], Frank Kutzschebauch for having proposed to include the material in §\[ss:wild\], and Peter Landweber for his remarks which helped me to improve the language and presentation.
Franc Forstnerič
Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, SI–1000 Ljubljana, Slovenia
Institute of Mathematics, Physics and Mechanics, Jadranska 19, SI–1000 Ljubljana, Slovenia
e-mail: [franc.forstneric@fmf.uni-lj.si]{}
UT-HET 039\
\
[**Shinya Kanemura**]{}$^{(a)}$ [^1], [**Shigeki Matsumoto**]{}$^{(a)}$ [^2],\
[**Takehiro Nabeshima**]{}$^{(a)}$ [^3], and [**Nobuchika Okada**]{}$^{(b)}$ [^4]\
$^{(a)}$[*Department of Physics, University of Toyama, Toyama 930-8555, Japan*]{}\
$^{(b)}$[*Department of Physics and Astronomy, University of Alabama,\
Tuscaloosa, AL 35487, USA*]{}\
Introduction
============
In spite of the tremendous success of the Standard Model (SM) of particle physics, it is widely believed that new physics beyond the SM should appear at a certain high energy scale. The main theoretical insight behind this belief is the hierarchy problem in the SM. In other words, the electroweak scale is unstable against quantum corrections and is, in turn, quite sensitive to the ultraviolet energy scale, which is naturally taken to be the scale of new physics beyond the SM. Therefore, in order for the SM to be naturally realized as a low energy effective theory, the scale of new physics should not be far beyond the TeV scale and is most likely at the TeV scale.
After the recent success of the first collision of protons at the Large Hadron Collider (LHC) with a center-of-mass energy of 7 TeV, the LHC is now taking data to explore particle physics at the TeV scale. The discovery of new physics at the TeV scale, as well as of the Higgs boson, which is the last particle in the SM to be directly observed, is the most important mission of the LHC. New physics beyond the SM, once discovered, will trigger a revolution in particle physics.
However, it is generally possible that even if new physics beyond the SM indeed exists, the energy scale of new physics might be beyond the LHC reach, so that the LHC could find only the Higgs boson but nothing else. This is the so-called “nightmare scenario”. The electroweak precision measurements at LEP may support this scenario. The LEP experiments have established excellent agreement between the SM and the data and have provided very severe constraints on new physics dynamics. We consider non-renormalizable operators invariant under the SM gauge group as effective operators obtained by integrating out new physics effects, where the scale of new physics is characterized by the cutoff scale of the operators. It has been shown [@LEPparadox] that the lower bound on the cutoff scale given by the results of the LEP experiments is close to 10 TeV rather than 1 TeV. This fact is the so-called “LEP paradox”. If such higher dimensional operators arise from tree-level effects of new physics, the scale of new physics lies around 10 TeV, beyond the reach of the LHC. As the scale of new physics becomes higher, the naturalness of the SM gets violated. However, for the 10 TeV scale, the fine-tuning required to realize the correct electroweak scale is not so significant but at about the few percent level [@Kolda:2000wi]. Such a little hierarchy may be realized in nature.
On the other hand, various recent cosmological observations, in particular those of the Wilkinson Microwave Anisotropy Probe (WMAP) satellite [@Komatsu:2008hk], have established the $\Lambda$CDM cosmological model with great accuracy. The relic abundance of the cold dark matter is measured at the 2$\sigma$ level as $$\begin{aligned}
\Omega_{\rm CDM} h^2 = 0.1131 \pm 0.0034. \end{aligned}$$ Clarifying the nature of the dark matter is still a prime open problem in particle physics and cosmology. Since the SM has no suitable candidate for the cold dark matter, the observation of the dark matter indicates new physics beyond the SM. Many candidates for dark matter have been proposed in various new physics models.
Among several possibilities, the Weakly Interacting Massive Particle (WIMP) is one of the most promising candidates for dark matter. In this case, the dark matter in the present universe is a thermal relic, and its relic abundance is insensitive to the history of the early universe before the freeze-out time of the dark matter particle, such as the mechanism of reheating after inflation. This scenario allows us to evaluate the dark matter relic density by solving the Boltzmann equation, and we arrive at a very interesting conclusion: in order to obtain the right relic abundance, the WIMP dark matter mass must lie below the TeV scale. Therefore, even if the nightmare scenario is realized, it is plausible that the mass scale of the WIMP dark matter is accessible to the LHC.
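The link between the weak scale and the relic abundance can be made concrete with the standard freeze-out rule of thumb, $\Omega h^2 \approx 3\times 10^{-27}\,{\rm cm^3\,s^{-1}}/\langle\sigma v\rangle$ (a textbook approximation assumed here for illustration, not a result of this paper):

```python
# Textbook freeze-out rule of thumb (an assumed approximation, not derived
# in this paper): Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>.
def omega_h2_estimate(sigma_v):
    """Rough thermal-relic abundance for an annihilation cross section in cm^3/s."""
    return 3e-27 / sigma_v

# A weak-scale cross section, <sigma v> ~ 3e-26 cm^3/s, reproduces the observed
# abundance within a factor of a few -- the reason the WIMP mass is expected
# below the TeV scale.
print(omega_h2_estimate(3e-26))  # ~0.1
```

Since weak-scale couplings and masses naturally give annihilation cross sections of this size, the observed abundance points to a WIMP below the TeV scale.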
In this paper, we extend the SM by introducing the WIMP dark matter in the context of the nightmare scenario, and investigate the possibility that the WIMP dark matter can overcome the nightmare scenario through various phenomenological probes, such as the dark matter relic abundance, direct detection experiments for the dark matter particle, and LHC physics. Among many possibilities, we consider the “worst case”, in which the WIMP dark matter is singlet under the SM gauge group; otherwise the WIMP dark matter could easily be observed through its coupling with the weak gauge bosons. In this setup, the WIMP dark matter communicates with the SM particles through its coupling with the Higgs boson, so that the Higgs boson plays a crucial role in the phenomenology of dark matter.
The paper is organized as follows. In the next section, we introduce the WIMP dark matter which is singlet under the SM gauge group. We consider three different cases for the dark matter particle: a scalar, a fermion, and a vector dark matter. In section 3, we investigate cosmological aspects of the WIMP dark matter and identify a parameter region which is consistent with the WMAP observation and the direct detection measurements of the WIMP dark matter. The collider signal of the dark matter particle is explored in section 4, where the dark matter particles are produced at the LHC in association with Higgs boson production. The last section is devoted to a summary and discussions.
The Model {#Sec2}
=========
Since all new particles except the WIMP dark matter are supposed to be at the scale of 10 TeV in the nightmare scenario, the effective Lagrangian at the scale of 1 TeV involves only the field of the WIMP dark matter and those of the SM particles. We consider the worst case of the WIMP dark matter, namely, the dark matter is assumed to be singlet under the gauge symmetries of the SM. Otherwise, the WIMP dark matter would be accompanied by a charged partner with a mass below 1 TeV, which would easily be detected at collider experiments in the near future, and such a scenario would not be a nightmare. We postulate a global $Z_2$ symmetry (parity) in order to guarantee the stability of the dark matter, under which the WIMP dark matter is odd while the SM particles are even. We consider three cases for the spin of the dark matter: the scalar dark matter $\phi$, the fermion dark matter $\chi$, and the vector dark matter $V_\mu$. In all cases, the dark matter is assumed to be its own antiparticle for simplicity, so that the three cases are described by a real Klein-Gordon field, a Majorana field, and a real Proca field, respectively.
The Lagrangian which is invariant under the symmetries of the SM is written as $$\begin{aligned}
{\cal L}_S
&=&
{\cal L}_{\rm SM} + \frac{1}{2} \left(\partial \phi\right)^2 - \frac{M_S^2 }{2} \phi^2
- \frac{c_S}{2}|H|^2 \phi^2 - \frac{d_S}{4!} \phi^4,
\label{Lagrangian S}
\\
{\cal L}_F
&=&
{\cal L}_{\rm SM} + \frac{1}{2} \bar{\chi} \left(i\Slash{\partial} - M_F\right) \chi
- \frac{c_F}{2\Lambda} |H|^2 \bar\chi \chi
- \frac{d_F}{2\Lambda} \bar{\chi}\sigma^{\mu\nu}\chi B_{\mu\nu},
\label{Lagrangian F}
\\
{\cal L}_V
&=&
{\cal L}_{\rm SM} - \frac{1}{4} V^{\mu\nu} V_{\mu \nu} + \frac{M_V^2 }{2} V_\mu V^\mu
+ \frac{c_V}{2} |H|^2 V_\mu V^\mu - \frac{d_V}{4!} (V_\mu V^\mu)^2,
\label{Lagrangian V}\end{aligned}$$ where $V_{\mu\nu} = \partial_\mu V_\nu - \partial_\nu V_\mu$, $B_{\mu\nu}$ is the field strength tensor of the hypercharge gauge boson, and ${\cal L}_{\rm SM}$ is the Lagrangian of the SM with $H$ being the Higgs doublet. The last terms on the RHS of Eqs.(\[Lagrangian S\]) and (\[Lagrangian V\]), proportional to the coefficients $d_S$ and $d_V$, represent self-interactions of the WIMP dark matter, which are not relevant for the following discussion. On the other hand, the last term on the RHS of Eq.(\[Lagrangian F\]), proportional to the coefficient $d_F$, is an interaction between the WIMP dark matter and the hypercharge gauge boson; however, since the dark matter particle carries no hypercharge, this term is most likely generated by 1-loop diagrams of the new physics dynamics at the scale of 10 TeV. The term can therefore be ignored in comparison with the term proportional to $c_F$, which can be generated by tree-level diagrams. As can be seen in the Lagrangian, the WIMP dark matter in our scenario interacts with the SM particles only through the Higgs boson. Such a scenario is sometimes called the “[*Higgs portal*]{}” scenario.
After the electroweak symmetry breaking, masses of the dark matters are given by $$\begin{aligned}
m_S^2 &=& M_S^2 + c_Sv^2/2, \\
m_F &=& M_F + c_Fv^2/(2\Lambda), \\
m_V^2 &=& M_V^2 + c_Vv^2/2,\end{aligned}$$ where the vacuum expectation value of the Higgs field is set to be $\langle H \rangle = (0,v)^T/\sqrt{2}$ with $v \simeq 246$ GeV. Although the model parameter $M_{\rm DM}$ (DM $= S$, $F$, and $V$) may be related to the parameter $c_{\rm DM}$ and may depend on details of the new physics at the scale of 10 TeV, we treat $m_{\rm DM}$ and $c_{\rm DM}$ as free parameters in the following discussion. There are some examples of new physics models with dark matter which realize the Higgs portal scenario at low energies. The scenario with the scalar Higgs portal dark matter appears in models discussed in Refs. [@higgsportal-scalar1; @higgsportal-scalar3; @higgsportal-scalar4]. R-parity invariant supersymmetric standard models with a Bino-like lightest superparticle can correspond to the fermion Higgs portal dark matter scenario when the other superpartners are heavy enough [@higgsportal-fermion1]. The vector dark matter can be realized in models such as the littlest Higgs model with T-parity if the breaking scale is very high [@higgsportal-vector1].
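The mass relations after electroweak symmetry breaking can be checked with a minimal numerical sketch (the parameter values below are illustrative, not fits):

```python
import math

V_EW = 246.0  # GeV, Higgs vacuum expectation value (as in the text)

def m_scalar(M_S, c_S):
    """Physical scalar DM mass: m_S^2 = M_S^2 + c_S v^2 / 2."""
    return math.sqrt(M_S**2 + c_S * V_EW**2 / 2.0)

def m_fermion(M_F, c_F, Lam):
    """Physical fermion DM mass: m_F = M_F + c_F v^2 / (2 Lambda)."""
    return M_F + c_F * V_EW**2 / (2.0 * Lam)

def m_vector(M_V, c_V):
    """Physical vector DM mass: m_V^2 = M_V^2 + c_V v^2 / 2."""
    return math.sqrt(M_V**2 + c_V * V_EW**2 / 2.0)

# For Lambda = 10 TeV the fermionic mass shift is small: about 3 GeV at c_F = 1.
print(m_fermion(100.0, 1.0, 1.0e4))  # ~103 GeV
```

This illustrates why $m_{\rm DM}$ and $c_{\rm DM}$ can be treated as effectively independent free parameters: the $c_{\rm DM}$-induced shifts are modest corrections to $M_{\rm DM}$.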
Cosmological Aspects
====================
We first consider cosmological aspects of the scenario, paying particular attention to the WMAP experiment [@Komatsu:2008hk] and to direct detection measurements of the dark matter particle, using the data from CDMS II [@CDMSII] and the first data from the XENON100 experiment [@Aprile:2010um]. We also discuss whether the signal of the WIMP dark matter will be observed in the near future at the XMASS [@Abe:2008zzc], SuperCDMS [@Brink:2005ej], and XENON100 [@Aprile:2009yh] experiments.
Relic abundance of dark matter
------------------------------
![Feynman diagrams for dark matter annihilation.[]{data-label="fig:diagrams"}](Diagrams.eps)
The WIMP dark matter in our scenario annihilates into particles in the SM only through the exchange of the Higgs boson. Processes of the annihilation are shown in Fig. \[fig:diagrams\], where $h$ is the physical mode of $H$, $W(Z)$ is the charged (neutral) weak gauge boson, and $f$ represents quarks and leptons in the SM.
The relic abundance of the WIMP dark matter, which is nothing but the averaged mass density of the dark matter in the present universe, is obtained by integrating the following Boltzmann equation [@Gondolo:1990dk], $$\begin{aligned}
\frac{dY}{dx}
=
- \frac{m_{\rm DM}}{x^2}\sqrt{\frac{\pi}{45 g_*^{1/2} G_N}}
\left(g_{*s} + \frac{m_{\rm DM}}{3x} \frac{dg_{*s}}{dT}\right)
\langle\sigma v\rangle
\left[
Y^2 - \left\{\frac{45x^2g_{\rm DM}}{4\pi^4g_{*s}} K_2(x)\right\}^2
\right],
\label{Boltzmann}\end{aligned}$$ where $x \equiv m_{\rm DM}/T$ and $Y \equiv n/s$, with $m_{\rm DM}$, $T$, $n$, and $s$ being the mass of the dark matter, the temperature of the universe, the number density of the dark matter, and the entropy density of the universe, respectively. The gravitational constant is denoted by $G_N = 6.7 \times 10^{-39}$ GeV$^{-2}$. The massless degrees of freedom in the energy (entropy) density of the universe are denoted by $g_*$ ($g_{*s}$), while $g_{\rm DM}$ is the number of spin degrees of freedom of the dark matter. The function $K_{2}(x)$ is the modified Bessel function of the second kind, and $\langle\sigma v\rangle$ is the thermal average of the total annihilation cross section (times relative velocity) of the dark matter. With the asymptotic value of the yield $Y(\infty)$, the cosmological density parameter of the dark matter $\Omega_{\rm DM}h^2$ is written as $$\begin{aligned}
\Omega_{\rm DM} h^2 = \frac{m_{\rm DM} s_0 Y(\infty)}{\rho_c/h^2},\end{aligned}$$ where $s_0 = 2890$ cm$^{-3}$ is the entropy density of the present universe, while $\rho_c/h^2 = 1.05 \times 10^{-5}$ GeV cm$^{-3}$ is the critical density.
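The conversion from the asymptotic yield to $\Omega_{\rm DM}h^2$ is simple arithmetic. The sketch below uses the constants quoted above; the value of $Y(\infty)$ is an illustrative assumption, showing that a 100 GeV WIMP needs a yield of order $10^{-12}$ to match the WMAP abundance:

```python
S0 = 2890.0              # cm^-3, entropy density of the present universe
RHO_C_OVER_H2 = 1.05e-5  # GeV cm^-3, critical density / h^2

def omega_h2(m_dm, y_inf):
    """Omega_DM h^2 = m_DM * s_0 * Y(inf) / (rho_c / h^2); m_dm in GeV."""
    return m_dm * S0 * y_inf / RHO_C_OVER_H2

# Illustrative: Y(inf) = 4e-12 for a 100 GeV WIMP gives Omega h^2 ~ 0.11,
# in the ballpark of the WMAP value quoted above.
print(omega_h2(100.0, 4e-12))
```

In practice $Y(\infty)$ must come from the numerical integration of the Boltzmann equation, as done in the text.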
We have numerically integrated the Boltzmann equation (\[Boltzmann\]), including the effect of the temperature-dependent $g_*(T)$ and $g_{*s}(T)$, to obtain the relic abundance accurately. The result is shown in Fig.\[fig:results\] as magenta regions, which are consistent with the WMAP experiment at the 2$\sigma$ level in the $(m_{\rm DM}, c_{\rm DM})$-plane. The Higgs mass is fixed to be $m_h = $ 120 GeV in the left three figures, while $m_h =$ 150 GeV in the right ones. It can be seen that the coupling constant $c_{\rm DM}$ should not be too small in order to satisfy the constraint from the WMAP experiment, except in the region $m_{\rm DM} \simeq m_h/2$, where the resonant annihilation due to the $s$-channel Higgs boson is efficient.
![Constraints on the nightmare scenario from the WMAP, XENON100 first data, and CDMS II experiments, for scalar (top row), fermion (middle row), and vector (bottom row) dark matter. The Higgs mass is fixed to be 120 GeV in the left three figures, while 150 GeV in the right three figures. Expected sensitivities to detect the signal of the dark matter at the XMASS, SuperCDMS, XENON100, and LHC experiments are also shown in these figures. See the text for details of the region painted in dark cyan (light gray).[]{data-label="fig:results"}](S120fD.eps "fig:") ![](S150fD.eps "fig:")\
![](F120fD.eps "fig:") ![](F150fD.eps "fig:")\
![](V120fD.eps "fig:") ![](V150fD.eps "fig:")
Direct detection of dark matter
-------------------------------
After integrating the Higgs boson out, Eqs.(\[Lagrangian S\])-(\[Lagrangian V\]) lead to effective interactions of the WIMP dark matter with gluon and light quarks such as $$\begin{aligned}
{\cal L}_S^{(\rm eff)}
&=&
\frac{c_S}{2m_h^2} \phi^2
(\sum_q m_q \bar{q}q
-
\frac{\alpha_s}{4\pi}G_{\mu\nu}G^{\mu\nu}), \\
{\cal L}_F^{(\rm eff)}
&=&
\frac{c_F}{2\Lambda m_h^2} \bar{\chi}\chi
(\sum_q m_q \bar{q}q
-
\frac{\alpha_s}{4\pi}G_{\mu\nu}G^{\mu\nu}), \\
{\cal L}_V^{(\rm eff)}
&=&
-
\frac{c_V}{2m_h^2} V_\mu V^\mu
(\sum_q m_q \bar{q}q
-
\frac{\alpha_s}{4\pi}G_{\mu\nu}G^{\mu\nu}),\end{aligned}$$ where $q$ represents the light quarks (u, d, and s) with $m_q$ being their current masses. The strong coupling constant is denoted by $\alpha_s$, and the field strength tensor of the gluon field is given by $G_{\mu\nu}$. Using these interactions, the scattering cross section between the dark matter and a nucleon in the limit of small momentum transfer is calculated as $$\begin{aligned}
\sigma_S(\phi N \rightarrow \phi N)
&=&
\frac{c_S^2}{4 m_h^4} \frac{m_N^2}{\pi (m_S + m_N)^2}f_N^2, \\
\sigma_F(\chi N \rightarrow \chi N)
&=&
\frac{c_F^2}{4 \Lambda^2 m_h^4} \frac{4 m_N^2 m_F^2}{\pi (m_F + m_N)^2}f_N^2, \\
\sigma_V(V N \rightarrow V N)
&=&
\frac{c_V^2}{4 m_h^4} \frac{m_N^2}{\pi (m_V + m_N)^2}f_N^2, \end{aligned}$$ where $N$ represents a nucleon (proton or neutron) with mass $m_N \simeq$ 1 GeV. The parameter $f_N$ depends on the hadronic matrix elements, $$\begin{aligned}
f_N
=
\sum_q m_q \langle N |\bar{q}q| N \rangle
-
\frac{\alpha_s}{4\pi} \langle N |G_{\mu\nu}G^{\mu\nu}| N \rangle
=
\sum_q m_N f_{Tq} + \frac{2}{9} m_N f_{TG}.\end{aligned}$$ The value of $f_{Tq}$ has recently been evaluated accurately by lattice QCD simulations using the overlap fermion formulation. The result of the simulation has shown that $f_{Tu} + f_{Td} \simeq 0.056$ and $|f_{Ts}| \leq 0.08$[^5] [@fTq]. On the other hand, the parameter $f_{TG}$ is obtained from $f_{Tq}$ through the trace anomaly, $1 = f_{Tu} + f_{Td} + f_{Ts} + f_{TG}$ [@Trace; @anomaly].
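Putting the hadronic inputs together gives a quick estimate of the spin-independent cross section for the scalar case. This is a sketch using the central values quoted above ($f_{Ts}$ is only bounded, so the result should be read as an order-of-magnitude estimate); the GeV$^{-2}$-to-cm$^2$ conversion factor is the standard one:

```python
import math

M_N = 0.939                # GeV, nucleon mass
GEV_M2_TO_CM2 = 3.894e-28  # 1 GeV^-2 in cm^2 (standard conversion)

def f_nucleon(f_tq_sum=0.056 + 0.08):
    """f_N = m_N (sum_q f_Tq + 2/9 f_TG) with f_TG = 1 - sum_q f_Tq."""
    f_tg = 1.0 - f_tq_sum
    return M_N * (f_tq_sum + (2.0 / 9.0) * f_tg)

def sigma_si_scalar(c_s, m_s, m_h):
    """Spin-independent phi-N cross section (cm^2) for the scalar case."""
    fn = f_nucleon()
    sig_gev = (c_s**2 / (4.0 * m_h**4)) * M_N**2 / (math.pi * (m_s + M_N)**2) * fn**2
    return sig_gev * GEV_M2_TO_CM2

# Illustrative point: c_S = 1, m_S = 100 GeV, m_h = 120 GeV gives ~1e-42 cm^2,
# within reach of the direct detection experiments discussed below.
print(sigma_si_scalar(1.0, 100.0, 120.0))
```

The $1/m_h^4$ dependence makes the predicted rate quite sensitive to the Higgs mass, which is why the figures are shown for both $m_h = 120$ and 150 GeV.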
The result from CDMS II and the new data from the XENON100 experiment give the most severe constraints on the scattering cross section between the dark matter particle and a nucleon. The resulting constraint is shown in Fig.\[fig:results\], where the regions in brown are excluded by the experiments at 90% confidence level. It can be seen that most of the parameter space for a light dark matter particle has already been ruled out. In Fig.\[fig:results\], we also depict experimental sensitivities to detect the signal of the dark matter in the near-future experiments XMASS, SuperCDMS, and XENON100. The sensitivities are shown as light brown lines; the signal can be discovered in the regions above these lines at 90% confidence level. Most of the parameter region will be covered by the future direct detection experiments. Note that the WIMP dark matter in the nightmare scenario predicts a large scattering rate in the region $m_{\rm DM} \lesssim 80$ GeV. It is interesting to show the region corresponding to the “positive signal” of a dark matter particle reported by the CDMS II experiment very recently [@CDMSII], which is depicted in dark cyan; this closed region appears only at the 1$\sigma$ confidence level [@CDMSanalysis]. The parameter region consistent with the WMAP results has some overlap with the signal region, and the two regions overlap better when a lighter Higgs boson mass is taken.
Signals at the LHC
==================
Finally, we investigate the signal of the WIMP dark matter at the LHC experiment [@LHC]. The main purpose here is to clarify the parameter region where the signal can be detected. We first consider the case in which the mass of the dark matter is less than half of the Higgs boson mass. In this case, the dark matter particles can be produced through the decay of the Higgs boson. We then consider the other case, in which the mass of the dark matter particle is larger than half of the Higgs boson mass.
The case $m_{\rm DM} < m_h/2$
-----------------------------
In this case, the coupling of the dark matter particle with the Higgs boson can cause a significant change in the branching ratio of the Higgs boson while the production process of the Higgs boson at the LHC remains the same. The partial decay width of the Higgs boson into dark matter particles is given by $$\begin{aligned}
\Gamma_S
&=&
\frac{c_S^2 v^2}{32 \pi m_h} \sqrt{1 - \frac{4 m_S^2}{m_h^2}},
\\
\Gamma_F
&=&
\frac{c_F^2 v^2 m_h}{16 \pi \Lambda^2}
\left(1 - \frac{4 m_F^2}{m_h^2}\right)^{3/2},
\\
\Gamma_V
&=&
\frac{c_V^2 v^2 m_h^3}{128 \pi m_V^4}
\left(1 - 4\frac{m_V^2}{m_h^2} + 12\frac{m_V^4}{m_h^4}\right)
\sqrt{1 - \frac{4 m_V^2}{m_h^2}}.\end{aligned}$$ When the mass of the Higgs boson is not heavy ($m_h \lesssim 150$ GeV), its partial decay widths into quarks and leptons are suppressed by the small Yukawa couplings. As a result, the branching ratio into dark matter particles can be almost 100% unless the interaction between the dark matter and the Higgs boson is too weak. In this case, most of the Higgs bosons produced at the LHC decay invisibly.
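For the scalar case, the dominance of the invisible mode is easy to verify numerically. The SM width used below is an assumed reference value (roughly the SM total Higgs width for $m_h \simeq 120$ GeV), not a number taken from the paper:

```python
import math

V_EW = 246.0           # GeV, Higgs vacuum expectation value
GAMMA_SM_120 = 3.5e-3  # GeV, assumed SM Higgs total width at m_h ~ 120 GeV

def gamma_h_to_ss(c_s, m_s, m_h):
    """Partial width h -> phi phi for scalar dark matter (GeV)."""
    if 2.0 * m_s >= m_h:
        return 0.0  # decay kinematically closed
    return (c_s**2 * V_EW**2 / (32.0 * math.pi * m_h)
            * math.sqrt(1.0 - 4.0 * m_s**2 / m_h**2))

def br_invisible(c_s, m_s, m_h, gamma_sm=GAMMA_SM_120):
    """Invisible branching ratio assuming the SM width for visible modes."""
    g_inv = gamma_h_to_ss(c_s, m_s, m_h)
    return g_inv / (g_inv + gamma_sm)

# Even a modest coupling c_S = 0.1 makes the invisible mode dominate.
print(br_invisible(0.1, 40.0, 120.0))  # > 0.9
```

The $v^2/(32\pi m_h)$ prefactor is numerically large compared with the MeV-scale SM width of a light Higgs, which is what drives the near-100% invisible branching ratio.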
There are several studies on the invisible decay of the Higgs boson at the LHC. The most significant process for investigating such a Higgs boson is found to be its production through weak gauge boson fusion. For this process, the forward and backward jets with a large pseudo-rapidity gap accompany the missing transverse energy corresponding to the production of the invisibly decaying Higgs boson. According to the analysis in Ref. [@InvH], 30 fb$^{-1}$ of data allow us to identify the production of the invisibly decaying Higgs boson at the 95% confidence level when its invisible branching ratio is larger than 0.250 for $m_h = 120$ GeV and 0.238 for $m_h = 150$ GeV. In this analysis [@InvH], both statistical and systematic errors are included. Using this analysis, we plot the experimental sensitivity to detect the signal in Fig.\[fig:results\]. The sensitivity is shown as green lines with $m_{\rm DM} \leq m_h/2$, where the signal can be observed in the regions above these lines. Most of the parameter region with $m_{\rm DM} \leq m_h/2$ can be covered by investigating the signal of the invisible decay at the LHC. It is also interesting to notice that the signal of the WIMP dark matter can be obtained in both the direct detection measurements and the LHC experiment, which allows us to perform a non-trivial check of the scenario.
The case $m_{\rm DM} \geq m_h/2$
--------------------------------
![Cross section of the dark matter signal at the LHC with and without the kinematical cuts in Eq.(\[kinematical cuts\]), for the scalar, fermion, and vector cases. The parameters $m_h$ and $c_{\rm DM}$ are fixed as shown in these figures.[]{data-label="fig:LHC XS"}](XS_S.eps "fig:") ![](XS_F.eps "fig:") ![](XS_V.eps "fig:")
In this case, the WIMP dark matter cannot be produced from the decay of the Higgs boson. We consider, however, the weak gauge boson fusion process again. With $V$ and $h^*$ being a weak gauge boson and a virtual Higgs boson, the signal comes from the process $qq \rightarrow qqVV \rightarrow qqh^* \rightarrow qq$DMDM, which is characterized by two energetic quark jets with large missing energy and a large pseudo-rapidity gap between them.
There are several backgrounds to the signal. One is the production of a weak boson associated with two jets through QCD or electroweak interactions, which mimics the signal when the weak boson decays into neutrinos. Another background is the production of three jets through QCD interactions, which mimics the signal when one of the jets escapes detection. Following Ref. [@Eboli:2000ze], we apply kinematical cuts on the two tagging jets in order to reduce these backgrounds, $$\begin{aligned}
&&
p^j_T > 40~{\rm GeV},
\qquad
\Slash{p}_T > 100~{\rm GeV},
\nonumber \\
&&
|\eta_j| < 5.0,
\qquad
|\eta_{j_1} - \eta_{j_2}| > 4.4,
\qquad
\eta_{j_1} \cdot \eta_{j_2} < 0,
\nonumber \\
&&
M_{j_1j_2} > 1200~{\rm GeV},
\qquad
\phi_{j_1j_2} < 1,
\label{kinematical cuts}\end{aligned}$$ where $p^j_T$, $\Slash{p}_T$, and $\eta_j$ are the transverse momentum of jet $j$, the missing transverse momentum, and the pseudo-rapidity of jet $j$, respectively. The invariant mass of the two jets is denoted by $M_{j_1j_2}$, while $\phi_{j_1j_2}$ is the azimuthal angle between the two jets. We also impose a veto on central jet activity with $p_T > 20$ GeV in the same manner as this reference. From the analysis of these backgrounds, it turns out that, at the LHC with an energy of $\sqrt{s}=14$ TeV and an integrated luminosity of 100 fb$^{-1}$, the signal will be detected at 95% confidence level when its cross section exceeds 4.8 fb after applying these kinematical cuts.
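The selection above can be encoded directly. The sketch below is a simplified event-level filter over the two tagging jets (it ignores the central-jet veto and any detector effects), applying the cuts of Eq. (\[kinematical cuts\]):

```python
def passes_vbf_cuts(jet1, jet2, missing_pt, m_jj, dphi_jj):
    """jet = (pT [GeV], eta). Returns True if the event passes all cuts."""
    pt1, eta1 = jet1
    pt2, eta2 = jet2
    return (pt1 > 40.0 and pt2 > 40.0           # p_T^j > 40 GeV
            and missing_pt > 100.0              # missing p_T > 100 GeV
            and abs(eta1) < 5.0 and abs(eta2) < 5.0
            and abs(eta1 - eta2) > 4.4          # large pseudo-rapidity gap
            and eta1 * eta2 < 0                 # jets in opposite hemispheres
            and m_jj > 1200.0                   # dijet invariant mass (GeV)
            and dphi_jj < 1.0)                  # azimuthal separation (rad)

# A typical VBF-like event passes; a central, back-to-back dijet does not.
print(passes_vbf_cuts((100.0, 3.0), (80.0, -2.5), 150.0, 1500.0, 0.5))  # True
print(passes_vbf_cuts((100.0, 1.0), (80.0, -1.0), 150.0, 1500.0, 3.0))  # False
```

The opposite-hemisphere and large-gap requirements are what suppress the QCD backgrounds, whose jets tend to be central and back-to-back in azimuth.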
Cross sections of the signal before and after applying the kinematical cuts are depicted in Fig.\[fig:LHC XS\] as a function of the dark matter mass, with $m_h$ fixed to be 120 and 150 GeV. We also fix the coupling constant between the dark matter and the Higgs boson as shown in these figures. It turns out that the cross section after applying the kinematical cuts exceeds 4.8 fb if the mass of the dark matter particle is small enough. With this analysis, we have estimated the experimental sensitivity to detect the signal at the LHC. The result is shown in Fig.\[fig:results\] as green lines for $m_{\rm DM} \geq m_h/2$; with an integrated luminosity of 100 fb$^{-1}$, the signal can be observed at 95% confidence level in the regions above these lines. The sensitivity does not reach the region consistent with the WMAP observation, but it comes close for the fermion and vector dark matter with $m_h = 120$ GeV. With a more sophisticated analysis or more accumulated data, the signal may become detectable.
Summary and Discussions
=======================
The physics operation of the LHC has begun, and the exploration of particle physics at the TeV scale will continue over the next decades. The discovery of not only the Higgs boson but also new physics beyond the SM is highly anticipated at the LHC experiment. However, the little hierarchy might exist in nature, and if this is the case, the new physics scale can be around 10 TeV, so that the LHC could find only the SM-like Higgs boson but nothing else. This is the nightmare scenario.
On the other hand, cosmological observations strongly suggest the necessity of an extension of the SM so as to incorporate a dark matter particle. According to the WIMP dark matter hypothesis, the mass scale of the dark matter particle lies below the TeV scale and hence within the reach of the LHC.
We have investigated the possibility that the WIMP dark matter can be a clue to overcoming the nightmare scenario. As the worst case, we have considered a WIMP dark matter singlet under the SM gauge symmetry, which communicates with the SM particles only through the Higgs boson. Analyzing the relic density of the dark matter particle and its elastic scattering cross section with a nucleon, we have identified the parameter region which is consistent with the WMAP observation and the current direct detection measurements of the dark matter particle. The direct detection measurements provide severe constraints on the parameter space, and in the near future almost all of the parameter region can be explored, except for a region where the dark matter mass is close to half the Higgs boson mass.
We have also considered the dark matter signal at the LHC. The dark matter particle can be produced at the LHC only through its interaction with the Higgs boson. If the Higgs boson is light, $m_h \lesssim 150$ GeV, and the dark matter particle is also light, $m_{\rm DM}^{} < m_h/2$, the Higgs boson decays into a pair of dark matter particles with a large branching ratio. Such an invisibly decaying Higgs boson can be explored at the LHC through the Higgs boson production process via weak gauge boson fusion. When the invisible branching ratio is sizable, $B(h \to {\rm DM}{\rm DM}) \gtrsim 0.25$, the signal of the invisibly decaying Higgs boson can be observed. Interestingly, the corresponding parameter region is also covered by the future direct detection experiments for the dark matter particle. In the case of $m_{\rm DM} \geq m_h/2$, we have also analyzed the dark matter particle production mediated by the virtual Higgs boson in the weak boson fusion channel. Although the detection of the dark matter particle production turns out to be challenging in our present analysis, a more sophisticated analysis may enhance the signal-to-background ratio.
Even if the nightmare scenario is realized in nature, the WIMP dark matter may exist and communicate with the SM particles only through the Higgs boson. Therefore, the existence of new physics may be revealed in association with the discovery of the Higgs boson. Finding the Higgs boson but nothing else would then be not a nightmare but rather a portal to a new discovery: the WIMP dark matter.
[**Acknowledgments**]{}
This work is supported, in part, by the Grant-in-Aid for Science Research, Ministry of Education, Culture, Sports, Science and Technology, Japan (Nos.19540277 and 22244031 for SK, and Nos. 21740174 and 22244021 for SM).
[^1]: kanemu@sci.u-toyama.ac.jp
[^2]: smatsu@sci.u-toyama.ac.jp
[^3]: nabe@jodo.sci.u-toyama.ac.jp
[^4]: okadan@ua.edu
[^5]: For a conservative analysis, we use $f_{Ts} = 0$ in our numerical calculations.
--- sandbox/linux/BUILD.gn.orig 2019-04-08 08:18:26 UTC
+++ sandbox/linux/BUILD.gn
@@ -12,12 +12,12 @@ if (is_android) {
}
declare_args() {
- compile_suid_client = is_linux
+ compile_suid_client = is_linux && !is_bsd
- compile_credentials = is_linux
+ compile_credentials = is_linux && !is_bsd
# On Android, use plain GTest.
- use_base_test_suite = is_linux
+ use_base_test_suite = is_linux && !is_bsd
}
if (is_nacl_nonsfi) {
@@ -379,7 +379,7 @@ component("sandbox_services") {
public_deps += [ ":sandbox_services_headers" ]
}
- if (is_nacl_nonsfi) {
+ if (is_nacl_nonsfi || is_bsd) {
cflags = [ "-fgnu-inline-asm" ]
sources -= [
@@ -387,6 +387,8 @@ component("sandbox_services") {
"services/init_process_reaper.h",
"services/scoped_process.cc",
"services/scoped_process.h",
+ "services/syscall_wrappers.cc",
+ "services/syscall_wrappers.h",
"services/yama.cc",
"services/yama.h",
"syscall_broker/broker_channel.cc",
@@ -405,6 +407,10 @@ component("sandbox_services") {
"syscall_broker/broker_process.h",
"syscall_broker/broker_simple_message.cc",
"syscall_broker/broker_simple_message.h",
+ ]
+ sources += [
+ "services/libc_interceptor.cc",
+ "services/libc_interceptor.h",
]
} else if (!is_android) {
sources += [
Filatima asiatica
Filatima asiatica is a moth of the family Gelechiidae. It is found in New Guinea, where it has been recorded from the Prince Alexander Mountains.
References
Category:Moths described in 1961
Category:Filatima
---
abstract: 'We generalize the concept of quasiparticle for one-dimensional (1D) interacting electronic systems. The $\uparrow $ and $\downarrow $ quasiparticles recombine the pseudoparticle colors $c$ and $s$ (charge and spin at zero magnetic field) and are constituted by one many-pseudoparticle [*topological momenton*]{} and one or two pseudoparticles. These excitations cannot be separated. We consider the case of the Hubbard chain. We show that the low-energy electron – quasiparticle transformation has a singular character which justifies the perturbative and non-perturbative nature of the quantum problem in the pseudoparticle and electronic bases, respectively. This follows from the absence of zero-energy electron – quasiparticle overlap in 1D. The existence of Fermi-surface quasiparticles in both 1D and three-dimensional (3D) many-electron systems suggests their existence in quantum liquids in dimensions 1$<$D$<$3. However, whether the electron – quasiparticle overlap can vanish in D$>$1 or whether it becomes finite as soon as we leave 1D remains an unsolved question.'
author:
- 'J. M. P. Carmelo$^{1}$ and A. H. Castro Neto$^{2}$'
---
Electrons, pseudoparticles, and quasiparticles in the\
one-dimensional many-electron problem
$^{1}$ Department of Physics, University of Évora, Apartado 94, P-7001 Évora Codex, Portugal\
and Centro de Física das Interacções Fundamentais, I.S.T., P-1096 Lisboa Codex, Portugal
$^{2}$ Department of Physics, University of California, Riverside, CA 92521
INTRODUCTION
============
The unconventional electronic properties of novel materials such as the superconducting copper oxides and synthetic quasi-one-dimensional conductors have attracted much attention to the many-electron problem in spatial dimensions 1$\leq$D$\leq$3. A good understanding of [*both*]{} the different and the common properties of the 1D and 3D many-electron problems might provide useful indirect information on quantum liquids in dimensions 1$<$D$<$3. This is important because the direct study of the many-electron problem in dimensions 1$<$D$<$3 is of great complexity. The nature of interacting electronic quantum liquids in dimensions 1$<$D$<$3, including the existence or non-existence of quasiparticles and Fermi surfaces, remains an open question of crucial importance for the clarification of the microscopic mechanisms behind the unconventional properties of the novel materials.
In 3D the many-electron quantum problem can often be described in terms of a one-particle quantum problem of quasiparticles [@Pines; @Baym], which interact only weakly. This Fermi liquid of quasiparticles describes successfully the properties of most 3D metals, which are not very sensitive to the presence of electron-electron interactions. There is a one-to-one correspondence between the $\sigma $ quasiparticles and the $\sigma $ electrons of the original non-interacting problem (with $\sigma =\uparrow \, , \downarrow$). Moreover, the coherent part of the $\sigma $ one-electron Green function is quite similar to a non-interacting Green function, except that the bare $\sigma $ electron spectrum is replaced by the $\sigma $ quasiparticle spectrum and the spectral weight is rescaled by an electron renormalization factor, $Z_{\sigma }$, such that $0<Z_{\sigma }<1$. A central point of Fermi-liquid theory is that quasiparticle - quasihole processes describe exact low-energy and small-momentum Hamiltonian eigenstates and that “addition” or “removal” of one quasiparticle connects two exact ground states of the many-electron Hamiltonian.
On the other hand, in 1D many-electron systems [@Solyom; @Haldane; @Metzner], such as the Hubbard chain solvable by the Bethe ansatz (BA) [@Bethe; @Yang; @Lieb; @Korepinrev], the $\sigma $ electron renormalization factor, $Z_{\sigma }$, vanishes [@Anderson; @Carmelo95a]. Therefore, the many-particle problem is not expected to be described in terms of a one-particle problem of Fermi-liquid quasiparticles. Such non-perturbative electronic problems are usually called Luttinger liquids [@Haldane]. In these systems the two-electron vertex function at the Fermi momentum diverges in the limit of vanishing excitation energy [@Carmelo95a]. In a 3D Fermi liquid this quantity is closely related to the interactions of the quasiparticles [@Pines; @Baym]. Its divergence seems to indicate that there are no quasiparticles in 1D interacting electronic systems. A second possibility is that there are quasiparticles in the 1D many-electron problem but without overlap with the electrons in the limit of vanishing excitation energy.
While the different properties of 1D and 3D many-electron problems were the subject of many Luttinger-liquid studies in 1D [@Solyom; @Haldane; @Metzner], the characterization of their common properties is also of great interest because the latter are expected to be present in dimensions 1$<$D$<$3 as well. One example is the Landau-liquid character common to Fermi liquids and some Luttinger liquids which consists in the generation of the low-energy excitations in terms of different momentum-occupation configurations of anti-commuting quantum objects (quasiparticles or pseudoparticles) whose forward-scattering interactions determine the low-energy properties of the quantum liquid. This generalized Landau-liquid theory was first applied in 1D to contact-interaction soluble problems [@Carmelo90] and shortly after also to $1/r^2$-interaction integrable models [@Haldane91]. Within this picture the 1D many-electron problem can also be described in terms of weakly interacting “one-particle” objects, the pseudoparticles, which, however, have no one-to-one correspondence with the electrons, as is shown in this paper.
In spite of the absence of a one-to-one correspondence between single pseudoparticles and single electrons, following the studies of Refs. [@Carmelo90; @Carmelo91b; @Carmelo92] a generalized adiabatic principle for small-momentum pseudoparticle-pseudohole and electron-hole excitations was introduced for 1D many-electron problems in Refs. [@Carmelo92b]. The pseudoparticles of 1D many-electron systems show other similarities with the quasiparticles of a Fermi liquid, their interactions being determined by [*finite*]{} forward-scattering $f$ functions [@Carmelo91b; @Carmelo92; @Carmelo92b]. At constant values of the electron numbers this description of the quantum problem is very similar to Fermi-liquid theory, except for two main differences: (i) the $\uparrow $ and $\downarrow $ quasiparticles are replaced by the $c$ and $s$ pseudoparticles [@Carmelo93; @Carmelo94; @Carmelo94b; @Carmelo94c; @Carmelo95], and (ii) the discrete pseudoparticle momentum (pseudomomentum) is of the usual form $q_j={2\pi\over {N_a}}I_j^{\alpha}$ but the numbers $I_j^{\alpha}$ (with $\alpha =c,s$) are not always integers. They are integers or half integers depending on whether the number of particles in the system is even or odd. This plays a central role in the present quasiparticle problem. The connection of these perturbative pseudoparticles to the non-perturbative 1D electronic basis remains an open problem. By perturbative we mean the fact that the two-pseudoparticle $f$ functions and forward-scattering amplitudes are finite [@Carmelo92b; @Carmelo94], in contrast to the two-electron vertex functions.
The low-energy excitations of the Hubbard chain at constant electron numbers and in a finite magnetic field and chemical potential were shown [@Carmelo91b; @Carmelo92; @Carmelo92b; @Carmelo94; @Carmelo94b; @Carmelo94c] to be $c$ and $s$ pseudoparticle-pseudohole processes relative to the canonical-ensemble ground state. This determines the $c$ and $s$ low-energy separation [@Carmelo94c], which at zero magnetization leads to the so-called charge and spin separation. In this paper we find that in addition to the above pseudoparticle-pseudohole excitations there are also Fermi-surface [*quasiparticle*]{} transitions in the 1D many-electron problem. Moreover, it is the study of such quasiparticles which clarifies the complex and open problem of the low-energy electron – pseudoparticle transformation.
As in 3D Fermi liquids, the quasiparticle excitation is a transition between two exact ground states of the interacting electronic problem differing in the number of electrons by one. When one electron is added to the electronic system the number of these excitations [*also*]{} increases by one. Naturally, its relation to the electron excitation will depend on the overlap between the states associated with this and the quasiparticle excitation and how close we are in energy from the initial interacting ground state. Therefore, in order to define the quasiparticle we need to understand the properties of the actual ground state of the problem as, for instance, is given by its exact solution via the BA. We find that in the 1D Hubbard model adding one $\uparrow $ or $\downarrow $ electron of lowest energy is associated with adding one $\uparrow $ or $\downarrow $ quasiparticle, as in a Fermi liquid. These are many-pseudoparticle objects which recombine the colors $c$ and $s$ giving rise to the spin projections $\uparrow $ and $\downarrow $. We find that the quasiparticle is constituted by individual pseudoparticles and by a many-pseudoparticle object of large momentum that we call topological momenton. Importantly, these excitations cannot be separated. Although one quasiparticle is basically one electron, we show that in 1D the quasiparticle – electron transformation is singular because it involves the vanishing one-electron renormalization factor. This also implies a low-energy singular electron - pseudoparticle transformation. This singular character explains why the problem becomes perturbative in the pseudoparticle basis while it is non perturbative in the usual electronic picture.
The singular nature of the low-energy electron - quasiparticle and electron – pseudoparticle transformations reflects the fact that the one-electron density of states vanishes in the 1D electronic problem when the excitation energy $\omega\rightarrow 0$. The diagonalization of the many-electron problem is at lowest excitation energy associated with the singular electron – quasiparticle transformation which absorbs the vanishing electron renormalization factor and maps vanishing electronic spectral weight onto finite quasiparticle and pseudoparticle spectral weight. For instance, by absorbing the renormalization factor the electron - quasiparticle transformation renormalizes divergent two-electron vertex functions onto finite two-quasiparticle scattering parameters. These quantities fully determine the finite $f$ functions and scattering amplitudes of the pseudoparticle theory [@Carmelo92; @Carmelo92b; @Carmelo94b]. The pseudoparticle $f$ functions and amplitudes determine all the static and low-energy quantities of the 1D many-electron problem and are associated with zero-momentum two-pseudoparticle forward scattering.
The paper is organized as follows: the pseudoparticle operator basis is summarized in Sec. II. In Sec. III we find the quasiparticle operational expressions in the pseudoparticle basis and characterize the corresponding $c$ and $s$ recombination in the $\uparrow $ and $\downarrow $ spin projections. The singular electron – quasiparticle (and electron – pseudoparticle) transformation is studied in Sec. IV. Finally, in Sec. V we present the concluding remarks.
THE PERTURBATIVE PSEUDOPARTICLE OPERATOR BASIS
==============================================
In this section we introduce some basic information on the perturbative pseudoparticle operator basis, as obtained directly from the BA solution [@Carmelo94; @Carmelo94b; @Carmelo94c], which is useful for the studies presented in this paper. We consider the Hubbard model in 1D [@Lieb; @Frahm; @Frahm91] with a finite chemical potential $\mu$ and in the presence of a magnetic field $H$ [@Carmelo92b; @Carmelo94; @Carmelo94b]
$$\begin{aligned}
\hat{H} = -t\sum_{j,\sigma}\left[c_{j,\sigma}^{\dag }c_{j+1,\sigma}+c_
{j+1,\sigma}^{\dag }c_{j,\sigma}\right] +
U\sum_{j} [c_{j,\uparrow}^{\dag }c_{j,\uparrow} - 1/2]
[c_{j,\downarrow}^{\dag }c_{j,\downarrow} - 1/2]
- \mu \sum_{\sigma} \hat{N}_{\sigma } - 2\mu_0 H\hat{S}_z \, ,\end{aligned}$$
where $c_{j,\sigma}^{\dag }$ and $c_{j,\sigma}$ are the creation and annihilation operators, respectively, for electrons at the site $j$ with spin projection $\sigma=\uparrow, \downarrow$. In what follows $k_{F\sigma}=\pi n_{\sigma}$ and $k_F=[k_{F\uparrow}+k_{F\downarrow}]/2=\pi n/2$, where $n_{\sigma}=N_{\sigma}/N_a$ and $n=N/N_a$, and $N_{\sigma}$ and $N_a$ are the number of $\sigma$ electrons and lattice sites, respectively ($N=\sum_{\sigma}N_{\sigma}$). We also consider the spin density, $m=n_{\uparrow}-n_{\downarrow}$.
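The density and Fermi-momentum conventions above can be made concrete in a short numerical sketch (the function name `fermi_momenta` is ours, purely illustrative):

```python
import math

def fermi_momenta(n_up_electrons, n_down_electrons, n_sites):
    """Fermi momenta k_{F,sigma} = pi*n_sigma, their mean k_F = pi*n/2,
    and the spin density m = n_up - n_down, as defined in the text."""
    n_up = n_up_electrons / n_sites
    n_down = n_down_electrons / n_sites
    k_f_up = math.pi * n_up
    k_f_down = math.pi * n_down
    k_f = 0.5 * (k_f_up + k_f_down)   # equals pi*n/2 with n = n_up + n_down
    m = n_up - n_down
    return k_f_up, k_f_down, k_f, m
```

For instance, $N_{\uparrow}=30$, $N_{\downarrow}=20$, $N_a=100$ gives $k_{F\uparrow}=0.3\pi$, $k_{F\downarrow}=0.2\pi$, $k_F=0.25\pi$, and $m=0.1$.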
The many-electron problem $(1)$ can be diagonalized using the BA [@Yang; @Lieb]. We consider all finite values of $U$, electron densities $0<n<1$, and spin densities $0<m<n$. For this parameter space the low-energy physics is dominated by the lowest-weight states (LWS’s) of the spin and eta-spin algebras [@Korepin; @Essler] of type I [@Carmelo94; @Carmelo94b; @Carmelo95]. The LWS’s I are described by real BA rapidities, whereas all or some of the BA rapidities which describe the LWS’s II are complex and non-real. Both the LWS’s II and the non-LWS’s out of the BA solution [@Korepin] have energy gaps relative to each canonical-ensemble ground state [@Carmelo94; @Carmelo94b; @Carmelo95]. Fortunately, the quasiparticle description involves only LWS’s I because these quantum objects are associated with ground-state – ground-state transitions and in the present parameter space all ground states of the model are LWS’s I. On the other hand, the electronic excitation involves transitions to LWS’s I, LWS’s II, and non-LWS’s, but the electron – quasiparticle transformation involves only LWS’s I. Therefore, our results refer mainly to the Hilbert subspace spanned by the LWS’s I and are valid at energy scales smaller than the above gaps. (Note that in simpler 1D quantum problems of symmetry $U(1)$ the states I span the whole Hilbert space [@Anto].)
In this Hilbert sub space the BA solution was shown to refer to an operator algebra which involves two types of [*pseudoparticle*]{} creation (annihilation) operators $b^{\dag }_{q,\alpha }$ ($b_{q,\alpha }$). These obey the usual anti-commuting algebra [@Carmelo94; @Carmelo94b; @Carmelo94c]
$$\{b^{\dag }_{q,\alpha},b_{q',\alpha'}\}
=\delta_{q,q'}\delta_{\alpha ,\alpha'}, \hspace{0.5cm}
\{b^{\dag }_{q,\alpha},b^{\dag }_{q',\alpha'}\}=0, \hspace{0.5cm}
\{b_{q,\alpha},b_{q',\alpha'}\}=0 \, .$$
Here $\alpha$ refers to the two pseudoparticle colors $c$ and $s$ [@Carmelo94; @Carmelo94b; @Carmelo94c]. The discrete pseudomomentum values are
$$q_j = {2\pi\over {N_a}}I_j^{\alpha } \, ,$$
where $I_j^{\alpha }$ are [*consecutive*]{} integers or half integers. There are $N_{\alpha }^*$ values of $I_j^{\alpha }$, [*i.e.*]{} $j=1,...,N_{\alpha }^*$. A LWS I is specified by the distribution of $N_{\alpha }$ occupied values, which we call $\alpha $ pseudoparticles, over the $N_{\alpha }^*$ available values. There are $N_{\alpha }^*-
N_{\alpha }$ corresponding empty values, which we call $\alpha $ pseudoholes. These are good quantum numbers such that
$$N_c^* = N_a \, ; \hspace{0.5cm}
N_c = N \, ; \hspace{0.5cm}
N_s^* = N_{\uparrow} \, ; \hspace{0.5cm}
N_s = N_{\downarrow} \, .$$
The numbers $I_j^c$ are integers (or half integers) for $N_s$ even (or odd), and $I_j^s$ are integers (or half integers) for $N_s^*$ odd (or even) [@Lieb]. All the states I can be generated by acting on the vacuum $|V\rangle $ (zero-electron density) with suitable combinations of pseudoparticle operators [@Carmelo94; @Carmelo94b]. The ground state
$$|0;N_{\sigma }, N_{-\sigma}\rangle = \prod_{\alpha=c,s}
[\prod_{q=q_{F\alpha }^{(-)}}^{q_{F\alpha }^{(+)}}
b^{\dag }_{q,\alpha }]
|V\rangle \, ,$$
and all LWS’s I are Slater determinants of pseudoparticle levels. In Appendix A we define the pseudo-Fermi points, $q_{F\alpha }^{(\pm )}$, of $(5)$. In that Appendix we also present other quantities of the pseudoparticle representation which are useful for the present study.
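The parity rules for the quantum numbers $I_j^c$ and $I_j^s$ stated above can be encoded in a few lines (a minimal illustrative sketch; the function name `ij_character` is our own):

```python
def ij_character(n_up, n_down):
    """Return (c_half, s_half): whether the I_j^c and I_j^s are half-integers.

    Per Eq. (4), N_s = N_down and N_s* = N_up.  The I_j^c are integers for
    N_s even (half-integers for N_s odd); the I_j^s are integers for N_s*
    odd (half-integers for N_s* even).
    """
    n_s, n_s_star = n_down, n_up
    c_half = (n_s % 2 == 1)
    s_half = (n_s_star % 2 == 0)
    return c_half, s_half
```

Note that adding one down-spin electron flips the character of the $I_j^c$ while leaving that of the $I_j^s$ unchanged; this bookkeeping underlies the topological momenton introduced in Sec. III.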
In the pseudoparticle basis spanned by the LWS’s I and in normal order relative to the ground state $(5)$, the Hamiltonian $(1)$ has the following form [@Carmelo94; @Carmelo94c]
$$:\hat{H}: = \sum_{i=1}^{\infty}\hat{H}^{(i)} \, ,$$
where, to second pseudoparticle scattering order
$$\begin{aligned}
\hat{H}^{(1)} & = & \sum_{q,\alpha}
\epsilon_{\alpha}(q):\hat{N}_{\alpha}(q): \, ;\nonumber\\
\hat{H}^{(2)} & = & {1\over {N_a}}\sum_{q,\alpha} \sum_{q',\alpha'}
{1\over 2}f_{\alpha\alpha'}(q,q')
:\hat{N}_{\alpha}(q)::\hat{N}_{\alpha'}(q'): \, .\end{aligned}$$
Here $(7)$ are the Hamiltonian terms which are [ *relevant*]{} at low energy [@Carmelo94b]. Furthermore, at low energy and small momentum the only relevant term is the non-interacting term $\hat{H}^{(1)}$. Therefore, the $c$ and $s$ pseudoparticles are non-interacting at the small-momentum and low-energy fixed point and the spectrum is described in terms of the bands $\epsilon_{\alpha}(q)$ (studied in detail in Ref. [@Carmelo91b]) in a pseudo-Brillouin zone which goes between $q_c^{(-)}\approx -\pi$ and $q_c^{(+)}\approx \pi$ for the $c$ pseudoparticles and $q_s^{(-)}\approx -k_{F\uparrow}$ and $q_s^{(+)}\approx k_{F\uparrow}$ for the $s$ pseudoparticles. In the ground state $(5)$ these are occupied for $q_{F\alpha}^{(-)}\leq q\leq q_{F\alpha}^{(+)}$, where the pseudo-Fermi points (A1)-(A3) are such that $q_{Fc}^{(\pm)}\approx \pm 2k_F$ and $q_{Fs}^{(\pm)}\approx \pm k_{F\downarrow}$ (see Appendix A).
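The leading-order pseudo-Fermi points quoted above can be checked against the band fillings of Eq. $(4)$ in a small sketch (the function name is ours; corrections of order $1/N_a$ are ignored):

```python
import math

def pseudo_fermi_points(n_up, n_down, n_sites):
    """Pseudo-Fermi points to leading order in 1/N_a:
    q_Fc^(+/-) ~ +/- 2*k_F and q_Fs^(+/-) ~ +/- k_{F,down}."""
    k_f = math.pi * (n_up + n_down) / (2.0 * n_sites)
    k_f_down = math.pi * n_down / n_sites
    return 2.0 * k_f, k_f_down

# Consistency check: with pseudomomentum spacing 2*pi/N_a, the occupied
# c levels number (2*q_Fc)/(2*pi/N_a) = N and the occupied s levels
# (2*q_Fs)/(2*pi/N_a) = N_down, in agreement with N_c = N and
# N_s = N_down in Eq. (4).
```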
At higher energies and/or large momenta the pseudoparticles start to interact via zero-momentum-transfer forward-scattering processes of the Hamiltonian $(6)-(7)$. As in a Fermi liquid, these are associated with $f$ functions and Landau parameters [@Carmelo92; @Carmelo94], whose expressions we present in Appendix A, where we also give the expressions for simple pseudoparticle-pseudohole operators which are useful for the studies of the next sections.
THE QUASIPARTICLES AND $c$ AND $s$ RECOMBINATION
================================================
In this section we introduce the 1D quasiparticle and express it in the pseudoparticle basis. In Sec. IV we find that this clarifies the low-energy transformation between the electrons and the pseudoparticles. We define the quasiparticle operator as the generator of a ground-state – ground-state transition. The study of ground states of form $(5)$ differing in the number of $\sigma $ electrons by one reveals that their relative momentum equals [*precisely*]{} the $U=0$ Fermi points, $\pm k_{F\sigma}$. Following our definition, the quasiparticle operator, ${\tilde{c}}^{\dag }_{k_{F\sigma },\sigma }$, which creates one quasiparticle with spin projection $\sigma$ and momentum $k_{F\sigma}$, is such that
$${\tilde{c}}^{\dag }_{k_{F\sigma},\sigma}
|0; N_{\sigma}, N_{-\sigma}\rangle =
|0; N_{\sigma} + 1, N_{-\sigma}\rangle \, .$$
The quasiparticle operator defines a one-to-one correspondence between the addition of one electron to the system and the creation of one quasiparticle: the electronic excitation, $c^{\dag }_{k_{F\sigma},\sigma}|0; N_{\sigma}, N_{-\sigma}\rangle$, defined at the Fermi momentum but arbitrary energy, contains a single quasiparticle, as we show in Sec. IV. In that section we will study this excitation as we take the energy to be zero, that is, as we approach the Fermi surface, where the problem is equivalent to Landau’s.
Since we are discussing the problem of addition or removal of one particle, the boundary conditions play a crucial role. As discussed in Secs. I and II, the available Hamiltonian eigenstates I depend on the discrete numbers $I_j^{\alpha}$ of Eq. $(3)$, which can be integers or half-integers depending on whether the number of particles in the system is even or odd. When we add or remove one electron to or from the many-body system we have to consider the transitions between states with integer and half-integer quantum numbers \[or equivalently, between states with an odd (even) and even (odd) number of $\sigma $ electrons\]. The transition between two ground states differing in the number of electrons by one is associated with two different processes: a backflow in the Hilbert space of the pseudoparticles with a shift of all the pseudomomenta by $\pm\frac{\pi}{N_a}$ \[associated with the change from even (odd) to odd (even) number of particles\], which we call [*topological momenton*]{}, and the creation of one or a pair of pseudoparticles at the pseudo-Fermi points.
According to the integer or half-integer character of the $I_j^{\alpha}$ numbers we have four “topological” types of Hilbert subspaces. Since that character depends on the parities of the electron numbers, we label these subspaces by the parities of $N_{\uparrow}$ and $N_{\downarrow}$, respectively: (a) even, even; (b) even, odd; (c) odd, even; and (d) odd, odd. The ground-state total momentum expression is different for each type of Hilbert subspace in such a way that the relative momentum, $\Delta P$, of $U>0$ ground states differing in $N_{\sigma }$ by one equals the $U=0$ Fermi points, i.e., $\Delta P=\pm k_{F\sigma }$. Moreover, we find that the above quasiparticle operator $\tilde{c}^{\dag }_{k_{F\sigma },\sigma }$ involves the generator of one low-energy and large-momentum topological momenton. The $\alpha $ topological momenton is associated with the backflow of the $\alpha $ pseudoparticle pseudomomentum band and cannot occur without a second type of excitation associated with the addition or removal of pseudoparticles. The $\alpha $-topological-momenton generator, $U^{\pm 1}_{\alpha }$, is a unitary operator which controls the topological transformations of the pseudoparticle Hamiltonian $(6)-(7)$. For instance, in the $\Delta P=\pm k_{F\uparrow }$ transitions (a)$\rightarrow $(c) and (b)$\rightarrow $(d) the Hamiltonian $(6)-(7)$ transforms as
$$:H: \rightarrow U^{\pm\Delta N_{\uparrow}}_s
:H: U^{\mp\Delta N_{\uparrow}}_s
\, ,$$
and in the $\Delta P=\pm k_{F\downarrow }$ transitions (a)$\rightarrow $(b) and (c)$\rightarrow $(d) as
$$:H:\rightarrow U^{\pm \Delta
N_{\downarrow}}_c:H:U^{\mp \Delta N_{\downarrow}}_c
\, ,$$
where $\Delta N_{\sigma}=\pm 1$ and the expression of the generator $U^{\pm 1}_{\alpha }$ is obtained below.
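The bookkeeping of the four “topological” subspaces (a)-(d) is simple enough to state as code (an illustrative sketch; the function name `subspace_type` is our own):

```python
def subspace_type(n_up, n_down):
    """Label the topological Hilbert subspace by the parities of
    (N_up, N_down): (a) even,even; (b) even,odd; (c) odd,even; (d) odd,odd."""
    labels = {(0, 0): "a", (0, 1): "b", (1, 0): "c", (1, 1): "d"}
    return labels[(n_up % 2, n_down % 2)]
```

In particular, adding one down-spin electron to a ground state of type (d) lands in subspace (c), which is the transition considered below.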
In order to arrive at the expressions for the quasiparticle operators and associated topological-momenton generators $U^{\pm 1}_{\alpha }$ we refer again to the ground-state pseudoparticle representation $(5)$. For simplicity, we consider that the initial ground state of form $(5)$ is non-degenerate and has zero momentum. Following equations (A1)-(A3), this corresponds to the situation where both $N_{\uparrow }$ and $N_{\downarrow }$ are odd, i.e., the initial Hilbert subspace is of type (d). However, note that our results are independent of the choice of initial ground state. The pseudoparticle numbers of the initial state are $N_c=N_{\uparrow }+N_{\downarrow }$ and $N_s=N_{\downarrow }$ and the pseudo-Fermi points $q_{F\alpha }^{(\pm)}$ are given in Eq. (A1).
We express the electronic and pseudoparticle numbers and pseudo-Fermi points of the final states in terms of the corresponding values for the initial state. We consider here the case where the final ground state has numbers $N_{\uparrow }$ and $N_{\downarrow }+1$ and momentum $k_{F\downarrow }$. The procedures for final states with these numbers and momentum $-k_{F\downarrow }$, or with numbers $N_{\uparrow }+1$ and $N_{\downarrow }$ and momenta $\pm k_{F\uparrow }$, are similar and are omitted here.
The above final state belongs to the Hilbert subspace (c). Our goal is to find the quasiparticle operator $\tilde{c}^{\dag}_{k_{F\downarrow },\downarrow}$ such that
$$|0; N_{\uparrow}, N_{\downarrow}+1\rangle =
\tilde{c}^{\dag}_{k_{F\downarrow },\downarrow}
|0; N_{\uparrow}, N_{\downarrow}\rangle\, .$$
Taking into account the changes in the pseudoparticle quantum numbers associated with this (d)$\rightarrow $(c) transition we can write the final state as follows
$$|0; N_{\uparrow}, N_{\downarrow}+1\rangle =
\prod_{q=q^{(-)}_{Fc}-\frac{\pi}{N_a}}^{q^{(+)}_{Fc}+
\frac{\pi}{N_a}}\prod_{q=q^{(-)}_{Fs}}^{q^{(+)}_{Fs}+\frac{2\pi}{N_a}}
b^{\dag}_{q,c} b^{\dag}_{q,s} |V\rangle \, ,$$
which can be rewritten as
$$|0; N_{\uparrow}, N_{\downarrow}+1\rangle =
b^{\dag}_{q^{(+)}_{Fc}+\frac{\pi}{N_a},c}
b^{\dag}_{q^{(+)}_{Fs}+\frac{2\pi}{N_a},s}
\prod_{q=q^{(-)}_{Fc}}^{q^{(+)}_{Fc}}
\prod_{q=q^{(-)}_{Fs}}^{q^{(+)}_{Fs}}
b^{\dag}_{q-\frac{\pi}{N_a},c} b^{\dag}_{q,s} |V\rangle \, ,$$
and further, as
$$|0; N_{\uparrow}, N_{\downarrow}+1\rangle =
b^{\dag}_{q^{(+)}_{Fc}+\frac{\pi}{N_a},c}
b^{\dag}_{q^{(+)}_{Fs}+\frac{2\pi}{N_a},s}
U_c^{+1}|0; N_{\uparrow}, N_{\downarrow}\rangle \, ,$$
where $U_c^{+1}$ is the generator of expression $(10)$. Both this operator and the operator $U_s^{+1}$ of Eq. $(9)$ obey the relation
$$U^{\pm 1}_{\alpha }b^{\dag }_{q,\alpha }U^{\mp 1}_{\alpha }=
b^{\dag }_{q\mp {\pi\over {N_a}},\alpha }
\, .$$
The pseudoparticle vacuum remains invariant under the application of $U^{\pm 1}_{\alpha }$
$$U^{\pm 1}_{\alpha }|V\rangle = |V\rangle \, .$$
(The $s$-topological-momenton generator, $U_s^{+1}$, appears if we consider the corresponding expressions for the up-spin electron.) Note that the $\alpha $ topological momenton is an excitation which only changes the integer or half-integer character of the corresponding pseudoparticle quantum numbers $I_j^{\alpha }$. In Appendix B we derive the following expression for the generator $U^{\pm 1}_{\alpha }$
$$U^{\pm 1}_{\alpha }=U_{\alpha }
\left(\pm\frac{\pi}{N_a}\right)
\, ,$$
where
$$U_{\alpha}(\delta q) = \exp\left\{ - i\delta q
G_{\alpha}\right\} \, ,$$
and
$$G_{\alpha} = -i\sum_{q} \left[{\partial\over
{\partial q}} b^{\dag }_{q,\alpha }\right]b_{q,\alpha }
\, ,$$
is the Hermitian generator of the $\mp {\pi\over {N_a}}$ topological $\alpha $ pseudomomentum translation. The operator $U^{\pm 1}_{\alpha }$ has the following discrete representation
$$U^{\pm 1}_{\alpha } = \exp\left\{\sum_{q}
b^{\dag }_{q\pm {\pi\over {N_a}},\alpha }b_{q,\alpha }\right\}
\, .$$
When acting on the initial ground state of form $(5)$ the operator $U^{\pm 1}_{\alpha }$ produces a vanishing-energy $\alpha $ topological momenton of large momentum, $k=\mp N_{\alpha
}{\pi\over {N_a}}\simeq q_{F\alpha}^{(\mp)}$. As mentioned above, the topological momenton is always combined with the addition or removal of pseudoparticles.
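On a Slater determinant of pseudoparticle levels, $U^{\pm 1}_{\alpha }$ acts as a rigid half-step shift of every occupied pseudomomentum. A toy sketch of this action (our own construction, with pseudomomenta measured in units of $\pi/N_a$ so that one half step is one unit and levels are spaced two units apart):

```python
def apply_u(state, sign):
    """Toy model of U^{+/-1}_alpha: shift every occupied pseudomomentum by
    one half step (+/- pi/N_a).  `state` is a frozenset of pseudomomenta
    in units of pi/N_a; pseudoparticle levels are two units apart."""
    assert sign in (+1, -1)
    return frozenset(q + sign for q in state)

vacuum = frozenset()
occupied = frozenset({-2, 0, 2})          # three levels with integer I_j

shifted = apply_u(occupied, +1)           # now half-integer I_j
assert shifted == frozenset({-1, 1, 3})
assert apply_u(shifted, -1) == occupied   # U^{+1} U^{-1} = identity
assert apply_u(vacuum, +1) == vacuum      # Eq. (16): U^{+/-1}|V> = |V>
```

The last two checks are the toy-model counterparts of the vacuum invariance $(16)$ and of $U^{+1}_{\alpha }U^{-1}_{\alpha }=\openone$, the property that later allows a triplet quasiparticle pair to be a purely two-pseudoparticle object.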
In the following two equations we change notation and use $q_{F\alpha }^{(\pm)}$ to refer to the pseudo-Fermi points of the final state (otherwise our reference state is the initial state). Comparing equations $(11)$ and $(14)$ it follows that
$$\tilde{c}^{\dag }_{\pm k_{F\downarrow },\downarrow } =
b^{\dag }_{q_{Fc}^{(\pm)},c}b^{\dag }_{q_{Fs}^{(\pm)},s}
U^{\pm 1}_{c } \, ,$$
and a similar procedure for the up-spin electron leads to
$$\tilde{c}^{\dag }_{\pm k_{F\uparrow },\uparrow } =
b^{\dag }_{q_{Fc}^{(\pm)},c}
U^{\pm 1}_{s} \, .$$
According to these equations the $\sigma $ quasiparticles are constituted by one topological momenton and one or two pseudoparticles. The topological momenton cannot be separated from the pseudoparticle excitation, i.e., both these excitations are confined inside the quasiparticle. Moreover, since the generators $(17)-(20)$ have a many-pseudoparticle character, it follows from Eqs. $(21)-(22)$ that the quasiparticle is a many-pseudoparticle object. Note also that both the $\downarrow $ and $\uparrow $ quasiparticles of Eqs. $(21)$ and $(22)$, respectively, are constituted by $c$ and $s$ excitations. Therefore, the $\sigma $ quasiparticle is a quantum object which recombines the pseudoparticle colors $c$ and $s$ (charge and spin in the limit $m\rightarrow 0$ [@Carmelo94]), giving rise to spin projection $\uparrow $ or $\downarrow $. It has its “Fermi surface” at $\pm k_{F\sigma }$.
However, two-quasiparticle objects can be of two-pseudoparticle character because the product of the two corresponding many-pseudoparticle operators is such that $U^{+ 1}_{\alpha }U^{- 1}_{\alpha }=\openone$, as for the triplet pair $\tilde{c}^{\dag }_{+k_{F\uparrow },\uparrow }
\tilde{c}^{\dag }_{-k_{F\uparrow },\uparrow }=
b^{\dag }_{q_{Fc}^{(+)},c}b^{\dag }_{q_{Fc}^{(-)},c}$. Such a triplet quasiparticle pair is constituted only by individual pseudoparticles because it involves the mutual annihilation of the two topological momentons of the generators $U^{+ 1}_{\alpha }$ and $U^{- 1}_{\alpha }$. Therefore, relations $(21)$ and $(22)$, which connect quasiparticles and pseudoparticles, have some similarities with the Jordan-Wigner transformation.
Finally, we emphasize that the Hamiltonian-eigenstate generators of Eqs. $(26)$ and $(27)$ of Ref. [@Carmelo94b] are not general and refer to finite densities of added and removed electrons, respectively, corresponding to even electron numbers. The corresponding general generator expressions will be studied elsewhere and involve the topological-momenton generators $(17)-(20)$.
THE ELECTRON - QUASIPARTICLE TRANSFORMATION
===========================================
In this section we study the relation of the 1D quasiparticle introduced in Sec. III to the electron. This study raises the question of the low-excitation-energy relation between the electronic operators $c_{k,\sigma}^{\dag }$ in momentum space at $k=\pm k_{F\sigma }$ and the pseudoparticle operators $b_{q,\alpha}^{\dag }$ at the pseudo-Fermi points.
The quasiparticle operator, ${\tilde{c}}^{\dag }_{k_{F\sigma
},\sigma}$, which creates one quasiparticle with spin projection $\sigma$ and momentum $k_{F\sigma}$, is defined by Eq. $(8)$. In the pseudoparticle basis the $\sigma $ quasiparticle operator has the form $(21)$ or $(22)$. However, since we do not know the relation between the electron and the pseudoparticles, Eqs. $(21)$ and $(22)$ do not provide direct information on the electron content of the $\sigma $ quasiparticle. Equation $(8)$ tells us that the quasiparticle operator defines a one-to-one correspondence between the addition of one electron to the system and the creation of one quasiparticle, exactly as we expect from the Landau theory in 3D: the electronic excitation, $c^{\dag }_{k_{F\sigma },\sigma}|0; N_{\uparrow}=N_c-N_s,
N_{\downarrow}=N_s\rangle$, defined at the Fermi momentum but at arbitrary energy, contains a single $\sigma $ quasiparticle, as we show below. When we add or remove one electron from the many-body system this includes the transition to the suitable final ground state as well as transitions to excited states. The former transition is nothing but the quasiparticle excitation of Sec. III.
Although our final results refer to momenta $k=\pm k_{F\sigma }$, in the following analysis we consider for simplicity only the momentum $k=k_{F\sigma }$. In order to relate the quasiparticle operators $\tilde{c}^{\dag }_{k_{F\sigma },\sigma }$ to the electronic operators $c^{\dag }_{k_{F\sigma },\sigma }$ we start by defining the Hilbert subspace where the low-energy $\omega $ projection of the state
$$c^{\dag }_{k_{F\sigma},\sigma}
|0; N_{\sigma}, N_{-\sigma} \rangle \, ,$$
is contained. Notice that the electron excitation $(23)$ is [ *not*]{} an eigenstate of the interacting problem: when acting on the initial ground state $|0;i\rangle\equiv |0; N_{\sigma}, N_{-\sigma} \rangle$ the electronic operator $c^{\dag }_{k_{F\sigma},\sigma }$ can be written as
$$c^{\dag }_{k_{F\sigma},\sigma } = \left[\langle 0;f|c^{\dag
}_{k_{F\sigma},\sigma }|0;i\rangle + {\hat{R}}\right]
\tilde{c}^{\dag }_{k_{F\sigma},\sigma } \, ,$$
where
$${\hat{R}}=\sum_{\gamma}\langle \gamma;k=0|c^{\dag
}_{k_{F\sigma},\sigma }|0;i\rangle {\hat{A}}_{\gamma} \, ,$$
and
$$|\gamma;k=0\rangle = {\hat{A}}_{\gamma}
\tilde{c}^{\dag }_{k_{F\sigma },\sigma }|0;i\rangle
= {\hat{A}}_{\gamma}|0;f\rangle \, .$$
Here $|0;f\rangle\equiv |0; N_{\sigma}+1, N_{-\sigma} \rangle$ denotes the final ground state, $\gamma$ represents the set of quantum numbers needed to specify each Hamiltonian eigenstate present in the excitation $(23)$, and ${\hat{A}}_{\gamma}$ is the corresponding generator. The first term on the rhs of Eq. $(24)$ refers to the ground-state – ground-state transition, and the operator $\hat{R}$ generates $k=0$ transitions from $|0;f\rangle $ to states I, states II, and non-LWS’s. Therefore, the electron excitation $(23)$ contains the quantum superposition of the suitable final ground state $|0;f\rangle$, of excited states I relative to that state (which result from multiple pseudoparticle-pseudohole processes), and of LWS’s II and non-LWS’s. All these states have the same electron numbers as the final ground state. The transitions to LWS’s II and to non-LWS’s require a minimal finite energy which equals their gap relative to the final ground state. The set of all these Hamiltonian eigenstates spans the Hilbert subspace onto which the electronic operator $c^{\dag }_{k_{F\sigma },\sigma }$ of Eq. $(24)$ projects the initial ground state.
In order to show that the ground-state – ground-state leading-order term of $(24)$ controls the low-energy physics, we study the low-energy sector of the above Hilbert subspace. This sector is spanned by the low-energy states I. For these states the generator ${\hat{A}}_{\gamma}$ of Eq. $(26)$ reads
$${\hat{A}}_{\gamma}\equiv
{\hat{A}}_{\{N_{ph}^{\alpha ,\iota}\},l} = \prod_{\alpha=c,s}
{\hat{L}}^{\alpha\iota}_{-N_{ph}^{\alpha\iota}}(l) \, ,$$
where the operator ${\hat{L}}^{\alpha\iota}_{-N_{ph}^{\alpha\iota}}(l)$ is given in Eq. $(56)$ of Ref. [@Carmelo94b] and produces a number $N_{ph}^{\alpha ,\iota}$ of $\alpha ,\iota$ pseudoparticle-pseudohole processes onto the final ground state. Here $\iota =\hbox{sgn}(q)1=\pm 1$ defines the right ($\iota=1$) and left ($\iota=-1$) pseudoparticle movers, $\{N_{ph}^{\alpha ,\iota}\}$ is a short notation for
$$\{N_{ph}^{\alpha ,\iota}\}\equiv
N_{ph}^{c,+1}, N_{ph}^{c,-1},
N_{ph}^{s,+1}, N_{ph}^{s,-1} \, ,$$
and $l$ is a quantum number which distinguishes different pseudoparticle-pseudohole distributions characterized by the same values of the numbers $(28)$. In the case of the lowest-energy states I the above set of quantum numbers $\gamma $ is thus given by $\gamma\equiv \{N_{ph}^{\alpha ,\iota}\},l$. (We have introduced the argument $(l)$ in the operator ${\hat{L}}^{\alpha\iota}_{-N_{ph}^{\alpha\iota}}(l)$, which for the same value of the number $N_{ph}^{\alpha\iota}$ defines different $\alpha\iota$ pseudoparticle - pseudohole configurations associated with different choices of the pseudomomenta in the summation of expression $(56)$ of Ref. [@Carmelo94b].) In the particular case of the lowest-energy states expression $(26)$ reads
$$|\{N_{ph}^{\alpha ,\iota}\},l;k=0\rangle =
{\hat{A}}_{\{N_{ph}^{\alpha ,\iota}\},l}
\tilde{c}^{\dag }_{k_{F\sigma },\sigma }|0;i\rangle
= {\hat{A}}_{\{N_{ph}^{\alpha ,\iota}\},l}|0;f\rangle \, .$$
The full electron – quasiparticle transformation $(24)$ involves other Hamiltonian eigenstates which are irrelevant for the quasiparticle problem studied in the present paper. Therefore, we omit here the study of the general generators ${\hat{A}}_{\gamma}$ of Eq. $(26)$.
The momentum expression (relative to the final ground state) of Hamiltonian eigenstates with generators of the general form $(27)$ is [@Carmelo94b]
$$k = {2\pi\over {N_a}}\sum_{\alpha ,\iota}\iota
N_{ph}^{\alpha\iota} \, .$$
Since our states $|\{N_{ph}^{\alpha ,\iota}\},l;k=0\rangle$ have zero momentum relative to the final ground state, the choice of the numbers $(28)$ is restricted. For these states the numbers are such that
$$\sum_{\alpha ,\iota}\iota
N_{ph}^{\alpha ,\iota} = 0 \, ,$$
which implies that
$$\sum_{\alpha }N_{ph}^{\alpha ,+1} =
\sum_{\alpha }N_{ph}^{\alpha ,-1} =
\sum_{\alpha }N_{ph}^{\alpha ,\iota} \, .$$
Since
$$N_{ph}^{\alpha ,\iota}=1,2,3,\ldots \, ,$$
it follows from Eqs. $(31)-(33)$ that
$$\sum_{\alpha ,\iota}
N_{ph}^{\alpha ,\iota} = 2,4,6,8,\ldots \, ,$$
is always an even positive integer.
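The parity result $(34)$ can be verified by brute-force enumeration. The following Python sketch is an illustration only (the cutoff on the occupation numbers is arbitrary): it lists all sets $(28)$ obeying the zero-momentum condition $(31)$ and checks that their totals are even positive integers.

```python
from itertools import product

# Ordering of Eq. (28): (N^{c,+1}, N^{c,-1}, N^{s,+1}, N^{s,-1});
# the corresponding iota signs are (+1, -1, +1, -1).
iotas = (+1, -1, +1, -1)

totals = set()
for n in product(range(5), repeat=4):       # arbitrary cutoff, for illustration
    if sum(n) == 0:
        continue                            # at least one process
    if sum(i * x for i, x in zip(iotas, n)) == 0:   # Eq. (31): zero momentum
        totals.add(sum(n))

# Eq. (34): the total number of processes is always an even positive integer
assert all(t % 2 == 0 and t > 0 for t in totals)
assert min(totals) == 2
```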
The vanishing chemical-potential excitation energy,
$$\omega^0_{\sigma }=\mu(N_{\sigma }+1,N_{-\sigma })
-\mu(N_{\sigma },N_{-\sigma }) \, ,$$
can be evaluated by use of the Hamiltonian $(6)-(7)$ and is given by
$$\omega^0_{\uparrow } = {\pi\over {2N_a}}\left[v_c + F_{cc}^1
+ v_s + F_{ss}^1 - 2F_{cs}^1 + v_c + F_{cc}^0\right] \, ,$$
and
$$\omega^0_{\downarrow } = {\pi\over {2N_a}}\left[v_s + F_{ss}^1
+ v_c + F_{cc}^0 + v_s + F_{ss}^0 + 2F_{cs}^0\right] \, ,$$
for up and down spin, respectively, and involves the pseudoparticle velocities (A6) and Landau parameters (A8). Since we measure the chemical potential from its value in the canonical ensemble of the reference initial ground state, i.e., we consider $\mu(N_{\sigma },N_{-\sigma })=0$, $\omega^0_{\sigma }$ also measures the ground-state excitation energy $\omega^0_{\sigma }=E_0(N_{\sigma
}+1,N_{-\sigma })-E_0(N_{\sigma },N_{-\sigma })$.
The excitation energies $\omega (\{N_{ph}^{\alpha ,\iota}\})$ of the states $|\{N_{ph}^{\alpha ,\iota}\},l;k=0\rangle$ (relative to the initial ground state) involve the energy $\omega^0_{\sigma }$ and are $l$ independent. They are given by
$$\omega (\{N_{ph}^{\alpha ,\iota}\}) =
\omega^0_{\sigma } + {2\pi\over {N_a}}\sum_{\alpha ,\iota}
v_{\alpha} N_{ph}^{\alpha ,\iota} \, .$$
We denote by $N_{\{N_{ph}^{\alpha ,\iota}\}}$ the number of these states which obey the conditions of Eqs. $(31)$, $(32)$, and $(34)$ and have the same values for the numbers $(28)$.
In order to study the main corrections to the (quasiparticle) ground-state – ground-state transition it is useful to consider the simplest case, when $\sum_{\alpha
,\iota}N_{ph}^{\alpha ,\iota}=2$. In this case we have $N_{\{N_{ph}^{\alpha ,\iota}\}}=1$ and, therefore, we can omit the index $l$. There are four such Hamiltonian eigenstates. Using the notation of the right-hand side (rhs) of Eq. $(28)$, these states are $|1,1,0,0;k=0\rangle $, $|0,0,1,1;k=0\rangle $, $|1,0,0,1;k=0\rangle $, and $|0,1,1,0;k=0\rangle $. They involve two pseudoparticle-pseudohole processes with $\iota=1$ and $\iota=-1$, respectively, and read
$$|1,1,0,0;k=0\rangle = \prod_{\iota=\pm 1}
{\hat{\rho}}_{c ,\iota}(\iota {2\pi\over {N_a}})
\tilde{c}^{\dag }_{k_{F\sigma },\sigma }
|0;i\rangle \, ,$$
$$|0,0,1,1;k=0\rangle = \prod_{\iota=\pm 1}
{\hat{\rho}}_{s ,\iota}(\iota {2\pi\over {N_a}})
\tilde{c}^{\dag }_{k_{F\sigma },\sigma }
|0;i\rangle \, ,$$
$$|1,0,0,1;k=0\rangle =
{\hat{\rho}}_{c,+1}({2\pi\over {N_a}})
{\hat{\rho}}_{s,-1}(-{2\pi\over {N_a}})
\tilde{c}^{\dag }_{k_{F\sigma },\sigma }
|0;i\rangle \, ,$$
$$|0,1,1,0;k=0\rangle =
{\hat{\rho}}_{c,-1}(-{2\pi\over {N_a}})
{\hat{\rho}}_{s,+1}({2\pi\over {N_a}})
\tilde{c}^{\dag }_{k_{F\sigma },\sigma }
|0;i\rangle \, ,$$
where ${\hat{\rho}}_{\alpha ,\iota}(k)$ is the fluctuation operator of Eq. (A12), which was studied in some detail in Ref. [@Carmelo94c].
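As a cross-check of the counting above, a short enumeration (illustrative Python, using the ordering of Eq. $(28)$) confirms that exactly the four quoted sets have $\sum_{\alpha ,\iota}N_{ph}^{\alpha ,\iota}=2$ at zero momentum:

```python
from itertools import product

# Ordering of Eq. (28): (N^{c,+1}, N^{c,-1}, N^{s,+1}, N^{s,-1})
iotas = (+1, -1, +1, -1)
sets2 = [n for n in product(range(3), repeat=4)
         if sum(n) == 2                                   # two processes, Eq. (34)
         and sum(i * x for i, x in zip(iotas, n)) == 0]   # zero momentum, Eq. (31)

# Exactly the four states (39)-(42) quoted in the text survive
assert sorted(sets2) == sorted([(1, 1, 0, 0), (0, 0, 1, 1),
                                (1, 0, 0, 1), (0, 1, 1, 0)])
```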
From equations $(26)$, $(27)$, and $(29)$ we can rewrite expression $(24)$ as
$$\begin{aligned}
c^{\dag }_{k_{F\sigma},\sigma } & = &
\langle 0;f|c^{\dag }_{k_{F\sigma},\sigma }|0;i\rangle
\left[1 + \sum_{\{N_{ph}^{\alpha ,\iota}\},l}
{\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0|
c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle \over \langle
0;f|c^{\dag }_{k_{F\sigma},\sigma }|0;i\rangle} \prod_{\alpha=c,s}
{\hat{L}}^{\alpha\iota}_{-N_{ph}^{\alpha\iota}}(l)\right]
\tilde{c}^{\dag }_{k_{F\sigma},\sigma }
\nonumber\\
& + & \sum_{\gamma '}\langle \gamma ';k=0|c^{\dag
}_{k_{F\sigma},\sigma }|0;i\rangle {\hat{A}}_{\gamma '}
\tilde{c}^{\dag }_{k_{F\sigma},\sigma } \, ,\end{aligned}$$
where $\gamma '$ refers to the Hamiltonian eigenstates of the form $(26)$ whose generators ${\hat{A}}_{\gamma '}$ are not of the particular form $(27)$.
In Appendix C we evaluate the matrix elements of expression $(43)$ corresponding to transitions to the final ground state and to excited states of the form $(29)$. Following Ref. [@Carmelo94b], these states refer to the conformal-field-theory [@Frahm; @Frahm91] critical point. They are such that the ratio $N_{ph}^{\alpha ,\iota}/N_a$ vanishes in the thermodynamic limit, $N_a\rightarrow \infty$. Therefore, in that limit the positive excitation energies $\omega (\{N_{ph}^{\alpha ,\iota}\})$ of Eq. $(38)$ are vanishingly small. The results of that Appendix lead to
$$\langle 0;f|c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle
= \sqrt{Z_{\sigma }}\, ,$$
where, as in a Fermi liquid [@Nozieres], the one-electron renormalization factor
$$Z_{\sigma}=\lim_{\omega\to 0}Z_{\sigma}(\omega) \, ,$$
is closely related to the $\sigma $ self-energy $\Sigma_{\sigma}
(k,\omega)$. Here the function $Z_{\sigma}(\omega)$ is given by the small-$\omega $ leading-order term of
$$|\varsigma_{\sigma}||1-{\partial \hbox{Re}
\Sigma_{\sigma} (\pm k_{F\sigma},\omega)
\over {\partial\omega}}|^{-1}\, ,$$
where
$$\varsigma_{\uparrow}=-2+\sum_{\alpha}
{1\over 2}[(\xi_{\alpha c}^1-\xi_{\alpha s}^1)^2
+(\xi_{\alpha c}^0)^2] \, ,$$
and
$$\varsigma_{\downarrow}=-2
+\sum_{\alpha}{1\over 2}[(\xi_{\alpha s}^1)^2+(\xi_{\alpha
c}^0+\xi_{\alpha s}^0)^2] \, ,$$
are $U$-, $n$-, and $m$-dependent exponents which for $U>0$ are negative and such that $-1<\varsigma_{\sigma}<-1/2$. In Eqs. $(47)$ and $(48)$, $\xi_{\alpha\alpha'}^j$ are the parameters (A7). From Eqs. $(46)$, (C11), and (C15) we find
$$Z_{\sigma}(\omega)=a^{\sigma }_0
\omega^{1+\varsigma_{\sigma}} \, ,$$
where $a^{\sigma }_0$ is a real and positive constant such that
$$\lim_{U\to 0}a^{\sigma }_0=1 \, .$$
Equation $(49)$ confirms that the renormalization factor $(45)$ vanishes, as expected for a 1D many-electron problem [@Anderson]. It follows from Eq. $(44)$ that in the present 1D model the electron renormalization factor can be identified with a single matrix element [@Anderson; @Metzner94]. We emphasize that in a Fermi liquid $\varsigma_{\sigma}=-1$ and Eq. $(46)$ recovers the usual Fermi-liquid relation. In the three different limits $U\rightarrow 0$, $m\rightarrow 0$, and $m\rightarrow n$ the exponents $\varsigma_{\uparrow}$ and $\varsigma_{\downarrow}$ are equal and given by $-1$, $-2+{1\over 2}[{\xi_0\over
2}+{1\over {\xi_0}}]^2$, and $-{1\over 2}-\eta_0[1-{\eta_0\over
2}]$, respectively. Here the $m\rightarrow 0$ parameter $\xi_0$ changes from $\xi_0=\sqrt{2}$ at $U=0$ to $\xi_0=1$ as $U\rightarrow\infty$, and $\eta_0=({2\over {\pi}})\tan^{-1}\left({4t\sin
(\pi n)\over U}\right)$.
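The limiting values of the exponent quoted above can be checked numerically. The sketch below uses hypothetical parameter values ($t=1$, $n=1/2$; not computed from the BA equations) and verifies that the $m\rightarrow 0$ and $m\rightarrow n$ expressions both recover $\varsigma_{\sigma}=-1$ as $U\rightarrow 0$, and that $\xi_0=1$ gives the $U\rightarrow\infty$, $m\rightarrow 0$ value $-7/8$:

```python
import math

# Limiting expressions for the exponent varsigma_sigma quoted in the text
def exp_m_to_0(xi0):          # m -> 0 limit:  -2 + (1/2)[xi0/2 + 1/xi0]^2
    return -2.0 + 0.5 * (xi0 / 2.0 + 1.0 / xi0) ** 2

def exp_m_to_n(eta0):         # m -> n limit:  -1/2 - eta0 (1 - eta0/2)
    return -0.5 - eta0 * (1.0 - eta0 / 2.0)

def eta0(U, t=1.0, n=0.5):    # eta0 = (2/pi) arctan(4 t sin(pi n)/U)
    return (2.0 / math.pi) * math.atan(4.0 * t * math.sin(math.pi * n) / U)

# U -> 0: xi0 -> sqrt(2) and eta0 -> 1; both limits give varsigma = -1
assert abs(exp_m_to_0(math.sqrt(2)) - (-1.0)) < 1e-9
assert abs(exp_m_to_n(eta0(1e-9)) - (-1.0)) < 1e-6
# U -> infinity, m -> 0: xi0 -> 1 gives the value -7/8
assert abs(exp_m_to_0(1.0) - (-7.0 / 8.0)) < 1e-12
# For intermediate xi0 the exponent stays in the window -1 < varsigma < -1/2
assert -1.0 < exp_m_to_0(1.2) < -0.5
```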
The evaluation in Appendix C of the matrix elements on the rhs of expression $(43)$ refers to the thermodynamic limit and follows from the study of the small-$\omega $ dependences of the one-electron Green function $G_{\sigma} (\pm k_{F\sigma},
\omega)$ and self-energy $\Sigma_{\sigma} (\pm k_{F\sigma},
\omega)$. This leads to $\omega $-dependent quantities \[such as $(46)$ and $(49)$ and the function $F_{\sigma}^{\alpha
,\iota}(\omega )$ of Eq. $(51)$ below\] whose $\omega\rightarrow
0$ limits provide the expressions for these matrix elements. Although these matrix elements vanish, it is physically important to consider the associated $\omega $-dependent functions. These are matrix-element expressions only in the limit $\omega\rightarrow 0$, yet at small finite values of $\omega $ they provide relevant information on the electron - quasiparticle overlap at low energy $\omega $. In addition to expression $(44)$, in Appendix C we find the following expression, which is valid only for matrix elements involving the excited states of the form $(29)$ referring to the conformal-field-theory critical point
$$\begin{aligned}
\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0|
c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle & = & \lim_{\omega\to 0}
F_{\sigma}^{\alpha ,\iota}(\omega ) = 0\, ,\nonumber\\
F_{\sigma}^{\alpha ,\iota}(\omega ) & = &
e^{i\chi_{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)}
\sqrt{{a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)\over
a^{\sigma }_0}}\sqrt{Z_{\sigma }(\omega )}\,
\omega^{\sum_{\alpha ,\iota} N_{ph}^{\alpha ,\iota}}
\, .\end{aligned}$$
Here $\chi_{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)$ and $a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)$ are real numbers and the function $Z_{\sigma }(\omega )$ was defined above. Notice that the function $F_{\sigma}^{\alpha ,\iota}(\omega )$ vanishes with different powers of $\omega $ for different sets of $N_{ph}^{\alpha ,\iota}$ numbers. This is because these powers directly reflect the order of the pseudoparticle-pseudohole generator of the corresponding state I relative to the final ground state.
Although the renormalization factor $(45)$ and the matrix elements $(51)$ vanish, Eqs. $(49)$ and $(51)$ provide relevant information concerning the ratios of the different matrix elements, which can either diverge or vanish. Moreover, in the evaluation of some $\omega $-dependent quantities we can use the function $F_{\sigma}^{\alpha ,\iota}(\omega )$ for the matrix elements $(51)$ and assume that $\omega $ is vanishingly small, which leads to correct results. This procedure is similar to replacing the renormalization factor $(45)$ by the function $(49)$. While the renormalization factor is zero because in the limit of vanishing excitation energy there is no overlap between the electron and the quasiparticle, the function $(49)$ is associated with the small electron - quasiparticle overlap which occurs at low excitation energy $\omega $.
Obviously, if we inserted zero for the matrix elements $(44)$ and $(51)$ on the rhs of Eq. $(43)$ we would lose all information on the associated low-energy singular electron - quasiparticle transformation (described by Eq. $(58)$ below). The vanishing of the matrix elements $(44)$ and $(51)$ just reflects the fact that the one-electron density of states of the 1D many-electron problem vanishes when the excitation energy $\omega\rightarrow 0$. This justifies the lack of electron - quasiparticle overlap in the limit of zero excitation energy. However, the diagonalization of that problem absorbs the renormalization factor $(45)$ and maps vanishing electronic spectral weight onto finite quasiparticle and pseudoparticle spectral weight. This process can only be suitably described if we keep either ${1\over {N_a}}$ corrections in the case of the large finite system or small virtual $\omega $ corrections in the case of the infinite system. (The analysis of Appendix C considered the thermodynamic limit and, therefore, we consider in this section the case of the infinite system.)
In spite of the vanishing of the matrix elements $(44)$ and $(51)$, following the above discussion we insert Eqs. $(44)$ and $(51)$ into Eq. $(43)$, with the result
$$\begin{aligned}
c^{\dag }_{\pm k_{F\sigma},\sigma } & = &
\lim_{\omega\to 0} \sqrt{Z_{\sigma }(\omega )}
\left[1 + \sum_{\{N_{ph}^{\alpha ,\iota}\},l}
e^{i\chi_{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)}
\sqrt{{a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)\over
a^{\sigma }_0}}\omega^{\sum_{\alpha
,\iota} N_{ph}^{\alpha ,\iota }}\prod_{\alpha=c,s}
{\hat{L}}^{\alpha\iota}_{-N_{ph}^{\alpha\iota}}(l)\right]
\tilde{c}^{\dag }_{\pm k_{F\sigma},\sigma }
\nonumber\\
& + & \sum_{\gamma '}\langle \gamma ';k=0|c^{\dag
}_{\pm k_{F\sigma},\sigma }|0;i\rangle {\hat{A}}_{\gamma '}
\tilde{c}^{\dag }_{\pm k_{F\sigma},\sigma } \, .\end{aligned}$$
(Note that the expression is the same for momenta $k=k_{F\sigma}$ and $k=-k_{F\sigma}$.)
Let us confirm the key role played by the “bare” quasiparticle ground-state – ground-state transition in the low-energy physics. Since the $k=0$ higher-energy LWS’s I and the finite-energy LWS’s II and non-LWS’s, represented in Eq. $(52)$ by $|\gamma ';k=0\rangle$, are irrelevant for the low-energy physics, we focus our attention on the lowest-energy states of the form $(29)$.
Let us look at the leading-order terms of the first term on the rhs of Eq. $(52)$. These correspond to the ground-state – ground-state transition and to the first-order pseudoparticle-pseudohole corrections. The latter are determined by the excited states $(39)-(42)$. The use of Eqs. $(34)$ and $(39)-(42)$ allows us to rewrite the leading-order terms as
$$\lim_{\omega\to 0}\sqrt{Z_{\sigma }(\omega )}\left[1 +
\omega^2\sum_{\alpha ,\alpha ',\iota}
C_{\alpha ,\alpha '}^{\iota }
\rho_{\alpha\iota } (\iota{2\pi\over {N_a}})
\rho_{\alpha '-\iota } (-\iota{2\pi\over {N_a}})
+ {\cal O}(\omega^4)\right]
\tilde{c}^{\dag }_{\pm k_{F\sigma},\sigma } \, ,$$
where $C_{\alpha ,\alpha '}^{\iota }$ are complex constants such that
$$C_{c,c}^{1} = C_{c,c}^{-1} = e^{i\chi_{\sigma }(1,1,0,0)}
\sqrt{{a^{\sigma }(1,1,0,0)\over
a^{\sigma }_0}} \, ,$$
$$C_{s,s}^{1} = C_{s,s}^{-1} = e^{i\chi_{\sigma }(0,0,1,1)}
\sqrt{{a^{\sigma }(0,0,1,1)\over
a^{\sigma }_0}} \, ,$$
$$C_{c,s}^{1} = C_{s,c}^{-1} =
e^{i\chi_{\sigma }(1,0,0,1)}
\sqrt{{a^{\sigma }(1,0,0,1)\over
a^{\sigma }_0}} \, ,$$
$$C_{c,s}^{-1} = C_{s,c}^{1} =
e^{i\chi_{\sigma }(0,1,1,0)}
\sqrt{{a^{\sigma }(0,1,1,0)\over
a^{\sigma }_0}} \, ,$$
and ${\hat{\rho}}_{\alpha ,\iota } (k)=\sum_{\tilde{q}}
b^{\dag}_{\tilde{q}+k,\alpha ,\iota}b_{\tilde{q},\alpha ,\iota}$ is a first-order pseudoparticle-pseudohole operator. The real constants $a^{\sigma }$ and phases $\chi_{\sigma }$ on the rhs of Eqs. $(54)-(57)$ are particular cases of the corresponding constants of the general expression $(51)$. Note that the $l$ independence of the states $(39)-(42)$ allowed the omission of the index $l$ in the quantities on the rhs of Eqs. $(54)-(57)$, and that we used the notation $(28)$ for the argument of the corresponding $l$-independent constants $a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\})$ and phases $\chi_{\sigma }(\{N_{ph}^{\alpha ,\iota}\})$.
The higher-order contributions to expression $(53)$ are associated with low-energy excited Hamiltonian eigenstates I orthogonal both to the initial and final ground states, whose matrix-element amplitudes are given by Eq. $(51)$. The corresponding functions $F_{\sigma}^{\alpha ,\iota}(\omega )$ vanish as $\lim_{\omega\to 0}\omega^{1+\varsigma_{\sigma}+4j\over 2}$ (with $2j$ the number of pseudoparticle-pseudohole processes relative to the final ground state and $j=1,2,\ldots$). Therefore, the leading-order term of $(52)-(53)$ and the exponent $\varsigma_{\sigma}$ of Eqs. $(47)-(48)$ fully control the low-energy overlap between the $\pm k_{F\sigma}$ quasiparticles and electrons and determine the expressions of all $k=\pm k_{F\sigma }$ one-electron low-energy quantities. That leading-order term refers to the ground-state – ground-state transition which dominates the electron - quasiparticle transformation $(24)$. This transition corresponds to the “bare” quasiparticle of Eq. $(8)$. We follow the same steps as in Fermi-liquid theory and consider the low-energy non-canonical and non-complete transformation derived from the full expression $(53)$ by keeping only the corresponding leading-order term, which leads to
$${\tilde{c}}^{\dag }_{\pm k_{F\sigma},\sigma } =
{c^{\dag }_{\pm k_{F\sigma},\sigma }\over {\sqrt{Z_{\sigma }}}} \, .$$
This relation defines a singular transformation. Combining Eqs. $(21)-(22)$ and $(58)$ provides the low-energy expression for the electron in the pseudoparticle basis. The singular nature of the transformation $(58)$, which maps the vanishing-renormalization-factor electron onto the one-renormalization-factor quasiparticle, explains the perturbative character of the pseudoparticle-operator basis [@Carmelo94; @Carmelo94b; @Carmelo94c].
Replacing the renormalization factor $Z_{\sigma }$ in Eq. $(58)$ by $Z_{\sigma }(\omega )$, or omitting $\lim_{\omega\to 0}$ from the rhs of Eqs. $(52)$ and $(53)$, and in both cases considering $\omega$ very small, leads to effective expressions which contain information on the low-excitation-energy electron – quasiparticle overlap. Since these expressions correspond to the infinite system, the small finite-$\omega $ contributions contain the same information as the ${1\over {N_a}}$ corrections of the corresponding large but finite system at $\omega =0$.
It is the perturbative character of the pseudoparticle basis that determines the form of the expansion $(53)$, which except for the non-classical exponent in the $\sqrt{Z_{\sigma }(\omega )}
\propto \omega^{1+\varsigma_{\sigma}\over 2}$ factor \[absorbed by the electron - quasiparticle transformation $(58)$\] includes only classical exponents, as in a Fermi liquid [@Nozieres]. At low energy the BA solution performs the singular transformation $(58)$, which absorbs the one-electron renormalization factor $(45)$ and maps vanishing electronic spectral weight onto finite quasiparticle and pseudoparticle spectral weight. By that process the transformation $(58)$ renormalizes divergent two-electron scattering vertex functions onto finite two-quasiparticle scattering quantities. These quantities are related to the finite $f$ functions [@Carmelo92] of the form given by Eq. (A4) and to the scattering amplitudes [@Carmelo92b] of the pseudoparticle theory.
It was shown in Refs. [@Carmelo92; @Carmelo92b; @Carmelo94b] that these $f$ functions and scattering amplitudes determine all static and low-energy quantities of the 1D many-electron problem, as we discuss below and in Appendices A and D. The $f$ functions and amplitudes are associated with zero-momentum two-pseudoparticle forward scattering. These scattering processes interchange no momentum and no energy, giving rise only to two-pseudoparticle phase shifts. The corresponding pseudoparticles control all the low-energy physics. In the limit of vanishing energy the pseudoparticle spectral weight leads to finite values for the static quantities, yet it corresponds to vanishing one-electron spectral weight.
Diagonalizing the problem at lowest energy is equivalent to performing the electron - quasiparticle transformation $(58)$: it maps divergent irreducible (two-momenta) charge and spin vertices onto finite quasiparticle parameters by absorbing $Z_{\sigma }$. In a diagrammatic picture this amounts to multiplying each of these vertices appearing in the diagrams by $Z_{\sigma }$ and each one-electron Green function (propagator) by ${1\over Z_{\sigma }}$. This procedure is equivalent to renormalizing the electron quantities onto the corresponding quasiparticle quantities, as in a Fermi liquid. However, in the present case the renormalization factor is zero.
This also holds true for the more involved four-momenta divergent two-electron vertices at the Fermi points. In this case the electron - quasiparticle transformation multiplies each of these vertices by a factor $Z_{\sigma }Z_{\sigma '}$, the factors $Z_{\sigma }$ and $Z_{\sigma '}$ corresponding to the pair of $\sigma $ and $\sigma '$ interacting electrons. The obtained finite parameters control all static quantities. Performing the transformation $(58)$ is equivalent to summing all vertex contributions, and we find that this transformation is unique, i.e., it maps the divergent Fermi-surface vertices onto the same finite quantities independently of the way one chooses to approach the low-energy limit. This cannot be detected by looking only at logarithmic divergences of some diagrams [@Solyom; @Metzner]. Such non-universal contributions either cancel or are renormalized to zero by the electron - quasiparticle transformation. We have extracted all our results from the exact BA solution, which takes into account all relevant contributions. We can choose the energy variables in such a way that there is only one $\omega $ dependence. We find that the relevant vertex-function divergences are controlled by the electron - quasiparticle overlap, the vertices reading
$$\Gamma_{\sigma\sigma '}^{\iota }(k_{F\sigma },\iota
k_{F\sigma '};\omega) = {1\over
{Z_{\sigma}(\omega)Z_{\sigma '}(\omega)}}
\{\sum_{\iota '=\pm 1}(\iota ')^{{1-\iota\over 2}}
[v_{\rho }^{\iota '} + (\delta_{\sigma ,\sigma '}
- \delta_{\sigma ,-\sigma '})v_{\sigma_z}^{\iota '}]
- \delta_{\sigma ,\sigma '}v_{F,\sigma }\}
\, ,$$
where the expressions for the charge $v_{\rho}^{\iota }$ and spin $v_{\sigma_z}^{\iota }$ velocities are given in Appendix D. The divergent character of the function $(59)$ follows exclusively from the ${1\over Z_{\sigma}(\omega)Z_{\sigma
'}(\omega)}$ factor, with $Z_{\sigma}(\omega)$ given by $(49)$. The transformation $(58)$ maps the divergent vertices onto the $\omega $-independent finite quantity $Z_{\sigma}(\omega)
Z_{\sigma '}(\omega)\Gamma_{\sigma\sigma '}^{\iota }(k_{F\sigma },
\iota k_{F\sigma '};\omega )$. The low-energy physics is determined by the following $v_{F,\sigma }$-independent Fermi-surface two-quasiparticle parameters
$$L^{\iota }_{\sigma ,\sigma'} = \lim_{\omega\to 0}
\left[\delta_{\sigma ,\sigma '}v_{F,\sigma }+
Z_{\sigma}(\omega) Z_{\sigma '}(\omega)\Gamma_{\sigma\sigma
'}^{\iota }(k_{F\sigma },\iota k_{F\sigma '};\omega )\right] \, .$$
From the point of view of the electron - quasiparticle transformation, the divergent vertices $(59)$ give rise to the finite quasiparticle parameters $(60)$, which define the above charge and spin velocities. These are given by the following simple combinations of the parameters $(60)$
$$\begin{aligned}
v_{\rho}^{\iota} = {1\over 4}\sum_{\iota '=\pm 1}(\iota
')^{{1-\iota\over 2}}\left[L_{\sigma ,\sigma}^{\iota '} +
L_{\sigma ,-\sigma}^{\iota '}\right]
\, , \nonumber\\
v_{\sigma_z}^{\iota} = {1\over 4}\sum_{\iota '=\pm 1}(\iota
')^{{1-\iota\over 2}}\left[L_{\sigma ,\sigma}^{\iota '} -
L_{\sigma ,-\sigma}^{\iota '}\right] \, .\end{aligned}$$
As shown in Appendix D, the parameters $L_{\sigma ,\sigma '}^{\iota}$ can be expressed in terms of the pseudoparticle group velocities (A6) and Landau parameters (A8) as follows
$$\begin{aligned}
L_{\sigma ,\sigma}^{\pm 1} & = & 2\left[{(v_s + F^0_{ss})\over L^0}
\pm (v_c + F^1_{cc}) - {L_{\sigma ,-\sigma}^{\pm 1}\over 2}
\right] \, , \nonumber\\
L_{\sigma ,-\sigma}^{\pm 1} & = & -4\left[{(v_c + F^0_{cc} +
F^0_{cs})\over L^0}\pm (v_s + F^1_{ss} - F^1_{cs})\right]
\, ,\end{aligned}$$
where $L^0=(v_c+F^0_{cc})(v_s+F^0_{ss})-(F^0_{cs})^2$. Combining Eqs. $(61)$ and $(62)$ we find the expressions of the Table for the charge and spin velocities. These velocities were already known from the BA solution and determine the expressions for all static quantities [@Carmelo94c]. Equations $(62)$ clarify their origin, which is the singular electron - quasiparticle transformation $(58)$. It renders a non-perturbative electronic problem into a perturbative pseudoparticle problem. In Appendix D we show how the finite two-pseudoparticle forward-scattering $f$ functions and amplitudes which determine the static quantities are directly related to the finite two-quasiparticle parameters $(60)$ through the velocities $(61)$. This study confirms that it is the singular electron - quasiparticle transformation $(58)$ which justifies the [*finite character*]{} of the $f_{\alpha\alpha
'}(q,q')$ functions (A4) and the associated perturbative origin of the pseudoparticle Hamiltonian $(6)-(7)$ [@Carmelo94].
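The structure of Eqs. $(61)$-$(62)$ can be checked numerically. In the sketch below the velocities and Landau parameters are arbitrary placeholder numbers (not values computed from the BA equations); the two final assertions are simple algebraic consequences of combining $(61)$ with $(62)$, namely $v_{\rho}^{+1}=(v_s+F^0_{ss})/L^0$ and $v_{\rho}^{-1}=v_c+F^1_{cc}$:

```python
# Placeholder inputs: pseudoparticle velocities and Landau parameters
v_c, v_s = 1.3, 0.7
F0_cc, F0_ss, F0_cs = 0.25, 0.15, 0.05
F1_cc, F1_ss, F1_cs = 0.20, 0.10, 0.02

# Eq. (62): two-quasiparticle parameters L^{+-1}
L0 = (v_c + F0_cc) * (v_s + F0_ss) - F0_cs ** 2
L_opp = {s: -4 * ((v_c + F0_cc + F0_cs) / L0 + s * (v_s + F1_ss - F1_cs))
         for s in (+1, -1)}                       # L_{sigma,-sigma}^{+-1}
L_par = {s: 2 * ((v_s + F0_ss) / L0 + s * (v_c + F1_cc) - L_opp[s] / 2)
         for s in (+1, -1)}                       # L_{sigma,sigma}^{+-1}

def v(kind, iota):  # Eq. (61); kind = +1 for v_rho, -1 for v_sigma_z
    return 0.25 * sum((ip ** ((1 - iota) // 2)) * (L_par[ip] + kind * L_opp[ip])
                      for ip in (+1, -1))

# Algebraic consequences of (61)-(62)
assert abs(v(+1, +1) - (v_s + F0_ss) / L0) < 1e-12   # v_rho^{+1}
assert abs(v(+1, -1) - (v_c + F1_cc)) < 1e-12        # v_rho^{-1}
```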
In order to further confirm that the electron - quasiparticle transformation $(58)$ and the associated electron - quasiparticle overlap function $(49)$ control the whole low-energy physics, we close this section by considering the one-electron spectral function. The spectral function was studied numerically and in the $U\rightarrow\infty$ limit in Refs. [@Muramatsu] and [@Shiba], respectively. The leading-order term of the real part of the $\sigma $ Green function at $k=\pm k_{F\sigma}$ and small excitation energy $\omega $ (C10)-(C11) is given by $\hbox{Re}G_{\sigma} (\pm k_{F\sigma},\omega)=a^{\sigma}_0
\omega^{\varsigma_{\sigma}}$. From the Kramers-Kronig relations we find $\hbox{Im}G_{\sigma} (\pm k_{F\sigma},\omega)=
-\pi a^{\sigma}_0 (1 + \varsigma_{\sigma})
\omega^{\varsigma_{\sigma }}$ for the corresponding imaginary part. Based on these results we arrive at the following expression for the low-energy spectral function at $k=\pm k_{F\sigma}$
$$A_{\sigma}(\pm k_{F\sigma},\omega) =
2\pi a^{\sigma }_0 (1 + \varsigma_{\sigma})
\omega^{\varsigma_{\sigma}} = 2\pi {\partial
Z_{\sigma}(\omega)\over\partial\omega} \, .$$
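The identity on the right of Eq. $(63)$, $A_{\sigma}=2\pi\,\partial Z_{\sigma}/\partial\omega$, follows directly from $Z_{\sigma}(\omega)=a^{\sigma}_0\omega^{1+\varsigma_{\sigma}}$ of Eq. $(49)$. A minimal finite-difference check, with placeholder values for $a^{\sigma}_0$ and $\varsigma_{\sigma}$:

```python
import math

# Z(omega) = a0 * omega**(1 + vs), Eq. (49); placeholder parameter values
a0, vs = 1.0, -0.75
Z = lambda w: a0 * w ** (1 + vs)
A = lambda w: 2 * math.pi * a0 * (1 + vs) * w ** vs   # power law of Eq. (63)

# Central finite difference of Z at a small but finite omega
w, h = 0.1, 1e-6
dZ = (Z(w + h) - Z(w - h)) / (2 * h)
assert abs(2 * math.pi * dZ - A(w)) / A(w) < 1e-6     # A = 2*pi dZ/domega
```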
This result is a generalization of the $U\rightarrow\infty$ expression of Ref. [@Shiba]. It is valid in all of parameter space where both the velocities $v_c$ and $v_s$ (A6) are finite. (This excludes half filling $n=1$, maximum spin density $m=n$, and $U=\infty$ with $m\neq 0$.) The use of the Kramers-Kronig relations also restricts the validity of expression $(63)$ to the energy $\omega $ continuum limit. On the other hand, we can show that $(63)$ is consistent with the general expression
$$\begin{aligned}
A_{\sigma} (\pm k_{F\sigma},\omega)
& = & \sum_{\{N_{ph}^{\alpha ,\iota}\},l}
|\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0|
c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |^2
2\pi\delta (\omega - \omega (\{N_{ph}^{\alpha
,\iota}\}))\nonumber \\
& + & \sum_{\gamma '}|\langle\gamma ';k=0|c^{\dag }_{k_{F\sigma},\sigma}
|0;i\rangle |^2 2\pi\delta (\omega - \omega_{\gamma '}) \, ,\end{aligned}$$
whose summations refer to the same states as the summations of expressions $(43)$ and $(52)$. The restriction of the validity of expression $(63)$ to the energy continuum limit requires the consistency to hold true only for the spectral weight of $(64)$ associated with the quasiparticle ground-state – ground-state transition. This corresponds to the first $\delta $ peak of the rhs of Eq. $(64)$. Combining equations $(44)$ and $(64)$, and considering that in the present limit of vanishing $\omega $ the renormalization factor $(45)$ can be replaced by the electron - quasiparticle overlap function $(49)$ (as we confirm below), we arrive at
$$A_{\sigma}(\pm k_{F\sigma},\omega) =
a^{\sigma }_0\omega^{1+\varsigma_{\sigma}}
2\pi\delta (\omega - \omega^0_{\sigma })
= Z_{\sigma}(\omega ) 2\pi\delta (\omega -
\omega^0_{\sigma }) \, .$$
Let us then show that the Kramers-Kronig continuum expression $(63)$ is an approximation consistent with the Dirac-delta function representation $(65)$. This consistency just requires that in the continuum energy domain from $\omega =0$ to the ground-state – ground-state transition energy $\omega =\omega^0_{\sigma }$ (see Eq. $(35)$) the functions $(63)$ and $(65)$ contain the same amount of spectral weight. We find that both the $A_{\sigma}(\pm k_{F\sigma},\omega)$ representations $(63)$ and $(65)$ lead to
$$\int_{0}^{\omega^0_{\sigma }}A_{\sigma}(\pm k_{F\sigma},\omega)
\,d\omega =2\pi a^{\sigma }_0 [\omega^0_{\sigma }]^{\varsigma_{\sigma }+1}
\, ,$$
which confirms that they contain the same spectral weight. The representation $(63)$ reveals that the spectral function diverges at $\pm k_{F\sigma}$ and small $\omega$ as a Luttinger-liquid power law. However, both the small-$\omega $ density of states and the integral $(66)$ vanish in the limit of vanishing excitation energy.
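As a numerical illustration, the equality $(66)$ between the spectral weights of the power-law representation $(63)$ and of the $\delta$-peak representation $(65)$ can be checked directly. The constants $a^{\sigma}_0$, $\varsigma_{\sigma}$, and $\omega^0_{\sigma}$ used below are illustrative placeholders, not values computed from the BA solution:

```python
import numpy as np

# Check of Eq. (66): the power-law representation (63) and the delta-peak
# representation (65) of A_sigma(+-k_F, omega) carry the same spectral
# weight 2*pi*a0*omega0**(1+s) over the interval [0, omega0].
a0 = 0.7        # illustrative positive constant a^sigma_0
s = -0.4        # illustrative exponent varsigma_sigma (must exceed -1)
omega0 = 0.05   # illustrative transition energy omega^0_sigma

# A geometric grid resolves the integrable power-law divergence at omega -> 0.
omega = np.geomspace(1e-12, omega0, 20001)
A = 2*np.pi*a0*(1 + s)*omega**s                      # Eq. (63)
weight_continuum = np.sum(0.5*(A[1:] + A[:-1])*np.diff(omega))
weight_delta = 2*np.pi*a0*omega0**(1 + s)            # weight of the peak (65)

print(weight_continuum, weight_delta)
```

The two numbers agree to the accuracy of the quadrature, while each vanishes as $\omega^0_{\sigma}\rightarrow 0$, consistent with the vanishing small-$\omega$ density of states.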
Using the method of Ref. [@Carmelo93] we have also studied the spectral function $A_{\sigma}(k,\omega)$ for all values of $k$ and vanishing positive $\omega $. We find that $A_{\sigma}(k,\omega)$ \[and the Green function $\hbox{Re}G_{\sigma} (k,\omega)$\] vanishes when $\omega\rightarrow 0$ for all momentum values [*except*]{} at the non-interacting Fermi-points $k=\pm k_{F\sigma}$ where it diverges as the power law $(63)$. This divergence is fully controlled by the quasiparticle ground-state - ground-state transition. The transitions to the excited states $(29)$ give only vanishing contributions to the spectral function. This further confirms the dominant role of the bare quasiparticle ground-state - ground-state transition and of the associated electron - quasiparticle transformation $(58)$ which control the low-energy physics.
It follows from the above behavior of the spectral function at small $\omega $ that for $\omega\rightarrow 0$ the density of states,
$$D_{\sigma} (\omega)=\sum_{k}A_{\sigma}(k,\omega) \, ,$$
results, exclusively, from contributions of the peaks centered at $k=\pm k_{F\sigma}$ and is such that $D_{\sigma} (\omega)\propto
\omega A_{\sigma}(\pm k_{F\sigma},\omega)$ [@Carmelo95a]. On the one hand, it is known from the zero-magnetic field studies of Refs. [@Shiba; @Schulz] that the density of states goes at small $\omega $ as
$$D_{\sigma} (\omega)\propto\omega^{\nu_{\sigma}} \, ,$$
where $\nu_{\sigma}$ is the exponent of the equal-time momentum distribution expression,
$$N_{\sigma}(k)\propto |k\mp k_{F\sigma}|^{\nu_{\sigma }} \, ,$$
[@Frahm91; @Ogata]. (The exponent $\nu_{\sigma }$ is defined by Eq. $(5.10)$ of Ref. [@Frahm91] for the particular case of the $\sigma $ Green function.) On the other hand, we find that the exponents $(47)-(48)$ and $\nu_{\sigma}$ are such that
$$\varsigma_{\sigma}=\nu_{\sigma }-1 \, ,$$
in agreement with the above analysis. However, this simple relation does not imply that the equal-time expressions [@Frahm91; @Ogata] provide full information on the small-energy instabilities. For instance, in addition to the momentum values $k=\pm k_{F\sigma}$ and in contrast to the spectral function, $N_{\sigma}(k)$ shows singularities at $k=\pm [k_{F\sigma}+2k_{F-\sigma}]$ [@Ogata]. Therefore, only the direct low-energy study reveals all the true instabilities of the quantum liquid.
Note that in some Luttinger liquids the momentum distribution is also given by $N(k)\propto |k\mp k_F|^{\nu }$ but with $\nu >1$ [@Solyom; @Medem; @Voit]. We find that in these systems the spectral function $A(\pm k_F,\omega)
\propto\omega^{\nu -1}$ does not diverge.
CONCLUDING REMARKS
==================
One of the goals of this paper was, in spite of the differences between the Luttinger-liquid Hubbard chain and 3D Fermi liquids, to detect common features of these two limiting problems which we expect to be present in electronic quantum liquids in spatial dimensions $1<$D$<3$. As in 3D Fermi liquids, we find that there are Fermi-surface quasiparticles in the Hubbard chain which connect ground states differing in the number of electrons by one and whose low-energy overlap with electrons determines the $\omega\rightarrow 0$ divergences. In spite of the vanishing electron density of states and renormalization factor, the spectral function vanishes at all momentum values [*except*]{} at the Fermi surface, where it diverges (as a Luttinger-liquid power law).
While low-energy excitations are described by $c$ and $s$ pseudoparticle-pseudohole excitations which determine the $c$ and $s$ separation [@Carmelo94c], the quasiparticles describe ground-state – ground-state transitions and recombine $c$ and $s$ (charge and spin in the zero-magnetization limit), being labelled by the spin projection $\sigma $. They consist of one topological momenton and one or two pseudoparticles which cannot be separated and are confined inside the quasiparticle. Moreover, there is a close relation between the quasiparticle contents and the Hamiltonian symmetry in the different sectors of parameter space. This can be shown if we consider pseudoholes instead of pseudoparticles [@Carmelo95a] and extend the present quasiparticle study to the whole parameter space of the Hubbard chain.
Importantly, we have written the low-energy electron at the Fermi surface in the pseudoparticle basis. The vanishing of the electron renormalization factor implies a singular character for the low-energy electron – quasiparticle and electron – pseudoparticle transformations. This singular process extracts from vanishing electron spectral weight quasiparticles of spectral-weight factor one. At lowest excitation energy, the BA diagonalization of the 1D many-electron problem is equivalent to performing such a singular electron – quasiparticle transformation. This absorbs the vanishing one-electron renormalization factor, giving rise to the finite two-pseudoparticle forward-scattering $f$ functions and amplitudes which control the expressions for all static quantities [@Carmelo92; @Carmelo92b; @Carmelo94]. It is this transformation which justifies the perturbative character of the many-electron Hamiltonian in the pseudoparticle basis [@Carmelo94].
From the existence of Fermi-surface quasiparticles both in the 1D and 3D limits, our results suggest their existence for quantum liquids in dimensions 1$<$D$<$3. However, the effect of increasing dimensionality on the electron – quasiparticle overlap remains an unsolved problem. The present 1D results do not provide information on whether that overlap can vanish for D$>$1 or whether it always becomes finite as soon as we leave 1D.
ACKNOWLEDGMENTS
===============
We thank N. M. R. Peres for many fruitful discussions and for reproducing and checking some of our calculations. We are grateful to F. Guinea and K. Maki for illuminating discussions. This research was supported in part by the Institute for Scientific Interchange Foundation under the EU Contract No. ERBCHRX - CT920020 and by the National Science Foundation under the Grant No. PHY89-04035.
APPENDIX A
==========

In this Appendix we present some quantities of the pseudoparticle picture which are useful for the present study. We start by defining the pseudo-Fermi points and limits of the pseudo-Brillouin zones. When $N_{\alpha }$ (see Eq. $(4)$) is odd (even) and the numbers $I_j^{\alpha }$ of Eq. $(3)$ are integers (half integers) the pseudo-Fermi points are symmetric and given by [@Carmelo94; @Carmelo94c]
$$q_{F\alpha }^{(+)}=-q_{F\alpha }^{(-)} =
{\pi\over {N_a}}[N_{\alpha}-1] \, .$$
On the other hand, when $N_{\alpha }$ is odd (even) and $I_j^{\alpha }$ are half integers (integers) we have that
$$q_{F\alpha }^{(+)} = {\pi\over {N_a}}N_{\alpha }
\, , \hspace{1cm}
-q_{F\alpha }^{(-)} ={\pi\over {N_a}}[N_{\alpha }-2] \, ,$$
or
$$q_{F\alpha }^{(+)} = {\pi\over {N_a}}[N_{\alpha }-2]
\, , \hspace{1cm}
-q_{F\alpha }^{(-)} = {\pi\over {N_a}}N_{\alpha } \, .$$
Similar expressions are obtained for the pseudo-Brillouin-zone limits $q_{\alpha }^{(\pm)}$ if we replace $N_{\alpha }$ in Eqs. (A1)-(A3) by the numbers $N_{\alpha }^*$ of Eq. $(4)$.
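The parity bookkeeping of Eqs. (A1)-(A3) is simple enough to encode directly. The following sketch (the helper name is ours) returns the pseudo-Fermi points for given $N_{\alpha}$, $N_a$, and the integer or half-integer character of the quantum numbers $I_j^{\alpha}$:

```python
import math

def pseudo_fermi_points(N_alpha, N_a, I_half_integer, shift_right=True):
    """Return (q_F^{(+)}, q_F^{(-)}) according to Eqs. (A1)-(A3).

    N_alpha odd with integer I_j, or N_alpha even with half-integer I_j,
    gives the symmetric points of Eq. (A1); otherwise the points are
    shifted by one spacing pi/N_a to the right (A2) or to the left (A3).
    """
    spacing = math.pi / N_a
    symmetric = (N_alpha % 2 == 1) != I_half_integer
    if symmetric:                                     # Eq. (A1)
        return spacing*(N_alpha - 1), -spacing*(N_alpha - 1)
    if shift_right:                                   # Eq. (A2)
        return spacing*N_alpha, -spacing*(N_alpha - 2)
    return spacing*(N_alpha - 2), -spacing*N_alpha    # Eq. (A3)
```

For example, odd $N_{\alpha}=5$ with integer quantum numbers gives the symmetric points $q_{F\alpha}^{(\pm)}=\pm 4\pi/N_a$ of Eq. (A1).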
The $f$ functions were studied in Ref. [@Carmelo92] and read
$$\begin{aligned}
f_{\alpha\alpha'}(q,q') & = & 2\pi v_{\alpha}(q)
\Phi_{\alpha\alpha'}(q,q')
+ 2\pi v_{\alpha'}(q') \Phi_{\alpha'\alpha}(q',q) \nonumber \\
& + & \sum_{j=\pm 1} \sum_{\alpha'' =c,s}
2\pi v_{\alpha''} \Phi_{\alpha''\alpha}(jq_{F\alpha''},q)
\Phi_{\alpha''\alpha'}(jq_{F\alpha''},q') \, ,\end{aligned}$$
where the pseudoparticle group velocities are given by
$$v_{\alpha}(q) = {d\epsilon_{\alpha}(q) \over {dq}} \, ,$$
and
$$v_{\alpha }=\pm
v_{\alpha }(q_{F\alpha}^{(\pm)}) \, ,$$
are the pseudo-Fermi-point group velocities. In expression (A4) $\Phi_{\alpha\alpha '}(q,q')$ measures the phase shift of the $\alpha '$ pseudoparticle of pseudomomentum $q'$ due to the forward-scattering collision with the $\alpha $ pseudoparticle of pseudomomentum $q$. These phase shifts determine the pseudoparticle interactions and are defined in Ref. [@Carmelo92]. They control the low-energy physics. For instance, the related parameters
$$\xi_{\alpha\alpha '}^j = \delta_{\alpha\alpha '}+
\Phi_{\alpha\alpha '}(q_{F\alpha}^{(+)},q_{F\alpha '}^{(+)})+
(-1)^j\Phi_{\alpha\alpha '}(q_{F\alpha}^{(+)},q_{F\alpha '}^{(-)})
\, , \hspace{2cm} j=0,1 \, ,$$
play a determining role at the critical point. ($\xi_{\alpha\alpha '}^1$ are the entries of the transpose of the dressed-charge matrix [@Frahm].) The values at the pseudo-Fermi points of the $f$ functions (A4) include the parameters (A7) and define the Landau parameters,
$$F_{\alpha\alpha'}^j =
{1\over {2\pi}}\sum_{\iota =\pm 1}(\iota )^j
f_{\alpha\alpha'}(q_{F\alpha}^{(\pm)},\iota
q_{F\alpha '}^{(\pm)}) \, , \hspace{1cm} j=0,1 \, .$$
These are also studied in Ref. [@Carmelo92]. The parameters $\delta_{\alpha ,\alpha'}v_{\alpha }+
F_{\alpha\alpha'}^j$ appear in the expressions of the low-energy quantities.
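As a small sketch (the function name is ours), Eq. (A8) amounts to taking the symmetric ($j=0$) and antisymmetric ($j=1$) combinations of the $f$ function evaluated at the pseudo-Fermi points, with the sign $\iota$ attached to the second argument:

```python
import math

def landau_parameter(f, qF, qFp, j):
    """F^j_{alpha alpha'} of Eq. (A8) for j in {0, 1}.

    f is a callable f(q, q') standing for the forward-scattering function
    f_{alpha alpha'} of Eq. (A4); qF and qFp are the (positive)
    pseudo-Fermi points of the two pseudoparticle branches.
    """
    return sum((iota**j) * f(qF, iota*qFp) for iota in (+1, -1)) / (2*math.pi)
```

For an antisymmetric toy function $f(q,q')=qq'$ one finds $F^0=0$ and $F^1=q_{F}q_{F}'/\pi$, showing how $j=0$ and $j=1$ project out the even and odd parts of the pseudo-Fermi-point scattering.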
We close this Appendix by introducing pseudoparticle-pseudohole operators which will appear in Sec. IV. Although the expressions of one-electron operators in the pseudoparticle basis remain an unsolved problem, in Ref. [@Carmelo94c] the electronic fluctuation operators
$${\hat{\rho}}_{\sigma }(k)=
\sum_{k'}c^{\dag }_{k'+k\sigma}c_{k'\sigma} \, ,$$
were expressed in terms of the pseudoparticle fluctuation operators
$${\hat{\rho}}_{\alpha }(k)=\sum_{q}b^{\dag }_{q+k\alpha}
b_{q\alpha} \, .$$
This study has revealed that $\iota =sgn (k)1=\pm 1$ electronic operators are made out of $\iota =sgn (q)1=\pm 1$ pseudoparticle operators only, $\iota $ defining the right ($\iota=1$) and left ($\iota=-1$) movers.
It is often convenient to measure the electronic momentum $k$ and the pseudomomentum $q$ from the $U=0$ Fermi points $k_{F\sigma}^{(\pm)}=\pm \pi n_{\sigma}$ and pseudo-Fermi points $q_{F\alpha}^{(\pm)}$, respectively. This adds the index $\iota$ to the electronic and pseudoparticle operators. The new momentum $\tilde{k}$ and pseudomomentum $\tilde{q}$ are such that
$$\tilde{k} =k-k_{F\sigma}^{(\pm)} \, , \hspace{2cm}
\tilde{q}=q-q_{F\alpha}^{(\pm)} \, ,$$
respectively, for $\iota=\pm 1$. For instance,
$${\hat{\rho}}_{\sigma ,\iota }(k)=\sum_{\tilde{k}}
c^{\dag }_{\tilde{k}+k\sigma\iota}c_{\tilde{k}\sigma\iota} \, ,
\hspace{2cm}
{\hat{\rho}}_{\alpha ,\iota }(k) =
\sum_{\tilde{q}}b^{\dag }_{\tilde{q}+k\alpha\iota}
b_{\tilde{q}\alpha\iota} \, .$$
APPENDIX B
==========

In this Appendix we evaluate the expression for the topological-momenton generator $(17)-(19)$. In order to derive the expression for $U_c^{+1}$ we consider the Fourier transform of the pseudoparticle operator $b^{\dag}_{q,c}$, which reads
$$\beta^{\dag}_{x,c} = \frac{1}{\sqrt{N_a}}
\sum_{q_c^{(-)}}^{q_c^{(+)}}
e^{-i q x} b^{\dag}_{q,c} \, .$$
From Eq. $(15)$ we arrive at
$$U_c^{+1} \beta^{\dag}_{x,c} U_c^{-1} = \frac{1}{\sqrt{N_a}}
\sum_{q_c^{(-)}}^{q_c^{(+)}}
e^{-i q x} b^{\dag}_{q-\frac{\pi}{N_a},c} \, .$$
By performing a $\frac{\pi}{N_a}$ pseudomomentum translation we find
$$U_c^{+1} \beta^{\dag}_{x,c} U_c^{-1} =
e^{i\frac{\pi}{N_a} x}\beta^{\dag}_{x,c} \, ,$$
and it follows that
$$\begin{aligned}
U_{c}^{\pm 1} = \exp\left\{\pm i\frac{\pi}{N_a}
\sum_{y}y\beta^{\dag}_{y,c}\beta_{y,c}
\right\} \, .\end{aligned}$$
By inverse-Fourier transforming expression (B4) we find expression $(17)-(19)$ for this unitary operator, which can be shown to also hold true for $U_{s}^{\pm 1}$.
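The mechanism behind (B1)-(B3) is the Fourier shift theorem: translating all pseudomomenta by one grid spacing multiplies the position-space amplitudes by a linear phase. A minimal single-particle check on a periodic $N$-site grid (with spacing $2\pi/N$ rather than the half-spacing $\pi/N_a$ of the text, and with the sign of the phase fixed by the transform convention used here):

```python
import numpy as np

# Fourier shift theorem: with beta_x = (1/N) * sum_q b_q e^{+iqx} (the
# numpy ifft convention), shifting b_q -> b_{q - dq} by one grid spacing
# dq = 2*pi/N multiplies beta_x by the linear phase e^{+i dq x}.
N = 64
rng = np.random.default_rng(0)
b_q = rng.normal(size=N) + 1j*rng.normal(size=N)   # amplitudes on the q grid
beta_x = np.fft.ifft(b_q)                          # position-space amplitudes

b_q_shifted = np.roll(b_q, 1)                      # b_q -> b_{q - dq}
beta_x_shifted = np.fft.ifft(b_q_shifted)

x = np.arange(N)
phase = np.exp(2j*np.pi*x/N)                       # e^{+i dq x}
print(np.allclose(beta_x_shifted, phase*beta_x))   # → True
```

The paper's half-spacing shift $\pi/N_a$ works the same way on the doubled grid, which is why $U_c^{\pm 1}$ is diagonal in position space with the phase of (B3).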
APPENDIX C
==========

In this Appendix we derive the expressions for the matrix elements $(44)$ and $(51)$.
At energy scales smaller than the gaps for the LWS’s II and non-LWS’s referred to in this paper and in Refs. [@Carmelo94; @Carmelo94b; @Carmelo94c], the expression of the $\sigma $ one-electron Green function $G_{\sigma} (k_{F\sigma},\omega)$ is fully defined in the two Hilbert subspaces spanned by the final ground state $|0;f\rangle $ and associated $k=0$ excited states $|\{N_{ph}^{\alpha ,\iota}\},l;k=0\rangle$ of form $(29)$ belonging to the $N_{\sigma }+1$ sector and by a corresponding set of states belonging to the $N_{\sigma }-1$ sector, respectively. Since $|0;f\rangle $ corresponds to zero values for all four numbers $(28)$, in this Appendix we use the notation $|0;f\rangle\equiv|\{N_{ph}^{\alpha ,\iota}=0\},l;k=0\rangle$. This allows a more compact notation for the state summations. The use of a Lehmann representation leads to
$$G_{\sigma} (k_{F\sigma},\omega) =
G_{\sigma}^{(N_{\sigma }+1)} (k_{F\sigma},\omega)
+ G_{\sigma}^{(N_{\sigma }-1)} (k_{F\sigma},\omega) \, ,$$
where
$$G_{\sigma}^{(N_{\sigma }+1)} (k_{F\sigma},\omega) =
\sum_{\{N_{ph}^{\alpha ,\iota}\},l}
{|\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0|
c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |^2\over
{\omega - \omega (\{N_{ph}^{\alpha ,\iota}\})
+ i\xi}} \, ,$$
has divergences for $\omega >0$ and $G_{\sigma}^{(N_{\sigma }-1)} (k_{F\sigma},\omega)$ has divergences for $\omega <0$. We emphasize that in the $\{N_{ph}^{\alpha ,\iota}\}$ summation of the rhs of Eq. (C2), $N_{ph}^{\alpha ,\iota}=0$ for all four numbers refers to the final ground state, as we mentioned above. Below we consider positive but vanishing values of $\omega $ and, therefore, we need only to consider the function (C2). We note that at the conformal-field critical point [@Frahm; @Frahm91] the states which contribute to (C2) are such that the ratio $N_{ph}^{\alpha
,\iota}/N_a$ vanishes in the thermodynamic limit, $N_a\rightarrow \infty$ [@Carmelo94b]. Therefore, in that limit the positive excitation energies $\omega (\{N_{ph}^{\alpha ,\iota}\})$ of Eq. (C2), which are of the form $(38)$, are vanishingly small. Replacing the full Green function by (C2) (by considering positive values of $\omega $ only) we find
$$\lim_{N_a\to\infty}\hbox{Re}G_{\sigma} (k_{F\sigma},\omega) =
\sum_{\{N_{ph}^{\alpha ,\iota}\}}\left[
{\sum_{l} |\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0|
c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |^2 \over {\omega }}
\right] \, .$$
We emphasize that considering the limit (C3) implies that all the corresponding expressions for the $\omega $ dependent quantities we obtain in the following are only valid in the limit of vanishing positive energy $\omega $. Although many of these quantities are zero in that limit, their $\omega $ dependence has physical meaning because different quantities vanish as $\omega\rightarrow 0$ in different ways, as we discuss in Sec. IV. Therefore, our results allow the classification of the relative importance of the different quantities.
In order to solve the present problem we have to combine a suitable generator pseudoparticle analysis [@Carmelo94b] with conformal-field theory [@Frahm; @Frahm91]. Let us derive an alternative expression for the Green function (C3). Comparison of both expressions leads to relevant information. This confirms the importance of the pseudoparticle operator basis [@Carmelo94; @Carmelo94b; @Carmelo94c] which allows an operator description of the conformal-field results for BA solvable many-electron problems [@Frahm; @Frahm91].
The asymptotic expression of the Green function in $x$ and $t$ space is given by the summation of many terms of form $(3.13)$ of Ref. [@Frahm] with dimensions of the fields suitable to that function. For small energy the Green function in $k$ and $\omega $ space is obtained by the summation of the Fourier transforms of these terms, which are of the form given by Eq. $(5.2)$ of Ref. [@Frahm91]. However, the results of Refs. [@Frahm; @Frahm91] do not provide the expression at $k=k_{F\sigma }$ and small positive $\omega $. In this case the above summation is equivalent to a summation over the final ground state and excited states of form $(29)$ obeying Eqs. $(31)$, $(32)$, and $(34)$, which correspond to different values for the dimensions of the fields.
We emphasize that expression $(5.7)$ of Ref. [@Frahm91] is not valid in our case. Let us use the notation $k_0=k_{F\sigma }$ (as in Eqs. $(5.6)$ and $(5.7)$ of Ref. [@Frahm91]). While we consider $(k-k_0)=(k-k_{F\sigma })=0$ expression $(5.7)$ of Ref. [@Frahm91] is only valid when $(k-k_0)=(k-k_{F\sigma })$ is small but finite. We have solved the following general integral
$$\tilde{g}(k_0,\omega ) = \int_{0}^{\infty}dt
e^{i\omega t}F(t) \, ,$$
where
$$F(t) = \int_{-\infty}^{\infty}dx\prod_{\alpha ,\iota}
{1\over {(x+\iota v_{\alpha }t)^{2\Delta_{\alpha
}^{\iota}}}} \, ,$$
with the result
$$\tilde{g}(k_0,\omega )\propto
\omega^{[\sum_{\alpha ,\iota}
2\Delta_{\alpha }^{\iota}-2]} \, .$$
Comparing our expression (C6) with expression $(5.7)$ of Ref. [@Frahm91] we confirm that these expressions are different.
In the present case of the final ground state and excited states of form $(29)$ obeying Eqs. $(31)$, $(32)$, and $(34)$ we find that the dimensions of the fields are such that
$$\sum_{\alpha ,\iota} 2\Delta_{\alpha }^{\iota}=
2+\varsigma_{\sigma}+2\sum_{\alpha ,\iota}
N_{ph}^{\alpha ,\iota} \, ,$$
with $\varsigma_{\sigma}$ being the exponents $(47)$ and $(48)$. Therefore, equation (C6) can be rewritten as
$$\tilde{g}(k_0,\omega )\propto
\omega^{\varsigma_{\sigma}+2\sum_{\alpha ,\iota}
N_{ph}^{\alpha ,\iota}} \, .$$
Summing the terms of form (C8) corresponding to different states leads to an alternative expression for the function (C3) with the result
$$\lim_{N_a\to\infty} \hbox{Re}G_{\sigma} (k_{F\sigma},\omega) =
\sum_{\{N_{ph}^{\alpha ,\iota}\}}\left[
{a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\})
\omega^{\varsigma_{\sigma } + 1
+ 2\sum_{\alpha ,\iota} N_{ph}^{\alpha ,\iota} }
\over {\omega }}
\right] \, ,$$
or from Eq. $(34)$,
$$\lim_{N_a\to\infty} \hbox{Re}G_{\sigma} (k_{F\sigma},\omega) =
\sum_{j=0,1,2,...}\left[
{a^{\sigma }_j \omega^{\varsigma_{\sigma } + 1 + 4j}
\over {\omega }}\right] \, ,$$
where $a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\})$ and $a^{\sigma }_j$ are complex constants. From equation (C10) we find
$$\hbox{Re}\Sigma_{\sigma} ( k_{F\sigma},\omega ) =
\omega - {1\over {\hbox{Re} G_{\sigma} (k_{F\sigma },\omega )}}
= \omega [1-{\omega^{-1-\varsigma_{\sigma}}\over
{a^{\sigma }_0+\sum_{j=1}^{\infty}a^{\sigma }_j\omega^{4j}}}]
\, .$$
While the function $\hbox{Re}G_{\sigma} (k_{F\sigma},\omega)$ (C9)-(C10) diverges as $\omega\rightarrow 0$, it follows from the form of the self-energy (C11) that the one-electron renormalization factor $(45)$ vanishes and there is no overlap between the quasiparticle and the electron, in contrast to a Fermi liquid. (In equation (C11) $\varsigma_{\sigma}\rightarrow -1$ and $a^{\sigma }_0\rightarrow 1$ when $U\rightarrow 0$.)
Comparison of the terms of expressions (C3) and (C9) with the same $\{N_{ph}^{\alpha ,\iota}\}$ values, which refer to contributions from the same set of $N_{\{N_{ph}^{\alpha ,\iota}\}}$ Hamiltonian eigenstates $|\{N_{ph}^{\alpha ,\iota}\},l;k=0\rangle$ in the limit $\omega\rightarrow 0$, leads to
$$\sum_{l} |\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0|
c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |^2 =
\lim_{\omega\to 0} a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\})\,
\omega^{\varsigma_{\sigma } + 1
+ 2\sum_{\alpha ,\iota} N_{ph}^{\alpha ,\iota}} = 0\, .$$
Note that the functions of the rhs of Eq. (C12) corresponding to different matrix elements go to zero with different exponents.
On the other hand, as for the corresponding excitation energies $(38)$, the dependence of functions associated with the amplitudes $|\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0|
c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |$ on the vanishing energy $\omega $ is $l$ independent. Therefore, we find
$$|\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0|
c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |^2 =
\lim_{\omega\to 0}
a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)\,
\omega^{\varsigma_{\sigma } + 1
+ 2\sum_{\alpha ,\iota} N_{ph}^{\alpha ,\iota}} = 0\, ,$$
where the constants $a^{\sigma }(\{N_{ph}^{\alpha
,\iota}\},l)$ are $l$ dependent and obey the normalization condition
$$a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\}) =
\sum_{l} a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l) \, .$$
It follows that the matrix elements of Eq. (C12) have the form given in Eq. $(51)$.
Moreover, following our notation for the final ground state when the four $N_{ph}^{\alpha ,\iota}$ vanish Eq. (C13) leads to
$$|\langle 0;f|c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |^2 =
\lim_{\omega\to 0}
a^{\sigma }_0\,\omega^{\varsigma_{\sigma } + 1} =
\lim_{\omega\to 0} Z_{\sigma}(\omega) = Z_{\sigma} = 0 \, ,$$
where $a^{\sigma }_0=a^{\sigma }(\{N_{ph}^{\alpha ,\iota}=0\},l)$ is a positive real constant and $Z_{\sigma}(\omega)$ is the function $(49)$. Following equation (C11) the function $Z_{\sigma}(\omega)$ is given by the leading-order term of expression $(46)$. Since $a^{\sigma }_0$ is real and positive expression $(44)$ follows from Eq. (C15).
APPENDIX D
==========

In this Appendix we confirm that the finite two-quasiparticle functions $(60)$ of form $(62)$ which are generated from the divergent two-electron vertex functions $(59)$ by the singular electron - quasiparticle transformation $(58)$ control the charge and spin static quantities of the 1D many-electron problem.
On the one hand, the parameters $v_{\rho}^{\iota}$ and $v_{\sigma_z}^{\iota}$ of Eq. $(59)$ can be shown to be fully determined by the two-quasiparticle functions $(60)$. By inverting relations $(60)$ with the vertices given by Eq. $(59)$ expressions $(61)$ follow. Physically, the singular electron - quasiparticle transformation $(58)$ maps the divergent two-electron functions onto the finite parameters $(60)$ and $(61)$.
On the other hand, the “velocities” $(61)$ play a relevant role in the charge and spin conservation laws and are simple combinations of the zero-momentum two-pseudoparticle forward-scattering $f$ functions and amplitudes introduced in Refs. [@Carmelo92] and [@Carmelo92b], respectively. Here we follow Ref. [@Carmelo94c] and use the general parameter $\vartheta $ which refers to $\vartheta =\rho$ for charge and $\vartheta =\sigma_z$ for spin. The interesting quantity associated with the equation of motion for the operator ${\hat{\rho}}_{\vartheta }^{(\pm)}(k,t)$ defined in Ref. [@Carmelo94c] is the following ratio
$${i\partial_t {\hat{\rho}}_{\vartheta }^{(\pm)}(k,t)\over
k}|_{k=0} = {[{\hat{\rho}}_{\vartheta }^{(\pm)}(k,t),
:\hat{{\cal H}}:]\over k}|_{k=0} = v_{\vartheta}^{\mp 1}
{\hat{\rho}}_{\vartheta }^{(\mp)}(0,t) \, ,$$
where the functions $v_{\vartheta}^{\pm 1}$ $(61)$ are closely related to two-pseudoparticle forward-scattering quantities as follows
$$\begin{aligned}
v_{\vartheta}^{+1} & = & {1\over
{\left[\sum_{\alpha ,\alpha'}{k_{\vartheta\alpha}k_{\vartheta\alpha'}
\over {v_{\alpha}v_{\alpha '}}}
\left(v_{\alpha}\delta_{\alpha ,\alpha '} - {[A_{\alpha\alpha '}^{1}+
A_{\alpha\alpha '}^{-1}]
\over {2\pi}}\right)\right]}}\nonumber \\
& = & {1\over {\left[\sum_{\alpha}{1\over {v_{\alpha}}}
\left(\sum_{\alpha'}k_{\vartheta\alpha '}\xi_{\alpha\alpha
'}^1\right)^2\right]}} \, ,\end{aligned}$$
and
$$\begin{aligned}
v_{\vartheta}^{-1} & = &
\sum_{\alpha ,\alpha'}k_{\vartheta\alpha}k_{\vartheta\alpha'}
\left(v_{\alpha}\delta_{\alpha ,\alpha '} + {[f_{\alpha\alpha '}^{1}-
f_{\alpha\alpha '}^{-1}]\over {2\pi}}\right)\nonumber \\
& = & \sum_{\alpha}v_{\alpha}
\left(\sum_{\alpha'}k_{\vartheta\alpha '}
\xi_{\alpha\alpha '}^1\right)^2 \, .\end{aligned}$$
Here $k_{\vartheta\alpha}$ are integers given by $k_{\rho
c}=k_{\sigma_{z} c}=1$, $k_{\rho s}=0$, and $k_{\sigma_{z} s}=-2$, and the parameters $\xi_{\alpha\alpha '}^j$ are defined in Eq. (A7). In the rhs of Eqs. (D2) and (D3) $v_{\alpha }$ are the $\alpha $ pseudoparticle group velocities (A6), the $f$ functions are given in Eq. (A4) and $A_{\alpha\alpha'}^{1}=A_{\alpha\alpha'
}(q_{F\alpha}^{(\pm)}, q_{F\alpha'}^{(\pm)})$ and $A_{\alpha\alpha'}^{-1}= A_{\alpha\alpha'}(q_{F\alpha}^{(\pm)},
q_{F\alpha'}^{(\mp)})$, where $A_{\alpha\alpha'}(q,q')$ are the scattering amplitudes given by Eqs. $(83)-(85)$ of Ref. [@Carmelo92b].
The use of relations $(61)$ and of Eqs. (A5), (A6), (A8), (D2), and (D3) shows that the parameters $(60)$ and corresponding charge and spin velocities $v_{\vartheta}^{\pm 1}$ can also be expressed in terms of the pseudoparticle group velocities (A6) and Landau parameters (A8). These expressions are given in Eq. $(62)$ and in the Table.
The charge and spin velocities control all static quantities of the many-electron system. They determine, for example, the charge and spin susceptibilities,
$$K^{\vartheta }={1\over {\pi v_{\vartheta}^{+1}}} \, ,$$
and the coherent part of the charge and spin conductivity spectrum, $v_{\vartheta}^{-1}\delta (\omega )$, respectively [@Carmelo92; @Carmelo92b; @Carmelo94c].
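To make the bookkeeping of Eqs. (D2)-(D3) concrete, the following sketch evaluates their second ($\xi$-matrix) forms from the pseudo-Fermi group velocities $v_{\alpha}$ and the parameters $\xi^1_{\alpha\alpha'}$ of Eq. (A7). The function name is ours, and realistic inputs would come from the BA solution:

```python
# Charge/spin velocities v_theta^{+-1} of Eqs. (D2)-(D3), second forms,
# with the integer coefficients k_{theta alpha} given below Eq. (D3).
K_COEF = {"rho": {"c": 1, "s": 0}, "sigma_z": {"c": 1, "s": -2}}

def velocities(theta, v, xi):
    """Return (v_theta^{+1}, v_theta^{-1}) for theta in {"rho", "sigma_z"}.

    v:  dict alpha -> pseudo-Fermi group velocity v_alpha, Eq. (A6)
    xi: dict (alpha, alpha') -> xi^1_{alpha alpha'}, Eq. (A7)
    """
    alphas = ("c", "s")
    # sums[a] = sum_{a'} k_{theta a'} * xi^1_{a a'}
    sums = {a: sum(K_COEF[theta][ap]*xi[(a, ap)] for ap in alphas)
            for a in alphas}
    v_plus = 1.0 / sum(sums[a]**2 / v[a] for a in alphas)   # Eq. (D2)
    v_minus = sum(v[a]*sums[a]**2 for a in alphas)          # Eq. (D3)
    return v_plus, v_minus
```

A quick consistency check of the formulas: for a diagonal $\xi^1_{\alpha\alpha'}=\delta_{\alpha\alpha'}$ both charge velocities collapse to $v_c$, and the charge susceptibility of Eq. (D4) is then $K^{\rho}=1/(\pi v_c)$.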
D. Pines and P. Nozières, in [*The Theory of Quantum Liquids*]{} (Addison-Wesley, Redwood City, 1989), Vol. I.
Gordon Baym and Christopher J. Pethick, in [*Landau Fermi-Liquid Theory: Concepts and Applications*]{} (John Wiley & Sons, New York, 1991).
J. Sólyom, Adv. Phys. [**28**]{}, 201 (1979).
F. D. M. Haldane, J. Phys. C [**14**]{}, 2585 (1981).
I. E. Dzyaloshinskii and A. I. Larkin, Sov. Phys. JETP [**38**]{}, 202 (1974); Walter Metzner and Carlo Di Castro, Phys. Rev. B [**47**]{}, 16 107 (1993).
This ansatz was introduced for the case of the isotropic Heisenberg chain by H. A. Bethe, Z. Phys. [**71**]{}, 205 (1931). For one of the first generalizations of the Bethe ansatz to multicomponent systems see C. N. Yang, Phys. Rev. Lett. [**19**]{}, 1312 (1967).
Elliott H. Lieb and F. Y. Wu, Phys. Rev. Lett. [**20**]{}, 1445 (1968); For a modern and comprehensive discussion of these issues, see V. E. Korepin, N. M. Bogoliubov, and A. G. Izergin, [*Quantum Inverse Scattering Method and Correlation Functions*]{} (Cambridge University Press, 1993).
P. W. Anderson, Phys. Rev. Lett. [**64**]{}, 1839 (1990); Philip W. Anderson, Phys. Rev. Lett. [**65**]{}, 2306 (1990); P. W. Anderson and Y. Ren, in [*High Temperature Superconductivity*]{}, edited by K. S. Bedell, D. E. Meltzer, D. Pines, and J. R. Schrieffer (Addison-Wesley, Reading, MA, 1990).
J. M. P. Carmelo and N. M. R. Peres, Nucl. Phys. B [**458**]{} \[FS\], 579 (1996).
J. Carmelo and A. A. Ovchinnikov, Cargèse lectures, unpublished (1990); J. Phys.: Condens. Matter [**3**]{}, 757 (1991).
F. D. M. Haldane, Phys. Rev. Lett. [**66**]{}, 1529 (1991); E. R. Mucciolo, B. Shastry, B. D. Simons, and B. L. Altshuler, Phys. Rev. B [**49**]{}, 15 197 (1994).
J. Carmelo, P. Horsch, P.-A. Bares, and A. A. Ovchinnikov, Phys. Rev. B [**44**]{}, 9967 (1991).
J. M. P. Carmelo, P. Horsch, and A. A. Ovchinnikov, Phys. Rev. B [**45**]{}, 7899 (1992).
J. M. P. Carmelo and P. Horsch, Phys. Rev. Lett. [**68**]{}, 871 (1992); J. M. P. Carmelo, P. Horsch, and A. A. Ovchinnikov, Phys. Rev. B [**46**]{}, 14728 (1992).
J. M. P. Carmelo, P. Horsch, D. K. Campbell, and A. H. Castro Neto, Phys. Rev. B [**48**]{}, 4200 (1993).
J. M. P. Carmelo and A. H. Castro Neto, Phys. Rev. Lett. [**70**]{}, 1904 (1993); J. M. P. Carmelo, A. H. Castro Neto, and D. K. Campbell, Phys. Rev. B [**50**]{}, 3667 (1994).
J. M. P. Carmelo, A. H. Castro Neto, and D. K. Campbell, Phys. Rev. B [**50**]{}, 3683 (1994).
J. M. P. Carmelo, A. H. Castro Neto, and D. K. Campbell, Phys. Rev. Lett. [**73**]{}, 926 (1994); (E) [*ibid.*]{} [**74**]{}, 3089 (1995).
J. M. P. Carmelo and N. M. R. Peres, Phys. Rev. B [**51**]{}, 7481 (1995).
Holger Frahm and V. E. Korepin, Phys. Rev. B [**42**]{}, 10 553 (1990).
Holger Frahm and V. E. Korepin, Phys. Rev. B [**43**]{}, 5653 (1991).
Fabian H. L. Essler, Vladimir E. Korepin, and Kareljan Schoutens, Phys. Rev. Lett. [**67**]{}, 3848 (1991); Nucl. Phys. B [**372**]{}, 559 (1992).
Fabian H. L. Essler and Vladimir E. Korepin, Phys. Rev. Lett. [**72**]{}, 908 (1994).
A. H. Castro Neto, H. Q. Lin, Y.-H. Chen, and J. M. P. Carmelo, Phys. Rev. B (1994).
Philippe Nozières, in [*The theory of interacting Fermi systems*]{} (W. A. Benjamin, NY, 1964), page 100.
Walter Metzner and Claudio Castellani, preprint (1994).
R. Preuss, A. Muramatsu, W. von der Linden, P. Dierterich, F. F. Assaad, and W. Hanke, Phys. Rev. Lett. [**73**]{}, 732 (1994).
Karlo Penc, Frédéric Mila, and Hiroyuki Shiba, Phys. Rev. Lett. [**75**]{}, 894 (1995).
H. J. Schulz, Phys. Rev. Lett. [**64**]{}, 2831 (1990).
Masao Ogata, Tadao Sugiyama, and Hiroyuki Shiba, Phys. Rev. B [**43**]{}, 8401 (1991).
V. Medem and K. Schönhammer, Phys. Rev. B [**46**]{}, 15 753 (1992).
J. Voit, Phys. Rev. B [**47**]{}, 6740 (1993).
TABLE
                    $v^{\iota}_{\rho }$            $v^{\iota}_{\sigma_z }$
$\iota = -1$    $v_c + F^1_{cc}$            $v_c + F^1_{cc} + 4(v_s + F^1_{ss} - F^1_{cs})$
$\iota = +1$    $(v_s + F^0_{ss})/L^0$     $(v_s + F^0_{ss} + 4[v_c + F^0_{cc} + F^0_{cs}])/L^0$
\[tableI\]
Table I - Alternative expressions of the parameters $v^{\iota}_{\rho }$ and $v^{\iota}_{\sigma_z }$ of Eqs. (D2)-(D3) in terms of the pseudoparticle velocities $v_{\alpha}$ (A6) and Landau parameters $F^j_{\alpha\alpha '}$ (A8), where $L^0=(v_c+F^0_{cc})
(v_s+F^0_{ss})-(F^0_{cs})^2$.
|
{
"pile_set_name": "arxiv"
}
|
Stuck with You (Zones song)
"Stuck With You" is the debut release of punk band Zones, a 7" single issued by Zoom Records on February 17, 1978.
The A-side, "Stuck With You", was backed with "No Angels"; both songs combined punk rock and power pop, although they were more punk-oriented than the group's subsequent singles and album, which leaned towards new wave.
The single received heavy airplay from DJ John Peel, who shortly afterwards recorded and broadcast sessions with the band, and it attracted the attention of Arista Records, who signed the group.
The band comprised vocalist and guitarist Willie Gardner (previously in Hot Valves) and ex-PVC2 members bassist Russell Webb, keyboardist Billy McIsaac and drummer Kenny Hyslop. Their next single, "Sign of the Times", was released shortly afterwards on Arista Records.
Track list
Side A: "Stuck With You"
Side B: "No Angels"
Personnel
Willie Gardner: lead vocals, lead guitar.
Russell Webb: bass guitar.
Billy McIsaac: keyboards.
Kenny Hyslop: drums.
References
Category:1978 singles
Category:Zones (band) songs
Category:Debut singles
Category:1978 songs
|
{
"pile_set_name": "wikipedia_en"
}
|
abcdef abc def hij
klm nop qrs
abcdef abc def hij
tuv wxy z
|
{
"pile_set_name": "github"
}
|
---
abstract: 'We analytically derive the upper bound on the overall efficiency of single-photon generation based on cavity quantum electrodynamics (QED), where cavity internal loss is treated explicitly. The internal loss leads to a tradeoff relation between the internal generation efficiency and the escape efficiency, which results in a fundamental limit on the overall efficiency. The corresponding lower bound on the failure probability is expressed only with an “internal cooperativity," introduced here as the cooperativity parameter with respect to the cavity internal loss rate. The lower bound is obtained by optimizing the cavity external loss rate, which can be experimentally controlled by designing or tuning the transmissivity of the output coupler. The model used here is general enough to treat various cavity-QED effects, such as the Purcell effect, on-resonant or off-resonant cavity-enhanced Raman scattering, and vacuum-stimulated Raman adiabatic passage. A repumping process, where the atom is reused after its decay to the initial ground state, is also discussed.'
author:
- 'Hayato Goto,$^1$ Shota Mizukami,$^2$ Yuuki Tokunaga,$^3$ and Takao Aoki$^2$'
title: 'Fundamental Limit on the Efficiency of Single-Photon Generation Based on Cavity Quantum Electrodynamics'
---
*Introduction*. Single-photon sources are a key component for photonic quantum information processing and quantum networking [@Kimble2008a]. Single-photon sources based on cavity quantum electrodynamics (QED) [@Eisaman2011a; @Rempe2015a; @Kuhn2010a; @Law1997a; @Vasilev2010a; @Maurer2004a; @Barros2009a; @Kuhn1999a; @Duan2003a] are particularly promising, because they enable deterministic emission into a single mode, which is desirable for low-loss and scalable implementations. Many single-photon generation schemes have been proposed and studied using various cavity-QED effects, such as the Purcell effect [@Eisaman2011a; @Rempe2015a; @Kuhn2010a], on-resonant [@Kuhn2010a; @Law1997a; @Vasilev2010a] or off-resonant [@Maurer2004a; @Barros2009a] cavity-enhanced Raman scattering, and vacuum-stimulated Raman adiabatic passage (vSTIRAP) [@Eisaman2011a; @Rempe2015a; @Kuhn2010a; @Vasilev2010a; @Kuhn1999a; @Duan2003a; @Maurer2004a].
The overall efficiency of single-photon generation based on cavity QED is composed of two factors: the internal generation efficiency $\eta_{\mathrm{in}}$ (probability that a photon is generated inside the cavity) and the escape efficiency $\eta_{\mathrm{esc}}$ (probability that a generated photon is extracted to a desired external mode). The upper bounds on $\eta_{\mathrm{in}}$, based on the cooperativity parameter $C$ [@Rempe2015a], have been derived for some of the above schemes [@Rempe2015a; @Kuhn2010a; @Law1997a; @Vasilev2010a]. $C$ is inversely proportional to the total cavity loss rate, $\kappa=\kappa_{\mathrm{ex}}+\kappa_{\mathrm{in}}$, where $\kappa_{\mathrm{ex}}$ and $\kappa_{\mathrm{in}}$ are the external and internal loss rates, respectively [@comment-loss]. Note that $\kappa_{\mathrm{ex}}$ can be experimentally controlled by designing or tuning the transmissivity of the output coupler. Thus, $\eta_{\mathrm{in}}$ is maximized by setting $\kappa_{\mathrm{ex}}$ to a small value so that $\kappa \approx \kappa_{\mathrm{in}}$. However, a low $\kappa_{\mathrm{ex}}$ results in a low escape efficiency $\eta_{\mathrm{esc}}=\kappa_{\mathrm{ex}}/\kappa$, which limits the channelling of the generated photons into the desired mode. There is therefore a *tradeoff* relation between $\eta_{\mathrm{in}}$ and $\eta_{\mathrm{esc}}$ with respect to $\kappa_{\mathrm{ex}}$, and $\kappa_{\mathrm{ex}}$ should be optimized to maximize the overall efficiency. This tradeoff relation has not been examined in previous studies, where the internal loss rate $\kappa_{\mathrm{in}}$ has not been treated explicitly. Additionally, previous studies on the photon-generation efficiency have not taken account of a repumping process, where the atom decays to the initial ground state via spontaneous emission and is “reused" for cavity-photon generation [@Barros2009a].
In this paper, we analytically derive the upper bound on the overall efficiency of single-photon generation based on cavity QED, by taking into account both the cavity internal loss and the repumping process. We use the model shown in Fig. \[fig-system\], which is able to describe most of the previously proposed generation schemes, with or without the repumping process, in a unified and generalized manner.
In particular, we show that the lower bound on the failure probability for single-photon generation, $P_F$, for the case of no repumping, is given by [@comment-on-Goto; @Goto2008a; @Goto2010a] $$\begin{aligned}
P_F
\ge
\frac{2}{\displaystyle 1+\sqrt{1+2C_{\mathrm{in}}}}
\approx \sqrt{\frac{2}{C_{\mathrm{in}}}},
\label{eq-PF}\end{aligned}$$ where we have introduced the “internal cooperativity," $$\begin{aligned}
C_{\mathrm{in}}=
\frac{g^2}{2\kappa_{\mathrm{in}} \gamma },
\label{eq-Cin}\end{aligned}$$ as the cooperativity parameter with respect to $\kappa_{\mathrm{in}}$ instead of $\kappa$ for the standard definition, $C=g^2/(2\kappa \gamma )$ [@Rempe2015a]. The approximation in Eq. (\[eq-PF\]) holds when $C_{\mathrm{in}} \gg 1$.
The lower bound on $P_F$ in Eq. (\[eq-PF\]) is obtained when $\kappa_{\mathrm{ex}}$ is set to its optimal value, $$\begin{aligned}
\kappa_{\mathrm{ex}}^{\mathrm{opt}}
\equiv \kappa_{\mathrm{in}} \sqrt{1+2C_{\mathrm{in}}},
\label{eq-optimal-kex}\end{aligned}$$ and is simply expressed as $2\kappa_{\mathrm{in}}/\kappa^{\mathrm{opt}}$, where $\kappa^{\mathrm{opt}}
\equiv
\kappa_{\mathrm{in}}
+
\kappa_{\mathrm{ex}}^{\mathrm{opt}}$ [@comment-kex]. Note that the experimental values of $(g,\gamma,\kappa_{\mathrm{in}})$ determine which regime the system should be in: the Purcell regime ($\kappa \gg g \gg \gamma$), the strong-coupling regime ( $g \gg (\kappa, \gamma)$), or the intermediate regime ($\kappa \approx g \gg \gamma$).
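As a quick numerical check, Eqs. (\[eq-PF\])-(\[eq-optimal-kex\]) can be evaluated directly. The sketch below is illustrative only (the parameter values are arbitrary, not taken from any experiment); it also confirms that the lower bound equals $2\kappa_{\mathrm{in}}/\kappa^{\mathrm{opt}}$:

```python
import math

def single_photon_limit(g, gamma, kappa_in):
    """Internal cooperativity, optimal external loss rate, and the
    corresponding lower bound on the failure probability (r_u = 0)."""
    C_in = g**2 / (2.0 * kappa_in * gamma)                  # Eq. (eq-Cin)
    kappa_ex_opt = kappa_in * math.sqrt(1.0 + 2.0 * C_in)   # Eq. (eq-optimal-kex)
    P_F_min = 2.0 / (1.0 + math.sqrt(1.0 + 2.0 * C_in))     # Eq. (eq-PF)
    return C_in, kappa_ex_opt, P_F_min

# Illustrative parameters (arbitrary units, not from a specific experiment):
g, gamma, kappa_in = 10.0, 3.0, 0.1
C_in, kappa_ex_opt, P_F_min = single_photon_limit(g, gamma, kappa_in)
# The bound is simply 2*kappa_in/kappa_opt, kappa_opt = kappa_in + kappa_ex_opt.
```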
The remainder of this paper is organized as follows. First, we show that the present model is applicable to various cavity-QED single-photon generation schemes. Next, we provide the basic equations for the present analysis. Using these equations, we analytically derive an upper bound on the success probability, $P_S=1-P_F$, of single-photon generation. From here, we optimize $\kappa_{\mathrm{ex}}$ and derive Ineq. (\[eq-PF\]). We then briefly discuss the condition for typical optical cavity-QED systems. Finally, the conclusion and outlook are presented.
*Model*. As shown in Fig. \[fig-system\], we consider a cavity QED system with a $\Lambda$-type three-level atom in a one-sided cavity. The atom is initially prepared in $|u\rangle$. The $|u\rangle$-$|e\rangle$ transition is driven with an external classical field, while the $|g\rangle$-$|e\rangle$ transition is coupled to the cavity. This system is general enough to describe most of the cavity QED single-photon generation schemes.
For instance, by first exciting the atom to $|e\rangle$ with a resonant $\pi$ pulse (with time-dependent $\Omega$), or fast adiabatic passage (with time-dependent $\Delta_u$), the atom is able to decay to $|g\rangle$ with a decay rate enhanced by the Purcell effect [@Purcell1946a], generating a single photon. Here, the Purcell regime is assumed [@Rempe2015a; @Kuhn2010a; @Eisaman2011a].
Another example is where the atom is weakly excited with small $\Omega$ and a cavity photon is generated by cavity-enhanced Raman scattering. Here, $\kappa \gg g$ is assumed in the on-resonant case ($\Delta_e=\Delta_u=0$) [@Kuhn2010a; @Law1997a; @Vasilev2010a], while $\Delta_e \gg g$ is assumed in the off-resonant case ($\Delta_u=0$) [@Barros2009a; @Maurer2004a].
A third example is based on vSTIRAP [@Rempe2015a; @Eisaman2011a; @Kuhn2010a; @Vasilev2010a; @Kuhn1999a; @Duan2003a; @Maurer2004a], where $\Omega$ is gradually increased, and where the strong-coupling regime \[$g \gg (\kappa, \gamma)$\] and small detunings ($|\Delta_e|, |\Delta_u| \ll g$) are assumed.
*Basic equations*. The starting point of our study is the following master equation describing the cavity-QED system: $$\begin{aligned}
\dot{\rho}
=&
\mathcal{L} \rho, ~
\mathcal{L}
=\mathcal{L}_{\mathcal{H}} + \mathcal{J}_u + \mathcal{J}_g + \mathcal{J}_o
+ \mathcal{J}_{\mathrm{ex}} + \mathcal{J}_{\mathrm{in}},
\label{eq-master}
\\
\mathcal{L}_{\mathcal{H}} \rho
=&
-\frac{i}{\hbar} \left( \mathcal{H} \rho - \rho \mathcal{H}^{\dagger} \right),~
\mathcal{H}=H -i\hbar \left( \gamma \sigma_{e,e} + \kappa a^{\dagger} a \right),
\nonumber
\\
H
=&
\hbar \Delta_e \sigma_{e,e} + \hbar \Delta_u \sigma_{u,u}
\nonumber
\\
&+
i\hbar \Omega (\sigma_{e,u} - \sigma_{u,e} )
+
i\hbar g (a \sigma_{e,g} - a^{\dagger} \sigma_{g,e} ),
\label{eq-Hamiltonian}
\\
\mathcal{J}_{u} \rho
=&
2 \gamma r_u \sigma_{u,e} \rho \sigma_{e,u},~
\mathcal{J}_{g} \rho
=
2 \gamma r_g \sigma_{g,e} \rho \sigma_{e,g},
\nonumber
\\
\mathcal{J}_{o} \rho
=&
2 \gamma r_o \sigma_{o,e} \rho \sigma_{e,o},~
\mathcal{J}_{\mathrm{ex}} \rho
=
2 \kappa_{\mathrm{ex}} a \rho a^{\dagger},~
\mathcal{J}_{\mathrm{in}} \rho
=
2 \kappa_{\mathrm{in}} a \rho a^{\dagger},
\nonumber\end{aligned}$$ where $\rho$ is the density operator describing the state of the system; the dot denotes differentiation with respect to time; $H$ is the Hamiltonian for the cavity-QED system; $a$ and $a^{\dagger}$ are respectively the annihilation and creation operators for cavity photons; $|o\rangle$ is, if it exists, a ground state other than $|u\rangle$ and $|g\rangle$; $r_u$, $r_g$, and ${r_o=1-r_u-r_g}$ are respectively the branching ratios for spontaneous emission from $|e\rangle$ to $|u\rangle$, $|g\rangle$, and $|o\rangle$; and $\sigma_{j,l}=|j\rangle \langle l|$ ($j,l=u, g, e, o$) are atomic operators. In the present work, we assume no pure dephasing [@comment-dephasing].
The transitions corresponding to the terms in Eqs. (\[eq-master\]) and (\[eq-Hamiltonian\]) are depicted in Fig. \[fig-transition\], where the second ket vectors denote cavity photon number states. Once the state of the system becomes $|g\rangle |0\rangle$ or $|o\rangle |0\rangle$ by quantum jumps, the time evolution stops. Among the quantum jumps, $\mathcal{J}_{\mathrm{ex}}$ corresponds to the success case where a cavity photon is emitted into the external mode, and the others result in failure of emission. Taking this fact into account, we obtain the following formal solution of the master equation [@Carmichael]: $$\begin{aligned}
\rho_c (t)
=&
\mathcal{V}_{\mathcal{H}}(t,0) \rho_0
+
\int_0^t \! dt'
\mathcal{J}_{\mathrm{ex}}
\mathcal{V}_{\mathcal{H}} (t',0) \rho_0
\nonumber
\\
&+
\int_0^t \! dt'
\mathcal{V}_c (t,t')
\mathcal{J}_u
\mathcal{V}_{\mathcal{H}} (t',0) \rho_0,
\label{eq-rho-c}\end{aligned}$$ where $\rho_c$ denotes the density operator conditioned on no quantum jumps of $\mathcal{J}_g$, $\mathcal{J}_o$, and $\mathcal{J}_{\mathrm{in}}$, $\rho_0 = |u\rangle |0\rangle \langle u| \langle 0|$ is the initial density operator, and $\mathcal{V}_{\mathcal{H}}$ and $\mathcal{V}_c$ are the quantum dynamical semigroups defined as follows: $$\begin{aligned}
\frac{d}{dt} \mathcal{V}_{\mathcal{H}}(t,t') =
\mathcal{L}_{\mathcal{H}}(t) \mathcal{V}_{\mathcal{H}}(t,t'),~
\frac{d}{dt} \mathcal{V}_c(t,t') = \mathcal{L}_c(t) \mathcal{V}_c(t,t'),
\nonumber\end{aligned}$$ where $\mathcal{L}_c
=\mathcal{L}_{\mathcal{H}} + \mathcal{J}_u + \mathcal{J}_{\mathrm{ex}}$ is the Liouville operator for the conditioned time evolution. Note that $\rho_c(t) = \mathcal{V}_c(t,0) \rho_0$. The trace of $\rho_c$ decreases from unity for ${t>0}$. This decrease corresponds to the failure probability due to $\mathcal{J}_g$, $\mathcal{J}_o$, and $\mathcal{J}_{\mathrm{in}}$ [@Carmichael; @Plenio1998a].
Note that $\rho_{\mathcal{H}}(t)=\mathcal{V}_{\mathcal{H}}(t,0) \rho_0$ can be expressed with a state vector as follows: $$\begin{aligned}
\rho_{\mathcal{H}}(t)= |\psi (t) \rangle \langle \psi (t)|,~
i\hbar |\dot{\psi} \rangle = \mathcal{H} |\psi \rangle,~
|\psi (0) \rangle = |u \rangle |0 \rangle.
\nonumber\end{aligned}$$ Setting $|\psi \rangle = \alpha_u |u \rangle |0 \rangle + \alpha_e |e \rangle |0 \rangle + \alpha_g |g \rangle |1 \rangle$, the non-Hermitian Schrödinger equation is given by $$\begin{aligned}
&
\dot{\alpha_u}=
-i\Delta_u \alpha_u
-\Omega \alpha_e,
\label{eq-alpha-u}
\\
&
\dot{\alpha_e}=
-(\gamma + i \Delta_e) \alpha_e + \Omega \alpha_u + g \alpha_g,
\label{eq-alpha-e}
\\
&
\dot{\alpha_g}=
-\kappa \alpha_g - g \alpha_e.
\label{eq-alpha-g}\end{aligned}$$
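Eqs. (\[eq-alpha-u\])-(\[eq-alpha-g\]) are straightforward to integrate numerically. The sketch below uses constant, purely illustrative parameters (in general $\Omega$ may be time-dependent) and a fourth-order Runge-Kutta step; it accumulates the integrals $I_e$ and $I_g$ and checks the exact norm identity $1-\langle\psi(T)|\psi(T)\rangle = 2\gamma I_e + 2\kappa I_g$ used below:

```python
import numpy as np

# Illustrative constant parameters (arbitrary units):
g_c, gamma, Omega = 10.0, 3.0, 2.0
kappa_ex, kappa_in = 4.5, 0.5
kappa = kappa_ex + kappa_in
Delta_u = Delta_e = 0.0

def rhs(a):
    """Right-hand side of the non-Hermitian Schroedinger equation."""
    au, ae, ag = a
    return np.array([
        -1j * Delta_u * au - Omega * ae,
        -(gamma + 1j * Delta_e) * ae + Omega * au + g_c * ag,
        -kappa * ag - g_c * ae,
    ])

dt, T = 1e-3, 20.0
a = np.array([1.0, 0.0, 0.0], dtype=complex)  # |psi(0)> = |u>|0>
Ie = Ig = 0.0
for _ in range(int(T / dt)):
    e0, q0 = abs(a[1])**2, abs(a[2])**2
    k1 = rhs(a)
    k2 = rhs(a + 0.5 * dt * k1)
    k3 = rhs(a + 0.5 * dt * k2)
    k4 = rhs(a + dt * k3)
    a = a + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    Ie += 0.5 * (e0 + abs(a[1])**2) * dt  # trapezoidal I_e
    Ig += 0.5 * (q0 + abs(a[2])**2) * dt  # trapezoidal I_g

norm_T = float(np.sum(np.abs(a)**2))
# 1 - <psi(T)|psi(T)> = 2*gamma*I_e + 2*kappa*I_g holds exactly in continuum.
```

The quantity $2\kappa_{\mathrm{ex}} I_g$ is the success probability without repumping, so the same run can also be used to verify the bound derived in the next section.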
Using the state vector and the amplitudes, Eq. (\[eq-rho-c\]) becomes $$\begin{aligned}
\rho_c (t)
=&
|\psi (t) \rangle \langle \psi (t)|
+
2 \kappa_{\mathrm{ex}}
\int_0^t \! dt'
|\alpha_g (t')|^2
|g \rangle |0 \rangle \langle g| \langle 0|
\nonumber
\\
&+
2 \gamma r_u
\int_0^t \! dt'
|\alpha_e (t')|^2
\mathcal{V}_c (t,t')
\rho_0.
\label{eq-rho-c-2}\end{aligned}$$
*Upper bound on success probability*. A successful photon generation and extraction event is defined by the condition that the final atom-cavity state is $|g\rangle|0\rangle$, and that the quantum jump $\mathcal{J}_{\mathrm{ex}}$ has occurred. The success probability, $P_S$, of the single-photon generation is therefore formulated by $P_S = \langle g| \langle 0| \rho_c(T) |g \rangle |0 \rangle$ for a sufficiently long time $T$. Using Eq. (\[eq-rho-c-2\]), we obtain $$\begin{aligned}
P_S
=&
2 \kappa_{\mathrm{ex}}
\int_0^T \! dt
|\alpha_g (t)|^2
\nonumber
\\
&+
2 \gamma r_u
\int_0^T \! dt
|\alpha_e (t)|^2
\langle g| \langle 0|
\mathcal{V}_c (T,t)
\rho_0
|g \rangle |0 \rangle.
\label{eq-PS-formula}\end{aligned}$$
Here we assume the following inequality: $$\begin{aligned}
\langle g| \langle 0|
\mathcal{V}_c (T,t)
\rho_0
|g \rangle |0 \rangle
\le
\langle g| \langle 0|
\mathcal{V}_c (T,0)
\rho_0
|g \rangle |0 \rangle = P_S.
\label{eq-Vc}\end{aligned}$$ This assumption is natural because $\mathcal{V}_c (t,t')$ should be designed to maximize $P_S$ [@comment-Vc]. Thus we obtain $$\begin{aligned}
P_S
\le
\frac{2 \kappa_{\mathrm{ex}} I_g}
{\displaystyle 1-2 \gamma r_u I_e},
\label{eq-PS-inequality}\end{aligned}$$ where ${I_g = \int_0^T \! dt |\alpha_g (t)|^2}$ and ${I_e = \int_0^T \! dt |\alpha_e (t)|^2}$.
The two integrals, $I_g$ and $I_e$, can be evaluated as follows. First, we have $$\begin{aligned}
\frac{d}{dt} \langle \psi |\psi \rangle
= -2\gamma |\alpha_e|^2 - 2\kappa |\alpha_g|^2
~\Rightarrow~
2\gamma I_e + 2\kappa I_g \approx 1,
\label{eq-norm}\end{aligned}$$ where ${\langle \psi (0)|\psi (0) \rangle =1}$ and ${\langle \psi (T)|\psi (T) \rangle \approx 0}$ have been used assuming a sufficiently long time $T$. Next, using Eq. (\[eq-alpha-g\]), we obtain $$\begin{aligned}
&
I_e
=
\int_0^T \! dt \frac{|\dot{\alpha_g}(t) + \kappa \alpha_g (t)|^2}{g^2}
\nonumber
\\
&=
\int_0^T \! dt \frac{|\dot{\alpha_g}(t)|^2 + \kappa^2 |\alpha_g (t)|^2}{g^2}
+
\frac{\kappa}{g^2}
\left[
|\alpha_g(T)|^2
-
|\alpha_g(0)|^2
\right]
\nonumber
\\
&\approx
\frac{I'_g}{g^2}
+\frac{\kappa^2}{g^2} I_g,
\label{eq-Ie}\end{aligned}$$ where we have used ${|\alpha_g(0)|^2=0}$ and ${|\alpha_g(T)|^2\approx 0}$ and have set ${I'_g = \int_0^T \! dt |\dot{\alpha}_g (t)|^2}$. Using Eqs. (\[eq-norm\]) and (\[eq-Ie\]), we obtain $$\begin{aligned}
I_g
&=
\frac{C}{\kappa (1+2C)}
\left(
1-
\frac{I'_g}{\kappa C}
\right),
\label{eq-Ig-result}
\\
I_e
&=
\frac{1}{2\gamma}
\left[
1-
\frac{2C}{1+2C}
\left(
1-
\frac{I'_g}{\kappa C}
\right)
\right].
\label{eq-Ie-result}\end{aligned}$$
Substituting Eqs. (\[eq-Ig-result\]) and (\[eq-Ie-result\]) into Ineq. (\[eq-PS-inequality\]), the upper bound on $P_S$ is finally obtained as follows: $$\begin{aligned}
P_S
&\le
\frac{\kappa_{\mathrm{ex}}}{\kappa}
\frac{2C}{1+2C}
\frac{\displaystyle 1-\frac{I'_g}{\kappa C}}
{\displaystyle 1-r_u + r_u \frac{2C}{1+2C} \left( 1-\frac{I'_g}{\kappa C} \right)}
\nonumber
\\
&\le
\left( 1-
\frac{\kappa_{\mathrm{in}}}{\kappa}
\right)
\left( 1-
\frac{1}{1+2C}
\right)
\sum_{n=0}^{\infty}
\left(
\frac{r_u}{1+2C}
\right)^n,
\label{eq-PS}\end{aligned}$$ where we have used $0\le 1 - I'_g/(\kappa C) \le 1$ [@comment-Ig]. The equality approximately holds when the system varies slowly and the following condition holds: $$\begin{aligned}
\frac{1}{\kappa}
\int_0^T \! dt |\dot{\alpha_g}(t)|^2
\ll C.\end{aligned}$$
The upper bound on the success probability given by Ineq. (\[eq-PS\]) is a unified and generalized version of previous results [@Kuhn2010a; @Law1997a; @Vasilev2010a; @comment-storage; @Gorshkov2007a; @Dilley2012a], which did not treat explicitly internal loss, detunings, or repumping. The upper bound has a simple physical meaning. The first factor is the escape efficiency $\eta_{\mathrm{esc}}$. The product of the second and third factors is the internal generation efficiency $\eta_{\mathrm{in}}$. Each term of the third factor represents the probability that the decay from $|e \rangle$ to $|u \rangle$ occurs $n$ times. Note that $\eta_{\mathrm{in}}$ is increased by the repumping process.
So far, the photons generated by repumping after decay to $|u\rangle$ are counted, as in some experiments [@Barros2009a]. However, such photons may have time delays or different pulse shapes from photons generated without repumping, and are therefore not useful for some applications, such as photonic qubits. If the photons generated by repumping are not counted, we should consider the state conditioned further on no quantum jump of $\mathcal{J}_u$. In this case, the upper bound on the success probability is obtained by modifying Ineq. (\[eq-PS\]) with $r_u=0$.
The contribution of the repumping to $P_S$, denoted by $P_{\mathrm{rep}}$, is given by the second term in the right-hand side of Eq. (\[eq-PS-formula\]). Using Eqs. (\[eq-Ie-result\]) and (\[eq-PS\]), we can derive an upper bound on $P_{\mathrm{rep}}$ as follows: $$\begin{aligned}
P_{\mathrm{rep}}
\le
2\gamma r_u I_e P_S
&\le
\frac{\kappa_{\mathrm{ex}}}{\kappa}
\frac{2C}{1+2C}
\sum_{n=1}^{\infty}
\left(
\frac{r_u}{1+2C}
\right)^n
\nonumber
\\
&=
\frac{\kappa_{\mathrm{ex}}}{\kappa}
\frac{2C}{1+2C}
\frac{r_u}{1+2C-r_u}.
\label{eq-Prepump}\end{aligned}$$ Thus, the contribution of the repumping is negligible when $C \gg 1$ or when $r_u \ll 1$.
*Fundamental limit on single-photon generation based on cavity QED*. The reciprocal of the upper bound on $P_S$ is simplified as $$\begin{aligned}
\left(
1 + \frac{\kappa_{\mathrm{in}}}{\kappa_{\mathrm{ex}}}
\right)
\left[
1 + \frac{1-r_u}{2C_{\mathrm{in}}}
\left(
1 + \frac{\kappa_{\mathrm{ex}}}{\kappa_{\mathrm{in}}}
\right)
\right].\end{aligned}$$ This can be easily minimized with respect to $\kappa_{\mathrm{ex}}$, which results in the following lower bound on $P_F$: $$\begin{aligned}
P_F
\ge
\frac{2}{\displaystyle 1+\sqrt{1+2C_{\mathrm{in}}/(1-r_u)}},
\label{eq-PF-ru}\end{aligned}$$ where the lower bound is obtained when $\kappa_{\mathrm{ex}}$ is set to $$\begin{aligned}
\kappa_{\mathrm{ex}}^{\mathrm{opt}}
\equiv \kappa_{\mathrm{in}} \sqrt{1+2C_{\mathrm{in}}/(1-r_u)}.
\label{eq-optimal-kex-ru}\end{aligned}$$ In the case of no repumping, Eqs. (\[eq-PF-ru\]) and (\[eq-optimal-kex-ru\]) are modified by $r_u=0$. This leads to Ineq. (\[eq-PF\]) and Eq. (\[eq-optimal-kex\]).
The approximate lower bound in Ineq. (\[eq-PF\]) can be derived more directly from Ineq. (\[eq-PS\]) (${r_u=0}$) using the arithmetic-geometric mean inequality as follows: $$\begin{aligned}
P_F
\ge
\frac{\kappa_{\mathrm{in}}}{\kappa}
+
\frac{1}{2C+1}
-\frac{\kappa_{\mathrm{in}}}{\kappa}
\frac{1}{2C+1}
\approx
\frac{\kappa_{\mathrm{in}}}{\kappa}
+
\frac{\kappa \gamma}{g^2}
\ge
\sqrt{\frac{2}{C_{\mathrm{in}}}},
\nonumber\end{aligned}$$ where ${\kappa_{\mathrm{in}} \ll \kappa}$ and ${C \gg 1}$ have been assumed. Note that $\kappa$ is cancelled out by multiplying the two terms [@comment-arithmetic-geometric].
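Explicitly, the arithmetic-geometric mean inequality applied to the two terms reads $$\begin{aligned}
\frac{\kappa_{\mathrm{in}}}{\kappa}
+
\frac{\kappa \gamma}{g^2}
\ge
2 \sqrt{\frac{\kappa_{\mathrm{in}} \gamma}{g^2}}
=
\sqrt{\frac{2}{C_{\mathrm{in}}}},
\nonumber\end{aligned}$$ with equality at $\kappa = g\sqrt{\kappa_{\mathrm{in}}/\gamma}$, consistent with $\kappa^{\mathrm{opt}}$ for $C_{\mathrm{in}} \gg 1$.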
*Typical optical cavity-QED systems*. In optical cavity-QED systems where a single atom or ion is coupled to a single cavity mode [@Law1997a; @Vasilev2010a; @Barros2009a; @Maurer2004a; @Kuhn1999a; @Duan2003a], the cavity-QED parameters are expressed as follows [@Rempe2015a]: $$\begin{aligned}
g &=
\sqrt{\frac{\mu_{g,e}^2 \omega_{g,e}}{2\epsilon_0 \hbar A_{\mathrm{eff}} L}},
\label{eq-g}
\\
\kappa_{\mathrm{in}} &=
\frac{c}{2L} \alpha_{\mathrm{loss}},
\label{eq-kappa-in}
\\
r_g \gamma &=
\frac{\mu_{g,e}^2 \omega_{g,e}^3}{6 \pi \epsilon_0 \hbar c^3},
\label{eq-gamma}\end{aligned}$$ where $\epsilon_0$ is the permittivity of vacuum, $c$ is the speed of light in vacuum, $\mu_{g,e}$ and $\omega_{g,e}$ are the dipole moment and frequency of the $|g\rangle$-$|e\rangle$ transition, respectively, $L$ is the cavity length, $A_{\mathrm{eff}}$ is the effective cross-section area of the cavity mode at the atomic position, and $\alpha_{\mathrm{loss}}$ is the one-round-trip cavity internal loss. Substituting Eqs. (\[eq-g\])–(\[eq-gamma\]) into the definition of $C_{\mathrm{in}}$, we obtain $$\begin{aligned}
\frac{2C_{\mathrm{in}}}{1-r_u}
&=
\frac{1}{\alpha_{\mathrm{loss}}}
\frac{1}{r_A}
\frac{r_g}{1-r_u}
\le
\frac{1}{\alpha_{\mathrm{loss}}}
\frac{1}{r_A},
\label{eq-Cin-formula}\end{aligned}$$ where $\lambda = 2\pi c/\omega_{g,e}$ is the wavelength corresponding to $\omega_{g,e}$, $r_A=A_{\mathrm{eff}}/\sigma$ is the ratio of the cavity-mode area to the atomic absorption cross section ${\sigma = 3\lambda^2/(2\pi)}$, and the inequality comes from $r_g/(1-r_u) \le 1$. (The equality holds when ${r_o=0}$.) Note that the cavity length $L$ and the dipole moment $\mu_{g,e}$ are cancelled out. From Ineq. (\[eq-PF-ru\]), it turns out that the single-photon generation efficiency is limited only by the one-round-trip internal loss, ${\alpha_{\mathrm{loss}}}$, and the area ratio, $r_A$, even when counting photons generated by repumping.
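As a numerical illustration (the values below are hypothetical, not measurements of a specific cavity), Eq. (\[eq-Cin-formula\]) combined with Ineq. (\[eq-PF-ru\]) gives the achievable failure probability directly from the round-trip loss and the area ratio:

```python
import math

def failure_bound(alpha_loss, r_A):
    """Lower bound on P_F from Eq. (eq-Cin-formula) and Ineq. (eq-PF-ru),
    taking r_g/(1 - r_u) = 1 (i.e., r_o = 0)."""
    two_Cin_eff = 1.0 / (alpha_loss * r_A)  # 2*C_in/(1 - r_u)
    return 2.0 / (1.0 + math.sqrt(1.0 + two_Cin_eff))

# Hypothetical numbers: 10 ppm round-trip internal loss and a cavity mode
# focused to 100 atomic absorption cross sections.
P_F_min = failure_bound(alpha_loss=10e-6, r_A=100.0)
```

As expected, reducing the round-trip loss or focusing the mode more tightly onto the atom improves the bound.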
*Conclusion and outlook*. By analytically solving the master equation for a general cavity-QED model, we have derived an upper bound on the success probability of single-photon generation based on cavity QED in a unified way. We have taken cavity internal loss into account, which results in a tradeoff relation between the internal generation efficiency and the escape efficiency with respect to the cavity external loss rate $\kappa_{\mathrm{ex}}$. By optimizing $\kappa_{\mathrm{ex}}$, we have derived a lower bound on the failure probability. The lower bound is inversely proportional to the square root of the internal cooperativity $C_{\mathrm{in}}$. This gives the fundamental limit of single-photon generation efficiency based on cavity QED. The optimal value of $\kappa_{\mathrm{ex}}$ has also been given explicitly. The repumping process, where the atom decays to the initial ground state via spontaneous emission and is reused for cavity-photon generation, has also been taken into account.
For typical optical cavity-QED systems, the lower bound is determined only by the one-round-trip internal loss and the ratio between the cavity-mode area and the atomic absorption cross section. This result holds even when the photons generated by repumping are counted.
The lower bound is achieved in the limit that the variation of the system is sufficiently slow. When a short generation time is desirable, optimization of the control parameters will be necessary. This problem is left for future work.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors thank Kazuki Koshino, Donald White and Samuel Ruddell for their useful comments. This work was supported by JST CREST Grant Number JPMJCR1771, Japan.
[19]{}
H. J. Kimble, Nature **453**, 1023 (2008).
M. D. Eisaman, J. Fan, A. Migdall, and S. V. Polyakov, Rev. Sci. Instrum. **82**, 071101 (2011), and references therein.
A. Reiserer and G. Rempe, Rev. Mod. Phys. **87**, 1379 (2015).
A. Kuhn and D. Ljunggren, Contemp. Phys. **51**, 289 (2010).
C. K. Law and H. J. Kimble, J. Mod. Opt. **44**, 2067 (1997).
G. S. Vasilev, D. Ljunggren, and A. Kuhn, New J. Phys. **12**, 063024 (2010).
H. G. Barros, A. Stute, T. E. Northup, C. Russo, P. O. Schmidt, and R. Blatt, New J. Phys. **11**, 103004 (2009).
C. Maurer, C. Becher, C. Russo, J. Eschner, and R. Blatt, New J. Phys. **6**, 94 (2004).
A. Kuhn, M. Hennrich, T. Bondo, and G. Rempe, Appl. Phys. B **69**, 373 (1999).
L.-M. Duan, A. Kuzmich, and H. J. Kimble, Phys. Rev. A **67**, 032305 (2003).
The external loss is due to the extraction of cavity photons to the desired external mode via transmission of the mirror, while the internal loss is due to undesirable scattering and absorption inside the cavity.
It is notable that similar lower bounds on failure probabilities, inversely proportional to $\sqrt{C_{\mathrm{in}}}$, have been derived for quantum gate operations based on cavity QED [@Goto2008a; @Goto2010a]. This fact implies that $C_{\mathrm{in}}$ should be regarded as a figure of merit of cavity-QED systems for quantum applications. In Refs. [@Goto2008a; @Goto2010a], the critical atom number [@Rempe2015a], which is the inverse of the cooperativity, was used instead of the internal cooperativity. Note that in Ref. [@Goto2008a], $\kappa$ should be interpreted as $\kappa_{\mathrm{in}}$ because in this case the external field is unnecessary and we can set $\kappa_{\mathrm{ex}}=0$.
H. Goto and K. Ichimura, Phys. Rev. A **77**, 013816 (2008).
H. Goto and K. Ichimura, Phys. Rev. A **82**, 032311 (2010).
Interestingly, this optimal value of $\kappa_{\mathrm{ex}}$ is exactly the same as that for a quantum gate operation in Ref. [@Goto2010a].
E. M. Purcell, Phys. Rev. **69**, 681 (1946).
Pure dephasing can only degrade the single-photon efficiency, and therefore does not affect the upper bound on the efficiency. In typical optical cavity-QED systems where a single atom or ion is coupled to a single cavity mode [@Law1997a; @Vasilev2010a; @Barros2009a; @Maurer2004a; @Kuhn1999a; @Duan2003a], pure dephasing is actually negligible.
H. J. Carmichael, in *An Open Systems Approach to Quantum Optics*, edited by W. Beiglböck, Lecture Notes in Physics Vol. m18, (Springer-Verlag, Berlin, 1993).
M. B. Plenio and P. L. Knight, Rev. Mod. Phys. **70**, 101 (1998).
If $\langle g| \langle 0|
\mathcal{V}_c (T,t)
\rho_0
|g \rangle |0 \rangle
>
\langle g| \langle 0|
\mathcal{V}_c (T,0)
\rho_0
|g \rangle |0 \rangle$, then we should use $\mathcal{V}_c (T,t)$ for the single-photon generation, instead of $\mathcal{V}_c (T,0)$.
Note that $I'_g \ge 0$ by definition and $1 - I'_g/(\kappa C) \ge 0$ because $I_g \ge 0$, by definition, in Eq. (\[eq-Ig-result\]).
Interestingly, it is known that photon storage with cavity-QED systems without internal loss also has a similar upper bound, $2C/(2C+1)$, on the success probability [@Gorshkov2007a; @Dilley2012a]. This, together with the results for quantum gate operations [@Goto2008a; @Goto2010a], implies the universality of the upper bound.
A. V. Gorshkov, A. André, M. D. Lukin, and A. S. S[ø]{}rensen, Phys. Rev. A **76**, 033804 (2007).
J. Dilley, P. Nisbet-Jones, B. W. Shore, and A. Kuhn, Phys. Rev. A **85**, 023834 (2012).
A similar technique has been applied to the derivation of an upper bound on the success probability of a quantum gate operation based on cavity QED [@Goto2008a].
|
{
"pile_set_name": "arxiv"
}
|
Anthony Iluobe
Chief Anthony Iluobe (JP) was born in 1945 to the family of Chief and Mrs Joseph Agimhelen Iluobe (JP) of Ivue-Uromi in the Uromi Kingdom in Edo State. He is the owner, managing director and chief executive officer of Iluobe Oil and Gas Marketing Co. Ltd. He studied Engineering in Japan.
He previously served as Chairman of the Edo State Water Board and as Chairman of the Edo State chapter of the Independent Petroleum Marketers Association of Nigeria (IPMAN).
He is the father of Patrick Eromosele Iluobe, the minority leader of the Edo State House of Assembly. He is also the eldest brother of the jeweller Chris Aire. He lives in Edo State where he presides over his various business investments including Antilu Oil and Gas Ltd.
He is married to Magistrate Martina U. Iluobe, the Chief Magistrate II of Edo State and the presiding Magistrate of the Customary Court in Ekpoma, Edo State.
References
Category:1945 births
Category:Living people
Category:Edo people
|
{
"pile_set_name": "wikipedia_en"
}
|
<HTML><HEAD>
<TITLE>Invalid URL</TITLE>
</HEAD><BODY>
<H1>Invalid URL</H1>
The requested URL "[no URL]", is invalid.<p>
Reference #9.44952317.1507271057.135fad8
</BODY></HTML>
|
{
"pile_set_name": "github"
}
|
<html>
<head>
<title>Path test</title>
<style type="text/css">
.pixel {
position: absolute;
width: 1px;
height: 1px;
overflow: hidden;
background: #000;
}
.red { background: red; }
.blue { background: blue; }
</style>
<script language="JavaScript" type="text/javascript">
// Dojo configuration
djConfig = {
isDebug: true
};
</script>
<script language="JavaScript" type="text/javascript"
src="../../dojo.js"></script>
<script language="JavaScript" type="text/javascript">
dojo.require("dojo.math.*");
			function drawCurve(curve, steps, className) {
				if(!className) { className = "pixel"; }
				if(!steps) { steps = 100; }
				var pixels = new Array(steps);
				for(var i = 0; i < steps; i++) {
					var pt = curve.getValue(i / steps);
					pixels[i] = document.createElement("div");
					pixels[i].className = className;
					// positions need explicit units in standards mode
					pixels[i].style.left = pt[0] + "px";
					pixels[i].style.top = pt[1] + "px";
					document.body.appendChild(pixels[i]);
				}
			}
function init(){
var c = dojo.math.curves;
var p = new c.Path();
p.add(new c.Line([10,10], [100,100]), 5);
p.add(new c.Line([0,0], [20,0]), 2);
p.add(new c.CatmullRom([[0,0], [400,400], [200,200], [500,50]]), 50);
p.add(new c.Arc([0,0], [100,100]), 20);
p.add(new c.Arc([0,0], [100,100], true), 20);
drawCurve(p, 200, "pixel");
//drawCurve(new c.Line([0,250], [800,250]), 50, "pixel red");
//drawCurve(new c.Line([500,0], [500,600]), 50, "pixel red");
//drawCurve(new c.Arc([300,300], [700,200]), 50, "pixel");
//drawCurve(new c.Arc([200,200], [100,100], false), 50, "pixel blue");
}
dojo.addOnLoad(init);
</script>
</head>
<body>
</body>
</html>
|
{
"pile_set_name": "github"
}
|
#ifndef CREATE_EMPTY_DIRECTED_GRAPH_WITH_GRAPH_NAME_H
#define CREATE_EMPTY_DIRECTED_GRAPH_WITH_GRAPH_NAME_H
#include <string>

#include <boost/graph/adjacency_list.hpp>
boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS,
boost::no_property, boost::no_property,
boost::property<boost::graph_name_t, std::string>>
create_empty_directed_graph_with_graph_name() noexcept;
#endif // CREATE_EMPTY_DIRECTED_GRAPH_WITH_GRAPH_NAME_H
|
{
"pile_set_name": "github"
}
|
Florida National Cemetery
Florida National Cemetery is a United States National Cemetery located near the city of Bushnell in Sumter County, Florida. Administered by the United States Department of Veterans Affairs, it began interments in 1988 and is now one of the busiest cemeteries in the United States.
History
Florida National Cemetery is located in the Withlacoochee State Forest, north of Tampa. The forest was acquired by the federal government from private landowners between 1936 and 1939 under the provisions of the U.S. Land Resettlement Administration. The United States Forest Service managed the property until a lease-purchase agreement transferred it to the Florida Board of Forestry in 1958. Currently, Withlacoochee State Forest is the second-largest state forest in Florida, divided into eight distinct tracts of land.
In 1842, Congress encouraged settlement here by establishing the Armed Occupation Act. The law granted a land patent to any man who kept a gun and ammunition, built a house, cultivated part of the land and remained there for at least five years. Settlers moved in to take advantage of the generous offer. The area contained abundant timber and suitable farmland, appealing attributes to frontiersmen. In 1845 Florida was granted statehood.
During the Civil War, a sugar mill on the Homosassa River supplied sugar to the Confederacy. A robust citrus-growing industry developed in the eastern part of the area and became a focus of intense economic expansion soon after the war.
In 1980, the Department of Veterans Affairs (VA) announced that it would establish a new national cemetery in Florida, its fourth. Two major locations for the cemetery were studied: property near the Cross Florida Barge Canal and the Withlacoochee State Forest. The Withlacoochee site, though more environmentally sensitive, was supported by government officials. In February 1983, the state transferred land to the VA for the development of a Florida National Cemetery. The first burial was in 1988 and a columbarium was opened in November 2001.
In 1999, federal officials asked the Florida Cabinet to grant land for the expansion of the Florida National Cemetery, providing 65,000 to 100,000 grave sites for veterans in the state. Environmentalists argued that Florida Department of Agriculture and Consumer Services Forestry Division officials did not state whether the 179 acres of land within the Withlacoochee State Forest was surplus in accordance with a Florida constitutional amendment concerning the acquisition of land for conservation. Before the Florida Cabinet meeting on October 26, the Department of Veterans Affairs and the Florida Cabinet agreed that 42 acres would be removed because they served as habitat for several endangered species. Florida governor Jeb Bush and the Florida Cabinet voted 7-0 in favor of selling 137 acres of land to the Department of Veterans Affairs for the cemetery's expansion.
Notable interments
Medal of Honor recipients
Master Chief Hospital Corpsman William R. Charette, U.S. Navy, for action with the Marine Corps in the Korean War.
Master Sergeant James R. Hendrix, U.S. Army, for action with the 4th Armored Division at the Battle of the Bulge in World War II.
Sergeant Major Franklin D. Miller, U.S. Army Special Forces, for action in the Vietnam War.
Others
Frank Baker, professional baseball player
Philip J. Corso, U.S. Army lieutenant colonel
Raymond Fernandez, aka "Hercules Hernandez", professional wrestler.
Scott Helvenston, film trainer-stuntman and former Navy SEAL.
Lieutenant Commander Mike Holovak, U.S. Navy, skipper of a PT boat in the South Pacific credited with sinking nine Japanese ships in World War II.
Hal Jeffcoat, Major League Baseball pitcher and outfielder
Major David Moniac, veteran of the Second Seminole War, first Native American graduate of United States Military Academy.
Blackjack Mulligan, professional wrestler, author and football player
Ernie Oravetz, Major League Baseball outfielder
Colonel Leonard T. Schroeder Jr., the first soldier ashore in the Normandy Landings on D-Day, June 6, 1944, during World War II.
Frank Stanley, cinematographer for Clint Eastwood films such as Breezy (1973), Magnum Force (1973), Thunderbolt and Lightfoot (1974) and The Eiger Sanction (1975)
Champ Summers, Major League Baseball outfielder
Notable monuments
A carillon was constructed by the World War II AMVETS organization in an open area adjacent to the first administration building. It was dedicated on October 9, 1993. The cemetery contains a Memorial Pathway that in 2003 featured 47 plaques, statues, monuments, etc., honoring America's soldiers from 20th-century conflicts.
References
External links
National Cemetery Administration
Florida National Cemetery
Category:Cemeteries in Florida
Category:Protected areas of Sumter County, Florida
Category:United States national cemeteries
Category:1988 establishments in Florida
The terrifying 38-minute ordeal suffered by Hawaii residents on Saturday, when the state’s emergency-management agency sent out a false alert warning of an imminent ballistic-missile strike amid rising tensions with North Korea, seems to have sparked an unusually rapid response on Capitol Hill.
Hawaii’s Sen. Brian Schatz, a Democrat on the Senate Commerce Committee, told National Journal that he is working with other Senate Democrats on a bill that would implement a federal best-practice framework for the ballistic-missile-alert systems administered by U.S. states, localities, and territories. And while Republicans don’t appear to be involved in the process, relevant GOP chairs in both chambers have expressed a willingness to work with Schatz on the issue.
Initial reports indicate that Hawaii’s screwup—which sent people across the archipelago scrambling for shelter before the all-clear was called more than a half-hour later—stemmed from an employee mistakenly clicking the wrong link on a confusingly designed interface. But for something as serious as a ballistic-missile alert, Schatz suggested that the potential for human error can, and should, be mitigated through federal safeguards.
“You want a system that accounts for the fact that somebody may be sleepy or careless, or an interface may not be the most user-friendly, and yet it all works anyway,” Schatz said. “We have best practices for disaster notifications for natural disasters, for terrorism events. We just don’t have it for this.”
On Wednesday, Schatz said he had convened a phone call with officials from the Federal Communications Commission, the Homeland Security Department, the Pentagon, and other relevant agencies to address the inconsistency.
“We think it should be done legislatively, but I don’t know that for sure yet,” he told reporters, explaining that the ultimate goal is to craft “a federal law to establish a framework that states can use.”
The way America’s missile-alert system operates is fundamentally different from how citizens are alerted to most other catastrophes, when local authorities often possess the best information. While states and cities are ultimately responsible for alerting civilians of an imminent attack, they lack the ability to detect and track incoming missiles.
In the seconds and minutes after a launch, details of the threat would have to cascade through phone calls from the Pentagon to DHS. From there, officials at the Federal Emergency Management Agency would send the warning to at-risk states and localities, whose own alert systems would only then spring to life.
That chain of causation was disrupted on Saturday. But David Simpson, a former admiral in the U.S. Navy who ran the FCC’s Public Safety and Homeland Security Bureau from November 2013 to January 2017, said federal legislation should seek to dismantle that outdated process altogether.
“That’s a 1950s kind of structure,” Simpson said, arguing that machine-to-machine communication technology should be utilized to eliminate lag time and cut down on human error.
One way to do that could be for the FCC to create, at the direction of Congress, a unique wireless-alert category for ballistic-missile threats. “That would then ensure that the machine elements of this system could be built around that narrow bucket,” Simpson said.
But that still wouldn’t solve the problem entirely. “The machine-to-machine piece of that, so it could be really useful, would require DHS and [Defense Department] plumbing changes that would be beyond the authorities of the FCC,” Simpson said.
Simpson largely endorsed Schatz’s plan for a uniform federal missile-alert framework that states and localities can follow. “There’s over 1,000 alert originators at the state and local level, and I would say five, six, seven vendors for the user-interface systems,” he said.
In a bid to improve innovation, DHS gave state governments broad leeway to design their own missile-alert interfaces. But Simpson said that decision has clearly come with a cost.
“That variation is fine for notification about fire, notification about a tsunami coming in,” Simpson said. “But ballistic-missile warnings ought to be consistent, reliable, secure—because we don’t want it cyberattacked—across the entire country.”
Republicans seem receptive to Schatz’s plan for missile-alert legislation. Schatz said he plans to introduce his bill through the Senate Commerce Committee, which is chaired by Republican John Thune. Frederick Hill, a Thune spokesman, told National Journal that the chairman “is considering convening a full committee hearing which would help inform legislative efforts.”
House Republicans are further along than their Senate counterparts, with plans to hold an Energy and Commerce hearing on Hawaii’s false missile alert in the coming weeks. On Wednesday, committee chairman Greg Walden said he would be “happy to work” with Schatz on legislation, if needed. “We just haven’t got into the weeds on it,” Walden said.
As long as lawmakers can work out issues surrounding committee and agency jurisdiction, Simpson said the chances for bipartisan support are high. But stakeholders from Homeland Security and the Pentagon—as well as the congressional committees that oversee them—will also need to weigh in. And Simpson worries those agencies may be loath to take responsibility for what’s widely viewed as a state-level mistake.
“It’s a perfect bipartisan issue, as long as we don’t let the various lobbies and the competition between agencies pervert and potentially dilute the ultimate outcome,” Simpson said.
"Two more House Republicans have joined the discharge petition to force votes on immigration, potentially leaving centrists just two signatures short of success. Reps. Tom Reed (R-N.Y.) and Brian Fitzpatrick (R-Pa.) signed the discharge petition Thursday before the House left town for the Memorial Day recess. If all Democrats endorse the petition, just two more GOP signatures would be needed to reach the magic number of 218."
FIRED FROM RUSSIAN LAUNCHER
Investigators Pin Destruction of Malaysian Airliner on Russia
"A missile that brought down Malaysia Airlines Flight 17 in eastern Ukraine in 2014 was fired from a launcher belonging to Russia's 53rd anti-aircraft missile brigade, investigators said Thursday. The announcement is the first time the investigative team has identified a specific division of the Russian military as possibly being involved in the strike. Russia has repeatedly denied involvement in the incident."
THREE INTERVIEWS PLANNED FOR JUNE
House GOP Will Conduct New Interviews in Clinton Probe
"House Republicans are preparing to conduct the first interviews in over four months in their investigation into the FBI’s handling of the Clinton email probe. A joint investigation run by the Judiciary and Oversight Committees has set three witness interviews for June, including testimony from Bill Priestap, the assistant director of the FBI’s counterintelligence division, and Michael Steinbach, the former head of the FBI’s national security division."
IN OPEN LETTER TO KIM JONG UN
Trump Cancels North Korea Summit
GANG OF EIGHT WILL GET SEPARATE MEETING
Briefings at White House Will Now Be Bipartisan
"The White House confirmed Wednesday it is planning for a bipartisan group of House and Senate leaders, known as the 'Gang of 8,' to receive a highly-classified intelligence briefing on the FBI's investigation into Russian meddling, reversing plans to exclude Democrats altogether. ABC News first reported the plans to hold a separate briefing for Democrats, citing multiple administration and congressional sources. While details of the bipartisan meeting are still being worked out, a Republican-only briefing will go on as scheduled Thursday."
---
abstract: 'Transmission spectroscopy of exoplanets is a tool to characterize rocky planets and explore their habitability. Using the Earth itself as a proxy, we model the atmospheric cross section as a function of wavelength, and show the effect of each atmospheric species, Rayleigh scattering and refraction from 115 to 1000 nm. Clouds do not significantly affect this picture because refraction prevents the lowest 12.75 km of the atmosphere, in a transiting geometry for an Earth-Sun analog, from being sampled by a distant observer. We calculate the effective planetary radius for the primary eclipse spectrum of an Earth-like exoplanet around a Sun-like star. Below 200 nm, ultraviolet (UV) O$_2$ absorption increases the effective planetary radius by about 180 km, versus 27 km at 760.3 nm, and 14 km in the near-infrared (NIR) due predominantly to refraction. This translates into a 2.6% change in effective planetary radius over the UV-NIR wavelength range, showing that the ultraviolet is an interesting wavelength range for future space missions.'
author:
- 'Y. Bétrémieux and L. Kaltenegger'
title: |
Transmission spectrum of Earth as a transiting exoplanet\
from the ultraviolet to the near-infrared
---
Introduction
============
Many planets smaller than Earth have now been detected with the Kepler mission, and with the realization that small planets are much more numerous than giant ones (Batalha et al. 2013), future space missions, such as the James Webb Space Telescope (JWST), are being planned to characterize the atmosphere of potential Earth analogs by transiting spectroscopy, explore their habitability, and search for signs of life. The simultaneous detection of large abundances of either O$_{2}$ or O$_{3}$ in conjunction with a reducing species, such as CH$_{4}$ or N$_2$O, is a biosignature on Earth (see e.g. Des Marais et al. 2002; Kaltenegger et al. 2010a and references therein). Although not a clear indicator of the presence of life, H$_2$O is essential for life.
Simulations of the Earth’s spectrum as a transiting exoplanet (Ehrenreich et al. 2006; Kaltenegger & Traub 2009; Pallé et al. 2009; Vidal-Madjar et al. 2010; Rauer et al. 2011; García Muñoz et al. 2012; Hedelt et al. 2013) have focused primarily on the visible (VIS) to the infrared (IR), the wavelength range of JWST (600-5000 nm). No models of spectroscopic signatures of a transiting Earth have yet been computed from the mid- (MUV) to the far-ultraviolet (FUV). Which molecular signatures dominate this spectral range? In this paper, we present a model of a transiting Earth’s transmission spectrum from 115 to 1000 nm (UV-NIR) during primary eclipse. While no UV missions are currently in preparation, this model can serve as a basis for future UV mission concept studies.
Model description {#model}
=================
To simulate the spectroscopic signatures of an Earth-analog transiting its star, we modified the Smithsonian Astrophysical Observatory 1998 (SAO98) radiative transfer code (see Traub & Stier 1976; Johnson et al. 1995; Traub & Jucks 2002; Kaltenegger & Traub 2009 and references therein for details), which computes the atmospheric transmission of stellar radiation at high spectral resolution from a molecular line list database. Updates include a new database of continuous absorber’s cross sections, as well as N$_2$, O$_2$, Ar, and CO$_2$ Rayleigh scattering cross sections from the ultraviolet (UV) to the near-infrared (NIR). A new module interpolates these cross sections and derives resulting optical depths according to the mole fraction of the continuous absorbers and the Rayleigh scatterers in each atmospheric layer. We also compute the deflection of rays by atmospheric refraction to exclude atmospheric regions for which no rays from the star can reach the observer due to the observing geometry.
Our database of continuous absorbers is based on the MPI-Mainz-UV-VIS Spectral Atlas of Gaseous Molecules[^1]. For each molecular species of interest (O$_2$, O$_3$, CO$_2$, CO, CH$_4$, H$_2$O, NO$_2$, N$_2$O, and SO$_2$), we created model cross sections composed of several measured cross sections from different spectral regions, at different temperatures when measurements are available, with priority given to higher spectral resolution measurements (see Table \[tbl\_crsc\]). We compute absorption optical depths for different altitudes in the atmosphere using the cross section model with the closest temperature to that of the atmospheric layer considered. Note that we do not consider line absorption from atomic or ionic species which could produce very narrow but possibly detectable features at high spectral resolution (see also Snellen et al. 2013).
The Rayleigh cross sections, $\sigma_R$, of N$_2$, O$_2$, Ar, and CO$_2$, which make up 99.999% of the Earth’s atmosphere, are computed with $$\label{rayl}
{\sigma_{R}} = \frac{32\pi^{3}}{3} \left( \frac{{\nu_{0}}}{n_{0}} \right)^{2} w^{4} F_K ,$$ where $\nu_0$ is the refractivity at standard pressure and temperature (or standard refractivity) of the molecular species, $w$ is the wavenumber, $F_K$ is the King correction factor, and $n_{0}$ is Loschmidt’s constant. Various parametrized functions are used to describe the spectral dependence of $\nu_0$ and $F_K$. Table \[tbl\_rayl\] gives references for the functional form of both parameters, as well as their spectral region.
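The Rayleigh cross-section formula above is straightforward to evaluate numerically. The sketch below is a minimal Python version; the N$_2$ values used for the standard refractivity and King factor are round illustrative numbers, not the fitted dispersion formulas referenced in the table.

```python
import math

N0 = 2.6867811e25  # Loschmidt's constant n_0 [m^-3] at standard P, T

def rayleigh_cross_section(wavelength_nm, nu0, f_king=1.0):
    """Rayleigh cross section [m^2]:
    sigma_R = (32 pi^3 / 3) * (nu0 / n_0)^2 * w^4 * F_K,
    with w the wavenumber [m^-1], nu0 the standard refractivity,
    and F_K the King correction factor."""
    w = 1.0 / (wavelength_nm * 1e-9)
    return (32.0 * math.pi**3 / 3.0) * (nu0 / N0)**2 * w**4 * f_king

# Illustrative round numbers for N2 (NOT the parametrized dispersion
# formulas used in the paper): nu0 ~ 2.98e-4, F_K ~ 1.034.
sigma = rayleigh_cross_section(500.0, 2.98e-4, 1.034)  # ~6.5e-31 m^2
```

Since $\sigma_R \propto w^4$, halving the wavelength increases the cross section sixteen-fold, which is why Rayleigh scattering grows so steeply toward the UV.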
The transmission of each atmospheric layer is computed with Beer’s law from all optical depths. We use disc-averaged quantities for our model atmosphere.
We use a present-day Earth vertical composition (Kaltenegger et al. (2010b) for SO$_2$; Lodders & Fegley, Jr. (1998) for Ar; and Cox (2000) for all other molecules) up to 130 km altitude, unless specified otherwise. Above 130 km, we assume constant mole fraction with height for all the molecules except for SO$_2$ which we fix at zero, and for N$_2$, O$_2$, and Ar which are described below. Below 100 km, we use the US 1976 atmosphere (COESA 1976) as the temperature-pressure profile. Above 100 km, the atmospheric density is sensitive to and increases with solar activity (Hedin 1987). We use the tabulated results of the MSIS-86 model, for solar maximum (Table A1.2) and solar minimum conditions (Table A1.1) published in Rees (1989), to derive the atmospheric density, pressure, and mole fractions for N$_2$, O$_2$, and Ar above 100 km. We run our simulations in two different spectral regimes. In the VIS-NIR, from 10000 to 25000 cm$^{-1}$ (400-1000 nm), we use a 0.05 cm$^{-1}$ grid, while in the UV from 25000 to 90000 cm$^{-1}$ (111-400 nm), we use a 0.5 cm$^{-1}$ grid. For displaying the results, the VIS-NIR and the UV simulations are binned on a 4 cm$^{-1}$ and a 20 cm$^{-1}$ grid, respectively. The choice in spectral resolution impacts predominantly the detectability of spectral features.
The column abundance of each species along a given ray is computed taking into account refraction, tracing specified rays from the observer back to their source. Each ray intersects the top of the model atmosphere with an impact parameter $b$, the projected radial distance of the ray to the center of the planetary disc as viewed by the observer. As rays travel through the planetary atmosphere, they are bent by refraction along paths defined by an invariant $L = (1 + \nu(r)) r \sin\theta(r)$, where both the zenith angle, $\theta(r)$, of the ray, and the refractivity, $\nu(r)$, are functions of the radial position of the ray with respect to the center of the planet. The refractivity is given by $$\label{refrac}
\nu(r) = \left( \frac{n(r)}{n_{0}} \right) \sum_{j} f_{j}(r) {\nu_{0}}_{j} = \left( \frac{n(r)}{n_{0}} \right) \nu_{0}(r) ,$$ where ${\nu_{0}}_{j}$ is the standard refractivity of the j$^{th}$ molecular species while $\nu_{0}(r)$ is that of the atmosphere, $n(r)$ is the local number density, and $f_{j}(r)$ is the mole fraction of the j$^{th}$ species. Here, we only consider the main contributors to the refractivity (N$_2$, O$_2$, Ar, and CO$_2$), which are well mixed in the Earth’s atmosphere, and fix the standard refractivity at all altitudes at its surface value.
If we assume a zero refractivity at the top of the atmosphere, the minimum radial position from the planet’s center, $r_{min}$, that can be reached by a ray is related to its impact parameter by $$\label{refpath}
L = (1 + \nu(r_{min})) r_{min} = R_{top} \sin\theta_{0} = b ,$$ where $R_{top}$ is the radial position of the top of the atmosphere and $\theta_{0}$ is the zenith angle of the ray at the top of the atmosphere. Note that $b$ is always larger than $r_{min}$, therefore the planet appears slightly larger to a distant observer. For each ray, we specify $r_{min}$, compute $\nu(r_{min})$, and obtain the corresponding impact parameter. Then, each ray is traced through the atmosphere every 0.1 km altitude increment, and column abundances, average mole fractions, as well as cumulative deflection along the ray are computed for each atmospheric layer (Johnson et al. 1995; Kaltenegger & Traub 2009).
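The relation between $r_{min}$ and $b$ can be illustrated with a toy refractivity profile. The sketch below assumes an isothermal atmosphere with an 8 km scale height (an assumption for illustration; the paper uses the US 1976 profile) together with the VIS-NIR standard refractivity quoted later in the text.

```python
import math

R_P = 6371.0   # planetary radius [km]
NU0 = 2.88e-4  # standard refractivity used for the VIS-NIR
H = 8.0        # assumed exponential scale height [km] (illustration only)

def refractivity(r_km):
    """nu(r) for a toy isothermal atmosphere: refractivity tracks the
    local density, nu(r) = nu0 * n(r)/n0 = nu0 * exp(-z/H)."""
    return NU0 * math.exp(-(r_km - R_P) / H)

def impact_parameter(r_min_km):
    """b = (1 + nu(r_min)) * r_min: the projected distance of the ray
    from the planet's center as seen by the distant observer."""
    return (1.0 + refractivity(r_min_km)) * r_min_km

# A ray whose deepest point sits at the 12.75 km refractive boundary:
b = impact_parameter(R_P + 12.75)
# b exceeds r_min by nu * r_min (a few tenths of a km here), so the
# planet appears slightly larger than the deepest altitude probed.
```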
We characterize the transmission spectrum of the exoplanet using effective atmospheric thickness, $\Delta z_{eff}$, the increase in planetary radius due to atmospheric absorption during primary eclipse. To compute $\Delta z_{eff}$ for an exoplanet, we first specify $r_{min}$ for $N$ rays spaced in constant altitude increments over the atmospheric region of interest. We then compute the transmission, $T$, and impact parameter, $b$, of each ray through the atmosphere, and finally use,
$$\begin{aligned}
R_{eff}^{2} = R_{top}^{2} - \sum_{i = 1}^{N} \left( \frac{T_{i+1} + T_{i}}{2} \right) (b_{i+1}^{2} - b_{i}^{2}) \label{reff} \\
R_{top} = R_{p} + \Delta z_{atm} \\
\Delta z_{eff} = R_{eff} - R_{p} , \end{aligned}$$
where $R_{eff}$ is the effective radius of the planet, $R_{top}$ is the radial position of the top of the atmosphere, $R_{p}$ is the planetary radius (6371 km), $\Delta z_{atm}$ is the thickness of the atmosphere, and $i$ denotes the ray considered. Note that $(N+1)$ refer to a ray that grazes the top of the atmosphere. The rays define $N$ projected annuli whose transmission is the average of the values at the borders of the annulus. The top of the atmosphere is defined where the transmission is 1, and no bending occurs ($b_{N+1} = R_{top}$). We choose R$_{top}$ where atmospheric absorption and refraction are negligible, and use 100 km in the VIS-NIR, and 200 km in the UV for $\Delta z_{atm}$.
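The effective-radius expression amounts to summing transmitted annulus areas. A self-contained sketch with a toy step-function transmission profile (not the paper's computed spectrum, and neglecting the refractive offset between $b$ and $r_{min}$):

```python
import math

R_P = 6371.0         # planetary radius [km]
R_TOP = R_P + 100.0  # top of the VIS-NIR model atmosphere [km]

def effective_radius(b, T):
    """R_eff^2 = R_top^2 - sum_i 0.5*(T[i+1]+T[i]) * (b[i+1]^2 - b[i]^2).
    b, T: impact parameters [km] and transmissions at the N+1 ray nodes,
    ordered from the deepest ray up to the one grazing R_top."""
    r2 = R_TOP**2
    for i in range(len(b) - 1):
        r2 -= 0.5 * (T[i + 1] + T[i]) * (b[i + 1]**2 - b[i]**2)
    return math.sqrt(r2)

# Toy transmission profile: opaque below 30 km, transparent above.
b = [R_P + z for z in range(0, 101, 5)]
T = [0.0 if (bi - R_P) < 30.0 else 1.0 for bi in b]
dz_eff = effective_radius(b, T) - R_P  # just under 30 km: the
# transition annulus is only half-opaque under midpoint averaging
```

With $T = 1$ everywhere the sum telescopes and $R_{eff}$ collapses to the deepest impact parameter, i.e. no atmospheric contribution, as expected.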
To first order, the total deflection of a ray through an atmosphere is proportional to the refractivity of the deepest atmospheric layer reached by a ray (Goldsmith 1963). The planetary atmosphere density increases exponentially with depth; therefore some of the deeper atmospheric regions can bend all rays away from the observer (see e.g. Sidis & Sari 2010, García Muñoz et al. 2012), and will not be sampled by the observations. At which altitudes this occurs depends on the angular extent of the star with respect to the planet. For an Earth-Sun analog, rays that reach a distant observer are deflected on average no more than 0.269$\degr$. We calculate that the lowest altitude reached by these grazing rays ranges from about 14.62 km at 115 nm, 13.86 km at 198 nm (shortest wavelength for which all used molecular standard refractivities are measured), to 12.95 km at 400 nm, and 12.75 km at 1000 nm.
As this altitude is relatively constant in the VIS-NIR, we incorporate this effect in our model by excluding atmospheric layers below 12.75 km. To determine the effective planetary radius, we choose standard refractivities representative of the lowest opacities within each spectral region: 2.88$\times10^{-4}$ for the VIS-NIR, and 3.00$\times10^{-4}$ for the UV. We use 80 rays from 12.75 to 100 km in the VIS-NIR, and 80 rays from 12.75 to 200 km altitude in the UV. In the UV, the lowest atmospheric layers have a negligible transmission; thus the exact exclusion value of the lowest atmospheric layer, calculated to be between 14.62 and 12.75 km, does not impact the modeled UV spectrum.
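The order of magnitude of this refractive cutoff can be sanity-checked with a standard thin-atmosphere estimate, in which a grazing ray's total bending is proportional to the refractivity at its deepest point, $\omega \approx \nu(r_{min})\sqrt{2\pi r_{min}/H}$. The exponential profile and 8 km scale height below are simplifying assumptions, not the US 1976 atmosphere used in the model.

```python
import math

R_P = 6371.0   # planetary radius [km]
NU0 = 2.88e-4  # standard surface refractivity (VIS-NIR value)
H = 8.0        # assumed exponential scale height [km] (illustration only)

def deflection_deg(z_km):
    """Approximate total bending [deg] of a ray whose deepest altitude is
    z, for an exponential refractivity profile (thin-atmosphere estimate
    omega ~ nu(r_min) * sqrt(2*pi*r_min/H)); proportional to nu(r_min),
    consistent with the first-order statement above."""
    r_min = R_P + z_km
    nu = NU0 * math.exp(-z_km / H)
    return math.degrees(nu * math.sqrt(2.0 * math.pi * r_min / H))

# Lowest altitude whose rays are bent by no more than 0.269 deg:
z = 0.0
while deflection_deg(z) > 0.269:
    z += 0.05
# z lands near ~12 km, within about 1 km of the 12.75 km cutoff quoted above.
```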
Results and discussion {#discussion}
======================
The increase in planetary radius due to the additional atmospheric absorption of a transiting Earth-analog is shown in Fig. \[spectrum\] from 115 to 1000 nm.
The individual contribution of Rayleigh scattering by N$_2$, O$_2$, Ar, and CO$_2$ is also shown, with and without the effect of refraction by these same species, respectively. The individual contribution of each species, shown both in the lower panel of Fig. \[spectrum\] and in Fig. \[absorbers\], is computed by sampling all atmospheric layers down to the surface, assuming the species considered is the only one with a non-zero opacity. In the absence of absorption, the effective atmospheric thickness is about 1.8 km, rather than zero, because the bending of the rays due to refraction makes the planet appear larger to a distant observer.
The spectral region shortward of 200 nm is shaped by O$_2$ absorption and depends on solar activity. Amongst the strongest O$_2$ features are two narrow peaks around 120.5 and 124.4 nm, which increase the planetary radius by 179-185 and 191-195 km, respectively. The strongest O$_2$ feature, the broad Schumann-Runge continuum, increases the planetary radius by more than 150 km from 134.4 to 165.5 nm, and peaks around 177-183 km. The Schumann-Runge bands, from 180 to 200 nm, create maximum variations of 30 km in the effective planetary radius. O$_2$ features can also be seen in the VIS-NIR, but these are much smaller than in the UV. Two narrow peaks around 687.0 and 760.3 nm increase the planetary radius to about 27 km, at the spectral resolution of the simulation.
Ozone absorbs in two different broad spectral regions in the UV-NIR increasing the planetary radius by 66 km around 255 nm (Hartley band), and 31 km around 602 nm (Chappuis band). Narrow ozone absorption, from 310 to 360 nm (Huggins band), produce variations in the effective planetary radius no larger than 2.5 km. Weak ozone bands are also present throughout the VIS-NIR: all features not specifically identified on the small VIS-NIR panel in Fig. \[spectrum\] are O$_3$ features, and show changes in the effective planetary radius on the order of 1 km.
NO$_2$ and H$_2$O are the only other molecular absorbers that create observable features in the spectrum (Fig. \[spectrum\], small VIS-NIR panel). NO$_2$ shows a very weak band system in the visible shortward of 510 nm, which produces less than 1 km variations in the effective planetary radius. H$_2$O features are observable only around 940 nm, where they increase the effective planetary radius to about 14.5 km.
Rayleigh scattering (Fig. \[spectrum\]) increases the planetary radius by about 68 km at 115 nm, 27 km at 400 nm, and 5.5 km at 1000 nm, and dominates the spectrum from about 360 to 510 nm where few molecules in the Earth’s atmosphere absorb, and refraction is not yet the dominant effect. In this spectral region, NO$_2$ is the dominant molecular absorber but its absorption is much weaker than Rayleigh scattering.
The lowest 12.75 km of the atmosphere is not accessible to a distant observer because no rays below that altitude can reach the observer in a transiting geometry for an Earth-Sun analog. Clouds located below that altitude do not influence the spectrum and can therefore be ignored in this geometry. Figure \[spectrum\] also shows that refraction influences the observable spectrum for wavelengths larger than 400 nm.
The combined effects of refraction and Rayleigh scattering increase the planetary radius by about 27 km at 400 nm, 16 km at 700 nm, and 14 km at 1000 nm. In the UV, the lowest 12.75 km of the atmosphere have negligible transmission, so this atmospheric region cannot be seen by a distant observer irrespective of refraction.
Both Rayleigh scattering and refraction can mask some of the signatures from molecular species. For instance, the individual contribution of the H$_2$O band in the 900-1000 nm region can increase the planetary radius by about 10 km. However, H$_2$O is concentrated in the lowest 10-15 km of the Earth’s atmosphere, the troposphere, hence its amount above 12.75 km increases the planetary radius above the refraction threshold only by about 1 km around 940 nm. The continuum around the visible O$_2$ features is due to the combined effects of Rayleigh scattering, ozone absorption, and refraction. It increases the effective planetary radius by about 21 and 17 km around 687.0 and 760.3 nm, respectively. The visible O$_2$ features add 6 and 10 km to the continuum values, at the spectral resolution of the simulation.
Figure \[data\] compares our effective atmospheric thickness model from Fig. \[spectrum\] with the one deduced by Vidal-Madjar et al. (2010) from Lunar eclipse data obtained in the penumbra. The contrasts of the two O$_2$ features are comparable with those in the data. However, there is a slight offset (about 3.5 km) and a tilt in the main O$_3$ absorption profile. Note that Vidal-Madjar et al. (2010) estimate that several sources of systematic errors and statistical uncertainties prevent them from obtaining absolute values better than $\pm$2.5 km. Also, we do not include limb darkening in our calculations. However, for a transiting Earth, the atmosphere eclipses an annular region on the Sun, whereas during Lunar eclipse observations, it eclipses a band across the Sun (see Fig. 4 in Vidal-Madjar et al. 2010), leading to different limb darkening effects.
Many molecules, such as CO$_2$, H$_2$O, CH$_4$, and CO, absorb ultraviolet radiation shortward of 200 nm (see Fig. \[absorbers\]). However, for Earth, the O$_2$ absorption dominates in this region and effectively masks their signatures. For planets without molecular oxygen, the far UV would still show strong absorption features that increase the planet’s effective radius by a higher percentage than in the VIS to NIR wavelength range.
Conclusions
===========
The UV-NIR spectrum (Fig. \[spectrum\]) of a transiting Earth-like exoplanet can be divided into 5 broad spectral regions characterized by the species or process that predominantly increases the planet’s radius: one O$_2$ region (115-200 nm), two O$_3$ regions (200-360 nm and 510-700 nm), one Rayleigh scattering region (360-510 nm), and one refraction region (700-1000 nm).
From 115 to 200 nm, O$_2$ absorption increases the effective planetary radius by up to 177-183 km, except for a narrow feature at 124.4 nm where it goes up to 191-195 km, depending on solar conditions. Ozone increases the effective planetary radius up to 66 km in the 200-360 nm region, and up to 31 km in the 510-700 nm region. From 360 to 510 nm, Rayleigh scattering predominantly increases the effective planetary radius up to 31 km. Above 700 nm, refraction and Rayleigh scattering increase the effective planetary radius to a minimum of 14 km, masking H$_2$O bands which only produce a further increase of at most 1 km. Narrow O$_2$ absorption bands around 687.0 and 760.3 nm both increase the effective planetary radius to 27 km, that is 6 and 10 km above the continuum, respectively. NO$_2$ only produces variations on the order of 1 km or less above the continuum between 400 and 510 nm.
One can use the NIR as a baseline against which the other regions in the UV-NIR can be compared to determine that an atmosphere exists. From the peak of the O$_2$ Schumann-Runge continuum in the FUV, to the NIR continuum, the effective planetary radius changes by about 166 km, which translates into a 2.6% change.
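The quoted 2.6% can be recovered directly from the rounded effective thicknesses given in the abstract (about 180 km at the FUV peak versus 14 km for the refraction-limited NIR continuum):

```python
R_P = 6371.0    # Earth radius [km]
dz_fuv = 180.0  # effective thickness near the Schumann-Runge peak [km]
dz_nir = 14.0   # NIR continuum thickness set by refraction [km]

# Fractional change in effective planetary radius, FUV peak vs. NIR:
percent = 100.0 * (dz_fuv - dz_nir) / (R_P + dz_nir)  # ~2.6 %
```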
The increase in effective radius of the Earth in the UV due to O$_2$ absorption shows that this wavelength range is very interesting for future space missions. This increase in effective planetary radius has to be traded off against the lower available stellar flux in the UV as well as the instrument sensitivity at different wavelengths for future mission studies. For habitable planets with atmospheres different from Earth’s, other molecules, such as CO$_2$, H$_2$O, CH$_4$, and CO, would dominate the ultraviolet absorption shortward of 200 nm, providing an interesting alternative to explore a planet’s atmosphere.
[10]{} Ackerman, M. 1971, Mesospheric Models and Related Experiments, G. Fiocco, Dordrecht:D. Reidel Publishing Company, 149 Au, J. W., & Brion, C. E. 1997, Chem. Phys., 218, 109 Batalha, N. M., Rowe, J. F., Bryson, S. T., et al. 2013, , 204, 24 Bates, D. R. 1984, , 32, 785 Bideau-Mehu, A., Guern, Y., Abjean, R., & Johannin-Gilles, A. 1973, Opt. Commun., 9, 432 Bideau-Mehu, A., Guern, Y., Abjean, R., & Johannin-Gilles, A. 1981, , 25, 395 Bogumil, K., Orphal, J., Homann, T., Voigt, S., Spietz, P., Fleischmann, O. C., Vogel, A., Hartmann, M., Bovensmann, H., Frerick, J., & Burrows, J. P. 2003, J. Photochem. Photobiol. A.: Chem., 157, 167 Brion, J. Chakir, A., Daumont, D., Malicet, J., & Parisse, C. 1993, Chem. Phys. Lett., 213, 610 Chan, W. F., Cooper, G., & Brion, C. E. 1993, Chem. Phys., 170, 123 Chen, F. Z., & Wu, C. Y. R. 2004, , 85, 195 COESA (Committee on Extension to the Standard Atmosphere) 1976, U.S. Standard Atmosphere, Washington, D.C.:Government Printing Office Cox, A. N. 2000, Allen’s Astrophysical Quantities, 4th ed., New York:AIP Des Marais, D. J., Harwit, M. O., Jucks, K. W., Kasting, J. F., Lin, D. N. C., Lunine, J. I., Schneider, J., Seager, S., Traub, W. A., & Woolf, N. J. 2010, Astrobiology, 2, 153 Ehrenreich, D., Tinetti, G., Lecavelier des Etangs, A., Vidal-Madjar, A., & Selsis, F. 2006, , 448, 379 Fally, S., Vandaele, A. C., Carleer, M., Hermans, C., Jenouvrier, A., Mérienne, M.-F., Coquart, B., & Colin, R. 2000, J. Mol. Spectrosc., 204, 10 García Muñoz, A., Zapatero Osorio, M. R., Barrena, R., Montañés-Rodríguez, P., Martín, E. L., & Pallé, E. 2012, , 755, 103 Goldsmith, W. W. 1963, , 2, 341 Griesmann, U., & Burnett, J. H. 1999, Optics Letters, 24, 1699 Hedelt, P., von Paris, P., Godolt, M., Gebauer, S., Grenfell, J. L., Rauer, H., Schreier, F., Selsis, F., & Trautmann, T. 2013, , submitted (arXiv:astro-ph/1302.5516) Hedin, A. E. 1987, , 92, 4649 Huestis, D. L. & Berkowitz, J. 
2010, Advances in Geosciences, 25, 229 Jenouvrier, A., Coquart, B., & Mérienne, M. F. 1996, J. Atmos. Chem., 25, 21 Johnson, D. G., Jucks, K. W., Traub, W. A., & Chance, K. V. 1995, , 100, 3091 Kaltenegger, L., & Traub, W. A. 2009, , 698, 519 Kaltenegger, L., Selsis, F., Friedlund, M., Lammer, H., et al. 2010a, Astrobiology, 10, 89 Kaltenegger, L., Henning, W. G., & Sasselov, D. D. 2010b, , 140, 1370 Lee, A. Y. T., Yung, Y. L., Cheng, B. M., Bahou, M., Chung, C.-Y., & Lee, Y. P. 2001, , 551, L93 Lodders, K., & Fegley, Jr., B. 1998, The Planetary Scientist’s Companion, New York:Oxford University Press Lu, H.-C., Chen, K.-K., Chen, H.-F., Cheng, B.-M., & Ogilvie, J. F. 2010, , 520, A19 Manatt, S. L., & Lane, A. L. 1993, , 50, 267 Mason, N. J., Gingell, J. M., Davies, J. A., Zhao, H., Walker, I. C., & Siggel, M. R. F. 1996, J. Phys. B: At. Mol. Opt. Phys., 29, 3075 Mérienne, M. F., Jenouvrier, A., & Coquart, B. 1995, J. Atmos. Chem., 20, 281 Mota, R., Parafita, R., Giuliani, A., Hubin-Franskin, M.-J., Lourenço, J. M. C., Garcia, G., Hoffmann, S. V., Mason, M. J., Ribeiro, P. A., Raposo, M., & Limão-Vieira, P. 2005, Chem. Phys. Lett., 416, 152 Nakayama, T., Kitamura, M. T., & Watanabe, K. 1959, J. Chem. Phys., 30, 1180 Pallé, E., Zapatero Osorio, M. R., Barrena, R., Montañés-Rodríguez, P., & Martín, E. L. 2009, , 459, 814 Rauer, H., Gebauer, S., von Paris, P., Cabrera, J., Godolt, M., Grenfell, J. L., Belu, A., Selsis, F., Hedelt, P., & Schreier, F. 2011, , 529, A8 Rees, M. H. 1989, Physics and Chemistry of the Upper Atmosphere, 1$^{st}$ ed., Cambridge:Cambridge University Press Schneider, W., Moortgat, G. K., Burrows, J. P., & Tyndall, G. S. 1987, J. Photochem. Photobiol., 40, 195 Selwyn, G., Podolske, J., & Johnston, H. S. 1977, , 4, 427 Sidis, O., & Sari, R. 2010, , 720, 904 Sneep, M. & Ubachs, W. 2005, , 92, 293 Snellen, I., de Kok, R., Le Poole, R., Brogi, M., & Birkby, J. 2013, , submitted (arXiv:astro-ph/1302.3251) Traub, W. A., & Jucks, K. W. 
2002, AGU Geophysical Monograph Ser. 130, Atmospheres in the Solar System: Comparative Aeronomy, M. Mendillo, 369 Traub, W. A., & Stier, M. T. 1976, , 15, 364 Vandaele, A. C., Hermans, C., & Fally, S. 2009, , 110, 2115 Vidal-Madjar, A., Arnold, A., Ehrenreich, D., Ferlet, R., Lecavelier des Etangs, A., Bouchy, F., et al. 2010, , 523, A57 Wu, C. Y. R., Yang, B. W., Chen, F. Z., Judge, D. L., Caldwell, J., & Trafton, L. M. 2000, , 145, 289 Yoshino, K., Cheung, A. S.-C., Esmond, J. R., Parkinson, W. H., Freeman, D. E., Guberman, S. L., Jenouvrier, A., Coquart, B., & Mérienne, M. F. 1988, , 36, 1469 Yoshino, K., Esmond, J. R., Cheung, A. S.-C., Freeman, D. E., & Parkinson, W. H. 1992, , 40, 185 Zelikoff, M., Watanabe, K., & Inn, E. C. Y. 1953, , 21, 1643
[cccc]{} Species & $T$ (K) & Wavelength range (nm) & Reference\
O$_{2}$ & 303 & 115.0 - 179.2 & Lu et al. (2010)\
& 300 & 179.2 - 203.0 & Yoshino et al. (1992)\
& 298 & 203.0 - 240.5 & Yoshino et al. (1988)\
& 298 & 240.5 - 294.0 & Fally et al. (2000)\
O$_{3}$ & 298 & 110.4 - 150.0 & Mason et al. (1996)\
& 298 & 150.0 - 194.0 & Ackerman (1971)\
& 218 & 194.0 - 230.0 & Brion et al. (1993)\
& 293, 273, 243, 223 & 230.0 - 1070.0 & Bogumil et al. (2003)\
NO$_{2}$ & 298 & 15.5 - 192.0 & Au & Brion (1997)\
& 298 & 192.0 - 200.0 & Nakayama et al. (1959)\
& 298 & 200.0 - 219.0 & Schneider et al. (1987)\
& 293 & 219.0 - 500.01 & Jenouvrier et al. (1996) + Mérienne et al. (1995)\
& 293, 273, 243, 223 & 500.01 - 930.1 & Bogumil et al. (2003)\
CO & 298 & 6.2 - 177.0 & Chan et al. (1993)\
CO$_{2}$ & 300 & 0.125 - 201.6 & Huestis & Berkowitz (2010)\
H$_{2}$O & 298 & 114.8 - 193.9 & Mota et al. (2005)\
CH$_{4}$ & 295 & 120.0 - 142.5 & Chen & Wu (2004)\
& 295 & 142.5 - 152.0 & Lee et al. (2001)\
N$_{2}$O & 298 & 108.2 - 172.5 & Zelikoff et al. (1953)\
& 302, 263, 243, 225, 194 & 172.5 - 240.0 & Selwyn et al. (1977)\
SO$_{2}$ & 293 & 106.1 - 171.95 & Manatt & Lane (1993)\
& 295 & 171.95 - 262.53 & Wu et al. (2000)\
& 358, 338, 318, 298 & 262.53 - 416.66 & Vandaele et al. (2009)\
[ccc]{} Species & Wavelength range (nm) & Reference\
N$_{2}$ & 149 - 189 & Griesmann & Burnett (1999)\
& 189 - 2060 & Bates (1984)\
O$_{2}$ & 198 - 546 & Bates (1984)\
Ar & 140 - 2100 & Bideau-Mehu et al. (1981)\
CO$_{2}$ & 180 - 1700 & Bideau-Mehu et al. (1973)\
N$_{2}$ & $\geq$ 200 & Bates (1984)\
O$_{2}$ & $\geq$ 200 & Bates (1984)\
Ar & all & Bates (1984)\
CO$_{2}$ & 180 - 1700 & Sneep & Ubachs (2005)\
[^1]: Hannelore Keller-Rudek, Geert K. Moortgat, MPI-Mainz-UV-VIS Spectral Atlas of Gaseous Molecules, www.atmosphere.mpg.de/spectral-atlas-mainz
|
{
"pile_set_name": "arxiv"
}
|
Avalancha de Éxitos
Avalancha de Éxitos (Avalanche of Hits) was Café Tacuba's third album. In 1996, two years after their acclaimed Re, the band had amassed enough new music to fill four CDs but could not winnow it down to a single album. Instead, they covered eight songs by other Spanish-language artists, ranging from the totally obscure to the well-known.
Track listing
All the covers are very different from the originals, and from each other. "Chilanga Banda" is a hip-hop piece in Mexican slang (built around the sound "ch"), while "Ojalá Que Llueva Café" is marked by fast-paced fiddle and rapid switching between chest register and head register, reminiscent of yodeling. This continues the precedent of constant genre-shifting that the band established with Re, their previous album.
Band members
Anónimo (Rubén Albarrán): vocals, guitar
Emmanuel del Real: keyboards, acoustic guitar, piano, programming, vocals, melodion
Joselo Rangel: electric guitar, acoustic guitar, vocals
Quique Rangel: bass guitar, electric upright bass, vocals
References
Category:Café Tacuba albums
Category:1996 albums
|
{
"pile_set_name": "wikipedia_en"
}
|
///
/// Copyright (c) 2016 Dropbox, Inc. All rights reserved.
///
/// Auto-generated by Stone, do not modify.
///
#import <Foundation/Foundation.h>
#import "DBSerializableProtocol.h"
@class DBTEAMPOLICIESSharedFolderJoinPolicy;
NS_ASSUME_NONNULL_BEGIN
#pragma mark - API Object
///
/// The `SharedFolderJoinPolicy` union.
///
/// Policy governing which shared folders a team member can join.
///
/// This class implements the `DBSerializable` protocol (serialize and
/// deserialize instance methods), which is required for all Obj-C SDK API route
/// objects.
///
@interface DBTEAMPOLICIESSharedFolderJoinPolicy : NSObject <DBSerializable, NSCopying>
#pragma mark - Instance fields
/// The `DBTEAMPOLICIESSharedFolderJoinPolicyTag` enum type represents the
/// possible tag states with which the `DBTEAMPOLICIESSharedFolderJoinPolicy`
/// union can exist.
typedef NS_CLOSED_ENUM(NSInteger, DBTEAMPOLICIESSharedFolderJoinPolicyTag){
/// Team members can only join folders shared by teammates.
DBTEAMPOLICIESSharedFolderJoinPolicyFromTeamOnly,
/// Team members can join any shared folder, including those shared by users
/// outside the team.
DBTEAMPOLICIESSharedFolderJoinPolicyFromAnyone,
/// (no description).
DBTEAMPOLICIESSharedFolderJoinPolicyOther,
};
/// Represents the union's current tag state.
@property (nonatomic, readonly) DBTEAMPOLICIESSharedFolderJoinPolicyTag tag;
#pragma mark - Constructors
///
/// Initializes union class with tag state of "from_team_only".
///
/// Description of the "from_team_only" tag state: Team members can only join
/// folders shared by teammates.
///
/// @return An initialized instance.
///
- (instancetype)initWithFromTeamOnly;
///
/// Initializes union class with tag state of "from_anyone".
///
/// Description of the "from_anyone" tag state: Team members can join any shared
/// folder, including those shared by users outside the team.
///
/// @return An initialized instance.
///
- (instancetype)initWithFromAnyone;
///
/// Initializes union class with tag state of "other".
///
/// @return An initialized instance.
///
- (instancetype)initWithOther;
- (instancetype)init NS_UNAVAILABLE;
#pragma mark - Tag state methods
///
/// Retrieves whether the union's current tag state has value "from_team_only".
///
/// @return Whether the union's current tag state has value "from_team_only".
///
- (BOOL)isFromTeamOnly;
///
/// Retrieves whether the union's current tag state has value "from_anyone".
///
/// @return Whether the union's current tag state has value "from_anyone".
///
- (BOOL)isFromAnyone;
///
/// Retrieves whether the union's current tag state has value "other".
///
/// @return Whether the union's current tag state has value "other".
///
- (BOOL)isOther;
///
/// Retrieves string value of union's current tag state.
///
/// @return A human-readable string representing the union's current tag state.
///
- (NSString *)tagName;
@end
#pragma mark - Serializer Object
///
/// The serialization class for the `DBTEAMPOLICIESSharedFolderJoinPolicy`
/// union.
///
@interface DBTEAMPOLICIESSharedFolderJoinPolicySerializer : NSObject
///
/// Serializes `DBTEAMPOLICIESSharedFolderJoinPolicy` instances.
///
/// @param instance An instance of the `DBTEAMPOLICIESSharedFolderJoinPolicy`
/// API object.
///
/// @return A json-compatible dictionary representation of the
/// `DBTEAMPOLICIESSharedFolderJoinPolicy` API object.
///
+ (nullable NSDictionary<NSString *, id> *)serialize:(DBTEAMPOLICIESSharedFolderJoinPolicy *)instance;
///
/// Deserializes `DBTEAMPOLICIESSharedFolderJoinPolicy` instances.
///
/// @param dict A json-compatible dictionary representation of the
/// `DBTEAMPOLICIESSharedFolderJoinPolicy` API object.
///
/// @return An instantiation of the `DBTEAMPOLICIESSharedFolderJoinPolicy`
/// object.
///
+ (DBTEAMPOLICIESSharedFolderJoinPolicy *)deserialize:(NSDictionary<NSString *, id> *)dict;
@end
NS_ASSUME_NONNULL_END
|
{
"pile_set_name": "github"
}
|
package network
// Copyright (c) Microsoft and contributors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
//
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Code generated by Microsoft (R) AutoRest Code Generator.
// Changes may cause incorrect behavior and will be lost if the code is regenerated.
import (
"context"
"github.com/Azure/go-autorest/autorest"
"github.com/Azure/go-autorest/autorest/azure"
"github.com/Azure/go-autorest/tracing"
"net/http"
)
// VpnSitesClient is the network Client
type VpnSitesClient struct {
BaseClient
}
// NewVpnSitesClient creates an instance of the VpnSitesClient client.
func NewVpnSitesClient(subscriptionID string) VpnSitesClient {
return NewVpnSitesClientWithBaseURI(DefaultBaseURI, subscriptionID)
}
// NewVpnSitesClientWithBaseURI creates an instance of the VpnSitesClient client.
func NewVpnSitesClientWithBaseURI(baseURI string, subscriptionID string) VpnSitesClient {
return VpnSitesClient{NewWithBaseURI(baseURI, subscriptionID)}
}
// CreateOrUpdate creates a VpnSite resource if it doesn't exist else updates the existing VpnSite.
// Parameters:
// resourceGroupName - the resource group name of the VpnSite.
// vpnSiteName - the name of the VpnSite being created or updated.
// vpnSiteParameters - parameters supplied to create or update VpnSite.
func (client VpnSitesClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, vpnSiteName string, vpnSiteParameters VpnSite) (result VpnSitesCreateOrUpdateFuture, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/VpnSitesClient.CreateOrUpdate")
defer func() {
sc := -1
if result.Response() != nil {
sc = result.Response().StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
}
req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, vpnSiteName, vpnSiteParameters)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "CreateOrUpdate", nil, "Failure preparing request")
return
}
result, err = client.CreateOrUpdateSender(req)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "CreateOrUpdate", result.Response(), "Failure sending request")
return
}
return
}
// CreateOrUpdatePreparer prepares the CreateOrUpdate request.
func (client VpnSitesClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, vpnSiteName string, vpnSiteParameters VpnSite) (*http.Request, error) {
pathParameters := map[string]interface{}{
"resourceGroupName": autorest.Encode("path", resourceGroupName),
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
"vpnSiteName": autorest.Encode("path", vpnSiteName),
}
const APIVersion = "2018-10-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
preparer := autorest.CreatePreparer(
autorest.AsContentType("application/json; charset=utf-8"),
autorest.AsPut(),
autorest.WithBaseURL(client.BaseURI),
autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/vpnSites/{vpnSiteName}", pathParameters),
autorest.WithJSON(vpnSiteParameters),
autorest.WithQueryParameters(queryParameters))
return preparer.Prepare((&http.Request{}).WithContext(ctx))
}
// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the
// http.Response Body if it receives an error.
func (client VpnSitesClient) CreateOrUpdateSender(req *http.Request) (future VpnSitesCreateOrUpdateFuture, err error) {
var resp *http.Response
resp, err = autorest.SendWithSender(client, req,
azure.DoRetryWithRegistration(client.Client))
if err != nil {
return
}
future.Future, err = azure.NewFutureFromResponse(resp)
return
}
// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always
// closes the http.Response Body.
func (client VpnSitesClient) CreateOrUpdateResponder(resp *http.Response) (result VpnSite, err error) {
err = autorest.Respond(
resp,
client.ByInspecting(),
azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated),
autorest.ByUnmarshallingJSON(&result),
autorest.ByClosing())
result.Response = autorest.Response{Response: resp}
return
}
// Delete deletes a VpnSite.
// Parameters:
// resourceGroupName - the resource group name of the VpnSite.
// vpnSiteName - the name of the VpnSite being deleted.
func (client VpnSitesClient) Delete(ctx context.Context, resourceGroupName string, vpnSiteName string) (result VpnSitesDeleteFuture, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/VpnSitesClient.Delete")
defer func() {
sc := -1
if result.Response() != nil {
sc = result.Response().StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
}
req, err := client.DeletePreparer(ctx, resourceGroupName, vpnSiteName)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "Delete", nil, "Failure preparing request")
return
}
result, err = client.DeleteSender(req)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "Delete", result.Response(), "Failure sending request")
return
}
return
}
// DeletePreparer prepares the Delete request.
func (client VpnSitesClient) DeletePreparer(ctx context.Context, resourceGroupName string, vpnSiteName string) (*http.Request, error) {
pathParameters := map[string]interface{}{
"resourceGroupName": autorest.Encode("path", resourceGroupName),
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
"vpnSiteName": autorest.Encode("path", vpnSiteName),
}
const APIVersion = "2018-10-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
preparer := autorest.CreatePreparer(
autorest.AsDelete(),
autorest.WithBaseURL(client.BaseURI),
autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/vpnSites/{vpnSiteName}", pathParameters),
autorest.WithQueryParameters(queryParameters))
return preparer.Prepare((&http.Request{}).WithContext(ctx))
}
// DeleteSender sends the Delete request. The method will close the
// http.Response Body if it receives an error.
func (client VpnSitesClient) DeleteSender(req *http.Request) (future VpnSitesDeleteFuture, err error) {
var resp *http.Response
resp, err = autorest.SendWithSender(client, req,
azure.DoRetryWithRegistration(client.Client))
if err != nil {
return
}
future.Future, err = azure.NewFutureFromResponse(resp)
return
}
// DeleteResponder handles the response to the Delete request. The method always
// closes the http.Response Body.
func (client VpnSitesClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) {
err = autorest.Respond(
resp,
client.ByInspecting(),
azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent),
autorest.ByClosing())
result.Response = resp
return
}
// Get retrieves the details of a VPNsite.
// Parameters:
// resourceGroupName - the resource group name of the VpnSite.
// vpnSiteName - the name of the VpnSite being retrieved.
func (client VpnSitesClient) Get(ctx context.Context, resourceGroupName string, vpnSiteName string) (result VpnSite, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/VpnSitesClient.Get")
defer func() {
sc := -1
if result.Response.Response != nil {
sc = result.Response.Response.StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
}
req, err := client.GetPreparer(ctx, resourceGroupName, vpnSiteName)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "Get", nil, "Failure preparing request")
return
}
resp, err := client.GetSender(req)
if err != nil {
result.Response = autorest.Response{Response: resp}
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "Get", resp, "Failure sending request")
return
}
result, err = client.GetResponder(resp)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "Get", resp, "Failure responding to request")
}
return
}
// GetPreparer prepares the Get request.
func (client VpnSitesClient) GetPreparer(ctx context.Context, resourceGroupName string, vpnSiteName string) (*http.Request, error) {
pathParameters := map[string]interface{}{
"resourceGroupName": autorest.Encode("path", resourceGroupName),
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
"vpnSiteName": autorest.Encode("path", vpnSiteName),
}
const APIVersion = "2018-10-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
preparer := autorest.CreatePreparer(
autorest.AsGet(),
autorest.WithBaseURL(client.BaseURI),
autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/vpnSites/{vpnSiteName}", pathParameters),
autorest.WithQueryParameters(queryParameters))
return preparer.Prepare((&http.Request{}).WithContext(ctx))
}
// GetSender sends the Get request. The method will close the
// http.Response Body if it receives an error.
func (client VpnSitesClient) GetSender(req *http.Request) (*http.Response, error) {
return autorest.SendWithSender(client, req,
azure.DoRetryWithRegistration(client.Client))
}
// GetResponder handles the response to the Get request. The method always
// closes the http.Response Body.
func (client VpnSitesClient) GetResponder(resp *http.Response) (result VpnSite, err error) {
err = autorest.Respond(
resp,
client.ByInspecting(),
azure.WithErrorUnlessStatusCode(http.StatusOK),
autorest.ByUnmarshallingJSON(&result),
autorest.ByClosing())
result.Response = autorest.Response{Response: resp}
return
}
// List lists all the VpnSites in a subscription.
func (client VpnSitesClient) List(ctx context.Context) (result ListVpnSitesResultPage, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/VpnSitesClient.List")
defer func() {
sc := -1
if result.lvsr.Response.Response != nil {
sc = result.lvsr.Response.Response.StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
}
result.fn = client.listNextResults
req, err := client.ListPreparer(ctx)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "List", nil, "Failure preparing request")
return
}
resp, err := client.ListSender(req)
if err != nil {
result.lvsr.Response = autorest.Response{Response: resp}
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "List", resp, "Failure sending request")
return
}
result.lvsr, err = client.ListResponder(resp)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "List", resp, "Failure responding to request")
}
return
}
// ListPreparer prepares the List request.
func (client VpnSitesClient) ListPreparer(ctx context.Context) (*http.Request, error) {
pathParameters := map[string]interface{}{
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
const APIVersion = "2018-10-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
preparer := autorest.CreatePreparer(
autorest.AsGet(),
autorest.WithBaseURL(client.BaseURI),
autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/vpnSites", pathParameters),
autorest.WithQueryParameters(queryParameters))
return preparer.Prepare((&http.Request{}).WithContext(ctx))
}
// ListSender sends the List request. The method will close the
// http.Response Body if it receives an error.
func (client VpnSitesClient) ListSender(req *http.Request) (*http.Response, error) {
return autorest.SendWithSender(client, req,
azure.DoRetryWithRegistration(client.Client))
}
// ListResponder handles the response to the List request. The method always
// closes the http.Response Body.
func (client VpnSitesClient) ListResponder(resp *http.Response) (result ListVpnSitesResult, err error) {
err = autorest.Respond(
resp,
client.ByInspecting(),
azure.WithErrorUnlessStatusCode(http.StatusOK),
autorest.ByUnmarshallingJSON(&result),
autorest.ByClosing())
result.Response = autorest.Response{Response: resp}
return
}
// listNextResults retrieves the next set of results, if any.
func (client VpnSitesClient) listNextResults(ctx context.Context, lastResults ListVpnSitesResult) (result ListVpnSitesResult, err error) {
req, err := lastResults.listVpnSitesResultPreparer(ctx)
if err != nil {
return result, autorest.NewErrorWithError(err, "network.VpnSitesClient", "listNextResults", nil, "Failure preparing next results request")
}
if req == nil {
return
}
resp, err := client.ListSender(req)
if err != nil {
result.Response = autorest.Response{Response: resp}
return result, autorest.NewErrorWithError(err, "network.VpnSitesClient", "listNextResults", resp, "Failure sending next results request")
}
result, err = client.ListResponder(resp)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "listNextResults", resp, "Failure responding to next results request")
}
return
}
// ListComplete enumerates all values, automatically crossing page boundaries as required.
func (client VpnSitesClient) ListComplete(ctx context.Context) (result ListVpnSitesResultIterator, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/VpnSitesClient.List")
defer func() {
sc := -1
if result.Response().Response.Response != nil {
sc = result.page.Response().Response.Response.StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
}
result.page, err = client.List(ctx)
return
}
// ListByResourceGroup lists all the vpnSites in a resource group.
// Parameters:
// resourceGroupName - the resource group name of the VpnSite.
func (client VpnSitesClient) ListByResourceGroup(ctx context.Context, resourceGroupName string) (result ListVpnSitesResultPage, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/VpnSitesClient.ListByResourceGroup")
defer func() {
sc := -1
if result.lvsr.Response.Response != nil {
sc = result.lvsr.Response.Response.StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
}
result.fn = client.listByResourceGroupNextResults
req, err := client.ListByResourceGroupPreparer(ctx, resourceGroupName)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "ListByResourceGroup", nil, "Failure preparing request")
return
}
resp, err := client.ListByResourceGroupSender(req)
if err != nil {
result.lvsr.Response = autorest.Response{Response: resp}
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "ListByResourceGroup", resp, "Failure sending request")
return
}
result.lvsr, err = client.ListByResourceGroupResponder(resp)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "ListByResourceGroup", resp, "Failure responding to request")
}
return
}
// ListByResourceGroupPreparer prepares the ListByResourceGroup request.
func (client VpnSitesClient) ListByResourceGroupPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) {
pathParameters := map[string]interface{}{
"resourceGroupName": autorest.Encode("path", resourceGroupName),
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
const APIVersion = "2018-10-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
preparer := autorest.CreatePreparer(
autorest.AsGet(),
autorest.WithBaseURL(client.BaseURI),
autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/vpnSites", pathParameters),
autorest.WithQueryParameters(queryParameters))
return preparer.Prepare((&http.Request{}).WithContext(ctx))
}
// ListByResourceGroupSender sends the ListByResourceGroup request. The method will close the
// http.Response Body if it receives an error.
func (client VpnSitesClient) ListByResourceGroupSender(req *http.Request) (*http.Response, error) {
return autorest.SendWithSender(client, req,
azure.DoRetryWithRegistration(client.Client))
}
// ListByResourceGroupResponder handles the response to the ListByResourceGroup request. The method always
// closes the http.Response Body.
func (client VpnSitesClient) ListByResourceGroupResponder(resp *http.Response) (result ListVpnSitesResult, err error) {
err = autorest.Respond(
resp,
client.ByInspecting(),
azure.WithErrorUnlessStatusCode(http.StatusOK),
autorest.ByUnmarshallingJSON(&result),
autorest.ByClosing())
result.Response = autorest.Response{Response: resp}
return
}
// listByResourceGroupNextResults retrieves the next set of results, if any.
func (client VpnSitesClient) listByResourceGroupNextResults(ctx context.Context, lastResults ListVpnSitesResult) (result ListVpnSitesResult, err error) {
req, err := lastResults.listVpnSitesResultPreparer(ctx)
if err != nil {
return result, autorest.NewErrorWithError(err, "network.VpnSitesClient", "listByResourceGroupNextResults", nil, "Failure preparing next results request")
}
if req == nil {
return
}
resp, err := client.ListByResourceGroupSender(req)
if err != nil {
result.Response = autorest.Response{Response: resp}
return result, autorest.NewErrorWithError(err, "network.VpnSitesClient", "listByResourceGroupNextResults", resp, "Failure sending next results request")
}
result, err = client.ListByResourceGroupResponder(resp)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "listByResourceGroupNextResults", resp, "Failure responding to next results request")
}
return
}
// ListByResourceGroupComplete enumerates all values, automatically crossing page boundaries as required.
func (client VpnSitesClient) ListByResourceGroupComplete(ctx context.Context, resourceGroupName string) (result ListVpnSitesResultIterator, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/VpnSitesClient.ListByResourceGroup")
defer func() {
sc := -1
if result.Response().Response.Response != nil {
sc = result.page.Response().Response.Response.StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
}
result.page, err = client.ListByResourceGroup(ctx, resourceGroupName)
return
}
// UpdateTags updates VpnSite tags.
// Parameters:
// resourceGroupName - the resource group name of the VpnSite.
// vpnSiteName - the name of the VpnSite being updated.
// vpnSiteParameters - parameters supplied to update VpnSite tags.
func (client VpnSitesClient) UpdateTags(ctx context.Context, resourceGroupName string, vpnSiteName string, vpnSiteParameters TagsObject) (result VpnSitesUpdateTagsFuture, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/VpnSitesClient.UpdateTags")
defer func() {
sc := -1
if result.Response() != nil {
sc = result.Response().StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
}
req, err := client.UpdateTagsPreparer(ctx, resourceGroupName, vpnSiteName, vpnSiteParameters)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "UpdateTags", nil, "Failure preparing request")
return
}
result, err = client.UpdateTagsSender(req)
if err != nil {
err = autorest.NewErrorWithError(err, "network.VpnSitesClient", "UpdateTags", result.Response(), "Failure sending request")
return
}
return
}
// UpdateTagsPreparer prepares the UpdateTags request.
func (client VpnSitesClient) UpdateTagsPreparer(ctx context.Context, resourceGroupName string, vpnSiteName string, vpnSiteParameters TagsObject) (*http.Request, error) {
pathParameters := map[string]interface{}{
"resourceGroupName": autorest.Encode("path", resourceGroupName),
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
"vpnSiteName": autorest.Encode("path", vpnSiteName),
}
const APIVersion = "2018-10-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
preparer := autorest.CreatePreparer(
autorest.AsContentType("application/json; charset=utf-8"),
autorest.AsPatch(),
autorest.WithBaseURL(client.BaseURI),
autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/vpnSites/{vpnSiteName}", pathParameters),
autorest.WithJSON(vpnSiteParameters),
autorest.WithQueryParameters(queryParameters))
return preparer.Prepare((&http.Request{}).WithContext(ctx))
}
// UpdateTagsSender sends the UpdateTags request. The method will close the
// http.Response Body if it receives an error.
func (client VpnSitesClient) UpdateTagsSender(req *http.Request) (future VpnSitesUpdateTagsFuture, err error) {
var resp *http.Response
resp, err = autorest.SendWithSender(client, req,
azure.DoRetryWithRegistration(client.Client))
if err != nil {
return
}
future.Future, err = azure.NewFutureFromResponse(resp)
return
}
// UpdateTagsResponder handles the response to the UpdateTags request. The method always
// closes the http.Response Body.
func (client VpnSitesClient) UpdateTagsResponder(resp *http.Response) (result VpnSite, err error) {
err = autorest.Respond(
resp,
client.ByInspecting(),
azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated),
autorest.ByUnmarshallingJSON(&result),
autorest.ByClosing())
result.Response = autorest.Response{Response: resp}
return
}
|
{
"pile_set_name": "github"
}
|
2020 Australian S5000 Championship
The 2020 Australian S5000 Championship is planned to be the inaugural season of the Australian S5000 Championship, following a series of exhibition races the previous year. The series will be sanctioned by Motorsport Australia and promoted by the Australian Racing Group as part of the 2020 Shannons Nationals Motor Racing Series. The season is currently scheduled to comprise six rounds, beginning in March at the Albert Park Circuit and ending on 13 September at Sandown Raceway.
Teams and drivers
The following teams and drivers are under contract to compete in the 2020 championship:
Race calendar
The proposed 2020 calendar was released on 29 October 2019, with six confirmed rounds, plus one non-championship round. All rounds will be held in Australia. Final scheduling of race dates is yet to be determined. The date for the inaugural "Bathurst International" event was revealed on 15 January 2020.
References
S5000 Championship
Australian S5000 Championship
|
{
"pile_set_name": "wikipedia_en"
}
|
Direct-shift gearbox
A direct-shift gearbox (), commonly abbreviated to DSG, is an electronically controlled dual-clutch multiple-shaft gearbox in a transaxle design, with automatic clutch operation and fully automatic or semi-manual gear selection. The first actual dual-clutch transmissions were derived from Porsche's in-house development for its 962 racing cars in the 1980s.
In simple terms, a DSG automates two separate "manual" gearboxes (and clutches) contained within one housing and working as one unit. It was designed by BorgWarner and is licensed to the Volkswagen Group, with support by IAV GmbH. By using two independent clutches, a DSG can achieve faster shift times and eliminates the torque converter of a conventional epicyclic automatic transmission.
Overview
Transverse DSG
At the time of launch in 2003, it became the world's first automated dual-clutch transmission in a series-production car, in the German-market Volkswagen Golf Mk4 R32, and shortly afterwards worldwide, in the original Audi TT 3.2 and the 2004+ New Beetle TDI. For the first few years of production, this original DSG transmission was only available in transversely oriented front-engine, front-wheel-drive and Haldex Traction-based four-wheel-drive vehicle layouts.
The first DSG transaxle that went into production for the Volkswagen Group mainstream marques had six forward speeds (and one reverse) and used wet/submerged multi-plate clutch packs (Volkswagen Group internal code: DQ250, parts code prefix: 02E). It has been paired to engines with up to of torque. The two-wheel-drive version weighs . It is manufactured at Volkswagen Group's Kassel plant, with a daily production output of 1,500 units.
At the start of 2008, another world-first seven-speed DSG transaxle (Volkswagen Group internal code: DQ200, parts code prefix: 0AM) became available. It differs from the six-speed DSG, in that it uses two single-plate dry clutches (of similar diameter). This clutch pack was designed by LuK Clutch Systems GmbH. This seven-speed DSG is used in smaller front-wheel-drive cars with smaller-displacement engines with lower torque outputs, such as the latest Volkswagen Golf, Volkswagen Polo Mk5, and the new SEAT Ibiza. It has been paired to engines with up to . It has considerably less oil capacity than the six-speed DQ250; this new DQ200 uses just of transmission fluid.
In September 2010, VW launched a new seven-speed DSG built to support up to , the DQ500.
Audi longitudinal DSG
In late 2008, an all-new seven-speed longitudinal S tronic version of the DSG transaxle went into series production (Volkswagen Group internal code: DL501, parts code prefix: 0B5). Initially, from early 2009, it is only used in certain Audi cars, and only with longitudinally mounted engines. Like the original six-speed DSG, it features a concentric dual wet multi-plate clutch. However, this particular variant uses notably more plates – the larger outer clutch (for the odd-numbered gears) uses 10 plates, whereas the smaller inner clutch (driving even-numbered gears and reverse) uses 12 plates. Another notable change over the original transverse DSGs is the lubrication system – Audi now utilise two totally separate oil circuits. One oil circuit, consisting of , lubricates the hydraulic clutches and mechatronics with fully synthetic specialist automatic transmission fluid (ATF), whilst the other oil circuit lubricates the gear trains and front and centre differentials with of conventional hypoid gear oil. This dual circuit lubrication is aimed at increasing overall reliability, due to eliminating cross-contamination of debris and wear particles. It has a torque handling limit of up to , and engine power outputs of up to . It has a total mass, including all lubricants and the dual-mass flywheel of .
This was initially available in their quattro all-wheel-drive variants, and is very similar to the new ZF Friedrichshafen-supplied Porsche Doppel-Kupplung (PDK).
List of DSG variants
Operational introduction
The internal combustion engine drives two clutch packs. The outer clutch pack drives gears 1, 3, 5 (and 7 when fitted), and reverse – the outer clutch pack has a larger diameter compared to the inner clutch, and can therefore handle greater torque loadings. The inner clutch pack drives gears 2, 4, and 6. Instead of a standard large dry single-plate clutch, each clutch pack for the six-speed DSG is a collection of four small wet interleaved clutch plates (similar to a motorcycle wet multi-plate clutch). Due to space constraints, the two clutch assemblies are concentric, and the shafts within the gearbox are hollow and also concentric. Because the alternate clutch pack's gear-sets can be pre-selected (predictive shifts enabled via the unloaded section of the gearbox), un-powered time while shifting is avoided because the transmission of torque is simply switched from one clutch-pack to the other. While the DSG has one of the fastest shift times on the market, the claim that it takes only about 8 milliseconds to upshift is neither supported by third-party data nor made by the manufacturer.
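The odd/even shaft arrangement and pre-selection principle described above can be sketched in a few lines of Python (a hypothetical, highly simplified model for illustration only — not Volkswagen's actual control logic):

```python
# Simplified illustration of dual-clutch pre-selection (hypothetical model).
# Odd gears sit on the shaft driven by the outer clutch (K1), even gears on
# the shaft driven by the inner clutch (K2); the next gear is always engaged
# on the idle shaft, so a shift is just a handover between clutch packs.

ODD_SHAFT = {1, 3, 5, 7}   # outer clutch (K1)
EVEN_SHAFT = {2, 4, 6}     # inner clutch (K2)

class DualClutchBox:
    def __init__(self):
        self.engaged_gear = 1        # gear currently transmitting torque
        self.preselected_gear = 2    # gear ready on the idle shaft

    def shaft(self, gear):
        return "K1" if gear in ODD_SHAFT else "K2"

    def upshift(self):
        # Handover: the pre-selected gear takes over, and the gear above it
        # is pre-selected on the now-idle shaft.
        self.engaged_gear, self.preselected_gear = (
            self.preselected_gear, min(self.preselected_gear + 1, 7))

box = DualClutchBox()
box.upshift()
print(box.engaged_gear, box.shaft(box.engaged_gear))  # 2 K2
```

Because the target gear is already engaged on the unloaded shaft, the "shift" itself involves no gear movement at all, which is why torque interruption is minimal.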
DSG controls
The direct-shift gearbox uses a floor-mounted transmission shift lever, very similar to that of a conventional automatic transmission. The lever is operated in a straight 'fore and aft' plane (without any 'dog-leg' offset movements), and uses an additional button to help prevent an inadvertent selection of an inappropriate shift lever position.
P
P position of the floor-mounted gear shift lever means that the transmission is set in park. Both clutch packs are fully disengaged, all gear-sets are disengaged, and a solid mechanical transmission lock is applied to the crown wheel of the DSG's internal differential. This position must only be used when the motor vehicle is stationary. Furthermore, this is the position which must be set on the shift lever before the vehicle ignition key can be removed.
N
N position of the floor-mounted shift lever means that the transmission is in neutral. Similar to P above, both clutch packs and all gear-sets are fully disengaged; however, the parking lock is also disengaged.
D mode
Whilst the motor vehicle is stationary and in neutral (N), the driver can select D for drive (after first pressing the foot brake pedal). First gear is selected on the first shaft, and the outer clutch K1 moves to the start of its bite point. At the same time, reverse gear is pre-selected on the alternate shaft, as the gearbox does not yet know whether the driver wants to move forwards or backwards. When the driver releases the brake pedal, the K1 clutch pack increases its clamping force, allowing first gear to take up the drive and transferring torque from the engine through the transmission to the drive shafts and road wheels, so the vehicle moves forward; reverse is then de-selected, and second gear is pre-selected on the alternate shaft, with its clutch pack (K2) ready to engage. Depressing the accelerator pedal increases the clutch engagement and the forward vehicle speed. As the vehicle accelerates, the transmission's computer determines when second gear (which is connected to the second, inner clutch) should take over. Depending on the vehicle speed and the amount of engine power being requested by the driver (determined by the position of the throttle pedal), the DSG then up-shifts. During this sequence, the DSG disengages the outer clutch whilst simultaneously engaging the inner clutch, so that all power from the engine now flows through the second shaft, completing the shift. This sequence completes in a fraction of a second (aided by pre-selection), can happen even with full throttle opening, and as a result there is minimal power loss.
Once the vehicle has completed the shift to second gear, the first gear is immediately de-selected, and third gear (being on the same shaft as 1st and 5th) is pre-selected, and is pending. Once the time comes to shift into 3rd, the second clutch disengages and the first clutch re-engages. This method of operation continues in the same manner for the remaining forward gears.
Downshifting is similar to up-shifting but in reverse order, and is slower, at 600 milliseconds, because the engine's Electronic Control Unit (ECU) needs to 'blip' the throttle so that the engine crankshaft speed can match the appropriate gear shaft speed. The car's computer senses that the car is slowing down, or that more power is required (during acceleration), engages a lower gear on the shaft not in use, and then completes the downshift.
The actual shift points are determined by the DSG's transmission ECU, which commands a hydro-mechanical unit. The transmission ECU, combined with the hydro-mechanical unit, are collectively called a mechatronics unit or module. Because the DSG's ECU uses fuzzy logic, the operation of the DSG is said to be adaptive; that is, the DSG will "learn" how the user drives the car, and will progressively tailor the shift points accordingly to suit the habits of the driver.
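The adaptive behaviour described above can be illustrated with a toy model. This is purely hypothetical: the real mechatronics unit uses fuzzy logic, and the RPM figures and the moving-average rule below are invented for the sketch.

```python
# Hypothetical sketch of an adaptive shift point. The controller keeps a
# running estimate of the driver's "style" (0 = gentle, 1 = aggressive)
# from observed throttle positions, and interpolates the upshift RPM
# between an economy point and a sporty point accordingly.

class AdaptiveShiftPoint:
    def __init__(self, base_rpm=2000, max_rpm=6000, alpha=0.1):
        self.base_rpm = base_rpm   # economy-oriented upshift point (assumed)
        self.max_rpm = max_rpm     # sporty upshift point (assumed)
        self.style = 0.0           # learned driving style
        self.alpha = alpha         # adaptation rate

    def observe_throttle(self, throttle):
        # Exponential moving average of throttle usage, throttle in [0, 1].
        self.style += self.alpha * (throttle - self.style)

    def upshift_rpm(self):
        return self.base_rpm + self.style * (self.max_rpm - self.base_rpm)

ctrl = AdaptiveShiftPoint()
for _ in range(50):
    ctrl.observe_throttle(0.9)    # sustained hard throttle
print(round(ctrl.upshift_rpm()))  # upshift point drifts toward the sporty end
```

A gentle driver would pull the learned style back down, lowering the upshift point again — mirroring how the DSG "progressively tailors" its shift points.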
In the vehicle instrument display, between the speedometer and tachometer, the available shift-lever positions are shown, the current position of the shift-lever is highlighted (emboldened), and the current gear ratio in use is also displayed as a number.
Under "normal", progressive and linear acceleration and deceleration, the DSG shifts in a sequential manner; i.e., under acceleration: 1st → 2nd → 3rd → 4th → 5th → 6th, and the same sequence reversed for deceleration. However, the DSG can also skip the normal sequential method, by missing gears, and shift two or more gears. This is most apparent if the car is being driven at sedate speeds in one of the higher gears with a light throttle opening, and the accelerator pedal is then pressed down, engaging the kick-down function. During kick-down, the DSG will skip gears, shifting directly to the most appropriate gear depending on speed and throttle opening. This kick-down may be engaged by any increased accelerator pedal opening, and is completely independent of the additional resistance to be found when the pedal is pressed fully to the floor, which will activate a similar kick-down function when in Manual operation mode. The seven-speed unit in the 2007 Audi variants will not automatically shift to 6th gear; rather, it stays at 5th to keep power available at a high RPM while cruising.
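Gear-skipping kick-down amounts to picking the lowest (most accelerative) gear that will not over-rev the engine at the current road speed. A minimal sketch, where all the ratios and the redline figure are invented for illustration:

```python
# Illustrative kick-down gear selection (hypothetical numbers).

REDLINE = 6500  # rpm (assumed)
# Engine rpm per km/h of road speed for each gear (made-up ratios).
RPM_PER_KMH = {1: 120, 2: 75, 3: 52, 4: 40, 5: 33, 6: 28}

def kickdown_gear(speed_kmh):
    """Return the lowest gear usable at this speed without exceeding the redline."""
    for gear in sorted(RPM_PER_KMH):
        if RPM_PER_KMH[gear] * speed_kmh <= REDLINE:
            return gear
    return max(RPM_PER_KMH)  # fall back to top gear at very high speeds

print(kickdown_gear(60))
```

Note how the function can jump directly from, say, sixth to second without passing through the intermediate gears — the same gear-skipping behaviour the kick-down function exhibits.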
When the floor-mounted gear selector lever is in position D, the DSG works in fully automatic mode, with emphasis placed on gear shifts programmed to deliver maximum fuel economy. That means that shifts will change up and down very early in the rev-range. As an example, on the Volkswagen Golf Mk5 GTI, sixth gear will be engaged around , when initially using the DSG transmission with the default ECU adaptation; although with an "aggressive" or "sporty" driving style, the adaptive shift pattern will increase the vehicle speed at which sixth gear engages.
S mode
The floor selector lever also has an S position. When S is selected, sport mode is activated in the DSG. Sport mode still functions as a fully automatic mode, identical in operation to D mode, but upshifts and downshifts are made much higher up the engine rev-range. This aids a more sporty driving manner, by utilising considerably more of the available engine power, and also maximising engine braking. However, this mode does have a detrimental effect on the vehicle's fuel consumption, when compared to D mode. This mode may not be ideal when wanting to drive in a sedate manner, nor when road conditions are very slippery due to ice, snow or torrential rain, because loss of tyre traction may be experienced: wheel spin during acceleration, and road-wheel locking during downshifts at high engine rpm under a closed throttle. On 4motion or quattro-equipped vehicles this may be partially offset by the drivetrain maintaining full-time engagement of the rear differential in S mode, so power distribution under loss of front-wheel traction may be marginally improved.
S is highlighted in the instrument display, and like D mode, the currently used gear ratio is also displayed as a number.
R
R position of the floor-mounted shift lever means that the transmission is in reverse. This functions in a similar way to D, but there is just one reverse gear. When selected, R is highlighted in the instrument display.
Manual mode
Additionally, the floor shift lever also has another plane of operation, for manual mode, with spring-loaded + and − positions. This plane is selected by moving the stick away from the driver (in vehicles with the driver's seat on the right, the lever is pushed to the left, and in left-hand drive cars, the stick is pushed to the right) when in D mode only. When this plane is selected, the DSG can now be controlled like a manual gearbox, albeit only under a sequential shift pattern.
In most (VW) applications, the readout in the instrument display changes to 6 5 4 3 2 1, and just like the automatic modes, the currently used gear ratio is highlighted or emboldened. In other versions (e.g., on the Audi TT) the display shows just M followed by the gear currently selected; e.g., M1, M2, etc.
To change up a gear, the lever is pushed forward (against a spring pressure) towards the +, and to change down, the lever is pulled rearward towards the −. The DSG transmission can now be operated with the gear changes being (primarily) determined by the driver. This method of operation is commonly called tiptronic. In the interests of engine preservation, when accelerating in Manual/tiptronic mode, the DSG will still automatically change up just before the redline, and when decelerating, it will change down automatically at very low revs, just before the engine idle speed (tickover). Furthermore, if the driver calls for a gear when it is not appropriate (e.g., requesting a downshift when engine speed is near the redline) the DSG will not change to the driver's requested gear.
Current variants of the DSG will still downshift to the lowest possible gear ratio when the kick-down button is activated during full throttle whilst in manual mode. In Manual mode this kick-down is only activated by an additional button at the bottom of the accelerator pedal travel; unless this is pressed the DSG will not downshift, and will simply perform a full-throttle acceleration in whatever gear was previously being utilised.
Paddle shifters
Initially available on certain high-powered cars, and those with a "sporty" trim level – such as those using the 2.0 T FSI and 3.2/3.6 VR6 engines – steering wheel-mounted paddle shifters were available. However, these are now being offered (either as a standard inclusive fitment, or as a factory optional extra) on virtually all DSG-equipped cars, throughout all model ranges, including lesser power output applications, such as the 105 PS Volkswagen Golf Plus.
These operate in an identical manner to the floor-mounted shift lever when it is placed across the gate in manual mode. The paddle shifters have two distinct advantages: the driver can safely keep both hands on the steering wheel when using the Manual/tiptronic mode; and the driver can temporarily manually override either of the automatic programmes (D or S), and gain instant manual control of the DSG transmission (within the above described constraints).
If the paddle-shift activated manual override of one of the automatic modes (D or S) is used intermittently the DSG transmission will default back to the previously selected automatic mode after a predetermined duration of inactivity of the paddles, or when the vehicle becomes stationary. Alternatively, should the driver wish to immediately revert to fully automatic control, this can be done by activating and holding the + paddle for at least two seconds.
Advantages and disadvantages
Advantages
Better fuel economy (up to 15% improvement) than a conventional planetary-geared automatic transmission (due to lower parasitic losses from oil churning), and better than the manual transmission in some models;
No loss of torque through the transmission from the engine to the driving wheels during gear shifts;
Short up-shift time of 8 milliseconds when shifting to a gear the alternate gear shaft has preselected;
Smooth gear-shift operations;
Consistent down-shift time of 600 milliseconds, regardless of throttle or operational mode;
Disadvantages
Unreliable: By design, it is not possible to make it as reliable as a conventional torque-converter automatic transmission. The slipping clutch mechanism has a limited lifespan;
Marginally worse overall mechanical efficiency compared to a conventional manual transmission, especially on wet-clutch variants (due to electronics and hydraulic systems);
Expensive specialist transmission fluids/lubricants with dedicated additives are required, which need regular changes;
Relatively expensive to manufacture, and therefore increases new vehicle purchase price;
Relatively lengthy shift time when shifting to a gear ratio which the transmission control unit did not anticipate (around 1100 ms, depending on the situation);
Torque-handling constraints impose a limit on after-market engine tuning modifications (though many tuners and users may exceed the official torque limits notwithstanding); (Later variants have been fitted to more powerful cars, such as the 300 bhp/350 Nm VW R36 and the 272 bhp/350 Nm Audi TTS.)
Heavier than a comparable Getrag conventional manual transmission ( vs. );
The first-generation DSG's fuel economy was up to 15% worse than a manual transmission, although the current second-generation DSG achieves better fuel economy than the manual.
Applications
Volkswagen Group vehicles with the DSG gearbox include:
Audi
After originally using the DSG moniker, Audi subsequently renamed their direct-shift gearbox to S tronic.
Audi TT
Audi A1
Audi A3
Audi S3
Audi A4 (B8)
Audi A4 (B9)
Audi S4 (B8)
Audi S5 (B8)
Audi A5
Audi A6
Audi A7
Audi A8 (D4)
Audi Q2
Audi Q3
Audi Q5
Audi R8 facelift
Bugatti
Bugatti Veyron EB 16.4 (developed by Ricardo rather than Borg Warner)
Škoda
Škoda Fabia
Škoda Kodiaq
Škoda Karoq
Škoda Octavia
Škoda Rapid (2012)
Škoda Roomster
Škoda Superb II
Škoda Yeti
Škoda Scala
Volkswagen Passenger Cars
Volkswagen Ameo
Volkswagen Vento
Volkswagen Polo
Volkswagen Golf, GTI, GTD, GTE, TDI, R32, R
Volkswagen Jetta GLI, TDI, TSI(Brazil)
Volkswagen Eos
Volkswagen Touran
Volkswagen New Beetle
Volkswagen New Beetle Convertible
Volkswagen Passat and R36
Volkswagen CC
Volkswagen Sharan
Volkswagen Scirocco
Volkswagen Tiguan 2011
Volkswagen Commercial Vehicles
Volkswagen Caddy car-derived van
Volkswagen Transporter (T5) medium van
Problems and recalls of DSG-equipped vehicles
The 7-speed DQ200 and 6-speed DQ250 gearboxes sometimes suffer from power-loss (gear disengaging) due to short-circuiting of wires caused by a build-up of sulphur in the transmission oil.
United States of America
In August 2009, Volkswagen of America issued two recalls of DSG-equipped vehicles. The first involved 13,500 vehicles, and was to address unplanned shifts to the neutral gear, while the second involved similar problems (by then attributed to faulty temperature sensors) and applied to 53,300 vehicles. These recalls arose as a result of investigations carried out by the US National Highway Traffic Safety Administration (NHTSA), where owners reported to the NHTSA a loss of power whilst driving. This investigation preliminary found only 2008 and 2009 model year vehicles as being affected.
Australia
In November 2009, Volkswagen recalled certain Golf, Jetta, EOS, Passat & Caddy models equipped with 6-speed DQ250 DSG transmission because the gearbox may read the clutch temperature incorrectly, which leads to clutch protection mode, causing a loss of power.
China
Since 2009 there have been widespread concerns from Chinese consumers particularly among the online community, who expressed that Volkswagen has failed to respond to complaints about defects in its DSG-equipped vehicles. Typical issues associated with 6-speed DSG include abnormal noise and inability to change gear; while issues associated with 7-speed DSG include abnormal noise, excessive shift shock, abnormal increase in engine RPM, flashing gear indicator on the dashboard as well as inability to shift to even-numbered gears. In March 2012 China's quality watchdog the General Administration of Quality Supervision, Inspection and Quarantine (AQSIQ) said that it had been in contact with Volkswagen (China) and urged the carmaker to probe the issues. In a survey held by Gasgoo.com (China) of 2,937 industry experts and insiders, 83% of respondents believed that the carmaker should consider a full vehicle recall. In March 2012 Volkswagen Group China admitted that there could be an issue in its seven-speed DSG gearboxes that may affect approximately 500,000 vehicles from its various subsidiaries in China. A software upgrade has since been offered for the affected vehicles in an attempt to repair the problem.
According to 163.com - one of China's most popular web portals - in March 2012 about a quarter of the complaints about problems found in cars in China's automotive market were made against DSG-equipped vehicles manufactured by Volkswagen. The top five models that dominate those complaints were:
Volkswagen Magotan - 6%
Volkswagen Bora - 5.3%
Volkswagen Sagitar - 5.3%
Volkswagen Touareg - 4.7%
Volkswagen Golf - 4%
It is worth noting that the Touareg has never been fitted with a DSG transmission.
On 15 March 2013, China Central Television aired a program for the World Consumer Rights Day. The program criticized the issue associated with DSG-equipped vehicles manufactured by Volkswagen. On 17 March 2013 Volkswagen Group China announced on its official Weibo that it will voluntarily recall vehicles equipped with DSG boxes. Some sources have estimated the failure rate of DSG-equipped vehicles sold in China to be greater than 20,000 per million sold.
Sweden
VW Sweden stopped selling the Passat EcoFuel DSG as a taxi after many cars had problems with the 7-speed DSG gearbox. They instead offered the Touran EcoFuel DSG, which uses an updated version of the same DSG gearbox.
Japan
The recall has been extended to Japan, with 91,000 vehicles (Volkswagen and Audi models using the same DSG) being recalled.
Malaysia
13 days after the Singapore recall, Volkswagen Malaysia also announced a recall for the 7-speed DSG. No official statement was released by the company, but a total of 3,962 units, produced between June 2010 and June 2011, were involved in the recall exercise, with affected vehicles being Golf, Polo, Scirocco, Cross Touran, Passat and Jetta models equipped with the transmission.
Worldwide recall
On 14 November 2013, Volkswagen Group announced a major worldwide recall over problems with the 7-speed DSG gearbox (model: DQ200) which might lead to loss of power, covering some 1.6 million cars, including those carrying the Audi, Škoda and SEAT badges. For cars above roughly 100,000 miles, repair may be needed as soon as strong vibration or hesitation in up-shifts or down-shifts is detected.
Australian recall
On 15 October 2019, an Australian recall of 7-speed DSG gearboxes was issued.
Due to a production fault, a crack can develop over time in the transmission's pressure accumulator.
If the pressure accumulator cracks, oil and pressure is lost in the hydraulic system of the gearbox. As a result, the transmission of engine power via the gearbox is interrupted. The experience of this symptom would be comparable to depressing the clutch in a vehicle fitted with a manual transmission. This could increase the likelihood of an accident affecting the occupants of the vehicle and other road users.
See also
Volkswagen 01M transmission
List of ZF transmissions
List of Aisin transmissions
List of GM transmissions
List of Ford transmissions
Multimode manual transmission
Automatic manual transmission
References
External links
Dual Clutch Transmission - DCT Facts
Official links
Volkswagen AG corporate website
Independent links
Pictures and diagrams of DQ250 DSG at WorldCarFans.com.
Reviews, videos, and explanation of DSG transmission
First Drive: Audi TT 3.2 DSG review at VWvortex.com.
European interest in dual clutch technology shifts up a gear, an informative article from Just-Auto.com.
Computer-controlled Meccano model of a DSG Transmission by Alan Wenbourne of the South East London Meccano Club (SELMEC).
Video of Alan Wenbourne's Meccano DSG in operation at YouTube.com.
Category:Volkswagen Group
Category:Automatic transmission tradenames
Category:Automotive transmission technologies
Category:Automotive technology tradenames
Category:Borg-Warner transmissions
de:Doppelkupplungsgetriebe
sv:DSG-växellåda
|
{
"pile_set_name": "wikipedia_en"
}
|
Kengo Ota
Kengo Ota is a Japanese football player for Grulla Morioka.
Career
After attending Osaka University of Health and Sport Sciences, Ota joined Grulla Morioka in January 2018.
Club statistics
Updated to 30 August 2018.
References
External links
Profile at J. League
Profile at Iwate Grulla Morioka
Category:1995 births
Category:Living people
Category:Osaka University of Health and Sport Sciences alumni
Category:Association football people from Kanagawa Prefecture
Category:Japanese footballers
Category:J3 League players
Category:Iwate Grulla Morioka players
Category:Association football defenders
|
{
"pile_set_name": "wikipedia_en"
}
|
John Wynne (died 1747)
John Wynne ( – 9 February 1747) was an Irish politician.
He sat in the House of Commons of Ireland from 1727 to 1747 as a Member of Parliament for Castlebar.
References
Category:1690 births
Category:Year of birth uncertain
Category:1747 deaths
Category:Irish MPs 1727–1760
Category:Members of the Parliament of Ireland (pre-1801) for County Mayo constituencies
|
{
"pile_set_name": "wikipedia_en"
}
|
// Copyright (c) Microsoft Open Technologies, Inc. All rights reserved. See License.txt in the project root for license information.
namespace System.Data.Entity.TestModels.ProviderAgnosticModel
{
using System;
public enum AllTypesEnum
{
EnumValue0 = 0,
EnumValue1 = 1,
EnumValue2 = 2,
EnumValue3 = 3,
};
public class AllTypes
{
public int Id { get; set; }
public bool BooleanProperty { get; set; }
public byte ByteProperty { get; set; }
public DateTime DateTimeProperty { get; set; }
public decimal DecimalProperty { get; set; }
public double DoubleProperty { get; set; }
public byte[] FixedLengthBinaryProperty { get; set; }
public string FixedLengthStringProperty { get; set; }
public string FixedLengthUnicodeStringProperty { get; set; }
public float FloatProperty { get; set; }
public Guid GuidProperty { get; set; }
public short Int16Property { get; set; }
public int Int32Property { get; set; }
public long Int64Property { get; set; }
public byte[] MaxLengthBinaryProperty { get; set; }
public string MaxLengthStringProperty { get; set; }
public string MaxLengthUnicodeStringProperty { get; set; }
public TimeSpan TimeSpanProperty { get; set; }
public string VariableLengthStringProperty { get; set; }
public byte[] VariableLengthBinaryProperty { get; set; }
public string VariableLengthUnicodeStringProperty { get; set; }
public AllTypesEnum EnumProperty { get; set; }
}
}
|
{
"pile_set_name": "github"
}
|
I got a wake up call, I got to make this work
Cause if we don't we're left with nothing and that's what hurts
We're so close to giving up but something keeps us here
I can't see what's yet to come
But I have imagined life without you and it feels wrong
I want to know where love begins, not where it ends
Cause we don't know what we're doing
We're just built this way
We're careless but we're trying
Cause we both make mistakes
And I don't want to keep on running
If we're only gonna fall behind
We've almost got it right
But almost wasn't what I had in mind
We want it all and deserve no less
But all we seem to give each other is second best
We're still reaching out for something that we can't touch
Cause we don't know what we're doing
We're just built this way
We're careless but we're trying
Cause we both make mistakes
And I don't want to keep on running
If we're only gonna fall behind
We've almost got it right
But almost wasn't what I had in mind
You know there's nothing like this love
So we don't want to let it go
Cause we don't know what we're doing
We're just built this way
We're careless but we're trying
Cause we both make mistakes
And I don't want to keep on running
If we're only gonna fall behind
We've almost got it right
But almost wasn't what I had in mind
|
{
"pile_set_name": "pile-cc"
}
|
{
"fpsLimit": 60,
"preset": "basic",
"background": {
"color": "#0d47a1",
"image": "",
"position": "50% 50%",
"repeat": "no-repeat",
"size": "cover"
}
}
|
{
"pile_set_name": "github"
}
|
DataverseUse test
Set import-private-functions=true
Query:
Let Variable [ Name=$txt ]
:=
LiteralExpr [STRING] [Hello World, I would like to inform you of the importance of Foo Bar. Yes, Foo Bar. Jürgen.]
Let Variable [ Name=$tokens ]
:=
FunctionCall asterix.hashed-word-tokens@1[
Variable [ Name=$txt ]
]
SELECT ELEMENT [
Variable [ Name=$token ]
]
FROM [ Variable [ Name=$tokens ]
AS Variable [ Name=$token ]
]
|
{
"pile_set_name": "github"
}
|
// RUN: %clang_cc1 -emit-llvm -triple i386-apple-macosx10.7.2 < %s | FileCheck %s
// The preferred alignment for a long long on x86-32 is 8; make sure the
// alloca for x uses that alignment.
int test (long long x) {
return (int)x;
}
// CHECK-LABEL: define i32 @test
// CHECK: alloca i64, align 8
// Make sure we honor the aligned attribute.
struct X { int x,y,z,a; };
int test2(struct X x __attribute((aligned(16)))) {
return x.z;
}
// CHECK-LABEL: define i32 @test2
// CHECK: alloca %struct.X, align 16
|
{
"pile_set_name": "github"
}
|
Julia Kogan
Julia Kogan is an American-French operatic coloratura soprano, writer, and presenter of Ukrainian ancestry.
Biography
Kogan's opera roles have included Queen of the Night in Die Zauberflöte, Zerbinetta in Ariadne auf Naxos, Blonde in Die Entführung, Madame Herz in Der Schauspieldirektor, Greta Fiorentino in Street Scene, and Fiordiligi in Così fan tutte at the opera houses of Avignon, Indianapolis, Limoges, Manitoba, Toulon, Toulouse and in Oxford. She has been described as "a lively actress" with "a warm voice, round, elegant and expressive phrasing, and a remarkable knack for coloratura passages", "up to the challenge of a stratospheric soprano line".
Kogan has concertized with repertoire ranging from Baroque to contemporary in Europe, North and South America, and Africa, including such venues as Carnegie Hall, Alice Tully Hall at the Lincoln Center, St. Petersburg's Glinka Hall, the Hôtel de Ville in Paris, the Alcazar Palace in Seville, the Library of Congress in Washington D.C., and collaborated with Chamber Orchestra Kremlin, Ensemble Calliopée, Figueiredo Consort, Junge Philharmonie Wien, Les Passions, The Little Orchestra Society, the Oxford Philharmonic, the Newcastle Baroque Orchestra, Saint Petersburg Chamber Philharmonic, Toulon Opera Orchestra, and Ukrainian National Symphony, among others.
Julia Kogan wrote and presented the BBC Radio 4 documentary "The Lost Songs of Hollywood", which aired on 12 November 2015. It was chosen "Pick of the Week" on BBC radio.
Releases
Kogan's first solo album, "Vivaldi Fioritura" (2010), was recorded with Chamber Orchestra Kremlin under Misha Rachlevsky. Her second solo album, Troika (2011), was recorded with the St. Petersburg Chamber Philharmonic under Jeffery Meyer. Both albums were released on Rideau Rouge Records with distribution by Harmonia Mundi.
References
External links
Official website
http://www.bbc.co.uk/programmes/b06nrqvk
Category:American operatic sopranos
Category:Living people
Category:Ukrainian emigrants to the United States
Category:Year of birth missing (living people)
|
{
"pile_set_name": "wikipedia_en"
}
|
/*
Copyright The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Code generated by client-gen. DO NOT EDIT.

package v1

// JobExpansion is an empty interface onto which custom expansion methods
// for the generated JobInterface can be added.
type JobExpansion interface{}
<!-- ============ PROGRESS -->
<!-- ====================== -->
<h1>Progress</h1>
<!-- ============ VARIABLES -->
<!-- ====================== -->
<p>
<h4>Global variables</h4>
<div><pre hljs class="prettyprint lang-sass">$progress-class: "-progress" !global
$progress-bar-class: "-bar" !global
$progress-bar-padding-vertical: $base-padding-vertical / 3
$progress-bar-padding-horizontal: $base-padding-horizontal / 1.5
$progress-font-weight: 600 !global
$progress-border-radius: 4px !global
$progress-border-width: 0px !global
$progress-border-style: solid !global
$progress-padding: 3px !global
$progress-background: #fff !global</pre></div>
</p>
<p>
Use widget class <code>-progress</code>. Apply themes and sizes. Append <code>-bar</code> inside <code>-progress</code>.
</p>
<div class="-row example-block">
<div class="-col12 view">
<div class="-progress -primary-">
<div class="-bar" style="width: 12%">12 %</div><div class="-bar -warning-" style="width: 25%">25 %</div><div class="-bar -error-" style="width: 5%">Something goes wrong</div>
</div>
<br>
<div class="-progress _divine -primary-">
<div class="-bar" style="width: 12%">12 %</div>
</div>
<br>
<div class="-progress -primary- -shadow-curve-">
<div class="-bar" style="width: 42%">progress with shadow 42 %</div><div class="-bar -warning-" style="width: 25%">25 %</div>
</div>
<br>
<div class="-progress -primary- -shadow-lifted-">
<div class="-bar" style="width: 42%">progress with shadow 42 %</div>
</div>
</div>
<div class="-col12 example"><pre hljs class="prettyprint lang-html"><div class="-progress -primary-">
<div class="-bar" style="width: 12%">12 %</div>
<div class="-bar -warning-" style="width: 25%">25 %</div>
<div class="-bar -error-" style="width: 5%">Something goes wrong</div>
</div>
<div class="-progress _divine -primary-">
<div class="-bar" style="width: 12%">12 %</div>
</div>
</pre></div>
</div>
Phullu
Phullu is a 2017 Indian drama film directed by Abhishek Saxena and produced by Pushpa Chaudhary, Dr. Anmol Kapoor, Kshitij Chaudhary and Raman Kapoor under the Kapoor Film Inc Kc Production Pvt. Ltd banner. The film was released worldwide on June 16, 2017. It stars Sharib Hashmi, Jyotii Sethi, and Nutan Surya, and is inspired by the life of Arunachalam Muruganantham, a social activist from Tamil Nadu.
Phullu is about Phullu, an errand boy who eventually makes low-cost menstruation pads.
Plot
Phullu, the titular character (portrayed by Sharib Hashmi), is the typical good guy. Because he doesn't have a job, his mother sells quilts to support the family. He helps her by procuring the raw material for the quilts from the nearby town, and he also picks up whatever else the women in his village may need from there.
When Phullu gets married, he realises that his wife keeps taking away pieces of red cloth from the material he gathers for the quilts. He wonders about it but doesn't connect the dots, as he knows nothing about menstruation. Neither his wife nor his mother explains the concept to him.
The women in his life also want Phullu to move to a big city and find work. But he's adamant about staying back in the village.
Finally, a turning point in Phullu's life comes when he finds out about menstruation through a female doctor at a chemist's shop on one of his city visits. He finally begins to understand why his wife needs the cloth, and why she suffers from itching every night.
He then takes the rather drastic step of using all the money reserved for the last installment payment on his sister's jewellery to buy a large quantity of sanitary pads. His furious mother kicks him out of the house, saying that he has wasted the money she earned with so much difficulty. When he tries to protest that the sanitary napkins are more important, his mother says her grandmother used wood to get rid of the itching and went on to live for 102 years, so pads are irrelevant.
Phullu goes to the city, where he gets in touch with the doctor who had educated him about menstruation, and manages to create a sanitary napkin of his own. However, his mother and sister refuse to test it, as do the other women in the village for whom he used to run errands in the city. His wife is pregnant at this time, so she cannot help him either, though she remains supportive of his endeavour to manufacture low-cost sanitary napkins.
See also
Pad Man
Period. End of Sentence.
References
External links
Category:Indian drama films
Category:2017 films
Himashree Roy
Himashree Roy is an Indian athlete who won a bronze medal in the women's 4×100 metres relay, along with Merlin K Joseph, Srabani Nanda and Dutee Chand, at the 22nd Asian Athletics Championships, which concluded on July 9, 2017. She was born in Kolkata, West Bengal, on 15 March 1995.
Career
She won the silver medal in women's 4x100m relay race along with N. Shardha, Sonal Chawla and Priyanka in the National Open athletics championships 2018 where they represented the Indian Railways.
Himashree Roy timed 11.60 seconds to set a record in women's 100 metres on 5 August 2018 in the 68th State Athletics Championships, at the Salt Lake Stadium while representing the Eastern Railway Sports Association (ERSA). She won the bronze medal in women's 100m final in 84th All India Railway Athletics Championship, 2017.
Himashree Roy, MG Padmini, Srabani Nanda and Gayathri Govindaraj won the bronze medal for women's 4x100m relay race in the second leg of the 2015 Asian Grand Prix Games, held in Thailand. She also won the gold medal in the women's 4×100 metre relay with teammates Dutee Chand, Srabani Nanda and Merlin K Joseph while representing the Indian Railways in the 55th National Open Athletic Championship, 2015.
References
Category:1995 births
Category:Sportswomen from West Bengal
Category:Indian female sprinters
Category:21st-century Indian women
Category:Living people
Guy Fouché
Guy Fouché (17 June 1921 – 28 May 1998) was a French operatic tenor.
Life
Born in Bordeaux, Fouché graduated from the Conservatoire de Bordeaux with the First Prize in Opera and opéra comique. He began his career at the Grand Théâtre de Bordeaux in 1942 in Bizet's Les Pêcheurs de perles.
He also obtained a second prize at the Conservatoire de Paris in 1943. From 1945 to 1953, he performed in French opera houses, including those of Toulouse, Marseille, Lyon, Lille, Nantes, Rennes and Bordeaux.
In 1953, he was in Oran. From 1954 to 1956, he was part of the troupe of the Opéra Royal de Wallonie in Liège before being, for six seasons, the first tenor at La Monnaie in Brussels.
Back in Oran, he sang the title role of Faust. In 1961, he moved to Toulon where he ended his career two years later.
Quotes
Discography
Complete
Berlioz's La Damnation de Faust (Faust)
with Ninon Vallin - Pléiade P3082 (33 rpm)
with Régine Crespin, Michel Roux, Peter Van Der Bilt - BellaVoce BLV107.202 (CD)
Donizetti's La Favorite (Fernand), with Simone Couderc, Charles Cambon, choir and Pasdeloup Orchestra, Jean Allain (dir.) - Pléiade P3071 / Vega 28000 - recorded in 1962
Massenet's Hérodiade (Jean), with Andréa Guiot, Mimi Aarden, Charles Cambon, Germain Guislain, Jos Burcksen, Corneluis Kalkman - Malibran CDRG 191 (CD).
Meyerbeer's Les Huguenots (Raoul de Nangis), with Renée Doria, Jeanne Rinella, Henri Médus, Adrien Legros, Académie chorale de Paris, Pasdeloup Orchestra, Jean Allain (dir.) - Pléiade P3085/86 (33 rpm) - recorded in 1953 at the Théâtre de l'Apollo reissued CD Accord 204592
Verdi's Rigoletto (Duke of Mantoue), with Renée Doria, Ernest Blanc, Denise Scharley, Gérard Bourreli, Maria Valetti, Maurice Faure, André Dumas, Pierre Cruchon director - Pléiade P3076 (33 rpm) - French version
Extracts
Puccini's La Bohème, aria of Rodolphe Que cette main est froide (act I) - Pléiade P45152 (Extended play) - French version
Verdi's Rigoletto, arias of the Duke of Mantoue Qu'une belle (act I) and Comme la plume au vent (act III) - Pléiade P45152 (45 rpm) - French version
References
External links
Guy Fouché on Forgotten opera singer
Les Huguenots, Acte II, Scène 1: Ô ciel, où suis-je ? Beauté divine et enchanteresse (YouTube)
Category:1921 births
Category:1998 deaths
Category:People from Bordeaux
Category:Conservatoire de Paris alumni
Category:French operatic tenors
Category:20th-century French singers
Category:20th-century male singers
Stern John
Stern John, CM (born 30 October 1976) is a Trinidadian football manager and former player who is currently managing Central F.C. in the TT Pro League. He previously played for a number of American and English football clubs that included Columbus Crew, Bristol City, Nottingham Forest, Birmingham City, Sunderland, Southampton, Crystal Palace, Coventry City and Derby County.
Club career
Early Career in US
John was born in Tunapuna, Trinidad and Tobago and moved to the United States to attend Mercer County Community College in 1995. He joined the Columbus Crew of Major League Soccer (MLS) from the now-defunct New Orleans Riverboat Gamblers of the A-League for the 1998 season. On the recommendation of his older cousin, Columbus Crew defender and Trinidad and Tobago international, Ansil Elcock, John received a try-out with Crew, where he became one of the most prolific scorers in league history. In 1998, John led the league with 26 goals, a record that currently puts him tied for fifth in MLS for goals in one season, and also with 57 points to be named the MLS Scoring Champion. He was named to the MLS Best XI that year as well, and tied for the lead with 18 goals in 1999.
Nottingham Forest
After the 1999 season with Columbus, John was acquired by Nottingham Forest of the English First Division for a fee of £1.5 million. However, eventual financial difficulties at Forest following the team's failed bid at promotion forced John's sale to Birmingham City in February 2002, then pushing for promotion to the Premier League, for the sum of £100,000. John scored 18 goals in 49 starts for Forest.
Birmingham City
At Birmingham, John rarely played, although he had some memorable moments in the blue shirt of Birmingham, such as his turn and finish away at West Ham in 2002; his last minute equaliser at Villa Park in the Birmingham derby; and his last minute goal away at Millwall which put Birmingham through to the Playoff Final in 2002. He then scored one of the penalties in the play-off final shootout to help them get promoted to the Premier League. Popular with the Birmingham fans for his crucial and sometimes brilliant goals, he nonetheless fell out of favour with management, and was sold to Coventry City on 14 September 2004.
Coventry City
In his first season with Coventry, John finished second in team scoring with 12 goals despite starting in barely half of Coventry's games.
Derby County
At the start of the 2005–06 season, following the signing of James Scowcroft, John found himself outside of manager Micky Adams's first-team plans. As a result, he was loaned to Derby County on 16 September 2005. He rejoined Coventry three months later.
Sunderland
On 29 January 2007, John was transferred to Sunderland for an undisclosed fee. The signing was Sunderland manager Roy Keane's sixth signing of the 2006–07 season January transfer window. He scored his first goals against Southend United in a 4–0 victory on 17 February 2007.
Southampton
On 29 August 2007, John moved to Southampton as part of a deal that took his international teammate Kenwyne Jones in the opposite direction.
He scored his first goals with two in a 3–2 win against West Bromwich Albion on 6 October 2007. From then on he scored regularly for "The Saints", with nine goals in his first fifteen appearances, including a second half hat trick against Hull City on 8 December 2007. He finished the 2007–08 season fourth highest scorer in the Championship with 19 goals for Southampton. (He had also scored once for Sunderland in the Premier League prior to his transfer.) Before being sent off for a second bookable offence, John scored two goals, including the match winner, in Southampton's final game of the season against Sheffield United, as the Saints narrowly avoided relegation to League One.
Bristol City
John was loaned to Bristol City in October 2008 until the end of the 2008–09 season. John made his first Bristol City appearance, coming on as a substitute, against Barnsley in a 0–0 draw. John scored his first goal for Bristol City in a 4–1 defeat to Reading at Ashton Gate Stadium on 1 November 2008.
Crystal Palace
On 29 July 2009 John signed for Crystal Palace on a year-long deal after turning down an offer to stay at Southampton. He made his debut on the opening day of the season against Plymouth Argyle, he had to come off after 35 minutes due to an injury. He returned in mid-October, but joined Ipswich Town on a one-month loan at the end of November. He scored his first goal for Ipswich in a 3–2 win over Coventry City on 16 January 2010. Upon his return to Palace he scored his first goal for the club in a 3–1 win at Watford on 30 March 2010.
New Palace manager George Burley had hoped to discuss the player's future at the end of the season, but no discussion occurred, and John left the club.
Solihull Moors
In August 2012, after two seasons out of English football, John returned, signing for Solihull Moors. However, as of November 2012, he had yet to make an appearance in any competition for the club.
WASA FC
John retired and moved back to his native Trinidad and Tobago after his spell at Solihull Moors. He came out of retirement a second time in order to join WASA FC of the National Super League of Trinidad and Tobago in January 2014, and scored on his debut.
Central F.C.
John came out of retirement once again in 2016 when he was appointed as player-coach of Central F.C. in the TT Pro League.
International career
John made his international debut for the Trinidad and Tobago national football team on 15 February 1995 against Finland in a friendly match at the Queen's Park Oval, scoring one goal on his debut. John has been a vital player for the Soca Warriors and is currently the team's all-time leading scorer with 70 goals in 115 caps (as of 9 February 2011); he is also the 7th-highest international goalscorer on the list of top international association football goal scorers by country, behind the likes of Pelé, Ferenc Puskás and Ali Daei. He is also the all-time top CONCACAF goal scorer. He was instrumental in helping his country qualify for the 2006 FIFA World Cup and played in all three of his country's World Cup group matches at Germany 2006, where he scored an offside goal. He was also named Trinidad and Tobago Football Federation Player of the Year in 2002. John is currently the second most capped Trinidad and Tobago international, behind former teammate Angus Eve. He was the only player to score in 12 consecutive international matches, from 1998 to 1999.
Honours
1998 MLS Scoring Champion
1998 MLS Golden Boot
1998 MLS Best XI
2002 Division 1 Play-offs Winner's Medal
2002 Trinidad and Tobago Football Federation Player of the Year
2007 Championship Winners' Medal with Sunderland
Career statistics
Club statistics
International goals
Scores and results list Trinidad and Tobago's goal tally first.
Notes
References
External links
Player profile from Southampton F.C. website (via archive.org)
Category:1976 births
Category:Living people
Category:People from Tunapuna–Piarco
Category:Trinidad and Tobago footballers
Category:Association football forwards
Category:Mercer County Community College alumni
Category:Carolina Dynamo players
Category:New Orleans Riverboat Gamblers players
Category:Columbus Crew SC players
Category:Nottingham Forest F.C. players
Category:Birmingham City F.C. players
Category:Coventry City F.C. players
Category:Derby County F.C. players
Category:Sunderland A.F.C. players
Category:Southampton F.C. players
Category:Bristol City F.C. players
Category:Crystal Palace F.C. players
Category:Ipswich Town F.C. players
Category:North East Stars F.C. players
Category:Solihull Moors F.C. players
Category:USISL players
Category:USL First Division players
Category:Major League Soccer players
Category:Major League Soccer All-Stars
Category:Premier League players
Category:English Football League players
Category:TT Pro League players
Category:Trinidad and Tobago international footballers
Category:1998 CONCACAF Gold Cup players
Category:2002 CONCACAF Gold Cup players
Category:2005 CONCACAF Gold Cup players
Category:2006 FIFA World Cup players
Category:FIFA Century Club
Category:Trinidad and Tobago expatriate footballers
Category:Trinidad and Tobago expatriate sportspeople in the United States
Category:Trinidad and Tobago expatriate sportspeople in England
Category:Expatriate soccer players in the United States
Category:Expatriate footballers in England
Category:Trinidad and Tobago football managers
Category:Central F.C. managers
Category:TT Pro League managers
Category:Recipients of the Chaconia Medal
<?xml version="1.0" encoding="UTF-8"?>
<segment>
<name>PD1</name>
<description>Patient Additional Demographic</description>
<elements>
<field minOccurs="0" maxOccurs="0">
<name>PD1.1</name>
<description>Living Dependency</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.2</name>
<description>Living Arrangement</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.3</name>
<description>Patient Primary Facility</description>
<datatype>XON</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.4</name>
<description>Patient Primary Care Provider Name &amp; ID No.</description>
<datatype>XCN</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.5</name>
<description>Student Indicator</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.6</name>
<description>Handicap</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.7</name>
<description>Living Will Code</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.8</name>
<description>Organ Donor Code</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.9</name>
<description>Separate Bill</description>
<datatype>ID</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.10</name>
<description>Duplicate Patient</description>
<datatype>CX</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.11</name>
<description>Publicity Code</description>
<datatype>CE</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.12</name>
<description>Protection Indicator</description>
<datatype>ID</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.13</name>
<description>Protection Indicator Effective Date</description>
<datatype>DT</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.14</name>
<description>Place of Worship</description>
<datatype>XON</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.15</name>
<description>Advance Directive Code</description>
<datatype>CE</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.16</name>
<description>Immunization Registry Status</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.17</name>
<description>Immunization Registry Status Effective Date</description>
<datatype>DT</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.18</name>
<description>Publicity Code Effective Date</description>
<datatype>DT</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.19</name>
<description>Military Branch</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.20</name>
<description>Military Rank/Grade</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.21</name>
<description>Military Status</description>
<datatype>IS</datatype>
</field>
</elements>
</segment>
---
abstract: 'Solid $^4$He has been created off the melting curve by growth at nearly constant mass via the “blocked capillary" technique and growth from the $^4$He superfluid at constant temperature. The experimental apparatus allows injection of $^4$He atoms from superfluid directly into the solid. Evidence for the superfluid-like transport of mass through a sample cell filled with hcp solid $^4$He off the melting curve is found. This mass flux depends on temperature and pressure.'
author:
- 'M. W. Ray and R. B. Hallock'
title: Observation of Unusual Mass Transport in Solid hcp $^4$He
---
Experiments by Kim and Chan[@Kim2004a; @Kim2004b; @Kim2005; @Kim2006], who studied the behavior of a torsional oscillator filled with hcp solid $^4$He, showed a clear reduction in the period of the oscillator as a function of temperature at temperatures below T $\approx$ 250 mK. This observation was interpreted as evidence for the presence of “supersolid" behavior in hcp solid $^4$He. Subsequent work in a number of laboratories has confirmed the observation of a period shift, with the interpretation of mass decoupling in most cases in the 0.05 - 1 percent range, but with dramatically larger decoupling seen in quench-frozen samples in small geometries[@Rittner2007]. Aoki et al.[@Aoki2007] observed sample history dependence under some conditions. These observations and interpretations, among others, have kindled considerable interest and debate concerning solid hcp $^4$He.
Early measurements by Greywall[@Greywall1977], showed no evidence for mass flow in solid helium. Work by the Beamish group also showed no evidence for mass flow in two sets of experiments involving Vycor[@Day2005] and narrow channels[@Day2006]. Sasaki et al.[@Sasaki2006] attempted to cause flow through solid helium on the melting curve, using a technique similar to that used by Bonfait et al.[@Bonfait1989] (that showed no flow). Initial interpretations suggested that flow might be taking place through the solid[@Sasaki2006], but subsequent measurements have been interpreted to conclude that the flow was instead likely carried by small liquid regions at the interface between crystal faces and the surface of the sample cell[@Sasaki2007], which were shown to be present for helium on the melting curve. Recent work by Day and Beamish[@Day2007] showed that the shear modulus of hcp solid $^4$He increased at low temperature and demonstrated a temperature and $^3$He impurity dependence very similar to that shown by the torsional oscillator results. The theoretical situation is also complex, with clear analytic predictions that a supersolid cannot exist without vacancies (or interstitials)[@Prokofev2005], numerical predictions that no vacancies exist in the ground state of hcp solid $^4$He[@Boninsegni2006; @Clark2006; @Boninsegni2006a], and [*ab initio*]{} simulations that predict that in the presence of disorder the solid can demonstrate superflow[@Boninsegni2006; @Pollet2007; @Boninsegni2007] along imperfections. But, there are alternate points of view[@Anderson2007]. There has been no clear experimental evidence presented for the flow of atoms through solid hcp $^4$He.
We have created a new approach, related to our “sandwich"[@Svistunov2006] design, with an important modification. The motivation was to attempt to study hcp solid $^4$He at pressures off the melting curve in a way that would allow a chemical potential gradient to be applied across the solid, but not by squeezing the hcp solid lattice directly. Rather, the idea is to inject helium atoms into the solid from the superfluid. To do this off the melting curve presents rather substantial experimental problems due to the high thermal conductivity of bulk superfluid helium. But, helium in the pores of Vycor, or other small pore geometries, is known to freeze at much higher pressures than does bulk helium[@Beamish1983; @Adams1987; @Lie-zhao1986]. Thus, the “sandwich" consists of solid helium held between two Vycor plugs, each containing superfluid $^4$He.
The schematic design of our experiment is shown in figure 1. Three fill lines lead to the copper cell; two from room temperature, with no heat sink below 4K, enter via liquid reservoirs, R1, R2, atop the Vycor (1 and 2) and a third (3) is heat sunk at 1K and leads directly to the cell, bypassing the Vycor. The concept of the measurement is straightforward: (a) Create a solid sample Shcp and then (b) inject atoms into the solid Shcp by feeding atoms via line 1 or 2. So, for example, we increase the pressure on line 1 or 2 and observe whether there is a change in the pressure on the other line. We also have capacitive pressure gauges on the sample cell, C1 and C2, and can measure the pressure [*in situ*]{}. To conduct the experiment it is important that the helium in the Vycor, the liquid reservoirs atop the Vycor, and the lines that feed the Vycor contain $^4$He that does not solidify. This is accomplished by imposing a temperature gradient between R and Shcp across the Vycor, a gradient which would present insurmountable difficulties if the Vycor were not present. While the heat conducted down the Vycor rods in our current apparatus is larger than we expected, and this presently limits our lowest achievable temperature, we have none the less obtained interesting results.
To study the flow characteristics of our Vycor rods, we measured the relaxation of pressure differences between line 1 and line 2 with superfluid $^4$He in the cell at $\sim$ 20 bar at 400 mK, with the tops of the Vycor rods in the range 1.7 $<$ T$_1$ = T$_2$ $<$ 2.0 K, temperatures similar to some of our measurements at higher pressures with solid helium in the sample cell. The relaxation was linear in time as might be expected for flow through a superleak at critical velocity. The pressure recorded by the capacitive gauges shifted as it should. An offset in the various pressure readings if T$_1$ $\ne$ T$_2$ was present due to a predictable fountain effect across the two Vycor superleaks. Our Vycor rods readily allow a flux of helium atoms, even for T$_1$, T$_2$ as high as 2.8K.
To study solid helium, one approach is to grow from the superfluid phase (using ultra-high purity helium, assumed to have $\sim$ 300 ppb $^3$He). With the cell at T $\approx$ 400 mK, we added helium to lines 1 and 2 to increase the pressure from below the melting curve to $\approx$ 26.8 bar. Sample A grew in a few hours and was held stable for about a day before we attempted measurements on it. Then the pressure to line 1, P1, was changed abruptly from 27.1 to 28.6 bar (figure 2). There resulted a gradual decrease in the pressure in line 1 and a corresponding increase of the pressure in line 2. Note that pressure can increase in line 2 only if atoms move from line 1 to line 2, through the region of the cell occupied by solid helium, Shcp. We also observed a change in the pressure recorded on the capacitive pressure gauges on the cell, e.g. C1 (C1 and C2 typically agree). As these pressure changes evolved, we hoped to see the pressure in line 1 and line 2 converge, but the refrigerator stopped after 20 hours of operation on this particular run. Note that the change in P2 is rather linear (0.017 bar/hr) and does not show the sort of non-linear change with time that one would expect for the flow of a viscous fluid. Our conclusion is that helium has moved through the region of hcp solid $^4$He, while the solid was off the melting curve, and that this flow from line 1 to line 2 was at a limiting velocity, consistent with superflow. From the behavior of the pressure gauges on the cell, it is clear that atoms were also added to the solid.
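The inference drawn from the linearity of P2 can be illustrated with a toy model (a sketch only, not the authors' analysis; the viscous time constant below is an arbitrary illustrative value, while the pressure step and the 0.017 bar/hr slope are the figures quoted for sample A). Flow through a superleak at critical velocity transfers mass at a constant rate regardless of the remaining pressure difference, so P2 rises linearly; viscous (Darcy-like) flow, with flux proportional to P1 − P2, would instead relax the difference exponentially, with a slope that visibly decays over a 20-hour record.

```python
import math

# Quoted values for sample A; tau is an arbitrary illustrative constant.
p1_0, p2_0 = 28.6, 27.1                  # bar, applied pressure step
rate = 0.017                             # bar/hr, observed slope of P2
tau = 5.0                                # hr, hypothetical viscous time constant

def p2_superflow(t):
    """Critical-velocity superflow: constant flux -> linear rise of P2."""
    return p2_0 + rate * t

def p2_viscous(t):
    """Viscous flow: flux ~ (P1 - P2) -> exponential relaxation to midpoint."""
    mid = (p1_0 + p2_0) / 2
    return mid - (p1_0 - p2_0) / 2 * math.exp(-t / tau)

# The viscous slope decays as the pressure difference shrinks; the
# superflow slope does not.
early = p2_viscous(1.0) - p2_viscous(0.0)    # bar/hr over the first hour
late = p2_viscous(20.0) - p2_viscous(19.0)   # bar/hr over the last hour
print(f"viscous slope:   {early:.3f} -> {late:.3f} bar/hr")
print(f"superflow slope: {rate:.3f} bar/hr throughout")
```

A P2 trace that stays straight for 20 hours, as in figure 2, is therefore consistent with flow at a limiting (critical) velocity rather than pressure-driven viscous flow.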
We next grew a new solid sample, B, again by growth from the superfluid, but we grew it at a faster rate and did not dwell for a day prior to measurements. This sample also demonstrated flow, with the pressure difference relaxing over about 5 hours after we stopped adding atoms to line 1. The pressure step applied to line 1 was from 26.4 to 28.0 bar. While $^4$He was slowly added to line 1, P2 increased. After the addition of atoms was stopped, the change in these pressures appeared to depend on P1-P2, with P2 showing curvature and regions of predominantly $\sim$ 0.076 and $\sim$ 0.029 bar/hr. Next, we used the same solid sample and moved it closer to the melting curve (1.25 K), but maintained it as a solid, sample C. We applied a pressure difference by increasing the pressure to line 1 from 26.0 to 28.4 bar, but in this case there was no increase in P2; the pressure difference P1-P2 appeared nearly constant, with a slight increase in pressure recorded in the cell. It is possible that this difference in behavior is an annealing effect, but it may also be due to a reduced ability to flow through the same number of conducting pathways. Next, after a warm up to room temperature, we prepared another sample, G, with P,T coordinates much like sample A, but used a time for growth, and pause prior to injection of helium, that was midway between those used for samples A and B. The results again showed flow, (P2 changing $\sim$ 0.008 bar/hr; with C1, C2 similar). Finally, we injected sample G again, but there were some modest instabilities with our temperatures. A day later, we injected again on this same sample, now two days old and termed H; and then again, denoting it sample J. Short term changes were observed in P1 and P2, but P1-P2 was essentially constant at $\approx$ 1.38 bar for more than 15 hours. 
In another sequence, we created sample M (like G), increased P1, observed flow, warmed it to 800 mK, saw no flow, cooled it to 400 mK, increased P1, saw no flow, decreased P1, saw flow, increased P1 again, and saw flow again. (Typically if an increase in P1 shows flow, a decrease in P1 will also show flow.) Yet another sample, Y, created similar to A, showed linear flow like A, but when warmed to 800 mK showed no flow. Whatever is responsible for the flow appears to change somewhat with time, sample history, and is clearly dependent on sample pressure and temperature.
How can we reconcile such behavior when the measurements of Greywall and Day and Beamish saw no such flow[@Greywall1977; @Day2005; @Day2006]? The actual explanation is not clear to us, but there is a conceptual difference between the two types of experiments: These previous experiments pushed on the solid helium lattice; we inject atoms from the superfluid (which must have been the case for the experiments of Sasaki et al.[@Sasaki2006], on the melting curve). If predictions of superflow along structures in the solid[@Pollet2007; @Boninsegni2007; @Shevchenko1987] (e.g. dislocations of various sorts or grain boundaries) are correct, it is possible that by injecting atoms from the superfluid we can access these defects at their ends in a way that applying mechanical pressure to the lattice does not allow.
We have also grown samples via the “blocked capillary" technique. In this case the valves leading to lines 1 and 2 were controlled and the helium in line 3 was frozen. Sample D was created this way and exited the melting curve in the higher pressure region of the bcc phase and settled near 28.8 bar. There then followed an injection of $^4$He atoms via line 1 (figure 3). Here we observed a lengthy period during which a substantial pressure difference between lines 1 and 2 did not relax, and to high accuracy we saw no change in the pressure of the solid as measured directly in the cell with the capacitive gauges C; C1 changed $<$ 0.0003 bar/hr. Behavior of this sort was also observed for the same sample, but with a much smaller (0.21 bar) pressure shift, with no flow observed. And, warming this sample to 900 mK produced no evidence for flow (sample E, not shown). Four other samples (F, T, V, W) were grown using the blocked capillary technique, with the lower pressure samples (T, V) demonstrating flow. Pressure appears to be an important variable, but not growth technique.
To summarize the focus of our work to date, on figure 4 we show the location of some of the samples that we have created. Samples grown at higher pressure have not shown an ability to relax from an applied pressure difference over intervals longer than 10 hr; they appear to be insulators. Samples grown at lower pressures clearly show mass flux through the solid samples, and for some samples this flux appears to be at constant velocity. Samples warmed close to 800 mK, one warmed near the melting curve at 1.25 K, and a sample created from the superfluid at 800 mK all showed no flow. We interpret the absence of flow for samples warmed to or created at 800 mK to likely rule out liquid channels as the conduction mechanism. Annealing may be present for the 1.25 K sample, but we doubt that this explains the 800 mK samples. Instead we suspect that whatever conducts the flow (perhaps grain boundaries or other defects) is temperature dependent. Sample pressure and temperature are important; sample history may be.
The data of figure 2 can be used to deduce the mass flux through and into the sample. From that 20-hour data record we conclude that over the course of the measurement 1 $\times$ 10$^{-4}$ grams of $^4$He must have moved through the cell from line 1 to line 2, and that about 4.5 $\times$ 10$^{-4}$ grams of $^4$He must have joined the solid. If we write M/t = $\xi$$\rho$vxy as the mass flux from line 1 to line 2, where M is the mass that moved in time t, $\rho$ is the density of helium, $\xi$ is the fraction of the helium that can flow, v is the velocity of flow in the solid, and xy is the cross section that supports that flow, we find $\xi$vxy = 8 $\times$ 10$^{-9}$ cm$^3$/sec. We know from measurements on the Vycor filled with superfluid that it should not limit the flow. So, if we take the diameter of our sample cell (0.635 cm), presuming that the full cross section conducts, we can deduce that $\xi$v = 2.52 $\times$ 10$^{-8}$ cm/sec, which, if for example v = 100 $\mu$m/sec, results in $\xi$ = 2.5 $\times$ 10$^{-6}$.
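The arithmetic above is compact enough to reproduce directly. A minimal sketch follows; the helium density used here is an assumed round value chosen to be consistent with the quoted flux, not a number taken from the text:

```python
import math

def flux_parameters(mass_moved_g, duration_s, cell_diameter_cm, rho_g_cm3=0.17):
    """Solve M/t = xi*rho*v*x*y for the flux parameters quoted in the text.

    rho_g_cm3 is an ASSUMED helium density; ~0.17 g/cm^3 reproduces the
    quoted xi*v*x*y ~ 8e-9 cm^3/sec for 1e-4 g moved in 20 hours.
    """
    xi_v_xy = mass_moved_g / (duration_s * rho_g_cm3)   # xi*v*x*y, cm^3/sec
    area = math.pi * (cell_diameter_cm / 2.0) ** 2      # full cell cross section, cm^2
    xi_v = xi_v_xy / area                               # xi*v, cm/sec
    return xi_v_xy, xi_v

# Values from the 20-hour record: 1e-4 g moved from line 1 to line 2.
xi_v_xy, xi_v = flux_parameters(1e-4, 20 * 3600.0, 0.635)
# For the arbitrary example velocity v = 100 um/sec = 1e-2 cm/sec:
xi = xi_v / 1e-2
```

With these inputs the sketch recovers the order of magnitude of the quoted $\xi$vxy, $\xi$v, and $\xi$.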
An alternate approach is to presume instead that what is conducting the flow from line 1 to line 2 is not the entire cross section of the sample cell but rather a collection of discrete structures (say, dislocation lines, or grain boundaries). If this were the case, with one dimension set at x = 0.5 nm, an atomic thickness, then for the flow from line 1 to line 2, $\xi$vy = 0.16 cm$^2$/sec. If we assume that $\xi$ = 1 for what moves along these structures then vy = 0.16 cm$^2$/sec. If we adopt the point of view that what can flow in such a thin dimension is akin to a helium film, we can take a critical velocity of something like 200 cm/sec[@Telschow1974]. In such a case, we find y = 8 $\times$ 10$^{-4}$ cm. If our structures conduct along an axis with cross section, say, 0.5 nm $\times$ 0.5 nm, then we would need 1.6 $\times$ 10$^4$ such structures to act as pipe-like conduits. This, given the volume of our cell between our two Vycor rods (0.6 cm$^3$), would require a density of such structures of at least 2.67 $\times$ 10$^4$ cm$^{-2}$, and roughly five times this number (10$^5$ cm$^{-2}$) to carry the flux that also contributes mass to the solid as its pressure increases.
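The same bookkeeping for the discrete-conduit picture can be sketched in a few lines (the film critical velocity is the assumed ~200 cm/sec value cited in the text):

```python
XI_V_XY = 8e-9   # cm^3/sec, flux parameter deduced from the figure 2 record
X_CM = 0.5e-7    # one dimension fixed at an atomic thickness, 0.5 nm in cm
V_FILM = 200.0   # cm/sec, assumed film-like critical velocity [Telschow et al.]

# With xi = 1 along the structures, the remaining product v*y:
vy = XI_V_XY / X_CM          # cm^2/sec
# Total transverse width needed at the film critical velocity:
y_total = vy / V_FILM        # cm
# Number of pipe-like conduits of cross section 0.5 nm x 0.5 nm:
n_conduits = y_total / X_CM
```

This reproduces vy = 0.16 cm$^2$/sec, y = 8 $\times$ 10$^{-4}$ cm, and the quoted 1.6 $\times$ 10$^4$ conduits.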
We have conducted experiments that show the first evidence for flow of helium through a region containing solid hcp $^4$He off the melting curve. The phase diagram appears to have two regions. Samples grown at lower pressures show flow that apparently depends on sample history and is reduced at higher temperature, which is evidence for a dependence on temperature. Samples grown at higher pressures show no clear evidence for any such flow for times longer than 10 hours. The temperatures utilized for this work are well above the temperatures at which much attention has been focused, but interesting behavior is seen. Further measurements will be required to establish in more detail how such behavior depends on pressure and temperature, and on sample history, and the relevance (if any) of our observations to the torsional oscillator and shear modulus experiments that were conducted at lower temperatures.
We thank B. Svistunov and N. Prokofev for illuminating discussions, which motivated us to design this experiment. We also thank S. Balibar and J. Beamish for very helpful discussions and advice on the growth of solid helium, M.C.W. Chan, R.A. Guyer, H. Kojima, W.J. Mullin, J.D. Reppy, E. Rudavskii and Ye. Vekhov for discussions. This work was supported by NSF DMR 06-50092, CDRF 2853, UMass RTF funds and facilities supported by the NSF-supported MRSEC.
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, , , ****, ().
, ****, ().
, , , ****, ().
, ****, ().
, , , , , ****, ().
, , , ****, ().
, , , ****, ().
, ****, ().
, ****, ().
, , , ****, ().
, ****, ().
, , , , , , ****, ().
, , , , , , ****, ().
, , , , , , ****, ().
, ****, ().
(), <http://online.itp.ucsb.edu/online/smatter_m06/svistunov/>.
, , , , ****, ().
, , , , ****, ().
, , , , , ****, ().
, ****, ().
, , , ****, ().
// Copyright 2004-present Facebook. All Rights Reserved.
#include "SamplingProfilerJniMethod.h"
#include <JavaScriptCore/JSProfilerPrivate.h>
#include <jschelpers/JSCHelpers.h>
#include <jni.h>
#include <string>
using namespace facebook::jni;
namespace facebook {
namespace react {
/* static */ jni::local_ref<SamplingProfilerJniMethod::jhybriddata>
SamplingProfilerJniMethod::initHybrid(jni::alias_ref<jclass>,
jlong javaScriptContext) {
return makeCxxInstance(javaScriptContext);
}
/* static */ void SamplingProfilerJniMethod::registerNatives() {
registerHybrid(
{makeNativeMethod("initHybrid", SamplingProfilerJniMethod::initHybrid),
makeNativeMethod("poke", SamplingProfilerJniMethod::poke)});
}
SamplingProfilerJniMethod::SamplingProfilerJniMethod(jlong javaScriptContext) {
context_ = reinterpret_cast<JSGlobalContextRef>(javaScriptContext);
}
void SamplingProfilerJniMethod::poke(
jni::alias_ref<JSPackagerClientResponder::javaobject> responder) {
if (!JSC_JSSamplingProfilerEnabled(context_)) {
responder->error("The JSSamplingProfiler is disabled. See this "
"https://fburl.com/u4lw7xeq for some help");
return;
}
JSValueRef jsResult = JSC_JSPokeSamplingProfiler(context_);
if (JSC_JSValueGetType(context_, jsResult) == kJSTypeNull) {
responder->respond("started");
} else {
JSStringRef resultStrRef = JSValueToStringCopy(context_, jsResult, nullptr);
// JSStringGetLength counts UTF-16 code units and can undercount the bytes
// needed for UTF-8, so size the buffer with the dedicated API; a std::string
// also avoids the non-standard variable-length array.
size_t bufferSize = JSStringGetMaximumUTF8CStringSize(resultStrRef);
std::string buffer(bufferSize, '\0');
JSStringGetUTF8CString(resultStrRef, &buffer[0], bufferSize);
JSStringRelease(resultStrRef);
responder->respond(buffer.c_str());
}
}
} // namespace react
} // namespace facebook
Bob Alcivar
Bob Alcivar (born July 8, 1938, in Chicago, Illinois) is an American music producer, composer, conductor and keyboard player. He is the father of rock keyboard player Jim Alcivar (Montrose, Gamma).
Discography
The Signatures - Their Voices and Instruments (1957) bass, arranger, vocals
The Signatures - Sing In (1958)
The Signatures - Prepare to Flip! (1959)
Julie London - Around Midnight (1960) - composer
The New Christy Minstrels - The Wandering Minstrels (1965) - vocal arrangement
The New Christy Minstrels - New Kick! (1966) arranger, director
The 5th Dimension - The Age of Aquarius (1969) - arranger
The Association - The Association (1969) - arranger
The Carnival - The Carnival (1969) - arranger
Seals & Crofts - Seals & Crofts (1970) - producer
The Sandpipers - Come Saturday Morning (1970) - producer & arranger
The 5th Dimension - Portrait (1970) - arranger
Sérgio Mendes & Brasil '77 - Love Music (1973) - arranger, keyboards, vocals
Tim Weisberg - Dreamspeaker - (1974) - arranger
Tom Waits - The Heart of Saturday Night (1974) - arranger
The 5th Dimension - Soul & Inspiration - (1974) - arranger
Sérgio Mendes & Brasil '77 - Vintage 74 - (1974) - vocal arrangement, rhythm arrangement
Sérgio Mendes & Brasil '77 - Sérgio Mendes - (1975) vocal arrangement
Montrose - Jump On It (1976) - string arrangement
Bette Midler - Broken Blossom - (1977) - arranger on "I Never Talk To Strangers" (duet with Tom Waits)
Bruce Johnston - Going Public (1977) - horn arrangement, string arrangement
Tim Weisberg - Live at Last (1977) - producer
Marilyn McCoo & Billy Davis, Jr. - The Two of Us (1977) - keyboards
Ronnie Montrose - Open Fire (1978) - orchestra arrangement, conductor
Tom Waits - Blue Valentine (1978) - orchestra
The Beach Boys - Keepin' the Summer Alive (1980) - horn arrangements
Tom Waits - Heartattack and Vine (1980) - string arrangement, orchestral arrangement, conductor
Seals & Crofts - Longest Road (1980) - string arrangement
Tom Waits - One from the Heart (1982) - piano, orchestral arrangement, conductor
Ceremony - Hang Out Your Poetry (1993) - arranger, string arrangement
Jazz at the Movies Band - One from the Heart: Sax at the Movies II (1994) - arranger, conductor
Royal Philharmonic Orchestra - Symphonic Sounds: The Music of Beach Boys (1998) - conductor, orchestral arrangement
Jazz at the Movies - The Bedroom Mixes (2000) - arranger
Bob Alcivar - Bahai Prayers - (2000)
Film
Butterflies Are Free (1972)
The Crazy World of Julius Vrooder (1974)
Olly Olly Oxen Free (1978)
One From the Heart (1982)
The Best Little Whorehouse in Texas (arranger, 1982)
Hysterical (1983)
That Secret Sunday (TV) (1986)
Blind Witness (TV) (1989)
Naked Lie (TV) (1989)
Roxanne: The Prize Pulitzer (TV) (1989)
Sparks: The Price of Passion (TV) (1990)
Deadly Medicine (TV) (1991)
External links
Allmusic biography
Film Reference Biography
Category:1938 births
Category:Living people
Category:Musicians from Chicago
Category:20th-century American keyboardists
Category:Record producers from Illinois
/* Copyright 2019 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
import {Polygon} from '/lib/math/polygon2d.js';
import * as moduleInterface from '/lib/module_interface.js';
import * as moduleTicker from '/client/modules/module_ticker.js';
import * as network from '/client/network/network.js';
import * as peerNetwork from '/client/network/peer.js';
import {easyLog} from '/lib/log.js';
import assert from '/lib/assert.js';
import asset from '/client/asset/asset.js';
import conform from '/lib/conform.js';
import inject from '/lib/inject.js';
import * as stateManager from '/client/state/state_manager.js';
import {TitleCard} from '/client/title_card.js';
import * as time from '/client/util/time.js';
import {delay} from '/lib/promise.js';
function createNewContainer(name) {
var newContainer = document.createElement('div');
newContainer.className = 'container';
newContainer.id = 't-' + time.now();
newContainer.setAttribute('moduleName', name);
return newContainer;
}
export const FadeTransition = {
start(container) {
if (container) {
container.style.opacity = 0.001;
document.querySelector('#containers').appendChild(container);
}
},
async perform(oldModule, newModule, deadline) {
if (newModule.name == '_empty') {
// Fading out.. so fade *out* the *old* container.
oldModule.container.style.transition =
'opacity ' + time.until(deadline).toFixed(0) + 'ms';
oldModule.container.style.opacity = 0.0;
} else {
newModule.container.style.transition =
'opacity ' + time.until(deadline).toFixed(0) + 'ms';
newModule.container.style.opacity = 1.0;
}
// TODO(applmak): Maybe wait until css says that the transition is done?
await delay(time.until(deadline));
}
}
export class ClientModule {
constructor(name, path, config, titleCard, deadline, geo, transition) {
// The module name.
this.name = name;
// The path to the main file of this module.
this.path = path;
// The module config.
this.config = config;
// The title card instance for this module.
this.titleCard = titleCard;
// Absolute time when this module is supposed to be visible. Module will
// actually be faded in by deadline + 5000ms.
this.deadline = deadline;
// The wall geometry.
this.geo = geo;
// The transition to use to transition to this module.
this.transition = transition;
// The dom container for the module's content.
this.container = null;
// Module class instance.
this.instance = null;
// Network instance for this module.
this.network = null;
}
// Deserializes from the json serialized form of ModuleDef in the server.
static deserialize(bits) {
if (bits.module.name == '_empty') {
return ClientModule.newEmptyModule(bits.time);
}
return new ClientModule(
bits.module.name,
bits.module.path,
bits.module.config,
new TitleCard(bits.module.credit),
bits.time,
new Polygon(bits.geo),
FadeTransition,
);
}
static newEmptyModule(deadline = 0, transition = FadeTransition) {
return new ClientModule(
'_empty',
'',
{},
new TitleCard({}),
deadline,
new Polygon([{x: 0, y:0}]),
transition
);
}
// Extracted out for testing purposes.
static async loadPath(path) {
return await import(path);
}
async instantiate() {
this.container = createNewContainer(this.name);
if (!this.path) {
return;
}
const INSTANTIATION_ID =
`${this.geo.extents.serialize()}-${this.deadline}`;
this.network = network.forModule(INSTANTIATION_ID);
let openNetwork = this.network.open();
this.stateManager = stateManager.forModule(network, INSTANTIATION_ID);
const fakeEnv = {
asset,
debug: easyLog('wall:module:' + this.name),
game: undefined,
network: openNetwork,
titleCard: this.titleCard.getModuleAPI(),
state: this.stateManager.open(),
wallGeometry: this.geo,
peerNetwork,
assert,
};
try {
const {load} = await ClientModule.loadPath(this.path);
if (!load) {
throw new Error(`${this.name} did not export a 'load' function!`);
}
const {client} = inject(load, fakeEnv);
conform(client, moduleInterface.Client);
this.instance = new client(this.config);
} catch (e) {
// Something went very wrong; wind everything down.
this.network.close();
this.network = null;
throw e;
}
}
// Prepares the module to be shown soon; throws (after disposing) if it fails.
async willBeShownSoon() {
if (!this.path) {
return;
}
// Prep the container for transition.
// TODO(applmak): Move the transition smarts out of ClientModule.
this.transition.start(this.container);
try {
await this.instance.willBeShownSoon(this.container, this.deadline);
} catch(e) {
this.dispose();
throw e;
}
}
// Starts the module's fade-in; throws (after disposing) if it fails.
beginTransitionIn(deadline) {
if (!this.path) {
return;
}
moduleTicker.add(this.name, this.instance);
try {
this.instance.beginFadeIn(deadline);
} catch (e) {
this.dispose();
throw e;
}
}
finishTransitionIn() {
if (!this.path) {
return;
}
this.titleCard.enter();
this.instance.finishFadeIn();
}
beginTransitionOut(deadline) {
if (!this.path) {
return;
}
this.titleCard.exit();
this.instance.beginFadeOut(deadline);
}
finishTransitionOut() {
if (!this.path) {
return;
}
this.instance.finishFadeOut();
}
async performTransition(otherModule, transitionFinishDeadline) {
await this.transition.perform(otherModule, this, transitionFinishDeadline);
}
dispose() {
if (this.container) {
this.container.remove();
this.container = null;
}
if (!this.path) {
return;
}
this.titleCard.exit(); // Just in case.
moduleTicker.remove(this.instance);
if (this.network) {
this.stateManager.close();
this.stateManager = null;
this.network.close();
this.network = null;
}
}
}
#if !defined(BOOST_PP_IS_ITERATING)
///// header body
#ifndef BOOST_MPL_AUX778076_ADVANCE_BACKWARD_HPP_INCLUDED
#define BOOST_MPL_AUX778076_ADVANCE_BACKWARD_HPP_INCLUDED
// Copyright Aleksey Gurtovoy 2000-2004
//
// Distributed under the Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
//
// See http://www.boost.org/libs/mpl for documentation.
// $Id$
// $Date$
// $Revision$
#if !defined(BOOST_MPL_PREPROCESSING_MODE)
# include <boost/mpl/prior.hpp>
# include <boost/mpl/apply_wrap.hpp>
#endif
#include <boost/mpl/aux_/config/use_preprocessed.hpp>
#if !defined(BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS) \
&& !defined(BOOST_MPL_PREPROCESSING_MODE)
# define BOOST_MPL_PREPROCESSED_HEADER advance_backward.hpp
# include <boost/mpl/aux_/include_preprocessed.hpp>
#else
# include <boost/mpl/limits/unrolling.hpp>
# include <boost/mpl/aux_/nttp_decl.hpp>
# include <boost/mpl/aux_/config/eti.hpp>
# include <boost/preprocessor/iterate.hpp>
# include <boost/preprocessor/cat.hpp>
# include <boost/preprocessor/inc.hpp>
namespace boost { namespace mpl { namespace aux {
// forward declaration
template< BOOST_MPL_AUX_NTTP_DECL(long, N) > struct advance_backward;
# define BOOST_PP_ITERATION_PARAMS_1 \
(3,(0, BOOST_MPL_LIMIT_UNROLLING, <boost/mpl/aux_/advance_backward.hpp>))
# include BOOST_PP_ITERATE()
// implementation for N that exceeds BOOST_MPL_LIMIT_UNROLLING
template< BOOST_MPL_AUX_NTTP_DECL(long, N) >
struct advance_backward
{
template< typename Iterator > struct apply
{
typedef typename apply_wrap1<
advance_backward<BOOST_MPL_LIMIT_UNROLLING>
, Iterator
>::type chunk_result_;
typedef typename apply_wrap1<
advance_backward<(
(N - BOOST_MPL_LIMIT_UNROLLING) < 0
? 0
: N - BOOST_MPL_LIMIT_UNROLLING
)>
, chunk_result_
>::type type;
};
};
}}}
#endif // BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS
#endif // BOOST_MPL_AUX778076_ADVANCE_BACKWARD_HPP_INCLUDED
///// iteration, depth == 1
// For gcc 4.4 compatibility, we must include the
// BOOST_PP_ITERATION_DEPTH test inside an #else clause.
#else // BOOST_PP_IS_ITERATING
#if BOOST_PP_ITERATION_DEPTH() == 1
#define i_ BOOST_PP_FRAME_ITERATION(1)
template<>
struct advance_backward< BOOST_PP_FRAME_ITERATION(1) >
{
template< typename Iterator > struct apply
{
typedef Iterator iter0;
#if i_ > 0
# define BOOST_PP_ITERATION_PARAMS_2 \
(3,(1, BOOST_PP_FRAME_ITERATION(1), <boost/mpl/aux_/advance_backward.hpp>))
# include BOOST_PP_ITERATE()
#endif
typedef BOOST_PP_CAT(iter,BOOST_PP_FRAME_ITERATION(1)) type;
};
#if defined(BOOST_MPL_CFG_MSVC_60_ETI_BUG)
/// ETI workaround
template<> struct apply<int>
{
typedef int type;
};
#endif
};
#undef i_
///// iteration, depth == 2
#elif BOOST_PP_ITERATION_DEPTH() == 2
# define AUX778076_ITER_0 BOOST_PP_CAT(iter,BOOST_PP_DEC(BOOST_PP_FRAME_ITERATION(2)))
# define AUX778076_ITER_1 BOOST_PP_CAT(iter,BOOST_PP_FRAME_ITERATION(2))
typedef typename prior<AUX778076_ITER_0>::type AUX778076_ITER_1;
# undef AUX778076_ITER_1
# undef AUX778076_ITER_0
#endif // BOOST_PP_ITERATION_DEPTH()
#endif // BOOST_PP_IS_ITERATING
---
abstract: 'High-mass stars form within star clusters from dense, molecular regions, but is the process of cluster formation slow and hydrostatic or quick and dynamic? We link the physical properties of high-mass star-forming regions with their evolutionary stage in a systematic way, using Herschel and Spitzer data. In order to produce a robust estimate of the relative lifetimes of these regions, we compare the fraction of dense, molecular regions above a column density associated with high-mass star formation, N(H$_2$) $>$ 0.4-2.5 $\times$ 10$^{22}$ cm$^{-2}$, in the ‘starless’ (no signature of stars $\gtrsim$ 10 $M_\odot$ forming) and star-forming phases in a 2$^{\circ}$$\times$2$^{\circ}$ region of the Galactic Plane centered at $\ell$=30$^{\circ}$. Of regions capable of forming high-mass stars on $\sim$1 pc scales, the starless (or embedded beyond detection) phase occupies about 60-70% of the dense molecular region lifetime and the star-forming phase occupies about 30-40%. These relative lifetimes are robust over a wide range of thresholds. We outline a method by which relative lifetimes can be anchored to absolute lifetimes from large-scale surveys of methanol masers and UCHII regions. A simplistic application of this method estimates the absolute lifetime of the starless phase to be 0.2-1.7 Myr (about 0.6-4.1 fiducial cloud free-fall times) and the star-forming phase to be 0.1-0.7 Myr (about 0.4-2.4 free-fall times), but these are highly uncertain. This work uniquely investigates the star-forming nature of high-column density gas pixel-by-pixel and our results demonstrate that the majority of high-column density gas is in a starless or embedded phase.'
author:
- 'Cara Battersby, John Bally, & Brian Svoboda'
bibliography:
- 'references1.bib'
title: 'The Lifetimes of Phases in High-Mass Star-Forming Regions'
---
Introduction
============
Whether star clusters and high-mass stars form as the result of slow, equilibrium collapse of clumps [e.g., @tan06] over several free-fall times or if they collapse quickly on the order of a free-fall time [e.g., @elm07; @har07], perhaps mediated by large scale accretion along filaments [@mye09], remains an open question. The stars that form in these regions may disrupt and re-distribute the molecular material from which they formed without dissociating it, allowing future generations of star formation in the cloud with overall long GMC lifetimes [20-40 Myr, e.g.; @kaw09]. The scenario of quick, dynamic star formation sustained over a long time by continued inflow of material is motivated by a variety of observations [discussed in detail in @elm07; @har07], and more recently by the lack of starless massive protoclusters [@gin12; @urq14; @cse14] observed through blind surveys of cold dust continuum emission in the Galaxy. Additionally, observations of infall of molecular material on large scales [@sch10; @per13] suggest that GMCs are dynamic and evolve quickly, but that material may be continually supplied into the region.
To study the formation, early evolution, and lifetimes of high-mass star-forming regions, we investigate their earliest evolutionary phase in dense, molecular regions (DMRs). The gas in regions that form high-mass stars, DMRs, has high densities [10$^{4-7}$ cm$^{-3}$; @lad03] and cold temperatures [10-20 K; @rat10; @bat11] and is typically detected by submm observations where the cold, dust continuum emission peaks. Given the appropriate viewing angle, these regions can also be seen in silhouette as Infrared Dark Clouds (IRDCs) absorbing the diffuse, mid-IR, Galactic background light. @bat11 showed that by using a combination of data [including measurements of their temperatures and column densities from Hi-GAL; @mol10], we can sample DMRs and classify them as starless or star-forming in a more systematically robust way than just using one wavelength (e.g. an IRDC would not be ‘dark’ on the far side of the Galaxy, so a mid-IR-only selection would exclude those DMRs).
Most previous studies of high-mass star-forming region lifetimes have focused on discrete ‘clumps’ of gas, usually identified in the dust continuum [e.g. @hey16; @cse14; @dun11a]. However, oftentimes these ‘clumps’ contain sub-regions of quiescence and active star formation and cannot simply be classified as either. To lump these regions together and assign the entire ‘clump’ as star-forming or quiescent can cause information to be lost on smaller scales within the clump. For example, the filamentary cloud highlighted in a black box, centered at \[30.21, -0.18\], clearly contains an actively star-forming and a quiescent region, but is identified as a single clump in the Bolocam Galactic Plane Survey [@gin13; @ros10; @agu11], as 3 clumps in the first ATLASGAL catalog [@contreras13; @urq14b], and as 5 clumps in the second ATLASGAL catalog [@cse14]. Resolution and sensitivity are not the primary drivers for the number of clumps identified; rather, it is algorithmic differences, such as bolocat vs. clumpfind or gaussclumps.
In this paper, we present an alternate approach that circumvents the issues surrounding clump identification, by retaining the available information in a pixel-by-pixel analysis of the maps. All together, the pixels give us a statistical overview of the different evolutionary stages. Pixels that satisfy the criteria for high-mass star formation, explicated in the paper, are referred to as dense molecular regions (DMRs). We compare the fractions of the statistical ensembles of starless and star-forming DMRs to estimate their relative lifetimes.
Previous lifetime estimates, based primarily on mid-IR emission signatures toward samples of IRDCs or dust continuum clumps, found relative starless fractions between about 30-80% and extrapolate these to absolute starless lifetimes ranging anywhere between 10$^{3}$-10$^{6}$ years. Notably, the recent comprehensive studies of clump lifetimes from @svo16 and @hey16 find starless fractions of 47% and 69%, respectively. Our approach differs from previous methods by 1) not introducing clump boundaries, thereby using the highest resolution information available, 2) defining regions capable of forming high-mass stars based strictly on their column density, and 3) using the dust temperature and the mid-IR emission signature to classify a region as starless or star-forming.
Methods {#sec:method}
=======
The Galactic Plane centered at Galactic longitude $\ell =$ 30$^{\circ}$ contains one of the largest concentrations of dense gas and dust in the Milky Way. Located near the end of the Galactic bar and the start of the Scutum-Centaurus spiral arm, this region contains the massive W43 star forming complex at a distance of 5.5 kpc [@ell15] and hundreds of massive clumps and molecular clouds with more than 80% of the emission having $v_{\rm LSR}$ = 80 to 120 km s$^{-1}$, implying a kinematic distance between 5 and 9 kpc [@Carlhoff13; @ell13; @ell15]. We investigate the properties and star-forming stages of DMRs in a 2$^{\circ}$ $\times$ 2$^{\circ}$ field centered at \[$\ell$,b\] = \[30$^{\circ}$, 0$^{\circ}$\] using survey data from Hi-GAL [@mol10], GLIMPSE [@ben03], 6.7 GHz CH$_{3}$OH masers [@pes05], and UCHII regions [@woo89].
In a previous work, we measured T$_{dust}$ and N(H$_{2}$) from modified blackbody fits to background-subtracted Hi-GAL data from 160 to 500 $\mu$m using methods described in @bat11 and identified each pixel as mid-IR-bright, mid-IR-neutral, or mid-IR-dark, based on its contrast at 8 $\mu$m. The column density and temperature maps are produced using data which have had the diffuse Galactic component removed, as described in @bat11. We use the 25$''$ version of the maps, convolving and regridding all the data to this resolution, which corresponds to beam sizes of 0.6 and 1.1 pc and pixel sizes of 0.13 and 0.24 pc at typical distances of 5 and 9 kpc.
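A modified-blackbody fit of this kind can be sketched as a grid search over temperature with an analytic amplitude solve. This is illustrative only: it assumes a fixed $\beta$ = 2, noiseless synthetic fluxes, and an arbitrary normalization, whereas the actual pipeline fits calibrated, background-subtracted Hi-GAL intensities pixel by pixel:

```python
import math

H = 6.626e-27   # Planck constant, erg s
KB = 1.381e-16  # Boltzmann constant, erg/K
C = 2.998e10    # speed of light, cm/s

def planck(nu, T):
    """Planck function B_nu(T) in cgs units."""
    return 2.0 * H * nu**3 / C**2 / math.expm1(H * nu / (KB * T))

def greybody(nu, T, tau250, beta=2.0):
    """Optically thin modified blackbody tau(nu)*B_nu(T), tau pinned at 250 um."""
    nu250 = C / 0.025  # 250 um in Hz
    return tau250 * (nu / nu250) ** beta * planck(nu, T)

BANDS_UM = [160.0, 250.0, 350.0, 500.0]      # Hi-GAL bands used in the fits
FREQS = [C / (w * 1e-4) for w in BANDS_UM]   # um -> cm -> Hz

def fit_pixel(fluxes):
    """Grid-search T; for each T the best tau is a linear least-squares solve."""
    best = None
    for i in range(300):                     # 5.0-34.9 K in 0.1 K steps
        T = 5.0 + 0.1 * i
        shape = [greybody(nu, T, 1.0) for nu in FREQS]
        tau = sum(f * s for f, s in zip(fluxes, shape)) / sum(s * s for s in shape)
        chi2 = sum((f - tau * s) ** 2 for f, s in zip(fluxes, shape))
        if best is None or chi2 < best[0]:
            best = (chi2, T, tau)
    return best[1], best[2]

# Recover the parameters of a synthetic 15 K, tau250 = 1e-3 pixel:
T_fit, tau_fit = fit_pixel([greybody(nu, 15.0, 1e-3) for nu in FREQS])
```

On noiseless input the grid search recovers the input temperature to within the grid spacing; real fits would also propagate calibration and background-subtraction uncertainties.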
In this work, we identify pixels above a column density threshold capable of forming high-mass stars (§\[sec:nh2\_thresh\]), then, on a pixel-by-pixel basis, identify them as starless or star-forming (§\[sec:starry\]), and from their fractions infer their relative lifetimes. Using absolute lifetimes estimated from survey statistics of 6.7 GHz CH$_{3}$OH masers and UCHII regions, we anchor our relative lifetimes to estimate the absolute lifetimes of the DMRs (§\[sec:maser\] and \[sec:uchii\]).
Previous works (see §\[sec:comp\]) have estimated star-forming lifetimes over contiguous “clumps," typically $\sim$1 pc in size. However, sub-mm identified clumps often contain distinct regions in different stages of star formation. It was this realization that led us to the pixel-by-pixel approach. In the clump approach, actively star-forming and quiescent gas are lumped together; a single signature of star formation in a clump will qualify all of the gas within it as star-forming. This association of a large amount of non-star-forming gas with a star-forming clump could lead to erroneous lifetime estimates for the phases of high-mass star formation, though we note that this will be less problematic in higher-resolution analyses. Therefore, we use a pixel-by-pixel approach and consider any pixel with sufficient column density (see next section) to be a dense molecular region, DMR.

Column Density Thresholds for High-Mass Star Formation {#sec:nh2_thresh}
------------------------------------------------------
High-mass stars form in regions with surface densities $\Sigma$ $\sim$ 1 g cm$^{-2}$ [@kru08], corresponding to N(H$_{2}$) $\sim$ 2.1$\times$10$^{23}$ cm$^{-2}$. At distances of several kpc or more, most cores are highly beam-diluted in our 25$''$ beam. To derive a realistic high-mass star-forming column density threshold for cores beam-diluted by a 25$''$ beam, consider a spherical core with a constant column density $\Sigma$ = 1 g cm$^{-2}$ in the central core (defined as $r < r_{f}$, where r$_f$ is the flat inner portion of the density profile, determined from fits to the data) and a power-law drop off for $r > r_{f}$ $$n(r) = n_{f} (r / r_{f}) ^{-p}$$ where p is the density power-law exponent. The @mue02 study of 51 high-mass star-forming cores found a best-fit central core radius, r$_{f} ~\approx$ 1000 AU, and density power-law index, $p$ = 1.8. This model implies an H$_2$ central density of $n_{f}$ = 6.2 $\times$ 10$^{7}$ cm$^{-3}$, which, integrated over $r_{f}$ = 1000 AU, corresponds to the theoretical surface density threshold for forming high-mass stars of $\Sigma$ = 1 g cm$^{-2}$.
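The $\Sigma \leftrightarrow$ N(H$_2$) conversion and the model core profile are easy to make concrete. In this sketch, $\mu$ = 2.8 per H$_2$ molecule is an assumed standard value; $n_f$, r$_f$, and $p$ are the fit parameters quoted in the text:

```python
M_H = 1.674e-24    # hydrogen atom mass, g
MU_H2 = 2.8        # ASSUMED mean molecular weight per H2 molecule

def sigma_to_nh2(sigma):
    """Mass surface density (g/cm^2) -> H2 column density (cm^-2)."""
    return sigma / (MU_H2 * M_H)

def core_density(r_au, n_f=6.2e7, r_f_au=1000.0, p=1.8):
    """Model core: constant n_f inside r_f, power law n_f*(r/r_f)^-p outside."""
    if r_au <= r_f_au:
        return n_f
    return n_f * (r_au / r_f_au) ** (-p)

# Sigma = 1 g/cm^2 corresponds to N(H2) ~ 2.1e23 cm^-2:
n_threshold = sigma_to_nh2(1.0)
```

Integrating `core_density` along lines of sight and convolving with the 25$''$ beam at an assumed distance would give the beam-diluted thresholds quoted below.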
Integration of this model core along the line of sight and convolution with a 25$''$ beam results in a beam-diluted column density threshold, at typical distances of 5 and 9 kpc toward the $\ell$ = 30$^{\circ}$ field [@ell13; @ell15], of N(H$_{2}$) = 0.8 and 0.4 $\times$ 10$^{22}$ cm$^{-2}$, respectively. Pixels above this column density threshold are referred to as dense molecular regions (DMRs). We note that the column density maps used have had the diffuse Galactic background removed, as described in @bat11, so we can attribute all of the column density to the DMRs themselves.
As discussed in §\[sec:colfig\], the relative lifetimes are mostly insensitive to variations in the threshold column density from $\sim$0.3 to 1.3 $\times$ 10$^{22}$ cm$^{-2}$ (corresponding to distances of 11 and 3 kpc for the model core). We make two estimates of lifetimes throughout the text using the extreme ends of reasonable parameter space; for this section, the cutoffs are N(H$_{2}$) = 0.4 $\times$ 10$^{22}$ cm$^{-2}$ for the ‘generous’ estimate and N(H$_{2}$) = 0.8 $\times$ 10$^{22}$ cm$^{-2}$ for the ‘conservative’ estimate.
Alternatively, we apply the @kau10 criteria (hereafter KP10) for high-mass star formation. @kau10 observationally find that regions with high-mass star formation tend to have a mass-radius relationship of m(r) $>$ 870 $M_\odot$ (r/pc)$^{1.33}$. At our beam-sizes of 0.6 and 1.1 pc, this corresponds to column densities of about 2.5 and 1.6 $\times$ 10$^{22}$ cm$^{-2}$, respectively. The results of this column density threshold are discussed in more detail in each results section (§\[sec:colfig\], \[sec:maser\], and \[sec:uchii\]). The result that our relative lifetimes are mostly insensitive to variations in the threshold column density holds true with these higher column density thresholds.
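The KP10 thresholds can be rederived by spreading the mass-radius criterion over the beam. This sketch assumes $\mu$ = 2.8 per H$_2$; the result matches the quoted numbers only to within $\sim$10-15%, since the exact values depend on the adopted mean molecular weight:

```python
import math

M_SUN = 1.989e33   # g
PC = 3.086e18      # cm
M_H = 1.674e-24    # g
MU_H2 = 2.8        # ASSUMED mean molecular weight per H2

def kp10_threshold(beam_diameter_pc):
    """Beam-averaged N(H2) (cm^-2) when m(r) = 870 Msun (r/pc)^1.33 fills the beam."""
    r_pc = beam_diameter_pc / 2.0
    mass_g = 870.0 * r_pc ** 1.33 * M_SUN
    area_cm2 = math.pi * (r_pc * PC) ** 2
    return mass_g / area_cm2 / (MU_H2 * M_H)

n_5kpc = kp10_threshold(0.6)   # 0.6 pc beam (5 kpc): a few x 10^22 cm^-2
n_9kpc = kp10_threshold(1.1)   # 1.1 pc beam (9 kpc): somewhat lower
```

Because the criterion is shallower than m $\propto$ r$^2$, the implied column threshold decreases with beam size, which is why the 9 kpc value is lower.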
Starless vs. Star-Forming {#sec:starry}
-------------------------
In this paper, dust temperature distributions are combined with mid-IR star formation signatures to determine whether each DMR is ‘starless’ or ‘star-forming’. The mid-IR signature is determined from the contrast at 8 $\mu$m, i.e. how ‘bright’ or ‘dark’ the pixel is relative to the background - the 8 $\mu$m image smoothed with a median filter to 25$''$ resolution [see @bat11 for details]. We use the mid-IR signature at 8 $\mu$m (mid-IR dark or bright) as the main discriminator and the dust temperature as a secondary discriminator, particularly to help identify regions that are cold and starless, but do not show particular absorption as an IRDC as they may be on the far side of the Galaxy.
We use the range of dust temperatures found to be associated with mid-IR-dark and bright regions (based on Gaussian fits to those temperature distributions) to help discriminate whether DMRs are ‘starless’ or ‘star-forming.’ If a DMR is mid-IR-dark and its temperature is within the normal cold dust range (2 or 3-$\sigma$ for the ‘conservative’ and ‘generous’ thresholds respectively), then it is classified as ‘starless.’ If a DMR is mid-IR-bright and its temperature is within the normal warm dust range (2 or 3-$\sigma$ for the ‘conservative’ and ‘generous’ thresholds respectively), then it is classified as ‘star-forming.’ Slight changes in the temperature distributions (e.g., including all DMRs down to 0 K as starless and up to 100 K as star-forming) have a negligible effect. Pixels that are mid-IR-bright and cold or mid-IR-dark and warm are extremely rare and left out of the remaining analysis. When a DMR is mid-IR-neutral, its temperature is used to classify it as ‘starless’ or ‘star-forming.’ A flow chart depicting this decision tree is shown in .
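The decision tree described above can be summarized as a small classifier. This is an illustrative sketch only: the temperature windows below are hypothetical stand-ins for the 2- or 3-$\sigma$ Gaussian-fit ranges, which are not reproduced here.

```python
def classify_dmr(mid_ir, t_dust, cold_range, warm_range):
    """Classify one DMR pixel following the decision tree in the text.

    mid_ir: 8 um contrast class, 'dark', 'bright', or 'neutral'.
    cold_range / warm_range: (lo, hi) dust temperature windows in K,
    standing in for the Gaussian-fit ranges (values hypothetical)."""
    cold = cold_range[0] <= t_dust <= cold_range[1]
    warm = warm_range[0] <= t_dust <= warm_range[1]
    if mid_ir == 'dark' and cold:
        return 'starless'
    if mid_ir == 'bright' and warm:
        return 'star-forming'
    if mid_ir == 'neutral':          # temperature alone decides
        if cold:
            return 'starless'
        if warm:
            return 'star-forming'
    return 'excluded'   # rare mismatched pixels are dropped from the analysis

# illustrative temperature windows only
print(classify_dmr('dark', 16.0, (10, 20), (22, 40)))     # starless
print(classify_dmr('neutral', 30.0, (10, 20), (22, 40)))  # star-forming
```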
This study is only sensitive to the star-forming signatures of high-mass stars. Therefore, the term ‘starless’ refers only to the absence of high-mass stars forming; the region may support active low- or intermediate-mass star formation. Using @rob06 models of dust emission toward YSOs, scaled to 5 and 9 kpc distances and apertures, we find that our typical starless flux limit of 130 MJy/Sr at 8 (technically we use a contrast not a specific flux, but this is the approximate level for most of the region at our contrast cut) is sensitive enough to detect the vast majority (85%) of possible model YSOs with a mass above 10 . Some YSO models will always be undetected (due to unfortunate inclinations, etc.) no matter the flux limit, so we estimate that we are sensitive to forming massive stars above $\sim$10 .
While contrast at 8 is an imperfect measure of the presence or absence of high-mass star formation, it was found to be a powerful discriminator in @bat11, who compared it with dust temperature, 24 emission, maser emission, and Extended Green Objects. In particular, since we employ the contrast at 8 smoothed over 25 using a median filter, rather than the simple presence or absence of emission, we are unlikely to be affected by field stars, which are small and very bright, and thus will be removed by the median filter smoothing. PAH emission at 8 toward Photon-Dominated Regions (PDRs) is not likely to be an issue since the low column density of dust towards PDRs will preclude their inclusion as DMRs in the first place.
Lifetimes {#sec:lifetimes}
=========
Assumptions in Deriving Relative Lifetimes {#sec:caveats}
------------------------------------------
Several necessary assumptions were made in inferring the relative lifetimes of the starless and star-forming stages for high-mass star-forming regions. i) The sample is complete and unbiased in time and space. ii) The sum of DMRs represents with equal probability all the phases in the formation of a massive star. This can be achieved either with a constant star formation rate (SFR) or a large enough sample. iii) DMRs are forming or will form high-mass stars. iv) The lifetimes don’t depend on mass. v) Starless and star-forming regions occupy similar areas on the sky (i.e. their area is proportional to the mass of gas). vi) The signatures at 8 (mid-IR bright or dark) and their associated temperature distributions are good indicators of the presence or absence of a high-mass star. vii) Pixels above the threshold column density contain beam-diluted dense cores rather than purely beam-filling low-surface density gas. We discuss these assumptions below.
Assumptions (i) and (ii) are reasonable given the size of our region; we are sampling many populations in different evolutionary stages. The relative lifetimes we derive here may apply elsewhere to special regions in the Galaxy with similar SFRs and properties. However, since this region contains W43, a ‘mini-starburst’ [e.g. @bal10; @lou14], it is likely in a higher SFR phase and may not be applicable generally throughout the Galaxy. One possible complication to assumption (ii) is the possibility that some high-mass stars can be ejected from their birthplaces with velocities sufficient to be physically separated from their natal clumps within about 1 Myr. However, this should not result in an underestimate of the star-forming fraction of DMRs, since high-mass star formation is highly clustered, and if the stellar density is sufficient for ejection of high-mass stars, then the remaining stars should easily classify the region as a DMR. On the other hand, the ejected stars could in principle classify a dense, starless region they encounter as ‘star-forming’ by proxy. We expect that this would be rare, but suggest it as an uncertainty worthy of further investigation.
We argue in favor of assumption (iii) in §\[sec:nh2\_thresh\]. While we expect that assumption (iv) is not valid over all clump sizes, this assumption is reasonable (and necessary) for our sample, as the column density variation over our DMRs is very small (from the threshold density to double that value). This lack of column density variation in our DMRs also argues in favor of assumption (v). Various studies [e.g., @bat10; @cha09] argue in favor of assumption (vi), statistically, but more sensitive and higher resolution studies will continue to shed light on the validity of this assumption. Assumption (vii) is supported by the fact that interferometric observations of DMRs [e.g. @bat14b] and density measurements [@gin15] demonstrate that most (if not all) DMRs contain dense substructure rather than purely beam-filling low surface-density gas. While we expect that the high column density of some DMRs may not indicate a high volume density, but rather long filaments seen ‘pole-on,’ we expect this to be quite rare. If this special geometry were common, it would lead to an over-estimate of the starless fraction.

Observed Relative Lifetimes {#sec:colfig}
---------------------------
For both the ‘conservative’ and ‘generous’ estimates, the relative percentage of DMRs in the starless / star-forming phase is about 70%/30%. The higher column density thresholds from KP10 give relative percentages of 63%/37%. These lifetimes are shown in . The dashed lines in show the percentage of pixels in the starless vs. starry categories for a range of column density thresholds shown on the x-axis. Below a column density threshold of about 0.3 $\times$ 10$^{22}$ cm$^{-2}$, starless and starry pixels are equally distributed (50%). At any column density threshold in the range of 0.3 - 1.3 $\times$ 10$^{22}$ cm$^{-2}$, about 70% of the pixels are categorized as starless and 30% as starry (i.e. star-forming). At the higher column density thresholds from KP10, 1.6 - 2.5 $\times$ 10$^{22}$ cm$^{-2}$, about 63% are starless and 37% star-forming.
At each column density, the ratio between the number of pixels in each category (solid lines in , N$_{starless}$ and N$_{starry}$) divided by the total number of pixels at that column density (N$_{total}$), multiplied by 100, gives the percentage of pixels in each category (dashed lines in , Starless % and Starry %). $$\rm{Starless~\%} = [ N_{starless} / N_{total} ] \times 100$$ $$\rm{Starry~\%} = [ N_{starry} / N_{total} ] \times 100$$
The histograms in solid lines in show the number of pixels in each category with a given column density, as noted on the x-axis; i.e. a column density probability distribution function. This figure demonstrates that the column densities of starless pixels are, on average, higher. We interpret this to mean that regions categorized as starless have a high capacity for forming future stars (high column density and cold). Our method of comparing these populations above a threshold column density allows us to disentangle them, and derive relative lifetime estimates for regions on the brink of forming stars (starless) vs. actively forming stars (starry).
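The percentage definitions above amount to a simple cut-and-count over the per-pixel values. The sketch below uses synthetic column densities and labels as stand-ins for the Hi-GAL map and the mid-IR/temperature classification, so only the bookkeeping, not the numbers, is meaningful.

```python
import random

def starless_percentage(pixels, cut):
    """Percent of pixels at or above the column density cut that carry a
    'starless' label; pixels is a list of (n_h2, is_starless) pairs."""
    above = [is_sl for n, is_sl in pixels if n >= cut]
    return 100.0 * sum(above) / len(above)

# synthetic stand-ins for the per-pixel Hi-GAL values
# (column densities in units of 1e22 cm^-2; labels drawn at a toy 70/30 split)
rng = random.Random(0)
pixels = [(rng.lognormvariate(-0.5, 0.6), rng.random() < 0.7)
          for _ in range(20000)]

for cut in (0.4, 0.8):
    p = starless_percentage(pixels, cut)
    print(f"cut {cut}: starless {p:.0f}%, starry {100 - p:.0f}%")
```

Because the toy labels are drawn independently of column density, both cuts return roughly the input 70%/30% split; in the real data the split is an empirical result, not an input.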
Under the assumptions discussed in §\[sec:method\] and \[sec:caveats\], high-mass star-forming DMRs spend about 70% of their lives in the starless phase and 30% in the actively star-forming phase. If we instead apply the KP10 criteria as a column density threshold (see §\[sec:nh2\_thresh\]), our relative lifetimes are 63% for the starless phase and 37% for the star-forming phase. We therefore conclude that the starless phase occupies approximately 60-70% of the lifetime of DMRs while the star-forming phase is about 30-40% over a wide range of parameters, both column density and temperature thresholds, as shown by the ‘conservative’ vs. ‘generous’ criteria.
While our relative lifetime estimates are robust over a range of parameters, the connection of our relative lifetimes to absolute lifetimes is extremely uncertain. We present below two methods to connect our relative lifetimes to absolute timescales. The first is to link the methanol masers detected in our region with Galactic-scale maser surveys, which provide an estimate of the maser lifetime. The second approach is to instead assume an absolute lifetime of UCHII regions to anchor our relative DMR lifetimes.
Maser association and absolute lifetime estimates {#sec:maser}
-------------------------------------------------
We use the association of DMRs with 6.7 GHz Class II methanol masers (thought to be almost exclusively associated with regions of high-mass star formation), and the lifetime of these masers from @van05, to anchor our relative lifetimes to absolute timescales. These lifetimes are highly uncertain and rest on a number of assumptions, therefore care should be taken in their interpretation. We utilize unbiased Galactic plane searches for methanol masers by @szy02 and @ell96 compiled by @pes05. We define the ‘sizes’ of the methanol masers to be spherical regions with a radius determined by the average size of a cluster forming clump associated with a methanol maser from the BGPS as $R\sim0.5$ pc [@svo16], corresponding to 30 diameter apertures for the average distance in this field. While methanol maser emission comes from very small areas of the sky [e.g., @wal98], they are often clustered, so these methanol maser “sizes" are meant to represent the extent of the star-forming region.
The absolute lifetime of DMRs can be anchored to the duration of the 6.7 GHz Class II CH$_{3}$OH masers, which are estimated to have lifetimes of $\sim$35,000 years [by extrapolating the number of masers identified in these same surveys to the total number of masers in the Milky Way and using an IMF and global SFR to estimate their lifetimes; @van05]. We note that the @van05 extrapolated total number of methanol masers in the Galaxy of 1200 is in surprisingly good agreement with the more recent published methanol maser count from the MMB group [they find 582 over half the Galaxy; @cas10; @cas11; @gre12]; therefore, though this catalog is outdated, its absolute lifetime estimate remains intact. We nevertheless suggest that future works use these new MMB catalogs, which have exquisite positional accuracy.
While the fraction of starless vs. star-forming DMRs is insensitive to the column density cuts, the fraction of DMRs associated with methanol masers ($f_{maser}$) increases as a function of column density (see ). In the ‘generous’ and ‘conservative’ cuts, the methanol maser fraction is 2% and 4% ($f_{maser}$), respectively, corresponding to total DMR lifetimes ($\tau_{total}$) of 1.9 Myr and 0.9 Myr. Using the alternative column density threshold from @kau10, called KP10, as discussed in §\[sec:nh2\_thresh\], the maser fraction is about 7-12%, corresponding to total DMR lifetimes ($\tau_{total}$) of 0.3 and 0.5 Myr. Given our starless and star-forming fractions ($f_{starless}$=0.6-0.7 and $f_{starry}$=0.3-0.4) and the methanol maser lifetime [$\tau_{maser}$=35,000 years; @van05] we can calculate the total DMR lifetime and relative phase lifetimes using the following equations: $$\tau_{total} = \frac{\tau_{maser}}{f_{maser}}$$ $$\tau_{starless} = f_{starless}~ \tau_{total}$$ $$\tau_{starry} = f_{starry}~ \tau_{total}$$ The ‘starless’ lifetime then is about 0.6-1.4 Myr, while the ‘star-forming’ lifetime is 0.2-0.6 Myr considering only the ‘conservative’ and ‘generous’ thresholds. The KP10 column density threshold for high-mass star formation, because of its larger methanol maser fraction, yields absolute starless lifetimes of 0.2-0.3 Myr and star-forming lifetimes of 0.1-0.2 Myr. Overall, the range of absolute starless lifetimes is 0.2 - 1.4 Myr and star-forming lifetimes is 0.1 - 0.6 Myr. See for a summary of the relative and absolute lifetimes for various methods.
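The maser-anchored arithmetic above can be reproduced directly. Note that the 2% and 4% maser fractions quoted are rounded, so this sketch returns total lifetimes of about 1.75 and 0.88 Myr, approximating rather than exactly reproducing the 1.9 and 0.9 Myr in the text.

```python
TAU_MASER = 0.035   # Myr; 6.7 GHz CH3OH maser lifetime from van der Walt (2005)

def dmr_lifetimes(f_maser, f_starless):
    """Total, starless, and star-forming lifetimes from the relations above."""
    tau_total = TAU_MASER / f_maser
    return tau_total, f_starless * tau_total, (1.0 - f_starless) * tau_total

for label, f_m, f_sl in [("generous", 0.02, 0.71),
                         ("conservative", 0.04, 0.72)]:
    total, starless, starry = dmr_lifetimes(f_m, f_sl)
    print(f"{label}: total {total:.2f} Myr, "
          f"starless {starless:.2f} Myr, starry {starry:.2f} Myr")
```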
Criteria N(H$_2$) \[cm$^{-2}$\] $f_{starless}$ $f_{starry}$ Anchor $\tau_{total}$ \[Myr\] $\tau_{starless}$ $\tau_{starry}$
-------------- -------------------------- ---------------- -------------- -------- ------------------------ ------------------- ----------------- --
Generous 0.4$\times$10$^{22}$ 0.71 0.29 maser 1.9 1.4 0.6
Conservative 0.8$\times$10$^{22}$ 0.72 0.28 maser 0.9 0.6 0.3
Either 0.4-0.8$\times$10$^{22}$ 0.7 0.3 UCHII 1.2-2.4 0.8-1.7 0.4-0.7
KP10 near 2.5$\times$10$^{22}$ 0.63 0.37 maser 0.3 0.2 0.1
KP10 far 1.6$\times$10$^{22}$ 0.63 0.37 maser 0.5 0.3 0.2
Either KP 1.6-2.5$\times$10$^{22}$ 0.63 0.37 UCHII 1.0-1.9 0.6-1.2 0.4-0.7
Overall 0.4-2.5$\times$10$^{22}$ 0.6-0.7 0.3-0.4 both 0.3-2.4 0.2-1.7 0.1-0.7
Criteria Median N(H$_2$) Anchor $\tau_{ff}$ \[Myr\] N$_{ff, total}$ N$_{ff, starless}$ N$_{ff, starry}$
-------------- -------------------------- -------- --------------------- ----------------- -------------------- ------------------
Generous 0.6$\times$10$^{22}$ maser 0.7 2.8 2.0 0.8
Conservative 1.2$\times$10$^{22}$ maser 0.5 1.8 1.3 0.5
Either 0.6-1.2$\times$10$^{22}$ UCHII 0.5-0.7 1.7-4.8 1.2-3.4 0.5-1.4
KP10 near 3.5$\times$10$^{22}$ maser 0.3 1.0 0.6 0.4
KP10 far 2.2$\times$10$^{22}$ maser 0.4 1.1 0.7 0.4
Either KP 2.2-3.5$\times$10$^{22}$ UCHII 0.3-0.4 2.4-6.5 1.5-4.1 0.9-2.4
Overall 0.6-3.5$\times$10$^{22}$ both 0.3-0.7 1.0-6.5 0.6-4.1 0.4-2.4
UCHII Region Association and Lifetimes {#sec:uchii}
--------------------------------------
Since the absolute lifetime of methanol masers is quite uncertain, we tie our relative lifetimes to absolute lifetimes of UCHII regions using a different method to probe the range of parameter space that is likely for DMRs. @woo89b [@woo89] determined that the lifetimes of UCHII regions are longer than anticipated based on the expected expansion rate of D-type ionization fronts as HII regions evolve toward pressure equilibrium. They estimate that O stars spend about 10-20% of their main-sequence lifetime in molecular clouds as UCHII regions, or about 3.6 $\times$ 10$^{5}$ years[^1], $\tau_{UCHII}$. The remaining link between the absolute and relative lifetimes is the fraction of DMRs associated with UCHII regions, particularly O stars. @woo89 look for only the brightest UCHII regions, dense regions containing massive stars, while the more recent and more sensitive studies of @and11 show HII regions over wider evolutionary stages, after much of the dense gas cocoon has been dispersed. We suggest that future works investigate the use of newer catalogs from the CORNISH survey [e.g. @pur13]. @woo89 searched three regions in our $l$ = 30 field (all classified as “starry" in our study) for UCHII regions and found them toward two. Therefore the @woo89 absolute lifetime of 3.6 $\times$ 10$^{5}$ years ($\tau_{UCHII}$) corresponds, very roughly, to 2/3 of ‘starry’ DMRs they surveyed in our analyzed field.
We make the assumption that approximately 50-100% of our ‘starry’ pixels are associated with UCHII regions ($f_{UCHII}$). This assumption is based on the following lines of evidence: 1) 8 emission is often indicative of UV excitation [@ban10 show that nearly all GLIMPSE bubbles, 8 emission, are associated with UCHII regions], 2) “starry" pixels show warmer dust temperatures, and 3) UCHII regions were found toward 2/3 of the regions surveyed by @woo89 in our field, as shown above. The assumption that 50-100% of ‘starry’ pixels are associated with UCHII regions ($f_{UCHII}$) corresponds to total DMR lifetimes ($\tau_{total}$) of 2.4 or 1.2 Myr (for 50% or 100%, respectively) when we assume a starry fraction of 30% ($f_{starry}$) as shown in the equation below. $$\tau_{total} = \frac{\tau_{UCHII}}{f_{starry}~f_{UCHII}}$$ The absolute lifetime of the starless phase for the ‘Generous’ and ‘Conservative’ thresholds (using Equations 5 and 6) is then 0.8-1.7 Myr and the “starry" phase would be 0.4-0.7 Myr. If we instead assume the KP10 column density threshold for high-mass star formation, $f_{UCHII}$ would not change, only the star-forming and starless fraction, in this case, 37% and 63%, respectively. The absolute lifetimes inferred for these phases based on association with UCHII regions is a $\tau_{total}$ of 1.0-1.9 Myr, a $\tau_{starless}$ of 0.6-1.2 Myr and a $\tau_{starry}$ of 0.4-0.7 Myr. See for a summary of the relative and absolute lifetimes for various methods.
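The UCHII-anchored arithmetic above can likewise be checked in a few lines, using $\tau_{UCHII}$ = 3.6 $\times$ 10$^{5}$ yr and the 70%/30% starless/starry split:

```python
TAU_UCHII = 0.36    # Myr; UCHII-phase duration from Wood & Churchwell (1989)

def total_lifetime_uchii(f_starry, f_uchii):
    """Total DMR lifetime anchored to the UCHII phase (equation above)."""
    return TAU_UCHII / (f_starry * f_uchii)

for f_uchii in (1.0, 0.5):
    total = total_lifetime_uchii(0.30, f_uchii)
    print(f"f_UCHII = {f_uchii}: total {total:.1f} Myr, "
          f"starless {0.70 * total:.2f} Myr, starry {0.30 * total:.2f} Myr")
```

This recovers the 1.2-2.4 Myr total, 0.8-1.7 Myr starless, and 0.4-0.7 Myr starry ranges quoted in the text (after rounding to one decimal place).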
Free-fall times {#sec:ff}
---------------
The absolute lifetimes derived in §\[sec:maser\] and \[sec:uchii\] are compared with fiducial cloud free-fall times. To calculate ‘fiducial’ cloud free-fall times, we first calculate the median pixel column density for each of the categories (conservative, generous, KP10 near, and KP10 far). These are shown in . We then use Equations 11 and 12 from @svo16 to calculate a fiducial free-fall time from the column densities. The central volume density is calculated from the column density assuming a characteristic length of 1 pc and a spherically symmetric Gaussian density distribution [see @svo16 for details]. We stress that there are many uncertainties in this calculation, as we do not know the true volume densities, but these are simply meant to provide approximate free-fall times based on known cloud parameters, such as the median column density and typical size. Moreover, these free-fall times are, of course, calculated at the ‘present day,’ and we do not know what the ‘initial’ cloud free-fall times were. The fiducial free-fall times for each category are listed in .
For each category, we then convert the absolute lifetimes derived into number of free-fall times. These are shown in the rightmost three columns of . The number of free-fall times for the total lifetimes range from 1-6.5. The starless phase ranges between 0.6-4.1 free-fall times and the starry phase ranges from 0.4-2.4 free-fall times.
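As a rough check on the fiducial values above, the free-fall times can be approximated with a uniform-density stand-in: dividing the median column density by a 1 pc depth gives a central volume density, from which $t_{ff} = \sqrt{3\pi / (32 G \rho)}$. This is a simplification of the @svo16 Gaussian-profile calculation used in the text, and it assumes a mean molecular weight per H$_{2}$ of 2.8, but it recovers the tabulated $t_{ff}$ values to within about 0.1 Myr.

```python
import math

G = 6.674e-8        # cm^3 g^-1 s^-2
M_H = 1.674e-24     # g
MU_H2 = 2.8         # mean molecular weight per H2 molecule (assumed)
PC = 3.086e18       # cm
MYR = 3.156e13      # s

def t_ff_myr(n_h2_column, depth_pc=1.0):
    """Free-fall time from a column density, assuming a uniform region of
    the given depth (a simplification of the Gaussian profile in the text)."""
    n_volume = n_h2_column / (depth_pc * PC)          # cm^-3
    rho = MU_H2 * M_H * n_volume                      # g cm^-3
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho)) / MYR

for label, n_col, tau_starless, tau_starry in [("generous", 0.6e22, 1.4, 0.6),
                                               ("conservative", 1.2e22, 0.6, 0.3)]:
    tff = t_ff_myr(n_col)
    print(f"{label}: t_ff ~ {tff:.2f} Myr, N_ff starless ~ "
          f"{tau_starless / tff:.1f}, starry ~ {tau_starry / tff:.1f}")
```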
Clump Identification SF Identification $f_{starless}$ Reference
------------------------ --------------------- ---------------- ----------- --
IRDC 24 0.65 @cha09
IRDC 8 0.82 @cha09
IRDC 24 0.33 @par09
IRDC 24 0.32-0.80 @per09
LABOCA 8/24 0.44 @mie12
ATLASGAL 22 0.25 @cse14
ATLASGAL GLIMPSE+MIPSGAL 0.23 @tac12
Hi-GAL 24 0.18 @wil12
Hi-GAL 8 0.33 @wil12
BGPS mid-IR catalogs[^2] 0.80 @dun11a
BGPS many tracers 0.47 @svo16
ATLASGAL MIPSGAL YSOs 0.69 @hey16
N(H$_{2}$) from Hi-GAL T$_{dust}$ and 8 0.60-0.70 This work
Comparison with Other Lifetime Estimates for Dense, Molecular Clumps {#sec:comp}
--------------------------------------------------------------------
Previous lifetime estimates are summarized in . They are based primarily on mid-IR emission signatures at 8/24 toward IRDCs or dust-identified clumps and find starless fraction percentages between 23-82%. In previous studies, these starless fractions are often extrapolated to absolute starless lifetimes ranging from $\sim$ 10$^{3}$-10$^{6}$ years. Additionally, @tac12 find a lifetime for the starless phase for the most massive clumps of 6 $\times$ 10$^{4}$ years based on an extrapolated total number of starless clumps in the Milky Way and a Galactic SFR, and @ger14 found an IRDC lifetime of 10$^{4}$ years based on chemical models.
Previous analyses determined lifetimes of high-mass star-forming regions by calculating relative fractions of ‘clumps’ or ‘cores,’ defined in various ways and with arbitrary sizes. All the regions within each ‘clump’ or ‘core’ are lumped together and collectively denoted as starless or star-forming. Since clumps identified in sub-mm surveys generally contain gas in different stages of star formation (actively star-forming and quiescent) we chose to use the pixel-by-pixel approach. In this way, gas that is star-forming or quiescent is identified as such without being instead included in a different category due to its association with a clump. Previously, a single 8 or 24 point source would classify an entire clump as star-forming; therefore, we expect that our pixel-by-pixel approach will identify more regions as starless and give a higher starless fraction. Our relative lifetime estimates are in reasonable agreement with previous work on the topic, and yield a somewhat higher starless fraction, as would be expected with the pixel-by-pixel method.
Of particular interest for comparison are the recent works of @svo16 and @hey16. @svo16 perform a comprehensive analysis of over 4500 clumps from BGPS across the Galactic Plane, including their distances, physical properties, and star-forming indicators. In this analysis they notice a possible trend in which the clump starless lifetimes decrease with clump mass. Overall, about 47% of their clumps can be classified as ‘starless’ clump candidates, and using a similar method for determining absolute lifetimes, they find lifetimes of 0.37 $\pm$ 0.08 Myr (M/10$^{3}$ )$^{-1}$ for clumps more massive than 10$^3$ . @hey16 similarly perform a comprehensive analysis of the latency of star formation in a large survey of about 3500 clumps identified by ATLASGAL. They carefully identify MIPSGAL YSOs [@gut15] that overlap with these clumps, and accounting for clumps excluded due to saturation, find that about 31% are actively star-forming. They conclude that these dense, molecular clumps have either no star formation, or low-level star formation below their sensitivity threshold, for about 70% of their lifetimes. Our starless lifetime of about 60-70% agrees remarkably well with both of these studies. The @svo16 analysis includes regions of lower column density than our selection and is also sensitive to the early signatures of lower-mass stars, so it would be expected that their starless fraction is a bit lower.
The absolute lifetimes we derive are larger than most previous studies simply because of how they are anchored - most previous studies simply assume a star-forming lifetime ($\tau_{starry}$) of 2 $\times$ 10$^{5}$ years [representative YSO accretion timescale, @zin07]. If this star-forming lifetime ($\tau_{starry}$) is used along with our starless fractions of 0.6-0.7, we would derive starless lifetimes ($\tau_{starless}$, using Equations 5 and 6) of 0.3 - 0.5 Myr. Overall, there is quite a wide range in the estimates of the starless lifetimes for DMRs. However, the relatively good agreement from the comprehensive studies of @svo16 and @hey16 on individual clumps and the present work using a variety of star-formation tracers and a pixel-by-pixel analysis over a large field may indicate that these values are converging. Moreover, it is crucial to understand that different techniques will necessarily provide different values, as each is probing a different clump or DMR density and some star-formation tracers will be more sensitive to the signatures of lower-mass stars. The matter is overall, quite complex, and assigning a single lifetime to regions of different masses and densities is a simplification [@svo16].
Comparison with Global Milky Way SFRs {#sec:compsfr}
-------------------------------------
One simple sanity test for our lifetime estimates is to compare them with global SFRs. We use our column density map of the “starry" regions and convert it to a total mass of material in the star-forming phase. For the range of column densities considered, assuming distances between 5-9 kpc, we find the total mass of material engaged in forming high-mass stars to be about 0.5 - 3 $\times$ 10$^5$ $\Msun$ in the 2 $\times$ 2 field centered at \[$\ell$, b\] = \[30, 0\]. We assume a typical star formation efficiency of 30% to derive the mass of stars we expect to form in the region over the “starry" lifetime of 0.1 - 0.7 Myr. Multiplying the gas mass by the efficiency and dividing by this lifetime gives us a total SFR in our 2 $\times$ 2 region of 0.02 - 0.82 /year (given the ranges in lifetimes, thresholds and distances). Extrapolating this to the entire Galaxy within the solar circle [assuming a sensitivity of our measurements from 3 to 12 kpc and a typical scale height of 80 pc as in @rob10] gives a global Milky Way SFR ranging from 0.3 - 20 /year. Typical estimates of Milky Way SFRs range from about 0.7 - 10 /year, and when accounting for different IMFs, converge to about 2 /year [e.g. @rob10; @cho11].
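The field-level arithmetic of this sanity check is reproduced below with the round numbers quoted above; the upper end comes out at 0.9 rather than 0.82 solar masses per year because the input mass and lifetime ranges are rounded.

```python
EFFICIENCY = 0.30   # assumed star formation efficiency, as in the text

def field_sfr(m_gas_msun, tau_starry_myr):
    """Star formation rate of the 2x2 deg field in Msun/yr:
    (efficiency x star-forming gas mass) / star-forming lifetime."""
    return EFFICIENCY * m_gas_msun / (tau_starry_myr * 1.0e6)

low = field_sfr(0.5e5, 0.7)   # least gas, longest starry lifetime
high = field_sfr(3.0e5, 0.1)  # most gas, shortest starry lifetime
print(f"field SFR range: {low:.2f} - {high:.2f} Msun/yr")
```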
Since our observed field contains W43, often touted as a “mini-starburst" [e.g. @bal10; @lou14], we expect our inferred global SFR to be higher than the true global SFR. Due to the many uncertainties and assumptions, we find that the inferred global SFR has a large range and is between 0.1 - 10 $\times$ the fiducial value of 2 /year. While the level of uncertainties and assumptions preclude any meaningful inference from this comparison, our numbers do pass the simple sanity check. Additionally, the average of inferred global SFRs is higher than the fiducial value, as would be expected for this highly active, “mini-starburst" region of the Galaxy.
Conclusion {#sec:conclusion}
==========
We estimate the relative lifetimes of the starless and star-forming phases for all regions capable of forming high-mass stars in a 2 $\times$ 2 field centered at \[$\ell$, b\] = \[30, 0\]. We use column densities derived from Hi-GAL to determine which regions are capable of forming high-mass stars, and dust temperature and Spitzer 8 emission to determine if the region is starless (to a limit of about 10 ) or star-forming. Unlike previous analyses, we do not create any artificial ‘clump’ boundaries, but instead use all the spatial information available and perform our analysis on a pixel-by-pixel basis. We find that regions capable of forming high-mass stars spend about 60-70% of their lives in a starless or embedded phase with star formation below our detection level and 30-40% in an actively star-forming phase.
Absolute timescales for the two phases are anchored to the duration of methanol masers determined from @van05 and the UCHII region phase from @woo89. We include a wide range of possible assumptions and methodologies, which gives a range for starless lifetimes of 0.2 to 1.7 Myr (60-70%) and a star-forming lifetime of 0.1 to 0.7 Myr (30-40%) for high-mass star-forming regions identified in the dust continuum above column densities from 0.4 - 2.5 $\times$ 10$^{22}$ cm$^{-2}$. These lifetimes correspond to about 0.6-4.1 free-fall times for the starless phase and 0.4-2.4 free-fall times for the star-forming phase, using fiducial cloud free-fall times. In this work, we are only sensitive to tracing forming stars more massive than about 10 . If lower-mass stars in the same regions form earlier, the starless timescale for those stars would be even shorter than the 0.6-4.1 free-fall times reported here.
We find that the relative lifetimes of about 60-70% of time in the starless phase and 30-40% in the star-forming phase are robust over a wide range of thresholds, but that the absolute lifetimes are rather uncertain. These results demonstrate that a large fraction of high-column density gas is in a starless or embedded phase. We outline a methodology for estimating relative and absolute lifetimes on a pixel-by-pixel basis. This pixel-by-pixel method could easily be implemented to derive lifetimes for dense, molecular regions throughout the Milky Way.
We thank the anonymous referee for many insightful and important comments that have greatly improved the manuscript. We also thank P. Meyers, Y. Shirley, H. Beuther, J. Tackenberg, A. Ginsburg, and J. Tan for helpful conversations regarding this work. Data processing and map production of the Herschel data has been possible thanks to generous support from the Italian Space Agency via contract I/038/080/0. Data presented in this paper were also analyzed using The Herschel interactive processing environment (HIPE), a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS, and SPIRE consortia. This material is based upon work supported by the National Science Foundation under Award No. 1602583 and by NASA through an award issued by JPL/Caltech via NASA Grant \#1350780.
[^1]: For an O6 star, the main sequence lifetime is about 2.4 $\times$ 10$^{6}$ years [@mae87], so 15% is 3.6 $\times$ 10$^{5}$ years. Note that a less massive mid-B star would have a lifetime about 5$\times$ longer, changing our absolute lifetime estimates by that factor. This variation gives a sense of the uncertainties involved in deriving absolute lifetimes.
[^2]: @rob08, the Red MSX Catalog from @urq08, and the EGO catalog from @cyg08.
.theme-dusk,.theme-midnight {
.hljs {
display: block;
overflow-x: auto;
background: #232323;
color: #e6e1dc;
}
.hljs-comment,
.hljs-quote {
color: #bc9458;
font-style: italic;
}
.hljs-keyword,
.hljs-selector-tag {
color: #c26230;
}
.hljs-string,
.hljs-number,
.hljs-regexp,
.hljs-variable,
.hljs-template-variable {
color: #a5c261;
}
.hljs-subst {
color: #519f50;
}
.hljs-tag,
.hljs-name {
color: #e8bf6a;
}
.hljs-type {
color: #da4939;
}
.hljs-symbol,
.hljs-bullet,
.hljs-built_in,
.hljs-builtin-name,
.hljs-attr,
.hljs-link {
color: #6d9cbe;
}
.hljs-params {
color: #d0d0ff;
}
.hljs-attribute {
color: #cda869;
}
.hljs-meta {
color: #9b859d;
}
.hljs-title,
.hljs-section {
color: #ffc66d;
}
.hljs-addition {
background-color: #144212;
color: #e6e1dc;
display: inline-block;
width: 100%;
}
.hljs-deletion {
background-color: #600;
color: #e6e1dc;
display: inline-block;
width: 100%;
}
.hljs-selector-class {
color: #9b703f;
}
.hljs-selector-id {
color: #8b98ab;
}
.hljs-emphasis {
font-style: italic;
}
.hljs-strong {
font-weight: bold;
}
.hljs-link {
text-decoration: underline;
}
}
if (global.GENTLY) require = GENTLY.hijack(require);
var crypto = require('crypto');
var fs = require('fs');
var util = require('util'),
path = require('path'),
File = require('./file'),
MultipartParser = require('./multipart_parser').MultipartParser,
QuerystringParser = require('./querystring_parser').QuerystringParser,
OctetParser = require('./octet_parser').OctetParser,
JSONParser = require('./json_parser').JSONParser,
StringDecoder = require('string_decoder').StringDecoder,
EventEmitter = require('events').EventEmitter,
Stream = require('stream').Stream,
os = require('os');
function IncomingForm(opts) {
if (!(this instanceof IncomingForm)) return new IncomingForm(opts);
EventEmitter.call(this);
opts=opts||{};
this.error = null;
this.ended = false;
this.maxFields = opts.maxFields || 1000;
this.maxFieldsSize = opts.maxFieldsSize || 2 * 1024 * 1024;
this.keepExtensions = opts.keepExtensions || false;
this.uploadDir = opts.uploadDir || os.tmpdir();
this.encoding = opts.encoding || 'utf-8';
this.headers = null;
this.type = null;
this.hash = opts.hash || false;
this.multiples = opts.multiples || false;
this.bytesReceived = null;
this.bytesExpected = null;
this._parser = null;
this._flushing = 0;
this._fieldsSize = 0;
this.openedFiles = [];
return this;
}
util.inherits(IncomingForm, EventEmitter);
exports.IncomingForm = IncomingForm;
IncomingForm.prototype.parse = function(req, cb) {
this.pause = function() {
try {
req.pause();
} catch (err) {
// the stream was destroyed
if (!this.ended) {
// before it was completed, crash & burn
this._error(err);
}
return false;
}
return true;
};
this.resume = function() {
try {
req.resume();
} catch (err) {
// the stream was destroyed
if (!this.ended) {
// before it was completed, crash & burn
this._error(err);
}
return false;
}
return true;
};
// Setup callback first, so we don't miss anything from data events emitted
// immediately.
if (cb) {
var fields = {}, files = {};
this
.on('field', function(name, value) {
fields[name] = value;
})
.on('file', function(name, file) {
if (this.multiples) {
if (files[name]) {
if (!Array.isArray(files[name])) {
files[name] = [files[name]];
}
files[name].push(file);
} else {
files[name] = file;
}
} else {
files[name] = file;
}
})
.on('error', function(err) {
cb(err, fields, files);
})
.on('end', function() {
cb(null, fields, files);
});
}
// Parse headers and setup the parser, ready to start listening for data.
this.writeHeaders(req.headers);
// Start listening for data.
var self = this;
req
.on('error', function(err) {
self._error(err);
})
.on('aborted', function() {
self.emit('aborted');
self._error(new Error('Request aborted'));
})
.on('data', function(buffer) {
self.write(buffer);
})
.on('end', function() {
if (self.error) {
return;
}
var err = self._parser.end();
if (err) {
self._error(err);
}
});
return this;
};
IncomingForm.prototype.writeHeaders = function(headers) {
this.headers = headers;
this._parseContentLength();
this._parseContentType();
};
IncomingForm.prototype.write = function(buffer) {
if (this.error) {
return;
}
if (!this._parser) {
this._error(new Error('uninitialized parser'));
return;
}
this.bytesReceived += buffer.length;
this.emit('progress', this.bytesReceived, this.bytesExpected);
var bytesParsed = this._parser.write(buffer);
if (bytesParsed !== buffer.length) {
this._error(new Error('parser error, '+bytesParsed+' of '+buffer.length+' bytes parsed'));
}
return bytesParsed;
};
IncomingForm.prototype.pause = function() {
// this does nothing, unless overwritten in IncomingForm.parse
return false;
};
IncomingForm.prototype.resume = function() {
// this does nothing, unless overwritten in IncomingForm.parse
return false;
};
IncomingForm.prototype.onPart = function(part) {
// this method can be overwritten by the user
this.handlePart(part);
};
IncomingForm.prototype.handlePart = function(part) {
var self = this;
if (part.filename === undefined) {
var value = ''
, decoder = new StringDecoder(this.encoding);
part.on('data', function(buffer) {
self._fieldsSize += buffer.length;
if (self._fieldsSize > self.maxFieldsSize) {
self._error(new Error('maxFieldsSize exceeded, received '+self._fieldsSize+' bytes of field data'));
return;
}
value += decoder.write(buffer);
});
part.on('end', function() {
self.emit('field', part.name, value);
});
return;
}
this._flushing++;
var file = new File({
path: this._uploadPath(part.filename),
name: part.filename,
type: part.mime,
hash: self.hash
});
this.emit('fileBegin', part.name, file);
file.open();
this.openedFiles.push(file);
part.on('data', function(buffer) {
if (buffer.length == 0) {
return;
}
self.pause();
file.write(buffer, function() {
self.resume();
});
});
part.on('end', function() {
file.end(function() {
self._flushing--;
self.emit('file', part.name, file);
self._maybeEnd();
});
});
};
function dummyParser(self) {
return {
end: function () {
self.ended = true;
self._maybeEnd();
return null;
}
};
}
IncomingForm.prototype._parseContentType = function() {
if (this.bytesExpected === 0) {
this._parser = dummyParser(this);
return;
}
if (!this.headers['content-type']) {
this._error(new Error('bad content-type header, no content-type'));
return;
}
if (this.headers['content-type'].match(/octet-stream/i)) {
this._initOctetStream();
return;
}
if (this.headers['content-type'].match(/urlencoded/i)) {
this._initUrlencoded();
return;
}
if (this.headers['content-type'].match(/multipart/i)) {
var m = this.headers['content-type'].match(/boundary=(?:"([^"]+)"|([^;]+))/i);
if (m) {
this._initMultipart(m[1] || m[2]);
} else {
this._error(new Error('bad content-type header, no multipart boundary'));
}
return;
}
if (this.headers['content-type'].match(/json/i)) {
this._initJSONencoded();
return;
}
this._error(new Error('bad content-type header, unknown content-type: '+this.headers['content-type']));
};
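// A note on the boundary regex in _parseContentType above: RFC 2046 allows the
// boundary parameter to be either a quoted string or a bare token, which is why
// the pattern carries two capture groups and the caller takes `m[1] || m[2]`.
// Illustrative matches (header values below are examples, not library output):
//
//   'multipart/form-data; boundary="simple boundary"'
//     .match(/boundary=(?:"([^"]+)"|([^;]+))/i)   // m[1] === 'simple boundary'
//
//   'multipart/form-data; boundary=AaB03x'
//     .match(/boundary=(?:"([^"]+)"|([^;]+))/i)   // m[2] === 'AaB03x'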
IncomingForm.prototype._error = function(err) {
if (this.error || this.ended) {
return;
}
this.error = err;
this.emit('error', err);
if (Array.isArray(this.openedFiles)) {
this.openedFiles.forEach(function(file) {
file._writeStream.destroy();
setTimeout(fs.unlink, 0, file.path, function(error) { });
});
}
};
IncomingForm.prototype._parseContentLength = function() {
this.bytesReceived = 0;
if (this.headers['content-length']) {
this.bytesExpected = parseInt(this.headers['content-length'], 10);
} else if (this.headers['transfer-encoding'] === undefined) {
this.bytesExpected = 0;
}
if (this.bytesExpected !== null) {
this.emit('progress', this.bytesReceived, this.bytesExpected);
}
};
IncomingForm.prototype._newParser = function() {
return new MultipartParser();
};
IncomingForm.prototype._initMultipart = function(boundary) {
this.type = 'multipart';
var parser = new MultipartParser(),
self = this,
headerField,
headerValue,
part;
parser.initWithBoundary(boundary);
parser.onPartBegin = function() {
part = new Stream();
part.readable = true;
part.headers = {};
part.name = null;
part.filename = null;
part.mime = null;
part.transferEncoding = 'binary';
part.transferBuffer = '';
headerField = '';
headerValue = '';
};
parser.onHeaderField = function(b, start, end) {
headerField += b.toString(self.encoding, start, end);
};
parser.onHeaderValue = function(b, start, end) {
headerValue += b.toString(self.encoding, start, end);
};
parser.onHeaderEnd = function() {
headerField = headerField.toLowerCase();
part.headers[headerField] = headerValue;
var m = headerValue.match(/\bname="([^"]+)"/i);
if (headerField == 'content-disposition') {
if (m) {
part.name = m[1];
}
part.filename = self._fileName(headerValue);
} else if (headerField == 'content-type') {
part.mime = headerValue;
} else if (headerField == 'content-transfer-encoding') {
part.transferEncoding = headerValue.toLowerCase();
}
headerField = '';
headerValue = '';
};
parser.onHeadersEnd = function() {
switch(part.transferEncoding){
case 'binary':
case '7bit':
case '8bit':
parser.onPartData = function(b, start, end) {
part.emit('data', b.slice(start, end));
};
parser.onPartEnd = function() {
part.emit('end');
};
break;
case 'base64':
parser.onPartData = function(b, start, end) {
part.transferBuffer += b.slice(start, end).toString('ascii');
/*
four bytes (chars) in base64 convert to three bytes in binary
encoding, so we should always work with a number of bytes that
can be divided by 4; it will result in a number of bytes that
can be divided by 3.
*/
var offset = Math.floor(part.transferBuffer.length / 4) * 4;
part.emit('data', new Buffer(part.transferBuffer.substring(0, offset), 'base64'));
part.transferBuffer = part.transferBuffer.substring(offset);
};
parser.onPartEnd = function() {
part.emit('data', new Buffer(part.transferBuffer, 'base64'));
part.emit('end');
};
break;
default:
return self._error(new Error('unknown transfer-encoding'));
}
self.onPart(part);
};
parser.onEnd = function() {
self.ended = true;
self._maybeEnd();
};
this._parser = parser;
};
IncomingForm.prototype._fileName = function(headerValue) {
var m = headerValue.match(/\bfilename="(.*?)"($|; )/i);
if (!m) return;
var filename = m[1].substr(m[1].lastIndexOf('\\') + 1);
filename = filename.replace(/%22/g, '"');
filename = filename.replace(/&#([\d]{4});/g, function(m, code) {
return String.fromCharCode(code);
});
return filename;
};
IncomingForm.prototype._initUrlencoded = function() {
this.type = 'urlencoded';
var parser = new QuerystringParser(this.maxFields)
, self = this;
parser.onField = function(key, val) {
self.emit('field', key, val);
};
parser.onEnd = function() {
self.ended = true;
self._maybeEnd();
};
this._parser = parser;
};
IncomingForm.prototype._initOctetStream = function() {
this.type = 'octet-stream';
var filename = this.headers['x-file-name'];
var mime = this.headers['content-type'];
var file = new File({
path: this._uploadPath(filename),
name: filename,
type: mime
});
this.emit('fileBegin', filename, file);
file.open();
this._flushing++;
var self = this;
self._parser = new OctetParser();
//Keep track of writes that haven't finished so we don't emit the file before it's done being written
var outstandingWrites = 0;
self._parser.on('data', function(buffer){
self.pause();
outstandingWrites++;
file.write(buffer, function() {
outstandingWrites--;
self.resume();
if(self.ended){
self._parser.emit('doneWritingFile');
}
});
});
self._parser.on('end', function(){
self._flushing--;
self.ended = true;
var done = function(){
file.end(function() {
self.emit('file', 'file', file);
self._maybeEnd();
});
};
if(outstandingWrites === 0){
done();
} else {
self._parser.once('doneWritingFile', done);
}
});
};
IncomingForm.prototype._initJSONencoded = function() {
this.type = 'json';
var parser = new JSONParser()
, self = this;
if (this.bytesExpected) {
parser.initWithLength(this.bytesExpected);
}
parser.onField = function(key, val) {
self.emit('field', key, val);
};
parser.onEnd = function() {
self.ended = true;
self._maybeEnd();
};
this._parser = parser;
};
IncomingForm.prototype._uploadPath = function(filename) {
var name = 'upload_';
var buf = crypto.randomBytes(16);
for (var i = 0; i < buf.length; ++i) {
name += ('0' + buf[i].toString(16)).slice(-2);
}
if (this.keepExtensions) {
var ext = path.extname(filename);
ext = ext.replace(/(\.[a-z0-9]+).*/i, '$1');
name += ext;
}
return path.join(this.uploadDir, name);
};
IncomingForm.prototype._maybeEnd = function() {
if (!this.ended || this._flushing || this.error) {
return;
}
this.emit('end');
};
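// Example usage (an illustrative sketch, not part of this module; the port and
// option values below are assumptions):
//
//   var http = require('http');
//   var IncomingForm = require('./incoming_form').IncomingForm;
//
//   http.createServer(function(req, res) {
//     new IncomingForm({ keepExtensions: true, multiples: true })
//       .parse(req, function(err, fields, files) {
//         if (err) {
//           res.writeHead(500);
//           return res.end(err.message);
//         }
//         res.end(JSON.stringify({ fields: fields, files: Object.keys(files) }));
//       });
//   }).listen(8080);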
|
{
"pile_set_name": "github"
}
|
Richtweg (Hamburg U-Bahn station)
Richtweg is a public transport station for the rapid transit trains of Hamburg's underground railway line U1, located in Norderstedt, Germany.
It was opened in 1953 as a stop of the Alster Northern Railway (ANB) from Ulzburg Süd to Ochsenzoll, with an island platform. Between 1994 and 1996 this section of the ANB was rebuilt for the Hamburg U-Bahn system.
Station layout
The station is a side-platform station, with a passenger bridge at its northern end and exits on both sides.
See also
Hamburger Verkehrsverbund, the public transport association in Hamburg
Hamburger Hochbahn, the operator of the Hamburg U-Bahn
References
External links
Network plan HVV (pdf) 560 KiB
Norderstedt Richtweg
|
{
"pile_set_name": "wikipedia_en"
}
|
Helderberg Escarpment
The Helderberg Escarpment, also known as the Helderberg Mountains, is an escarpment and mountain range in eastern New York, United States, roughly west of the city of Albany. The escarpment rises steeply from the Hudson Valley below, with an elevation difference of approximately 700 feet (from 400 to 1,100 feet) over a horizontal distance of approximately 2,000 feet. Much of the escarpment is within John Boyd Thacher State Park, and has views of the Hudson Valley and the Albany area.
Geology
The escarpment is geologically related to three other escarpments, the Niagara Escarpment,
the Black River Escarpment, and the Onondaga Escarpment.
The rocks exposed in the escarpment date back to the Middle Ordovician to Early Devonian.
In 1934 the Schenectady Gazette described how the Tory Cave, one of the limestone caves to be found in the escarpment, routinely had stalagmites of ice in the springtime.
Transmission towers
Most of the Capital District's television stations installed their transmission towers on the escarpment to take advantage of its high ground. In 2003 a tower was built on the highest point of the escarpment for transmitting digital television signals.
History
Dutch settlers first homesteaded the plateau above the escarpment in the 17th century. Helderberg is a Dutch name meaning "clear mountain".
The Open Space Institute and the Mohawk Hudson Land Conservancy are working to keep escarpment lands from being developed for housing or industrial uses.
Farmers with land near the escarpment can apply to sell their development rights, to help ensure that the land is not developed. In 2003 the Ten Eyck family, owners of the Indian Ladder Farm just below the escarpment, sold the development rights to their farm for $848,000. Two real estate assessments were done: one on the value of the property as a working farm, the other on its value as a potential site for urban development. The Ten Eycks were paid the difference in return for agreeing to keep the property as a working farm. They were the first property owners to sell their development rights in Albany County.
References
Category:Landforms of New York (state)
Category:Escarpments of the United States
Category:Landforms of Albany County, New York
Category:Mountains of Albany County, New York
Category:Mountains of New York (state)
|
{
"pile_set_name": "wikipedia_en"
}
|
---
abstract: 'Dark Matter detectors with directional sensitivity have the potential of yielding an unambiguous positive observation of WIMPs as well as discriminating between galactic Dark Matter halo models. In this article, we introduce the motivation for directional detectors, discuss the experimental techniques that make directional detection possible, and review the status of the experimental effort in this field.'
address:
- 'Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA'
- 'Temple University, 1900 N. 13-th Street, Philadelphia, PA 19122, USA'
author:
- G Sciolla
- C J Martoff
bibliography:
- 'all\_DM.bib'
title: Gaseous Dark Matter Detectors
---
Introduction
============
Astronomical and cosmological observations have recently shown that Dark Matter (DM) is responsible for 23% of the energy budget of the Universe and 83% of its mass [@Hinshaw2008]. The most promising candidate for Dark Matter is the so-called Weakly Interacting Massive Particle (WIMP). The existence of WIMPs is independently suggested by considerations of Big Bang cosmology and theoretical supersymmetric particle phenomenology [@LeeWeinberg; @Weinberg82; @Jungman1996].
Over the years, many direct detection experiments have been performed to search for nuclear recoils due to elastic scattering of WIMPs off the nuclei in the active volume of the detector. The main challenge for these experiments is to suppress the backgrounds that mimic WIMP-induced nuclear recoils. Today’s leading experiments have achieved excellent rejection of electromagnetic backgrounds, i.e., photons, electrons and alpha particles, which have a distinct signature in the detector. However, there are sources of background for which the detector response is nearly identical to that of a WIMP-induced recoil, such as the coherent scattering of neutrinos from the sun [@Monroe2007], or the elastic scattering of neutrons produced either by natural radioactivity or by high-energy cosmic rays.
While neutron and neutrino interactions do not limit today’s experiments, they are expected to become dangerous sources of background when the scale of DM experiments grows to fiducial masses of several tons. In traditional counting experiments, the presence of such backgrounds could undermine the unambiguous identification of a Dark Matter signal because neutrinos are impossible to suppress by shielding and underground neutron backgrounds are notoriously difficult to predict [@Mei2006].
An unambiguous positive identification of a Dark Matter signal, even in the presence of unknown amounts of irreducible backgrounds, could still be achieved if one could correlate the observation of a nuclear recoil in the detector with some unique astrophysical signature which no background could mimic. This is the idea that motivates directional detection of Dark Matter.
The Dark Matter Wind
---------------------
The observed rotation curve of our Galaxy suggests that at the galactic radius of the sun the galactic potential has a significant contribution from Dark Matter. The Dark Matter distribution in our Galaxy, however, is poorly constrained. A commonly used DM distribution, the standard dark halo model [@SmithLewin1990], assumes a non-rotating, isothermal sphere extending out to 50 kpc from the galactic center. The DM velocity is described by a Maxwell-Boltzmann distribution with dispersion $\sigma_v=155$ km/s. Concentric with the DM halo is the galactic disk of luminous ordinary matter, rotating with respect to the halo, with an average orbital velocity of about 220 km/s at the radius of the solar system. Therefore in this model, an observer on Earth would see a wind of DM particles with average velocity of 220 km/s.
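The quoted parameters are internally consistent (a simple cross-check): for an isothermal sphere the halo-frame velocity distribution is Maxwellian, $$f(\vec{v}) \propto \exp\left(-\frac{|\vec{v}|^2}{2\sigma_v^2}\right),$$ with most probable speed $v_0 = \sqrt{2}\,\sigma_v \simeq 219$ km/s, which in the standard halo model coincides with the 220 km/s local disk rotation speed.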
The Dark Matter wind creates two observable effects. The first was pointed out in 1986 by Drukier, Freese, and Spergel [@Drukier1986], who predicted that the Earth’s motion relative to the galactic halo leads to an annual modulation of the rates of interactions observed above a certain threshold in direct detection experiments. In its annual revolution around the sun, the Earth’s orbital velocity has a component that is anti-parallel to the DM wind during the summer, and parallel to it during the winter. As a result, the apparent velocity of the DM wind increases (decreases) by about 10% in summer (winter), leading to a corresponding increase (decrease) of the observed rates in DM detectors. Unfortunately, this effect is difficult to detect because the seasonal modulation is expected to be small (a few %) and very hard to disentangle from other systematic effects, such as the seasonal dependence of background rates. These experimental difficulties cast a shadow on the annual modulation recently claimed by the DAMA/LIBRA collaboration [@Bernabei2008].
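The size of the annual effect follows from simple vector addition (a standard back-of-the-envelope estimate): the Earth’s orbital speed is $v_\oplus \simeq 30$ km/s, and its component along the direction of solar motion is reduced to roughly $v_\oplus \cos 60^\circ \simeq 15$ km/s by the inclination of the Earth’s orbit with respect to the galactic plane, so that $$v_{\rm wind}(t) \simeq \left[220 + 15 \cos\frac{2\pi (t-t_0)}{1\ {\rm yr}}\right]\ {\rm km/s},$$ with the maximum $t_0$ in early June; the resulting $\pm 15$ km/s swing corresponds to the $\sim$10% seasonal variation quoted above.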
A larger modulation of the WIMP signal was pointed out by Spergel [@Spergel] in 1988. The Earth spins around its axis with a period of 24 sidereal hours. Because its rotation axis is oriented at 48$^\circ$ with respect to the direction of the DM wind, an observer on Earth sees the average direction of the WIMPs change by 96$^\circ$ every 12 sidereal hours. This modulation in arrival direction should be resolvable by a Dark Matter directional detector, e.g., a detector able to determine the direction of the DM particles. Most importantly, no known background is correlated with the direction of the DM wind. Therefore, a directional detector could hold the key to the unambiguous observation of Dark Matter.
In addition to background rejection, the determination of the direction of the arrival of Dark Matter particles can discriminate [@Copi1999; @Vergados2003; @Morgan2004; @Freese2005; @Alenazi2008] between various DM halo distributions including the standard dark halo model, models with streams of WIMPs, the Sikivie late-infall halo model [@Sikivie1999; @Tkachev1997; @Sikivie1995], and other anisotropic models. The discrimination power is further enhanced if a determination of the sense as well as the direction of WIMPs is possible [@Green2007]. This capability makes directional detectors unique observatories for underground WIMP astronomy.
Directional Dark Matter Detection
-----------------------------------
When Dark Matter particles interact with regular matter, they scatter elastically off the atoms and generate nuclear recoils with typical energies $E_R$ of a few tens of keV, as explained in more detail in section \[NuclearRecoils\]. The direction of the recoiling nucleus encodes the direction of the incoming DM particle. To observe the daily modulation in the direction of the DM wind, an angular resolution of 20–30 degrees in the reconstruction of the recoil direction is sufficient, because the intrinsic spread in direction of the DM wind is $\approx$ 45 degrees. Assuming that sub-millimeter tracking resolution can be achieved, the length of a recoil track has to be at least 1–2 mm, which can be obtained by using a very dilute gas as a target material.
An ideal directional detector should provide a 3-D vector reconstruction of the recoil track with a spatial resolution of a few hundred microns in each coordinate, and combine a very low energy threshold with an excellent background rejection capability. Such a detector would be able to reject isotropy of the recoil direction, and hence identify the signature of a WIMP wind, with just a handful of events [@Morgan2004].
More recently, Green and Morgan [@Green2007] studied how the number of events necessary to detect the WIMP wind depends on the detector performance in terms of energy threshold, background rates, 2-D versus 3-D reconstruction of the nuclear recoil, and ability to determine the sense of the direction by discriminating between the “head” and “tail” of the recoil track. The default configuration used for this study assumes a CS$_2$ gaseous TPC running at 0.05 bar using 200 $\mu$m pixel readout providing 3-D reconstruction of the nuclear recoil and “head-tail” discrimination. The energy threshold is assumed to be 20 keV, with perfect background rejection. In such a configuration, 7 events would be sufficient to establish observation of the WIMP wind at 90% C.L. In the presence of background with S/N=1, the number of events necessary to reject isotropy would increase by a factor of 2. If only 2-D reconstruction is available, the required number of events doubles compared to the default configuration. “Head-tail” discrimination turns out to be the most important capability: if the sense cannot be measured, the number of events necessary to observe the effects of the WIMP wind increases by one order of magnitude.
Nuclear Recoils in Gaseous Detectors {#NuclearRecoils}
====================================
To optimize the design of gaseous detectors for directional detection of Dark Matter one must be able to calculate the recoil atom energy spectrum expected for a range of WIMP parameters and halo models. The detector response in the relevant energy range must also be predictable. The response will be governed first and foremost by the track length and characteristics (multiple scattering) as a function of recoil atom type and energy. Since gas detectors require ionization for detection, design also requires knowledge of the ionization yield in gas and its distribution along the track as a function of recoil atom type and energy, and possibly electric field.
The large momentum transfer necessary to produce a detectable recoil in gas implies that the scattering atom can be treated as a free particle, making calculations of the recoil spectrum essentially independent of whether the target is a solid, liquid, or gas.
An estimate of the maximum Dark Matter recoil energy for simple halo models is given by the kinematically allowed energy transfer from an infinitely heavy halo WIMP with velocity equal to the galactic escape speed. This speed is locally about 500-600 km/sec [@rave]; WIMPS with higher velocities than this would not be gravitationally bound in the halo and would presumably be rare. The corresponding maximum energy transfer amounts to $<$ 10 keV/nucleon. The integrated rate will be concentrated at lower energies than this, at least in halo models such as the isothermal sphere. For that model, the recoil energy ($E_R$) distribution [@SmithLewin1990] is proportional to $\exp(-E_R/E_I)$, with $E_I$ a constant that depends on the target and WIMP masses and the halo model. For a 100 GeV WIMP and the isothermal halo model parameters of Ref. [@SmithLewin1990], $E_I / A$ varies from 1.05 to 0.2 keV/nucleon for target mass numbers from 1 to 131. These are very low energy particles, well below the Bragg Peak at $\sim$200–800 keV/A. In this regime dE/dx [*decreases*]{} with decreasing energy, and the efficiency of ionization is significantly reduced.
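The $<$ 10 keV/nucleon figure can be verified with two-body kinematics (a back-of-the-envelope check): the maximum energy transfer in an elastic collision is $E_R^{\rm max} = 2\mu^2 v^2/M_T$, where $\mu$ is the WIMP–nucleus reduced mass; for $M_\chi \gg M_T$ this becomes $2 M_T v^2$, so $$\frac{E_R^{\rm max}}{A} \simeq 2\,(931.5\ {\rm MeV})\left(\frac{600\ {\rm km/s}}{c}\right)^2 \simeq 7.5\ {\rm keV/nucleon}.$$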
Lindhard Model for Low-Energy Stopping
--------------------------------------
The stopping process for such low energy particles in homoatomic[^1] substances was treated by Lindhard, Scharff, and Schiott [@Lindhard1963; @Lindhard-int] (LSS). This treatment has stood the test of time and experiment, making it worthwhile to summarize the results here.
As is now well-known, the primary energy loss mechanisms for low energy particles in matter can be divided into “nuclear stopping”, due to atom-atom scattering, and “electronic stopping”, due to atom-electron scattering. These mechanisms refer only to the initial interaction causing the incident particle to lose energy. Nuclear stopping eventually contributes to electronic excitations and ionization, and electronic stopping eventually contributes to thermal excitations [@Lindhard-int].
In Ref. [@Lindhard1963] the stopping is described using a Thomas-Fermi atom model to obtain numerical results for universal stopping-power curves in terms of two variables, the scaled energy $\epsilon=E_R/E_{TF}$, and the scaled range $\rho=R/R_{TF}$, where $E_R$ and $R$ are respectively the energy and the stopping distance of the recoil, and $E_{TF}$ and $R_{TF}$ are scale factors[^2].
In Ref. [@Lindhard1963] it was shown that nuclear stopping dominates in the energy range where most of the rate for Dark Matter detection lies. This can be seen as follows. The scaled variables $\epsilon$ and $\rho$ depend algebraically on the atomic numbers and mass numbers of the incident and target particles. The scale factor $E_{TF}$ corresponds to 0.45 keV/nucleon for homoatomic recoils in Carbon, 1.7 keV/nucleon for Ar in Ar and 6.9 keV/nucleon for Xe in Xe. Nuclear stopping $\frac{d\epsilon_n}{d\rho}$ was found to be larger than the electronic stopping $\frac{d\epsilon_e}{d\rho}$ for $\epsilon < 1.6$, which covers the energy range $0 < E_R < E_I$ where most of the Dark Matter recoil rate can be expected.
Because of the dominance of nuclear stopping, detectors can be expected to respond differently to Dark Matter recoils than to radiations such as x-rays or even $\alpha$ particles, for which electronic stopping dominates. Nuclear stopping yields less ionization and electronic excitation per unit energy loss than does electronic stopping, implying that the W factor, defined as the energy loss required to create one ionization electron, will be larger for nuclear recoils. Reference [@Lindhard-int] presents calculations of the ultimate energy loss partitioning between electronic and atomic motion. Experimenters use empirical “quenching factors” to describe the variation of energy per unit of ionization (the “W” parameter) compared to that from x-rays.
The different microscopic distribution of ionization in tracks dominated by nuclear stopping can also lead to unexpected changes in the interactions of ionized and electronically excited target atoms (e.g., dimer formation, recombination). Such interactions are important for particle identification signatures such as the quantity and pulse shape of scintillation light output, the variation of scintillation pulse shape with applied electric field, and the field variation of ionization charge collection efficiency. Such effects are observed in gases [@White2007; @Martin2009], and even more strongly in liquid and solid targets [@Aprile2006].
Electronic stopping [@Lindhard1963] was found to vary as $\frac{d\epsilon_e}{d\rho} = k \sqrt{\epsilon}$ with the parameter $k$ varying only from 0.13 to 0.17 for homoatomic recoils in A=1 to 131[^3]. Let us define the total stopping as $\frac{d\epsilon}{d\rho}= \frac{d\epsilon_n}{d\rho} + \frac{d\epsilon_e}{d\rho}$ and the total scaled range as $\rho_o = \int _0 ^\epsilon \frac{d\epsilon}{(\frac{d\epsilon}{d\rho})}$. The relatively small contribution of electronic stopping and the small variation in $k$ for homoatomic recoils make the total scaled range for this case depend on the target and projectile almost entirely through $E_{TF}$.
Predictions for the actual range of homoatomic recoils can be obtained from the nearly-universal scaled range curve as follows. Numerically integrating the stopping curves of Ref. [@Lindhard1963] with $k$ set to 0.15 gives a scaled range curve that fits the surprisingly simple expression $$\rho_o \stackrel{.}{=} 2.04 \epsilon + 0.04
\label{eq:range}$$ with accuracy better than 10% for $0.12 < \epsilon < 10 $. According to the formula given earlier, the scale factor $R_{TF}$ lies between 1 and 4 $\times$ 10$^{17}$ atoms/cm$^2$ for homoatomic recoils in targets with $12 \leq A \leq 131$. Thus the model predicts ranges of several times 10$^{17}$ atoms/cm$^2$ at $E_R = E_I$. This is of the order of a few mm for a monoatomic gas at 0.05 bar. As a consequence, tracking devices for Dark Matter detection must provide accurate reconstruction of tracks with typical lengths between 1 and a few mm while operating at pressures of a small fraction of an atmosphere.
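The quoted track length can be checked with the ideal gas law (the numbers here are illustrative): at $P = 0.05$ bar and $T = 293$ K the number density is $n = P/(k_B T) \simeq 1.25\times 10^{18}$ cm$^{-3}$, so a range of $3\times 10^{17}$ atoms/cm$^2$ corresponds to $$R \simeq \frac{3\times 10^{17}\ {\rm atoms/cm^2}}{1.25\times 10^{18}\ {\rm cm^{-3}}} \simeq 2.4\ {\rm mm}.$$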
When comparing LSS predictions with experimental results, two correction factors must be considered. First, the widely-used program SRIM [@SRIM] produces range-energy tables which contain the “projected range", while LSS calculate the path length along the track. On the other hand, many older experiments report “extrapolated ranges", which are closer in magnitude to the path length than to the “projected range". To compare the SRIM tables with LSS, the projected range should be multiplied by a factor [@Lindhard1963] $(1+\frac{M_T}{3M_P})$ where $M_T$ and $M_P$ are the target and projectile masses. This correction has generally been applied in the next section, where experimental data are discussed.
In addition, it must be noted that the LSS calculations described above were obtained for solids. Therefore, one should consider a gas-solid correction in ranges and stopping powers, as discussed by Bohr, Lindhard and Dan [@BLD]. In condensed phases, the higher collision frequency results in a higher probability for stripping of excited electrons before they can relax, which leads to a higher energy loss rate than for gases. This correction is rather uncertain and has generally not been applied in the following section of this paper.
Finally, numerical calculations to extend the LSS model to the case of targets of mixed atomic number are given in Ref. [@Hitachi2008].
Experimental Data on Low Energy Stopping in Gases
-------------------------------------------------
The literature of energy loss and stopping of fast particles in matter is vast and still growing [@Ziegler1985; @Sigmund1998]. However, there is not a lot of experimental data available for particle ranges and ionization yields in gas at the very low energies typical of Dark Matter recoils, where E/A $\sim$ 1 keV per nucleon. Comprehensive collections of citations for all energies are available [@SRIM; @MSTAR], upon which the widely-used theory-guided-fitting computer programs SRIM and MSTAR [@MSTAR] are based. Several older references [@Evans1953; @Lassen1964; @Cano1968] still appear representative of the available direct measurements at very low energy. More recent studies [@SnowdenIfft2003] provide indirect information based on large detector simulations.
Both references [@Evans1953] and [@Lassen1964] used accelerated beams of He, N, Ne, Ar and $^{24}$Na, $^{66}$Ga, and $^{198}$Au in differentially pumped gas target chambers filled with pure-element gases. In [@Evans1953] the particles were detected with an ionization chamber, while in [@Lassen1964] radioactive beams were used. The stopped particles were collected on segmented walls of the target chamber and later counted. Typical results were ranges of 2(3.2) $\times$ 10$^{17}$ atoms/cm$^2$ for 26(40) keV Ar$^+$ in Argon. The fit to LSS theory given above predicts ranges that are shorter than the experimental results by 10-40%, which is consistent with experimental comparisons given by LSS. Accuracy of agreement with the prediction from the SRIM code is about the same. As in all other cases discussed below, the direction of the deviation from LSS is as expected from the gas-solid effect mentioned in the previous section.
In Ref. [@SnowdenIfft2003] nuclear recoils from $^{252}$Cf neutrons were recorded by a Negative Ion Time Projection Chamber (NITPC) filled with 40 Torr CS$_2$. The device was simulated by fitting the observed pulse height and event size distributions. The best-fit range curves given for C and S recoils in the gas are 10-20% higher at 25-100 keV than LSS predictions computed by the present authors by assuming simple additivity of stopping powers for the constituent atoms of the polyatomic gas target.
Ionization Yields
-----------------
Tracking readouts in gas TPC detectors are sensitive only to ionization of the gas. As noted above, both nuclear and electronic stopping eventually contribute to both electronic excitations (including ionization) and to kinetic energy of target atoms, as primary and subsequent generations of collision products interact further with the medium. Some guidance useful for design purposes is available from Ref. [@Lindhard-int], where the energy cascade was treated numerically using integral equations. In terms of the scaled energy $\epsilon$ and the electronic stopping coefficient $k$ introduced above, the (scaled) energy $\eta$ ultimately transferred to electrons was found to be well approximated [@SmithLewin1996] by $\eta = \frac{\epsilon}{1+\frac{1}{k\, g(\epsilon)}}$ with $g(\epsilon)= \epsilon + 3 \epsilon^{0.15} + 0.7 \epsilon^{0.6}$. This function interpolates smoothly from $\eta = 0$ at $\epsilon = 0$ to $\eta = \epsilon$ for $\epsilon \rightarrow \infty$, giving $\eta = 0.4$ at $\epsilon = 1$. In other words, this theory predicts only about 40% as much ionization per unit of energy deposited by Dark Matter recoils as by low LET radiation such as electrons ejected by x-rays. Several direct measurements of total ionization by very low energy particles are available in the literature. Many of these results are for recoil nuclei from alpha decays [@Cano1965; @Cano1968; @Stone1957]. These $\sim$ 100 keV, A $\sim$ 200 recoils are of interest as backgrounds in Dark Matter experiments, but their scaled energy $\epsilon \cong 0.07$ is below the range of interest for most WIMP recoils. Measured ionization yield parameters W were typically 100-120 eV/ion pair, in good agreement with the approximate formula for $\eta$ given above. Data more applicable to Dark Matter recoils are given in Refs. [@Phipps1964; @Boring1965; @McDonald1969; @Price1993].
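For design estimates, this interpolation is easy to evaluate numerically. A minimal Python sketch (the value $k = 0.15$ is an illustrative choice of the electronic stopping coefficient, not a value taken from the references):

```python
def g(eps):
    """Lindhard interpolation function g(epsilon)."""
    return eps + 3.0 * eps**0.15 + 0.7 * eps**0.6

def eta(eps, k):
    """Scaled energy ultimately transferred to electrons."""
    return eps / (1.0 + 1.0 / (k * g(eps)))

# Ionization fraction (quenching) for a representative k = 0.15;
# at epsilon = 1 this reproduces the ~40% figure quoted in the text.
for eps in (0.1, 0.5, 1.0, 5.0):
    print(f"epsilon = {eps:4.1f}   eta/epsilon = {eta(eps, 0.15) / eps:.2f}")
```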
Some representative results from these works include [@Boring1965] W = 91 (65) eV/IP for 25 (100) keV Ar in Ar, both values about 20% higher than would be predicted by the preceding approximate LSS expression. Higher W for gases than for condensed media is expected [@BLD] as mentioned above. Ref. [@McDonald1969] measured total ionization from particles with 1 $<$ Z $<22$ in methane. While in principle the LSS treatment does not apply to heteroatomic gases, using the LSS prescription to predict the W factor for a carbon target (rather than methane) yields a value that is 15% lower than the experimental results.
The authors of Ref. [@SnowdenIfft2003] also fit their data to derive W-values for C and S recoils. Their best-fit values are again 10-25% higher than an LSS-based estimate by the present authors using additivity.
To summarize, most of the Dark Matter recoils expected from an isothermal galactic halo have very low energies, and therefore nuclear stopping plays an important role. The sparse available experimental data on track lengths and ionization yields agrees at the $\sim$20% level with simple approximate formulas based on the Lindhard model. Without applying any gas-phase correction, LSS-based estimates for range tend to be slightly longer than those experimentally measured in gases. The predicted ionization parameter W also tends to be slightly lower than the experimental data. This situation is adequate for initial design of detectors, but with the present literature base, each individual experiment will require its own dedicated calibration measurements.
Considerations for Directional Detector Design
==============================================
Detector Architecture
---------------------
From the range-energy discussion in the previous section, we infer that track lengths of typical Dark Matter recoils will be only of the order of 0.1 $\mu$m in condensed matter, while track lengths of up to a few millimeters are expected in gas at a tenth of the atmospheric pressure. Several techniques relevant to direction-sensitive detection using condensed matter targets have been reported, including track-etch analysis of ancient mica [@Bander1995], bolometric detection of surface sputtered atoms [@Martoff1996], and use of nuclear emulsions [@Natsume2007]. The ancient mica etch pit technique was actually used to obtain Dark Matter limits. However, recently the focus of directional Dark Matter detection has shifted to low-pressure gas targets, and that is the topic of the present review.
The TPC [@NygrenTPC; @Fancher1979] is the natural detector architecture for gaseous direction-sensitive Dark Matter detectors, and essentially all experiments use this configuration. The active target volume contains only the active gas, free of background-producing material. Only one wall of the active volume requires a readout system, leading to favorable cost-volume scaling. TPCs with nearly 100 m$^3$ of active volume have been built for high energy physics, showing the possibility of large active masses.
Background Rejection Capabilities
----------------------------------
Gaseous DM detectors have excellent background rejection capability for different kinds of backgrounds. First and foremost, direction sensitivity gives gas detectors the capability of statistically rejecting neutron and neutrino backgrounds. In addition, tracking leads to extremely effective discrimination against x-ray and $\gamma$-ray backgrounds [@Snowden-Ifft:PRD2000; @Sciolla:2009fb]. The energy loss rates for recoils discussed in the previous section are hundreds of times larger than those of electrons with comparable total energy. The resulting much longer electron tracks are easily identified and rejected in any direction-sensitive detector. Finally, the measured rejection factors for gamma rays vs. nuclear recoils vary between 10$^4$ and 10$^6$ depending on the experiment [@Miuchi2007-58; @SnowdenIfft2003; @Dujmic2008-58].
Choice of Pressure
------------------
It can be shown that there is an optimum pressure for operation of any given direction sensitive WIMP recoil detector. This optimum pressure depends on the fill gas, the halo parameter set and WIMP mass, and the expected track length threshold for direction measurement.
The total sensitive mass, and hence the total number of expected events, increases proportionally to the product of the pressure $P$ and the active volume $V$. Equation \[eq:range\] above shows that the range in atoms/cm$^2$ for WIMP recoils is approximately proportional to their energy. Since the corresponding range in cm is inversely proportional to the pressure ($R \propto E_r/P$), the energy threshold imposed by a particular minimum track length $E_{r,min}$ will scale down linearly with decreasing pressure, $E_{r,min} \propto R_{min} P$, where $R_{min}$ is the shortest detectable track length. For the exponentially falling recoil energy spectrum of the isothermal halo [@SmithLewin1996] the fraction of recoils above a given energy threshold is proportional to $\exp(-E_{min}/E_0 r)$. Hence the rate of tracks longer than the tracking threshold R$_{min}$ will scale as $N \propto PV \exp(-\xi R_{min}P)$, with $\xi$ a track length factor depending on the target gas, WIMP mass, halo model, etc., and the track length threshold $R_{min}$ depending on the readout technology and the drift distance. This expression has a maximum at $P_{opt} = 1/[\xi R_{min}]$, which shows that the highest event rate is obtained by taking advantage of improvement in tracking threshold to run at higher target pressure. Operating at this optimum pressure, the trackable event rate still scales as $P_{opt}V$, which increases linearly as the tracking threshold decreases. Achieving the shortest possible tracking threshold $R_{min}$ is seen to be the key to sensitive experiments of this type.
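The location of this maximum can be verified numerically. The sketch below (Python) uses arbitrary illustrative values for $\xi$, $R_{min}$, and $V$; only the functional form comes from the text:

```python
import math

xi, R_min, V = 0.5, 0.1, 1.0   # illustrative values, arbitrary units

def rate(P):
    """Rate of tracks above the tracking threshold: N ~ P V exp(-xi R_min P)."""
    return P * V * math.exp(-xi * R_min * P)

# Scan pressure and locate the numerical optimum.
P_grid = [1.0 + 0.001 * i for i in range(100000)]
P_best = max(P_grid, key=rate)

print(P_best, 1.0 / (xi * R_min))   # numerical optimum vs analytic P_opt
```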
Tracking Limit due to Diffusion
-------------------------------
Diffusion of track charge during its drift to the readout plane sets the ultimate limit on how short a track can be measured in a TPC. Diffusion in gases has a rich phenomenology for which only a simplified discussion is given here. More complete discussion with references to the literature is given by Rolandi and Blum [@RnB].
For low values of the electric field, elementary kinetic theory arguments predict equal diffusion transverse and longitudinal to the drift field $E_d$, with the rms diffusion spread $\delta$ given by
$$\label{eq:diff}
\delta = \sqrt{\frac{2kTL}{eE_d}} = 0.7~\mathrm{mm}~\sqrt{\frac{[L/1~\mathrm{m}]}{[E_d/1~\mathrm{kV/cm}]}}.$$
Here $k$ is the Boltzmann constant, $T$ the gas temperature, and $L$ the drift distance. No pressure or gas dependence appears in this equation. The diffusion decreases inversely as the square root of the applied drift field. Increasing the drift field would appear to allow diffusion to be reduced as much as desired, allowing large detectors to be built while preserving good tracking resolution.
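A quick numerical check of Equation \[eq:diff\] (Python; the constants are standard SI values, and the $1$ m / $1$ kV/cm inputs are the normalization point quoted in the equation):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
q_e = 1.602176634e-19   # elementary charge, C

def rms_diffusion(L, E_d, T=300.0):
    """Low-field rms diffusion spread delta = sqrt(2 k T L / (e E_d)).

    L in meters, E_d in V/m; returns delta in meters."""
    return math.sqrt(2.0 * k_B * T * L / (q_e * E_d))

# 1 m drift at E_d = 1 kV/cm (= 1e5 V/m) reproduces the ~0.7 mm figure.
print(rms_diffusion(1.0, 1e5) * 1e3, "mm")
```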
However, in reality diffusion is not so easily controlled. The low-field approximation given by Equation \[eq:diff\] holds only below a certain maximum drift field value $E_d^{max}$, which depends on the pressure and target gas. The drift field must not violate the condition $eE_d^{max} \lambda \ll kT$, where the effective mean free path $\lambda = 1/f n \sigma$ decreases inversely with the pressure. Here $\sigma$ is the average total cross section for scattering of the drifting species on the fill gas molecules, $n$ is the number density of molecules, and $f$ is an energy-exchange-efficiency factor for the scattering of charge carriers from gas molecules. This condition requires that the work done by the drift field on a charge carrier between collisions, and not lost to collisions, be much smaller than the carrier’s thermal energy. If this condition is fulfilled, the drifting carriers’ random (thermal) velocity remains consistent with the bulk gas temperature. A larger scattering cross section $\sigma$ or a more effective energy exchange due to strong inelastic scattering processes will lead to a shorter effective mean free path and a larger value of $E_d^{max}$. Importantly, $E_d^{max}$ for electrons in a given gas generally scales proportionally to the pressure (so that $E_d^{max}/P$ is roughly constant), as would be expected from the pressure dependence of the mean free path in the “low field" condition.
If the drift field exceeds $E_d^{max}$, the energy gained from the drift field becomes non-negligible. The average energy of drifting charge carriers begins to increase appreciably, giving them an effective temperature $T_{eff}$ which can be orders of magnitude larger than that of the bulk gas. Under these conditions, the kinetic theory arguments underlying equation \[eq:diff\] remain approximately valid if the gas temperature $T$ is replaced by $T_{eff}$. Diffusion stops dropping with increasing drift field and may rapidly [ *increase*]{} in this regime, with longitudinal diffusion increasing more rapidly than transverse.
Values of $E_d^{max}/P$ for electrons drifting in various gases and gas mixtures vary from $\sim$0.1–1 V/cm/Torr at 300 K [@SauliBible; @Caldwell]. With drift fields limited to this range and a gas pressure of $\sim$ 50 Torr, the rms diffusion for a 1 meter drift distance would be several mm, severely degrading the tracking resolution.
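Taking the upper end of that range at the quoted 50 Torr reproduces the mm-scale estimate; a self-contained check (Python, same low-field formula as above):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
q_e = 1.602176634e-19   # elementary charge, C
T   = 300.0             # gas temperature, K

# E_d^max/P = 1 V/cm/Torr (upper end of the quoted range) at 50 Torr:
E_d = 1.0 * 50.0 * 100.0   # 50 V/cm expressed in V/m
L   = 1.0                  # 1 m drift distance

delta = math.sqrt(2.0 * k_B * T * L / (q_e * E_d))
print(delta * 1e3, "mm")   # several mm, as stated in the text
```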
Effects of diffusion can be significantly reduced by drifting negative ions instead of electrons [@Martoff2000; @Martoff2009; @Ohnuki:NIMA2001]. Electronegative vapors have been found which, when mixed into detector gases, reversibly capture primary ionization electrons within $\sim$ 100 $\mu$m of their creation. The resulting negative ions drift to the gain region of the chamber, where collisional processes free the electrons and initiate normal Townsend avalanches [@Dion2009]. Ions have E$_d^{max}$ values corresponding to E/P = 20 V/cm/Torr and higher. This is because the ions’ masses are comparable to those of the gas molecules, so the energy-exchange-efficiency factor $f$ which determines $E_d^{max}$ is much larger than for electrons. Ion-molecule scattering cross sections also tend to be larger than electron-molecule cross sections. The use of negative ion drift in TPCs would allow sub-millimeter rms diffusion for drift distances of 1 meter or larger, although total drift voltage differences in the neighborhood of 100 kV would be required.
The above outline shows that diffusion places serious constraints on the design of detectors with large sensitive mass and millimeter track resolution, particularly when using a conventional electron drift TPC.
Challenges of Directional Detection
------------------------------------
The current limits on spin-independent interactions of WIMPs in the 60 GeV/c$^2$ mass range have been set using 300-400 kg-day exposures, for example by the XENON10 [@XENON2008] and CDMS [@CDMS2009] experiments. Next generation non-directional experiments are being planned to achieve zero background with hundreds or thousands of times larger exposures [@Arisaka2009].
To be competitive, directional detectors should be able to use comparable exposures. However, integrating large exposures is particularly difficult for low-pressure gaseous detectors. A fiducial mass of a few tons will be necessary to observe DM-induced nuclear recoils for much of the theoretically-favored range of parameter space [@Jungman1996]. This mass of low-pressure gas would occupy thousands of cubic meters. It is, therefore, key to the success of the directional DM program to develop detectors with a low cost per unit volume. Since the largest expense for standard gaseous detectors is the readout electronics, a low-cost readout is essential to make DM directional detectors financially viable.
Dark Matter TPC Experiments
===========================
Early History of Direction-Sensitive WIMP Detectors
---------------------------------------------------
As early as 1990, Gerbier [*et al.*]{} [@Gerbier1990] discussed using a hydrogen-filled TPC at 0.02 bar, drifting electrons in a 0.1 T magnetic field to detect proton recoils from Dark Matter collisions. This proposal was made in the context of the “cosmion", a then-current WIMP candidate with very large (10$^{-36}$ cm$^2$) cross section for scattering on protons. These authors explicitly considered the directional signature, but they did not publish any experimental findings.
A few years later, the UCSD group led by Masek [@Buckland1994] published results of early trials of the first detector system specifically designed for a direction-sensitive Dark Matter search. This pioneering work used optical readout of light produced in a parallel plate avalanche counter (PPAC) located at the readout plane of a low-pressure TPC. The minimum discernible track length was about 5 mm. Electron diffusion at low pressures and its importance for the performance of gas detectors was also studied [@MattDiff]. This early work presaged some of the most recent developments in the field, described in section \[DMTPC\].
DRIFT
-----
The DRIFT-I collaboration [@Snowden-Ifft:PRD2000] mounted the first underground experiment designed for direction sensitive WIMP recoil detection [@Alner2004]. Re-designed detectors were built and further characterization measurements were performed by the DRIFT-II [@Lawson2005] collaboration. Both DRIFT detectors were cubical 1 m$^3$ negative-ion-drifting TPCs with two back-to-back 0.5 m drift spaces. To minimize material possibly contributing radioactive backgrounds, the central drift cathode was designed as a plane of 20 $\mu$m wires on 2 mm pitch. The endcap MWPCs used 20 $\mu$m anode wires on 2 mm pitch, read out with transient digitizers. In DRIFT-II the induced signals on grid wires between the MWPC anode and the drift space were also digitized. DRIFT-I had an amplifier- and digitizer-per-wire readout, while DRIFT-II signals were cyclically grouped onto a small number of amplifiers and digitizers. Both detectors used the negative ion drift gas CS$_2$ at nominally 40 Torr, roughly 5% of atmospheric pressure. The 1 m$^3$ volume gave approximately 170 grams of target mass per TPC. The CS$_2$ gas fill allowed diffusion suppression by running with very high drift fields despite the low pressure. DRIFT-II used drift fields up to 624 V/cm (16 V/cm/Torr).
The detectors were calibrated with alpha particles, $^{55}$Fe x-rays and $^{252}$Cf neutrons. Alpha particle Bragg peaks and neutron recoil events from sources were quickly seen after turn-on of DRIFT-I underground in 2001. Neutron exposures gave energy spectra in agreement with simulations when the energy per ion pair W was adjusted in accordance with the discussion of ionization yields given above. Simulations of DRIFT-II showed that the detector and software analysis chain had about 94% efficiency for detection of those $^{252}$Cf neutron recoils producing between 1000 and 6000 primary ion pairs, and a $^{60}$Co gamma-ray rejection ratio better than a few times 10$^{-6} $ [@drift_II_n]. A study of the direction sensitivity of DRIFT-II for neutron recoils [@driftIIfb] showed that a statistical signal distinguishing the beginning and end of Sulfur recoil tracks (“head-tail discrimination") was available, though its energy range and statistical power was limited by the 2 mm readout pitch.
At present two 1 m$^3$ DRIFT-II modules are operating underground. Backgrounds due to radon daughters implanted in the internal surfaces of the detector [@drift_II_n] are under study and methods for their mitigation are being developed. The absence of nonzero-spin nuclides in CS$_2$ will require a very large increase in target mass or a change of gas fill in order to detect spin-coupled WIMPs with this device.
Dark Matter Searches Using Micropattern Gas-Gain Devices
--------------------------------------------------------
It was shown above that the event rate and therefore the sensitivity of an optimized tracking detector improves linearly as the track length threshold gets smaller. In recent years there has been widespread development of gas detectors achieving very high spatial resolution by using micropatterned gain elements in place of wires. For a recent overview of micropattern detector activity, see Ref. [@pos-sens]. These devices typically have 2-D arrays of individual gain elements on a pitch of $\sim$ 0.1 mm. Rows of elements [@Black2007] or individual gain elements can be read out by suitable arrangements of pickup electrodes separate from the gain structures, or by amplifier-per-pixel electronics integrated with the gain structure [@medipix]. Gain-producing structures known as GEM (Gas Electron Multiplier [@gem]) and MicroMegas (MICRO-MEsh GAseous Structure [@Giomataris1996]) have found particularly wide application.
The gas CF$_4$ also figures prominently in recent micropattern Dark Matter search proposals. This gas was used for low background work in the MUNU experiment [@munu] and has the advantage of high $E_d^{max}$, allowing relatively low diffusion for electron drift at high drift field and reduced pressure [@Dujmic2008-327; @Christo1996; @Caldwell], though it does not approach negative ions in this regard. Containing the odd-proton nuclide $^{19}$F is also an advantage since it confers sensitivity to purely spin-coupled WIMPs [@Ellis1991], allowing smaller active mass experiments to be competitive. Another attractive feature of CF$_4$ is that its Townsend avalanches copiously emit visible and near infrared light [@Pansky1995; @Kaboth2008; @Fraga2003], allowing optical readout as in the DMTPC detector discussed in section \[DMTPC\]. The ultraviolet part of the spectrum may also be seen by making use of a wavelength shifter. Finally, CF$_4$ is non-flammable and non-toxic, and, therefore, safe to operate underground.
The NEWAGE project is a current Dark Matter search program led by a Kyoto University group. This group has recently published the first limit on Dark Matter interactions derived from the absence of a directional modulation during a 0.15 kg-day exposure [@Miuchi2007-58]. NEWAGE uses CF$_4$-filled TPCs with a microwell gain structure [@Miuchi2003; @Tanimori2004; @Miuchi2007-43]. The detector had an active volume of $23 \times 28 \times 30$ cm$^3$ and contained CF$_4$ at 150 Torr. Operation at higher-than-optimal gas pressure was chosen to enhance the HV stability of the gain structure. The chamber was read out by a single detector board referred to as a “$\mu$-PIC”, preceded by a GEM for extra gas gain. The $\mu$-PIC has a micro-well gain structure produced using multi-layer printed circuit board technology. It is read out on two orthogonal, 400 micron-pitch arrays of strips. One array is connected to the central anode dots of the micro-well gain structure, and the other array to the surrounding cathodes. The strip amplifiers and position decoding electronics are on-board with the gain structures themselves, using an 8 layer PCB structure.
The detector was calibrated with a $^{252}$Cf neutron source. Nuclear recoils were detected and compared to a simulation, giving a detection efficiency rising from zero at 50 keV to 90% near 250 keV. For comparison, the maximum energy of a $^{19}$F recoil from an infinitely heavy WIMP with the galactic escape speed is about 180 keV. The measured rejection factor for $^{137}$Cs gamma rays was about 10$^{-4}$. The angular resolution was reported as 25$^{\circ}$ HWHM. Measurement of the forward/backward sense of the tracks (“head-tail" discrimination) was not reported.
Another gaseous Dark Matter search collaboration known as MIMAC [@santos2006] is led by a group at IPN Grenoble, and has reported work toward an electronically read-out direction sensitive detector. They proposed the use of $^3$He mixtures with isobutane near 1 bar, and also CF$_4$ gas fills to check the dependence on the atomic number A of any candidate Dark Matter signal. The advantages claimed for $^3$He as a Dark Matter search target include nonzero nuclear spin, low mass and hence sensitivity to low WIMP masses, and a very low Compton cross section which suppresses backgrounds from gamma rays. The characteristic (n,p) capture interaction provides a strong signature for identifying slow-neutron backgrounds. The ionization efficiency of $\sim$ 1 keV $^3$He recoils is also expected to be very high, allowing efficient detection of the small energy releases expected for this target and for light WIMPs. A micropattern TPC with $\sim$ 350 $\mu$m anode pitch was proposed to obtain the desired electron rejection factor at a few keV. The MIMAC collaboration uses an ion source to generate monoenergetic $^3$He and F ions for measuring the ionization yield in their gas mixtures [@Guillaudin:2009fp].
DMTPC {#DMTPC}
-----
The Dark Matter Time Projection Chamber (DMTPC) collaboration has developed a new detector concept [@Sciolla:2009fb] that addresses the issue of scalability of directional Dark Matter detectors by using optical readout, a potentially very inexpensive readout solution.
The DMTPC detector [@Sciolla:2008ak; @Sciolla:2008mpla] is a low-pressure TPC filled with CF$_4$ at a nominal pressure of 50 torr. The detector is read out by an array of CCD cameras and photomultipliers (PMTs) mounted outside the vessel to reduce the amount of radioactive material in the active volume. The CCD cameras image the visible and near infrared photons that are produced by the avalanche process in the amplification region, providing a projection of the 3-D nuclear recoil on the 2-D amplification plane. The 3-D track length and direction of the recoiling nucleus are reconstructed by combining the measurement of the projection along the amplification plane (from pattern recognition in the CCD) with the projection along the direction of drift, determined from the waveform of the signal from the PMTs. The sense of the recoil track is determined by measuring dE/dx along the length of the track. The correlation between the energy of the recoil, proportional to the number of photons collected in the CCD, and the length of the recoil track provides an excellent rejection of all electromagnetic backgrounds.
Several alternative implementations of the amplification region [@Dujmic2008-58] were developed. In a first design, the amplification was obtained by applying a large potential difference ($\Delta$V = 0.6–1.1 kV) between a copper plate and a conductive woven mesh kept at a uniform distance of 0.5 mm. The copper or stainless steel mesh was made of 28 $\mu$m wire with a pitch of 256 $\mu$m. In a second design the copper plate was replaced with two additional woven meshes. This design has the advantage of creating a transparent amplification region, which allows a substantial cost reduction since a single CCD camera can image tracks originating in two drift regions located on either side of a single amplification region.
The current DMTPC prototype [@dujmicICHEP] consists of two optically independent regions contained in one stainless steel vessel. Each region is a cylinder with 30 cm diameter and 20 cm height contained inside a field cage. Gas gain is obtained using the mesh-plate design described above. The detector is read out by two CCD cameras, each imaging one drift region. Two f/1.2 55 mm Nikon photographic lenses focus light onto two commercial Apogee U6 CCD cameras equipped with Kodak 1001E CCD chips. Because the total area imaged is $16\times16$ cm$^2$, the detector has an active volume of about 10 liters. For WIMP-induced nuclear recoils of 50 keV, the energy and angular resolutions obtained with the CCD readout were estimated to be $\approx$ 15% and 25$^{\circ}$, respectively. This apparatus is currently being operated above ground with the goal of characterizing the detector response and understanding its backgrounds. A second 10-liter module is being constructed for underground operations at the Waste Isolation Pilot Plant (WIPP) in New Mexico.
A 5.5 MeV alpha source from $^{241}$Am is used to study the gain of the detector as a function of the voltage and gas pressure, as well as to measure the resolution as a function of the drift distance of the primary electrons to quantify the effect of the transverse diffusion. These studies [@Dujmic2008-327; @Caldwell] show that the transverse diffusion allows for a sub-millimeter spatial resolution in the reconstruction of the recoil track for drift distances up to 20–25 cm. The gamma ray rejection factor, measured using a $^{137}$Cs source, is better than 2 parts per million [@Dujmic2008-327].
The performance of the DMTPC detector in determining the sense and direction of nuclear recoils has been evaluated by studying the recoil of fluorine nuclei in interaction with low-energy neutrons. The initial measurements were obtained running the chamber at 280 Torr and using 14 MeV neutrons from a deuteron-triton generator and a $^{252}$Cf source. The “head-tail” effect was clearly observed [@Dujmic2008-327; @Dujmic:2008iq] for nuclear recoils with energy between 200 and 800 keV. Better sensitivity at lower energy thresholds was achieved by using higher gains and lowering the CF$_4$ pressure to 75 torr. These measurements demonstrated [@Dujmic2008-58] “head-tail” discrimination for recoils above 100 keV, and reported good agreement with the predictions of the SRIM [@SRIM] simulation. “Head-tail” discrimination is expected to extend to recoils above 50 keV when the detector is operated at a pressure of 50 torr. To evaluate the event-by-event “head-tail” capability of the detector as a function of the energy of the recoil, the DMTPC collaboration introduced a quality factor $Q(E_R) = \epsilon(E_R) \times (1 - 2 w(E_R))^2$, where $\epsilon$ is the recoil reconstruction efficiency and $w$ is the fraction of wrong “head-tail” assignments. The $Q$ factor represents the effective fraction of reconstructed recoils with “head-tail” information, and the error on the “head-tail” asymmetry scales as $1/\sqrt{Q}$. Early measurements demonstrated a $Q$ factor of 20% at 100 keV and 80% at 200 keV [@Dujmic2008-58].
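The behavior of this quality factor is easy to illustrate (Python sketch; the example inputs are hypothetical, not measured DMTPC values):

```python
def head_tail_quality(eff, w):
    """Q = eff * (1 - 2 w)^2: effective fraction of reconstructed
    recoils carrying head-tail information."""
    return eff * (1.0 - 2.0 * w) ** 2

# Hypothetical case: full reconstruction efficiency, 25% wrong calls.
print(head_tail_quality(1.0, 0.25))
# Random head-tail assignment (w = 0.5) carries no information.
print(head_tail_quality(1.0, 0.5))
```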
The DMTPC collaboration is currently designing a 1-m$^3$ detector. The apparatus consists of a stainless steel vessel of 1.3 m diameter and 1.2 m height. Nine CCD cameras and nine PMTs are mounted on each of the top and bottom plates of the vessel, separated from the active volume of the detector by an acrylic window. The detector consists of two optically separated regions. Each of these regions is equipped with a triple-mesh amplification device, located between two symmetric drift regions. Each drift region has a diameter of 1.2 m and a height of 25 cm, for a total active volume of 1 m$^3$. A field cage made of stainless steel rings keeps the uniformity of the electric field within 1% in the fiducial volume. A gas system recirculates and purifies the CF$_4$.
When operating the detector at a pressure of 50 torr, a 1 m$^3$ module will contain 250 g of CF$_4$. Assuming a detector threshold of 30 keVee (electron-equivalent energy, corresponding to nuclear recoil energy threshold $\sim$ 50 keV), and an overall data-taking efficiency of 50%, a one-year underground run will yield an exposure of 45 kg-days. Assuming negligible backgrounds, such an exposure will allow the DMTPC collaboration to improve the current limits on spin-dependent interactions on protons by about a factor of 50 [@Dujmic2008-58].
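The quoted exposure follows directly from these numbers; a one-line check (Python, using only the mass, efficiency, and run time given in the text):

```python
target_mass_kg = 0.250   # CF4 in one 1 m^3 module at 50 torr (from the text)
live_fraction  = 0.50    # overall data-taking efficiency assumed in the text
days_per_year  = 365.0

# Exposure accumulated in a one-year underground run.
exposure = target_mass_kg * live_fraction * days_per_year
print(exposure, "kg-days")   # consistent with the quoted ~45 kg-days
```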
Conclusion
============
Directional detectors can provide an unambiguous positive observation of Dark Matter particles even in the presence of insidious backgrounds, such as neutrons or neutrinos. Moreover, the dynamics of the galactic Dark Matter halo will be revealed by measuring the direction of the incoming WIMPs, opening the path to WIMP astronomy.
In the past decade, several groups have investigated new ideas to develop directional Dark Matter detectors. Low-pressure TPCs are best suited for this purpose if an accurate (sub-millimeter) 3-D reconstruction of the nuclear recoil can be achieved. A good tracking resolution also allows for an effective rejection of all electromagnetic backgrounds, in addition to statistical discrimination against neutrinos and neutrons based on the directional signature. The choice of different gaseous targets makes these detectors well suited for the study of both spin-independent (CS$_2$) and spin-dependent (CF$_4$ and $^3$He) interactions.
A vigorous R&D program has explored both electronic and optical readout solutions, demonstrating that both technologies can effectively and efficiently reconstruct the energy and vector direction of the nuclear recoils expected from Dark Matter interactions. The challenge for the field of directional Dark Matter detection is now to develop and deploy very sensitive and yet inexpensive readout solutions, which will make large directional detectors financially viable.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors are grateful to D. Dujmic and M. Morii for useful discussions and for proofreading the manuscript. G. S. is supported by the M.I.T. Physics Department and the U.S. Department of Energy (contract number DE-FG02-05ER41360). C. J. M. is supported by Fermilab and Temple University.
References {#references .unnumbered}
==========
[^1]: A homoatomic molecular entity is a molecular entity consisting of one or more atoms of the same element.
[^2]: The scale factors are (in cgs-Gaussian units): $E_{TF} = \frac{e^2}{a} Z_i Z_T \frac{M_i +M_T}{M_T}$, $R_{TF} = \frac{1}{4 \pi a^2 N} \frac{(M_i + M_T)^2}{M_i M_T}$. Here, $N$ = number density of target atoms, subscripts i and T refer to the incident particle and the target substance, and $a = a_0 \frac{0.8853}{\sqrt{Z_i ^{2/3} + Z_T ^{2/3}}}$, with $a_0$ the Bohr radius.
[^3]: The parameter $k \stackrel{.}{=} \frac{0.0793Z_1^{1/6}}{(Z_1^{2/3} + Z_2^{2/3})^{3/4}} \left[\frac{Z_1Z_2(A_1+A_2)^3}{A_1^3A_2}\right]^{1/2}$ becomes substantially larger only for light recoils in heavy targets.
using System;
using ModuleManager.Progress;

namespace ModuleManager.Patches.PassSpecifiers
{
    public class LegacyPassSpecifier : IPassSpecifier
    {
        public bool CheckNeeds(INeedsChecker needsChecker, IPatchProgress progress)
        {
            if (needsChecker == null) throw new ArgumentNullException(nameof(needsChecker));
            if (progress == null) throw new ArgumentNullException(nameof(progress));

            return true;
        }

        public string Descriptor => ":LEGACY (default)";
    }
}
|
{
"pile_set_name": "github"
}
|
Rakestraw
Rakestraw is a surname. Notable people with the surname include:
Larry Rakestraw (born 1942), American football player
Paulette Rakestraw (born 1967), American politician from the state of Georgia
Wilbur Rakestraw (1928–2014), American racing driver
W. Vincent Rakestraw (born 1940), Former Assistant Attorney General of the United States, Former Special Assistant to the Ambassador of India
See also
Rakestraw House, a historic home located near Garrett in Keyser Township, DeKalb County, Indiana.
Category:English-language surnames
|
{
"pile_set_name": "wikipedia_en"
}
|
---
abstract: 'We study the effects of weak columnar and point disorder on the vortex-lattice phase transitions in high temperature superconductors. The combined effect of thermal fluctuations and of quenched disorder is investigated using a simplified cage model. For columnar disorder the problem maps into a quantum particle in a harmonic + random potential. We use the variational approximation to show that columnar and point disorder have opposite effect on the position of the melting line as observed experimentally. Replica symmetry breaking plays a role at the transition into a vortex glass at low temperatures.'
address: |
Department of Physics and Astronomy\
University of Pittsburgh\
Pittsburgh, PA 15260
author:
- 'Yadin Y. Goldschmidt'
date: 'August 4, 1996'
title: ' [**Phase Transitions of the Flux Line Lattice in High-Temperature Superconductors with Weak Columnar and Point Disorder**]{} '
---
[2]{}
There is a lot of interest in the physics of high-temperature superconductors due to their potential technological applications. In particular, these materials are of type II and allow for partial magnetic flux penetration. Pinning of the magnetic flux lines (FL) by many types of disorder is essential to eliminate the dissipative losses associated with flux motion. In clean materials below the superconducting temperature there exists a ’solid’ phase in which the vortex lines form a triangular Abrikosov lattice [@blatter]. This solid can melt due to thermal fluctuations and the effect of impurities. In particular, the known observed transitions are into a flux liquid at higher temperatures via a [*melting line*]{} (ML) [@zeldov], and, in the presence of disorder, into a vortex glass at low temperature [@VG],[@Fisher],[@BG], the so-called [*entanglement line*]{} (EL) [@blatter].
Recently the effect of point and columnar disorder on the position of the melting transition has been measured experimentally in the high-$T_c$ material $Bi_2Sr_2CaCu_2O_8$ [@Khaykovitch]. Point disorder has been induced by electron irradiation (with 2.5 MeV electrons), whereas columnar disorder has been induced by heavy ion irradiation (1 GeV Xe or 0.9 GeV Pb). It turns out that the flux melting transition persists in the presence of either type of disorder, but its position shifts depending on the disorder type and strength.
A significant difference has been observed between the effects of columnar and point disorder on the location of the ML. Weak columnar defects stabilize the solid phase with respect to the vortex liquid phase and shift the transition to [*higher*]{} fields, whereas point-like disorder destabilizes the vortex lattice and shifts the melting transition to [*lower*]{} fields. In this paper we attempt to provide an explanation for this observation. The case of point defects has been addressed in a recent paper by Ertas and Nelson [@EN] using the cage-model approach, which replaces the effect of vortex-vortex interactions by a harmonic potential felt by a single vortex. For columnar disorder the parabolic cage model was introduced by Nelson and Vinokur [@nelson]. Here we use a different approach to analyze the cage-model Hamiltonian, viz. the replica method together with the variational approximation. In the case of columnar defects our approach relies on our recent analysis of a quantum particle in a random potential [@yygold]. We compare the effect of the two types of disorder with each other and with the results of recent experiments.
Assume that the average magnetic field is aligned along the $z$-axis. Following EN we describe the Hamiltonian of a single FL whose position is given by a two-component vector ${\bf r}(z)$ (overhangs are neglected) by: $$\begin{aligned}
{\cal H} = \int_0^L dz \left\{ {\frac{\tilde{\epsilon} }{2}} \left({\frac{ d%
{\bf r }}{dz}} \right)^2 + V(z,{\bf r }) + {\frac{\mu }{2}} {\bf r }^2
\right\}. \label{hamil}\end{aligned}$$
Here $\tilde \epsilon =\epsilon _0/\gamma ^2$ is the line tension of the FL, $\gamma ^2=m_z/m_{\perp }$ is the mass anisotropy, $\epsilon _0=(\Phi
_0/4\pi \lambda )^2$, ($\lambda $ being the penetration length), and $\mu
\approx \epsilon _0/a_0^2$ is the effective spring constant (setting the cage size) due to interactions with neighboring FLs, which are at a typical distance of $a_0=\sqrt{\Phi _0/B}$ apart.
For the case of columnar (or correlated) disorder, $V(z,{\bf r})=V({\bf r})$ is independent of $z$, and $$\begin{aligned}
\langle V({\bf r})V({\bf r^{\prime }})\rangle \equiv -2f(({\bf r}-{\bf %
r^{\prime }})^2/2)=g\epsilon _0^2\xi ^2\delta _\xi ^{(2)}({\bf r}-{\bf %
r^{\prime }}), \label{VVC}\end{aligned}$$ where $$\begin{aligned}
\delta _\xi ^{(2)}({\bf r}-{\bf r^{\prime }})\approx 1/(2\pi \xi ^2)\exp (-(%
{\bf r}-{\bf r^{\prime }})^2/2\xi ^2), \label{delta}\end{aligned}$$ and $\xi$ is the vortex core diameter. The dimensionless parameter $g$ is a measure of the strength of the disorder. On the other hand, for point disorder $V$ depends on $z$, and [@EN] $$\begin{aligned}
\langle V(z,{\bf r})V(z^{\prime },{\bf r^{\prime }})\rangle =\tilde
\Delta \epsilon
_0^2\xi ^3\delta _\xi ^{(2)}({\bf r}-{\bf r^{\prime }})\delta (z-z^{\prime
}). \label{VVP}\end{aligned}$$
The quantity that measures the transverse excursion of the FL is $$\begin{aligned}
u_0^2(\ell )\equiv \langle |{\bf r}(z)-{\bf r}(z+\ell )|^2\rangle \ /2,
\label{ul}\end{aligned}$$
Let us now review the connection between a quantum particle in a random potential and the behavior of a FL in a superconductor. The partition function of the former is just like the partition sum of the FL, provided one makes the identification [@nelson] $$\begin{aligned}
\hbar \rightarrow T,\qquad \beta \hbar \rightarrow L, \label{corresp}\end{aligned}$$ where $T$ is the temperature of the superconductor and $L$ is the system size in the $z$-direction. $\beta$ is the inverse temperature of the quantum particle. We are interested in large fixed $L$ as $T$ is varied, which corresponds to high $\beta$ for the quantum particle when $\hbar$ (or alternatively the mass of the particle) is varied. The variable $z$ is the so-called Trotter time. This is the picture we will be using for the case of columnar disorder.
For the case of point disorder the picture we use is that of a directed polymer in the presence of a random potential plus a harmonic potential, as used by EN.
The main effect of the harmonic (or cage) potential is to cap the transverse excursions of the FL beyond a confinement length $\ell ^{*}\approx
a_0/\gamma $. The mean square displacement of the flux line is given by
$$u^2(T)\approx u_0^2(\ell ^{*}). \label{uT}$$
The location of the melting line is determined by the Lindemann criterion $$u^2(T_m(B))=c_L^2a_0^2, \label{Lind}$$ where $c_L\approx 0.15-0.2$ is the phenomenological Lindemann constant. This means that when the transverse excursion of a section of length $\approx
\ell ^{*}$ becomes comparable to a finite fraction of the interline separation $a_0$, melting of the flux solid occurs.
We consider first the case of columnar disorder. In the absence of disorder it is easily obtained from standard quantum mechanics and the correspondence (\[corresp\]), that when $L\rightarrow \infty ,$
$$u^2(T)=\frac T{\sqrt{\widetilde{\epsilon }\mu }}\left( 1-\exp (-\ell ^{*}%
\sqrt{\mu /\widetilde{\epsilon }})\right) =\frac T{\sqrt{\widetilde{\epsilon
}\mu }}(1-e^{-1}), \label{u2g0}$$
from which we find that
$$B_m(T)\approx \frac{\Phi _0^{}}{\xi ^2}\frac{\epsilon _0^2\xi ^2c_L^4}{%
\gamma ^2T^2}. \label{Bmg0}$$
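For orientation, the melting field (\[Bmg0\]) follows from the Lindemann criterion (\[Lind\]) by direct substitution; the steps below use $\widetilde{\epsilon}=\epsilon_0/\gamma^2$, $\mu\approx\epsilon_0/a_0^2$ and $a_0^2=\Phi_0/B$, and drop the $O(1)$ factor $(1-e^{-1})$:

```latex
% Lindemann criterion with u^2(T) \approx \tau = T/\sqrt{\tilde{\epsilon}\mu}:
\frac{T}{\sqrt{\widetilde{\epsilon}\,\mu}}
  \;=\; \frac{\gamma\, a_0\, T}{\epsilon_0}
  \;=\; c_L^2\, a_0^2
\;\;\Longrightarrow\;\;
a_0 \;=\; \frac{\gamma\, T}{\epsilon_0\, c_L^2}
\;\;\Longrightarrow\;\;
B_m \;=\; \frac{\Phi_0}{a_0^2}
  \;=\; \frac{\Phi_0}{\xi^2}\,
        \frac{\epsilon_0^2\, \xi^2\, c_L^4}{\gamma^2 T^2}.
```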
When we turn on disorder we have to solve the problem of a quantum particle in a random quenched potential. This problem has recently been solved using the replica method and the variational approximation [@yygold]. Let us briefly review the results of this approach. In this approximation we choose the best quadratic Hamiltonian, parametrized by the matrix $%
s_{ab}(z-z^{\prime })$:
$$\begin{aligned}
h_n &=&\frac 12\int_0^Ldz\sum_a[\widetilde{\epsilon }{\bf \dot r}_a^2+\mu
{\bf r}_a^2] \nonumber \\
&&-\frac 1{2T}\int_0^Ldz\int_0^Ldz^{\prime }\sum_{a,b}s_{ab}(z-z^{\prime })%
{\bf r}_a(z)\cdot {\bf r}_b(z^{\prime }). \label{hn}\end{aligned}$$
Here the replica index $a=1\ldots n$, and $n\rightarrow 0$ at the end of the calculation. This Hamiltonian is determined by stationarity of the variational free energy which is given by
$$\left\langle F\right\rangle _R/T=\left\langle H_n-h_n\right\rangle
_{h_n}-\ln \int [d{\bf r}]\exp (-h_n/T), \label{FV}$$
where $H_n$ is the exact $n$-body replicated Hamiltonian. The off-diagonal elements of $s_{ab}$ can consistently be taken to be independent of $z$, whereas the diagonal elements are $z$-dependent. It is more convenient to work in frequency space, where $\omega$ is the frequency conjugate to $z$: $\omega _j=(2\pi /L)j$, with $j=0,\pm 1,\pm 2,\ldots$. Assuming replica symmetry, which is valid only for part of the temperature range, we can denote the off-diagonal elements of $\widetilde{s}_{ab}(\omega
)=(1/T)\int_0^Ldz\ e^{i\omega z}$ $s_{ab}(z)$, by $\widetilde{s}(\omega )=%
\widetilde{s}\delta _{\omega ,0}$. Denoting the diagonal elements by $%
\widetilde{s}_d(\omega )$, the variational equations become: $$\begin{aligned}
\tilde s &=&2\frac LT\widehat{f}\ ^{\prime }\left( {\frac{2T}{\mu L}}+{\frac{%
2T}L}\sum_{\omega ^{\prime }\neq 0}\frac 1{\epsilon \ \omega ^{\prime
}\,^2+\mu -\widetilde{s}_d(\omega ^{\prime })}\right) \label{s} \\
\tilde s_d(\omega ) &=&\tilde s-{\frac 2T}\int_0^Ld\zeta \ (1-e^{i\omega
\zeta })\times \nonumber \\
&&\ \ \widehat{f}\ ^{\prime }\left( {\frac{2T}L}\sum_{\omega ^{\prime }\neq
0}\ \frac{1-e^{-i\omega ^{\prime }\varsigma }}{\widetilde{\epsilon \ }\omega
^{\prime }\,^2+\mu -\widetilde{s}_d(\omega ^{\prime })}^{}\right) .
\label{sd}\end{aligned}$$ Here $\widehat{f}\,^{\prime }(y)$ denotes the derivative of the “dressed” function $\widehat{f}(y)$, which is obtained in the variational scheme from the random potential’s correlation function $f(y)$ (see eq. (\[VVC\])), and in 2+1 dimensions is given by:
$$\widehat{f}(y)=-\frac{g\epsilon _0^2\xi ^2}{4\pi }\frac 1{\xi ^2+y}
\label{f}$$
The full equations, taking into account the possibility of replica-symmetry breaking are given in ref. [@yygold]. In terms of the variational parameters the function $u_0^2(\ell ^{*})$ is given by
$$u_0^2(\ell ^{*})={\frac{2T}L}\sum_{\omega ^{\prime }\neq 0}\frac{1-\cos
(\omega ^{\prime }\ell ^{*})}{\widetilde{\epsilon \ }\omega ^{\prime
}\,^2+\mu -\widetilde{s}_d(\omega ^{\prime })}. \label{u2qp}$$
This quantity has not been calculated in ref. [@yygold]. There we calculated $\left\langle {\bf r}^2(0)\right\rangle $ which does not measure correlations along the $z$-direction.
In the limit $L\rightarrow \infty $ we were able to solve the equations analytically to leading order in $g$. In that limit eq. (\[sd\]) becomes (for $\omega \neq 0$) :
$$\begin{aligned}
\tilde s_d(\omega ) &=&\frac 4\mu \widehat{f}\ ^{\prime \prime }(b_0)-\frac 2%
T\int_0^\infty d\varsigma (1-\cos (\omega \varsigma )) \nonumber \\
&&\times (\widehat{f}\ ^{\prime }(C_0(\varsigma ))-\widehat{f}\ ^{\prime
}(b_0)), \label{sdi}\end{aligned}$$
with
$$C_0(\varsigma )=2T\int_{-\infty }^\infty \frac{d\omega }{2\pi }\frac{1-\cos
(\omega \varsigma )}{\widetilde{\epsilon \ }\omega \,^2+\mu -\widetilde{s}%
_d(\omega )} \label{C0}$$
and $b_0$ given by a similar expression with the cosine term missing in the numerator of eq. (\[C0\]).
Defining
$$\begin{aligned}
\tau &=&T\ /\sqrt{\widetilde{\epsilon }\ \mu },\ \alpha =\tau \ /(\xi
^2+\tau ), \label{tau,al} \\
f_1(\alpha ) &=&1/(1-\alpha )-(1/\alpha )\log (1-\alpha ), \label{f1} \\
f_2(\alpha ) &=&\frac 1\alpha \sum_{k=1}^\infty (k+1)\alpha ^k/k^3
\label{f2} \\
a^2 &=&f_1(\alpha )/f_2(\alpha ),\ A=-\widehat{f}\ ^{\prime \prime }(\tau )\
f_1^2(\alpha )/f_2(\alpha )/\mu , \label{a2,A} \\
s_\infty &=&\widehat{f}\ ^{\prime \prime }(\tau )\ (4+f_1(\alpha ))/\mu ,
\label{sinf}\end{aligned}$$
a good representation of $\widetilde{s}_d(\omega ),\ (\omega \neq 0)$ with the correct behavior at low and high frequencies is
$$\widetilde{s}_d(\omega )=s_\infty +A\mu /(\widetilde{\epsilon \ }\omega
^2+a^2\mu ). \label{sde}$$
(Notice that this function is negative for all $\omega$.) Substituting in eq. (\[C0\]) and expanding the denominator to leading order in the strength of the disorder, we get:
$$\begin{aligned}
u_0^2(\ell ) &=&C_0(\sqrt{\widetilde{\epsilon }\ /\ \mu })=\tau
(1-A/(a^2-1)^2/\mu ) \nonumber \\
&&\ \times (1-e^{-\ell /\ell ^{*}})+\tau A/(a(a^2-1)^2\mu )\times \nonumber
\\
&&(1-e^{-a\ell /\ell ^{*}})+\tau /(2\mu )\times \ (s_\infty +A/(a^2-1))
\nonumber \\
&&\times \ (1-e^{-\ell /\ell ^{*}}-(\ell /\ell ^{*})\ e^{-\ell /\ell ^{*}}).
\label{u2f}\end{aligned}$$
In order to plot the results we measure all distances in units of $\xi $ , we measure the temperature in units of $\epsilon _0\xi $, and the magnetic field in units of $\Phi _0/\xi ^2$ . We observe that the spring constant $%
\mu $ is given in the rescaled units by $B$ and $a_0=1/\sqrt{B}$. We further use $\gamma =1$ for the plots.
Fig. 1 shows a plot of $\sqrt{u_0^2(\ell ^{*})}/a_0$ vs. $T$ for zero disorder (curve a) as well as for $g/2\pi =0.02$ (curve b). We have chosen $B=1/900$. We see that the disorder tends to align the flux lines along the columnar defects, hence decreasing $u^2(T)$. Technically this happens since $%
\widetilde{s}_d(\omega )$ is negative. The horizontal line represents a possible Lindemann constant of 0.15.
In Fig. 2 we show the modified melting line $B_m(T)$ in the presence of columnar disorder. This is obtained from eq. (\[Lind\]) with $c_L=0.15$. We see that it shifts towards higher magnetic fields.
For $T<T_c\approx (\epsilon _0\xi /\gamma )[g^2\epsilon _0/(16\pi ^2\mu \xi
^2)]^{1/6}$, there is a solution with RSB but we will not pursue it further in this paper. This temperature is at the bottom of the range plotted in the figures for columnar disorder. We will pursue the RSB solution only for the case of point disorder, see below. The expression (\[u2f\]) becomes negative for very low temperature. This is an artifact of the truncation of the expansion in the strength of the disorder.
For the case of point defects the problem is equivalent to a directed polymer in a combination of a random potential and a fixed harmonic potential. This problem has been investigated by MP [@mp], who were mainly concerned with the limit of $\mu \rightarrow 0$. In this case the variational quadratic Hamiltonian is parametrized by:
$$\begin{aligned}
h_n &=&\frac 12\int_0^Ldz\sum_a[\widetilde{\epsilon }{\bf \dot r}_a^2+\mu
{\bf r}_a^2] \nonumber \\
&&\ \ -\frac 12\int_0^Ldz\sum_{a,b}^{}s_{ab}\ {\bf r}_a(z)\cdot {\bf r}_b(z),
\label{hnpd}\end{aligned}$$
with the elements of $s_{ab}$ all constants as opposed to the case of columnar disorder.
The replica symmetric solution to the variational equations is simply given by :
$$\begin{aligned}
s &=&s_d=\frac{2\xi }T\widehat{f}\ ^{\prime }(\tau ) \label{s,sd} \\
u_0^2(\ell ) &=&2T\int_{-\infty }^\infty \frac{d\omega }{2\pi }\frac{1-\cos
(\omega \ell )}{\widetilde{\epsilon \ }\omega \,^2+\mu } \left( 1+
\frac{s_d}{ \widetilde{\epsilon \ }\omega \,^2+\mu}\right) \label{u2p}\end{aligned}$$
and hence
$$\begin{aligned}
u_0^2(\ell ) &=&\tau (1-e^{-\ell /\ell ^{*}})+\tau \ s_d\ /\ (2\mu ) \nonumber
\\
&&\ \ \times \ (1-e^{-\ell /\ell ^{*}}-(\ell /\ell ^{*})\ e^{-\ell /\ell
^{*}}). \label{u2p2}\end{aligned}$$
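The two contributions in eq. (\[u2p2\]) come from the standard Lorentzian integrals, with $\tau=T/\sqrt{\widetilde{\epsilon}\mu}$ and $\ell^{*}=\sqrt{\widetilde{\epsilon}/\mu}$:

```latex
2T\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,
  \frac{1-\cos(\omega\ell)}{\widetilde{\epsilon}\,\omega^{2}+\mu}
  = \tau\left(1-e^{-\ell/\ell^{*}}\right),
\qquad
2T\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,
  \frac{1-\cos(\omega\ell)}
       {\left(\widetilde{\epsilon}\,\omega^{2}+\mu\right)^{2}}
  = \frac{\tau}{2\mu}\left(1-e^{-\ell/\ell^{*}}
    -\frac{\ell}{\ell^{*}}\,e^{-\ell/\ell^{*}}\right).
```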
In eq. (\[s,sd\]) $\widehat{f}$ is the same function as defined in eq. (\[f\]), with $g$ replaced by $\widetilde{\Delta }$. As opposed to the case of columnar disorder, in this case $s_d$ is positive and independent of $\omega
$, and hence the mean square displacement $u_0^2(\ell ^{*})$ is bigger than its value for zero disorder. Fig. 1 curve [*c*]{} shows a plot of $\sqrt{%
u_0^2(\ell ^{*})}/a_0$ vs. $T$ for $\widetilde{\Delta }/2\pi =0.8$. Again $%
B=1/900$. For $T<T_{cp}\approx $ $(\epsilon _0\xi /\gamma )(\gamma $ $%
\widetilde{\Delta }/2\pi )^{1/3}$ it is necessary to break replica symmetry, as shown by MP [@mp]. This means that the off-diagonal elements of the variational matrix $s_{ab}$ are not all equal to each other. MP worked out the solution in the limit of $\mu \rightarrow 0$, but it is not difficult to extend it to any value of $\mu$. We have worked out the first-stage RSB solution, which is all that is required for a random potential with short-ranged correlations. The analytical expression is not shown here for lack of space. The solution is represented by curve [*d*]{} in Fig. 1, which consists of upward triangles.
The modified melting line in the presence of disorder is indicated by curve [*c*]{} in Fig. 2 for $T>T_{cp}$. For $T<T_{cp}$ the so-called [*entanglement line*]{} is represented by curve [*d*]{} of filled squares. The value of the magnetic field $B_m(T_{cp})\approx (\Phi _0/\xi ^2)(\gamma
\widetilde{\Delta }/2\pi )^{-2/3}c_L^4$ gives a reasonable agreement with the experiments.
The analytical expressions given in eqs. (\[u2f\]) and (\[u2p2\]), though quite simple, seem to capture the essential features required to reproduce the position of the melting line. The qualitative agreement with experimental results is remarkable, especially the opposite effects of columnar and point disorder on the position of the melting line. The ’as grown’ experimental results correspond to a very small amount of point disorder, and are thus close to the no-disorder line in the figures. At low temperature, the entanglement transition is associated in our formalism with RSB, and is a sort of spin-glass transition in the sense that many minima of the random potential, and hence of the free energy, compete with each other. In this paper we worked out the one-step RSB solution for the case of point disorder. The experiments show that in the case of columnar disorder the transition into the vortex glass seems to be absent. This has to be further clarified theoretically. We have shown that the [*cage model*]{} together with the variational approximation reproduces the main features of the experiments. Effects of the many-body interaction between vortex lines, which are not taken into account by the effective cage model, seem to be of secondary importance. Inclusion of such effects within the variational formalism remains a task for the future.
For point disorder, in the limit of an infinite cage ($\mu \rightarrow
0$), the variational approximation gives a wandering exponent of 1/2 for a random potential with short-ranged correlations [@mp], whereas simulations give a value of 5/8 [@halpin]. This discrepancy does not seem to be of importance with respect to the conclusions obtained in this paper. Another point to notice is that columnar disorder is much more effective in shifting the position of the melting line than point disorder, in the range of parameters considered here. We have used a much weaker value of correlated disorder to achieve a similar or even larger shift of the melting line than for the case of point disorder. The fact that the random potential does not vary along the $z$-axis enhances its effect on the vortex lines.
We thank David Nelson and Eli Zeldov for discussions. We thank the Weizmann Institute for a Michael Visiting Professorship, during which this research was carried out.
G. Blatter [*et al.*]{}, Rev. Mod. Phys. [**66**]{}, 1125 (1994).
E. Zeldov [*et al.*]{}, Nature [**375**]{}, 373 (1995); see also H. Pastoriza [*et al.*]{}, Phys. Rev. Lett. [**72**]{}, 2951 (1994).
M. Feigelman [*et al.*]{}, Phys. Rev. Lett. [**63**]{}, 2303 (1989); A. I. Larkin and V. M. Vinokur, ibid. [**75**]{}, 4666 (1995).
D. S. Fisher, M. P. A. Fisher and D. A. Huse, Phys. Rev. [**B43**]{}, 130 (1990).
T. Giamarchi and P. Le Doussal, Phys. Rev. Lett. [**72**]{}, 1530 (1994); Phys. Rev. [**B52**]{}, 1242 (1995); see also T. Nattermann, Phys. Rev. Lett. [**64**]{}, 2454 (1990).
B. Khaykovitch [*et al.*]{}, Phys. Rev. Lett. [**76**]{}, 2555 (1996) and preprint (1996).
D. Ertas and D. R. Nelson, preprint, cond-mat/9607142 (1996).
D. R. Nelson, Phys. Rev. Lett. [**60**]{}, 1973 (1988); D. R. Nelson and V. M. Vinokur, Phys. Rev. [**B48**]{}, 13060 (1993).
Y. Y. Goldschmidt, Phys. Rev. E [**53**]{}, 343 (1996); see also Phys. Rev. Lett. [**74**]{}, 5162 (1995).
M. Mezard and G. Parisi, J. Phys. I (France) [**1**]{}, 809 (1991).
T. Halpin-Healy and Y.-C. Zhang, Phys. Rep. [**254**]{}, 215 (1995), and references therein.
Figure captions: Fig. 1: Transverse fluctuations in the cage model for (a) no disorder, (b) columnar disorder, (c) point disorder, (d) RSB for point disorder. Fig. 2: Melting line for (a) no disorder, (b) columnar disorder, (c) point disorder, (d) entanglement line for point disorder.
|
{
"pile_set_name": "arxiv"
}
|
{
"short_name": "React App",
"name": "Create React App Sample",
"icons": [
{
"src": "favicon.ico",
"sizes": "64x64 32x32 24x24 16x16",
"type": "image/x-icon"
}
],
"start_url": "./index.html",
"display": "standalone",
"theme_color": "#000000",
"background_color": "#ffffff"
}
|
{
"pile_set_name": "github"
}
|
/* -*- Mode: C++; tab-width: 4; indent-tabs-mode: nil; c-basic-offset: 4 -*-
* vim: set ts=4 sw=4 et tw=99:
*
* ***** BEGIN LICENSE BLOCK *****
* Version: MPL 1.1/GPL 2.0/LGPL 2.1
*
* The contents of this file are subject to the Mozilla Public License Version
* 1.1 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
* http://www.mozilla.org/MPL/
*
* Software distributed under the License is distributed on an "AS IS" basis,
* WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
* for the specific language governing rights and limitations under the
* License.
*
* The Original Code is Mozilla SpiderMonkey JavaScript 1.9 code, released
* May 28, 2008.
*
* The Initial Developer of the Original Code is
* Brendan Eich <brendan@mozilla.org>
*
* Contributor(s):
* David Anderson <danderson@mozilla.com>
* David Mandelin <dmandelin@mozilla.com>
*
* Alternatively, the contents of this file may be used under the terms of
* either of the GNU General Public License Version 2 or later (the "GPL"),
* or the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
* in which case the provisions of the GPL or the LGPL are applicable instead
* of those above. If you wish to allow use of your version of this file only
* under the terms of either the GPL or the LGPL, and not to allow others to
* use your version of this file under the terms of the MPL, indicate your
* decision by deleting the provisions above and replace them with the notice
* and other provisions required by the GPL or the LGPL. If you do not delete
* the provisions above, a recipient may use your version of this file under
* the terms of any one of the MPL, the GPL or the LGPL.
*
* ***** END LICENSE BLOCK ***** */
#if !defined jsjaeger_methodjit_inl_h__ && defined JS_METHODJIT
#define jsjaeger_methodjit_inl_h__
namespace js {
namespace mjit {
enum CompileRequest
{
CompileRequest_Interpreter,
CompileRequest_JIT
};
/* Number of times a script must be called before we run it in the methodjit. */
static const size_t CALLS_BEFORE_COMPILE = 16;
/* Number of loop back-edges we execute in the interpreter before methodjitting. */
static const size_t BACKEDGES_BEFORE_COMPILE = 16;
static inline CompileStatus
CanMethodJIT(JSContext *cx, JSScript *script, JSStackFrame *fp, CompileRequest request)
{
if (!cx->methodJitEnabled)
return Compile_Abort;
JITScriptStatus status = script->getJITStatus(fp->isConstructing());
if (status == JITScript_Invalid)
return Compile_Abort;
if (request == CompileRequest_Interpreter &&
status == JITScript_None &&
!cx->hasRunOption(JSOPTION_METHODJIT_ALWAYS) &&
script->incCallCount() <= CALLS_BEFORE_COMPILE)
{
return Compile_Skipped;
}
if (status == JITScript_None)
return TryCompile(cx, fp);
return Compile_Okay;
}
/*
* Called from a backedge in the interpreter to decide if we should transition to the
* methodjit. If so, we compile the given function.
*/
static inline CompileStatus
CanMethodJITAtBranch(JSContext *cx, JSScript *script, JSStackFrame *fp, jsbytecode *pc)
{
if (!cx->methodJitEnabled)
return Compile_Abort;
JITScriptStatus status = script->getJITStatus(fp->isConstructing());
if (status == JITScript_Invalid)
return Compile_Abort;
if (status == JITScript_None &&
!cx->hasRunOption(JSOPTION_METHODJIT_ALWAYS) &&
cx->compartment->incBackEdgeCount(pc) <= BACKEDGES_BEFORE_COMPILE)
{
return Compile_Skipped;
}
if (status == JITScript_None)
return TryCompile(cx, fp);
return Compile_Okay;
}
}
}
#endif
|
{
"pile_set_name": "github"
}
|
s [ ]
w [a-z0-9A-Z]
W [^a-z0-9A-Z]
d [0-9]
%%
((MERGE.*USING{s}*\()|(EXECUTE{s}*IMMEDIATE{s}*\")|({W}+{d}{s}+HAVING{s}+{d})|(MATCH{s}*[a-zA-Z\\(\\),+\-]+{s}*AGAINST{s}*\()) printf("attack detected");
%%
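For illustration, the alternation above can be exercised outside flex with an ordinary regular-expression engine. The Java class below is a hand-translated approximation of the rule (the class and method names are illustrative, and `\s` is used where the lex source allows only a literal space):

```java
import java.util.regex.Pattern;

public class SqlInjectionProbe {
    // Approximation of the flex alternation above: MERGE ... USING (,
    // EXECUTE IMMEDIATE ", <non-word chars><digit> HAVING <digit>,
    // and MATCH ... AGAINST (
    private static final Pattern ATTACK = Pattern.compile(
        "(MERGE.*USING\\s*\\()" +
        "|(EXECUTE\\s*IMMEDIATE\\s*\")" +
        "|([^a-z0-9A-Z]+[0-9]\\s+HAVING\\s+[0-9])" +
        "|(MATCH\\s*[a-zA-Z(),+\\-]+\\s*AGAINST\\s*\\()");

    public static boolean isAttack(String input) {
        return ATTACK.matcher(input).find();
    }

    public static void main(String[] args) {
        System.out.println(isAttack("MERGE INTO t USING (SELECT 1) s"));   // true
        System.out.println(isAttack("SELECT name FROM users WHERE id = 1")); // false
    }
}
```

Like the flex rule, this is a keyword heuristic, not a parser: it flags suspicious substrings anywhere in the input and is case-sensitive.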
|
{
"pile_set_name": "github"
}
|
# Copyright 2016 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
mojom = "//url/mojo/url.mojom"
public_headers = [ "//url/gurl.h" ]
traits_headers = [ "//url/mojo/url_gurl_struct_traits.h" ]
deps = [
"//url",
]
type_mappings = [ "url.mojom.Url=GURL" ]
|
{
"pile_set_name": "github"
}
|
OS 10.2 - Permanently deleting emails and files
This is my first time posting so I hope I don't screw this up... Does anyone have any advice on how to permanently delete emails and files? I am running OS X 10.3.9 and have deleted files in my trash using the secure empty trash function; however, I have a large number of emails I have deleted in Mail. Are these permanently deleted as well? Secondly, is any of the shareware or freeware out there, such as Shredit, any good? I am concerned that someone is going to try to retrieve deleted data off my computer sometime soon, and I really don't want any emails/files showing up that I have deleted.
If you are using Mail as your email client, your account is set up as a POP3 account that doesn't leave a copy on the server, your Mac is not remotely backed up, and your home folder is local, then your mail lives in /Users/username/Library/Mail. Using "Erase Deleted Messages" from the Mailbox menu will get rid of your mail. Will it be recoverable by a drive recovery company? Possibly. By your company, on the other hand, probably not, unless the above criteria are false.
|
{
"pile_set_name": "pile-cc"
}
|
Whatever You Love, You Are
Whatever You Love, You Are is the fifth studio album by Australian trio, Dirty Three, which was released in March 2000. Cover art is by their guitarist, Mick Turner. Australian musicologist, Ian McFarlane, felt that it showed "deep, rich, emotional musical vistas, and furthered the band’s connection to the music and approach of jazz great John Coltrane".
Reception
Track listing
"Some Summers They Drop Like Flies" – 6:20
"I Really Should've Gone Out Last Night" – 6:55
"I Offered It Up to the Stars & the Night Sky" – 13:41
"Some Things I Just Don't Want to Know" – 6:07
"Stellar" – 7:29
"Lullabye for Christie" – 7:45
References
General
Note: Archived [on-line] copy has limited functionality.
Specific
Category:2000 albums
Category:ARIA Award-winning albums
Category:Dirty Three albums
Category:Touch and Go Records albums
|
{
"pile_set_name": "wikipedia_en"
}
|
<?php
/*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* This software consists of voluntary contributions made by many individuals
* and is licensed under the LGPL. For more information, see
* <http://www.doctrine-project.org>.
*/
namespace Doctrine\ORM\Internal\Hydration;
use Doctrine\DBAL\Connection;
/**
* Hydrator that produces flat, rectangular results of scalar data.
* The created result is almost the same as a regular SQL result set, except
* that column names are mapped to field names and data type conversions take place.
*
* @author Roman Borschel <roman@code-factory.org>
* @since 2.0
*/
class ScalarHydrator extends AbstractHydrator
{
/** @override */
protected function _hydrateAll()
{
$result = array();
$cache = array();
while ($data = $this->_stmt->fetch(\PDO::FETCH_ASSOC)) {
$result[] = $this->_gatherScalarRowData($data, $cache);
}
return $result;
}
/** @override */
protected function _hydrateRow(array $data, array &$cache, array &$result)
{
$result[] = $this->_gatherScalarRowData($data, $cache);
}
}
|
{
"pile_set_name": "github"
}
|
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// +k8s:deepcopy-gen=package
// +k8s:protobuf-gen=package
// +k8s:openapi-gen=true
// +groupName=coordination.k8s.io
package v1 // import "k8s.io/api/coordination/v1"
|
{
"pile_set_name": "github"
}
|
---
abstract: |
  A flexible and performant Persistency Service is a necessary component of any HEP Software Framework. Building a modular, non-intrusive and performant persistency component has been shown to be a very difficult task. In the past, it was very often necessary to sacrifice modularity to achieve acceptable performance. This resulted in a strong dependency of the overall Frameworks on their Persistency subsystems.
  Recent developments in software technology have made it possible to build a Persistency Service which can be used transparently from other Frameworks. Such a Service doesn't impose strong architectural constraints on the overall Framework Architecture, while satisfying high performance requirements. The Java Data Objects standard (JDO) has already been implemented for almost all major databases. It provides truly transparent persistency for any Java object (both internal and external). Objects in other languages can be handled via transparent proxies. Being only a thin layer on top of the underlying database, JDO doesn't introduce any significant performance degradation. Aspect-Oriented Programming (AOP) also makes it possible to treat persistency as an orthogonal Aspect of the Application Framework, without polluting it with persistence-specific concepts.
  All these techniques have been developed primarily (or only) for the Java environment. It is, however, possible to interface them transparently to Frameworks built in other languages, such as C++.
  Fully functional prototypes of flexible and non-intrusive persistency modules have been built for several packages, for example FreeHEP AIDA and the LCG Pool AttributeSet (package Indicium).
author:
- Julius Hřivnáč
title: Transparent Persistence with Java Data Objects
---
JDO
===
Requirements on Transparent Persistence
---------------------------------------
The Java Data Object (JDO) [@JDO1],[@JDO2],[@Standard],[@Portal] standard has been created to satisfy several requirements on the object persistence in Java:
- [**Object Model independence on persistency**]{}:
- Java types are automatically mapped to native storage types.
- 3rd party objects can be persistified (even when their source is not available).
- The source of the persistent class is the same as the source of the transient class. No additional code is needed to make a class persistent.
- All classes can be made persistent (where it makes sense).
- [**Illusion of in-memory access to data**]{}:
- Dirty instances (i.e. objects which have been changed after they have been read) are implicitly updated in the database.
- Caching, synchronization, retrieval and lazy loading are done automatically.
- All objects, referenced from a persistent object, are automatically persistent ([*Persistence by reachability*]{}).
- [**Portability across technologies**]{}:
- A wide range of storage technologies (relational databases, object-oriented databases, files,…) can be transparently used.
- All JDO implementations are exchangeable.
- [**Portability across platforms**]{} is automatically available in Java.
- [**No need for a different language**]{} (DDL, SQL,…) to handle persistency (incl. queries).
- [**Interoperability with Application Servers**]{} (EJB [@EJB],…).
Architecture of Java Data Objects
---------------------------------
The Java Data Objects standard (Java Community Process Open Standard JSR-12) [@Standard] has been created to satisfy the requirements listed in the previous paragraph.
![The Enhancement process.[]{data-label="Enhancement"}](Enhancement.eps){width="135mm"}
The persistence capability is added to a class by the Enhancer (as shown in Figure \[Enhancement\]):
- The Enhancer makes a transient class PersistenceCapable by adding to it all the data and methods needed to provide the persistence functionality. After enhancement, the class implements the PersistenceCapable interface (as shown in Figure \[PersistenceCapable\]).
- The Enhancer is generally applied to a class-file, but it can also be part of a compiler or a loader.
- Enhancing effects can be modified via Persistence Descriptor (XML file).
- All enhancers are compatible. Classes enhanced with one JDO implementation will work automatically with all other implementations.
![Enhancer makes any class PersistenceCapable.[]{data-label="PersistenceCapable"}](PersistenceCapable.eps){width="80mm"}
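To make the idea concrete, the effect of enhancement can be sketched in plain Java. This is our simplified illustration: the `PersistenceCapable` interface below is a two-method stand-in for the much richer `javax.jdo.spi.PersistenceCapable` contract, and the weaving is written by hand rather than performed on the class-file.

```java
// Deliberately simplified stand-in for javax.jdo.spi.PersistenceCapable.
interface PersistenceCapable {
    boolean jdoIsDirty();   // has the instance changed since it was loaded?
    void jdoMarkDirty();    // invoked from every mutator
}

// The transient class exactly as the user wrote it.
class Event {
    private double energy;
    double getEnergy() { return energy; }
    void setEnergy(double e) { energy = e; }
}

// The same class after a (hypothetical) enhancement step: dirty-state
// tracking is woven into the mutators, so a PersistenceManager could
// later flush dirty instances to the database automatically.
class EnhancedEvent extends Event implements PersistenceCapable {
    private boolean dirty = false;
    public boolean jdoIsDirty() { return dirty; }
    public void jdoMarkDirty() { dirty = true; }
    @Override
    void setEnergy(double e) {
        jdoMarkDirty();     // injected by the enhancer
        super.setEnergy(e);
    }
}
```

A real enhancer performs this rewriting on the compiled class-file, so the user’s source stays identical to the transient version.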
The main object a user interacts with is the PersistenceManager. It mediates all interactions with the database, manages the instance lifecycle and serves as a factory for Transactions, Queries and Extents (as described in Figure \[PersistenceManager\]).
![All interactions with JDO are mediated by PersistenceManager.[]{data-label="PersistenceManager"}](PersistenceManager.eps){width="80mm"}
Available Implementations
-------------------------
About a year after the JDO standardization, there are already many implementations available, supporting all existing storage technologies.
JDO Implementations
-------------------
### Commercial JDO Implementations
Following commercial implementations of JDO standard exist:
enJin(Versant), FastObjects(Poet), FrontierSuit(ObjectFrontier), IntelliBO (Signsoft), JDOGenie(Hemisphere), JRelay(Object Industries), KODO(SolarMetric), LiDO(LIBeLIS), OpenFusion(Prism), Orient(Orient), PE:J(HYWY), …
These implementations often have a free community license available.
### Open JDO Implementations
There are already several open JDO implementations available:
- [**JDORI**]{} [@JDORI] (Sun) is the reference and standard implementation. It currently works only with FOStore files. Support for relational databases via JDBC is under development. It is the most standard, but not the most performant implementation.
- [**TJDO**]{} [@TJDO] (SourceForge) is a high quality implementation originally written by the TreeActive company, later released under the GPL license. It supports all important relational databases and the automatic creation of the database schema. It implements the full JDO standard.
- [**XORM**]{} [@XORM] (SourceForge) does not yet support the full JDO standard. It does not automatically generate a database schema; on the other hand, it allows the reuse of existing schemas.
- [**JORM**]{} [@JORM] (JOnAS/ObjectWeb) has a fully functional object-relational mapping; the full JDO implementation is under development.
- [**OJB**]{} [@OJB] (Apache) has a mature object-relational engine. A full JDO interface is not yet provided.
Supported Databases
-------------------
All widely used databases are already supported either by their provider or by a third party:
- [**RDBMS and ODBMS**]{}: Oracle, MS SQL Server, DB2, PointBase, Cloudscape, MS Access, JDBC/ODBC Bridge, Sybase, Interbase, InstantDB, Informix, SAPDB, PostgreSQL, MySQL, Hypersonic SQL, Versant,…
- [**Files**]{}: XML, FOSTORE, flat, C-ISAM,…
The performance of JDO implementations is determined by the native performance of a database. JDO itself introduces a very small overhead.
HEP Applications using JDO
==========================
Trivial Application
-------------------
A simple application using JDO to write and read data is shown in Listing \[Trivial\].
    // Initialization
    PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(properties);
    PersistenceManager pm = pmf.getPersistenceManager();
    Transaction tx = pm.currentTransaction();

    // Writing
    tx.begin();
    ...
    Event event = ...;
    pm.makePersistent(event);
    ...
    tx.commit();

    // Searching using a Java-like query language translated internally
    // to the DB native query language (SQL available too for RDBMS)
    tx.begin();
    Extent extent = pm.getExtent(Track.class, true);
    String filter = "pt > 20.0";
    Query query = pm.newQuery(extent, filter);
    Collection results = query.execute();
    ...
    tx.commit();
Indicium
--------
Indicium [@Indicium] has been created to satisfy the LCG [@LCG] Pool [@Pool] requirements on Metadata management: “To define, accumulate, search, filter and manage Attributes (Metadata) external/additional to existing (Event) data.” These metadata are a generalization of the traditional Paw ntuple concept. They are used in the first phase of the analysis process to make a pre-selection of Events for further processing, so access to them should be efficient. They are closely related to Collections (of Events).
The Indicium package provides an implementation of the AttributeSet (Event Metadata, Tags) for the LCG/Pool project in Java and C++ (with the same API). The core of Indicium is implemented in Java.
All the expressed requirements can only be well satisfied by a system which allows, in principle, any object to act as an AttributeSet. Such a system can be easily built once we realize that these requirements are satisfied by JDO:
- [**AttributeSet**]{} is simply any Object with a reference to another (Event) Object.
- [**Explicit Collection**]{} is just any standard Java Collection.
- [**Implicit Collection**]{} (i.e. all objects of some type T within a Database) is directly the JDO Extent.
Indicium works with any JDO/DB implementation. As all the requirements are directly satisfied by JDO itself, Indicium implements only a simple wrapper and code for database management (database creation, opening, …). That is in fact the only database-specific code.
It is easy to switch between various JDO/DB implementations via a simple properties file. The default Indicium implementation contains configuration for JDORI with FOStore file format and TJDO with Cloudscape or MySQL databases, others are simple to add.
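For illustration, such a properties file can be assembled in code. The `javax.jdo.*` keys are the standard JDO configuration keys, while the concrete class names and URLs below are only indicative of a JDORI/FOStore and a TJDO/MySQL setup and may differ between versions.

```java
import java.util.Properties;

// Sketch of the configuration that selects a JDO implementation.
// In Indicium the same key/value pairs would live in a properties file.
class JdoConfig {
    static Properties forFOStore() {
        Properties p = new Properties();
        p.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                      "com.sun.jdori.fostore.FOStorePMF");            // JDORI
        p.setProperty("javax.jdo.option.ConnectionURL", "fostore:MyDB");
        return p;
    }

    static Properties forMySQL() {
        Properties p = new Properties();
        p.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                      "com.triactive.jdo.PersistenceManagerFactoryImpl"); // TJDO
        p.setProperty("javax.jdo.option.ConnectionURL",
                      "jdbc:mysql://localhost/MyDB");
        return p;
    }
}
```

Switching the back-end then amounts to passing a different Properties object (or file) to `JDOHelper.getPersistenceManagerFactory`.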
The data stored by Indicium are accessible also via native database protocols (like JDBC or SQL) and tools using them.
As already mentioned, Indicium provides just a simple convenience layer on top of JDO, trying to capture standard AttributeSet usage patterns. There are four ways an AttributeSet can be defined:
- [**Assembled**]{} AttributeSet is fully constructed at run-time in a way similar to classical Paw ntuples.
- [**Generated**]{} AttributeSet class is generated from a simple XML specification.
- [**Implementing**]{} AttributeSet can be written by hand to implement the standard AttributeSet Interface.
- [**FreeStyle**]{} AttributeSet can be just about any class. It can be managed by the Indicium infrastructure, only some convenience functionality may be lost.
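The run-time (“Assembled”) flavour can be pictured with a short Java sketch; the class and method names here are our own illustration, not the actual Indicium API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of an ntuple-style AttributeSet whose signature is
// declared at run time, in the spirit of the "Assembled" variant.
class AssembledSet {
    private final Map<String, Class<?>> signature = new LinkedHashMap<>();
    private final Map<String, Object> values = new LinkedHashMap<>();

    void declare(String name, Class<?> type) { signature.put(name, type); }

    void set(String name, Object value) {
        // enforce the run-time declared signature
        Class<?> type = signature.get(name);
        if (type == null || !type.isInstance(value))
            throw new IllegalArgumentException("bad attribute: " + name);
        values.put(name, value);
    }

    Object get(String name) { return values.get(name); }
}
```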
To satisfy also the requirements of C++ users, the C++ interface of Indicium has been created in the form of JACE [@JACE] proxies. This way, C++ users can directly use the Indicium Java classes from a C++ program. The CIndicium Architecture is shown in Figure \[AttributeSet\]; an example of its use is shown in Listing \[CIndicium\].
![CIndicium Architecture.[]{data-label="AttributeSet"}](AttributeSet.eps){width="135mm"}
    // Construct Signature
    Signature signature("AssembledClass");
    signature.add("j", "int", "Some Integer Number");
    signature.add("y", "double", "Some Double Number");
    signature.add("s", "String", "Some String");

    // Obtain Accessor to database
    Accessor accessor = AccessorFactory::createAccessor("MyDB.properties");

    // Create Collection
    accessor.createCollection("MyCollection", signature, true);

    // Write AttributeSets into database
    AssembledAttributeSet* as;
    for (int i = 0; i < 100; i++) {
      as = new AssembledAttributeSet(signature);
      as->set("j", ...);
      as->set("y", ...);
      as->set("s", ...);
      accessor.write(*as);
    }

    // Search database
    std::string filter = "y > 0.5";
    Query query = accessor.newQuery(filter);
    Collection collection = query.execute();
    std::cout << "First: " << collection.toArray()[0].toString() << std::endl;
AIDA Persistence
----------------
JDO has been used to provide a basic persistency service for the FreeHEP [@FreeHEP] reference implementation of AIDA [@AIDA]. Three kinds of extension to the existing implementation have been required:
- Implementation of the IStore interface as AidaJDOStore.
- Creation of the XML description for each AIDA class (for example see Listing \[AIDA\]).
- Several small changes to existing classes, like the creation of wrappers around arrays of primitive types, etc.
    <jdo>
      <package name="hep.aida.ref.histogram">
        <class name="Histogram2D"
               persistence-capable-superclass="hep.aida.ref.histogram.Histogram">
        </class>
      </package>
    </jdo>
It has become clear that the AIDA persistence API is not sufficient: it has to be made richer to allow more control over persistent objects, better searching capabilities, etc.
Minerva
-------
Minerva [@Minerva] is a lightweight Java Framework which implements main Architectural principles of the ATLAS C++ Framework Athena [@Athena]:
- [**Algorithm - Data Separation**]{}: Algorithmic code is separated from the data on which it operates. Algorithms can be explicitly called and don’t have a persistent state (except for parameters). Data are potentially persistent and processed by Algorithms.
- [**Persistent - Transient Separation**]{}: The Persistency mechanism is implemented by dedicated components and has no impact on the definition of the transient Interfaces. Low-level Persistence technologies can be replaced without changing the other Framework components (except for possible configuration). A specific definition of the Transient-Persistent mapping is possible, but is not required.
- [**Implementation Independence**]{}: There are no implementation-specific constructs in the definition of the interfaces. In particular, all Interfaces are defined in an implementation-independent way, and all public objects (i.e. all objects which are exchanged between components and which subsequently appear in the Interface definitions) are identifiable by implementation-independent Identifiers.
- [**Modularity**]{}: All components are explicitly designed with interchangeability in mind. This implies that the main deliverables are simple and precisely defined general interfaces; an existing implementation of the various modules serves mainly as a Reference implementation.
Minerva scheduling is based on the InfoBus [@InfoBus] Architecture:
- Algorithms are [*Data Producers*]{} or [*Data Consumers*]{} (or both).
- Algorithms declare their supported I/O types.
- Scheduling is done implicitly. An Algorithm runs when it has all its inputs ready.
- Both Algorithms and Services run as (static or dynamic) Servers.
- The environment is naturally multi-threaded.
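The implicit, data-driven scheduling can be illustrated with a toy single-threaded sketch (our simplification: real Minerva Servers run concurrently, and the names below are invented). Each Algorithm declares its input and output types and fires as soon as all of its inputs have been produced.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy data-driven scheduler: an algorithm runs when all declared
// input types are available; its output type then becomes available.
class Scheduler {
    record Algorithm(String name, Set<String> inputs, String output) {}

    // Returns the order in which the algorithms actually fired.
    static List<String> run(List<Algorithm> algos, Set<String> available) {
        Set<String> data = new HashSet<>(available);
        List<String> fired = new ArrayList<>();
        boolean progress = true;
        while (progress) {
            progress = false;
            for (Algorithm a : algos) {
                if (!fired.contains(a.name()) && data.containsAll(a.inputs())) {
                    fired.add(a.name());
                    data.add(a.output());   // output becomes available
                    progress = true;
                }
            }
        }
        return fired;
    }
}
```

Note that the firing order is determined solely by data availability, not by the order in which the algorithms were declared.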
Overview of the Minerva Architecture is shown in Figure \[InfoBus\].
![Overview of the Minerva Architecture.[]{data-label="InfoBus"}](InfoBus.eps){width="135mm"}
It is very easy to configure and run Minerva. For example, one can create a Minerva run with 5 parallel Servers: two of them read Events from two independent databases, one processes each Event, and the last two write the newly processed Events to two new databases depending on the Event characteristics. (See Figure \[Minerva\] for a schema of such a run and Listing \[MinervaScript\] for its steering script.)
![Example of a Minerva run.[]{data-label="Minerva"}](Minerva.eps){width="80mm"}
    new Algorithm(<Algorithm properties>);
    new ObjectOutput(<db3>, <Event properties1>);
    new ObjectOutput(<db4>, <Event properties2>);
    new ObjectInput(<db1>);
    new ObjectInput(<db2>);
: Example of steering script for a Minerva run.[]{data-label="MinervaScript"}
Minerva also has a simple but powerful modular Graphical User Interface which allows other components to be plugged in easily, such as the BeanShell [@BeanShell] command-line interface, the JAS [@JAS] histogramming, the ObjectBrowser [@ObjectBrowser], etc. Figure \[GUI\] and Figure \[ObjectBrowser\] show examples of running Minerva with various interactive plugins loaded.
![Minerva GUI with interactive plugins.[]{data-label="GUI"}](GUI.eps){width="135mm"}
![Minerva with the ObjectBrowser plugin.[]{data-label="ObjectBrowser"}](ObjectBrowser.eps){width="135mm"}
Prototypes using JDO
====================
Object Evolution
----------------
It is often necessary to change an object’s shape while keeping its content and identity. This functionality is especially needed in the persistency domain to support [*Schema Evolution*]{} (Versioning) or [*Object Mapping*]{} (DB Projection), i.e. retrieving an Object of type A dressed as an Object of another type B. This functionality is not addressed by JDO. In practice, it is handled either on the lower level (in a database) or on the higher level (in the overall framework, for example EJB).
It is, however, possible to implement Object Evolution for JDO with the help of Dynamic Proxies and Aspects.
Let’s suppose that a user wants to read an Object of a type A (of an Interface IA) dressed as an Object of another Interface IB. To enable that, four components should co-operate (as shown in Fig \[Evolution\]):
- JDO Enhancer enhances class A so it is PersistenceCapable and it is managed by JDO PersistenceManager.
- AspectJ [@AspectJ] adds a read-callback with the mapping A $\rightarrow$ IB. This is called automatically when JDO reads an object A.
- A simple database of mappers provides a suitable mapping between A and IB.
- DynamicProxy delivers the content of the Object A with the interfaces IB:\
$IB\ b = (IB)DynamicProxy.newInstance(A, IB);$.
All those manipulations are of course hidden from the End User.
![Support for Object Evolution.[]{data-label="Evolution"}](Evolution.eps){width="80mm"}
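The DynamicProxy step maps naturally onto the standard `java.lang.reflect.Proxy` machinery. The sketch below is our minimal illustration: the “mapper database” shrinks to a hard-coded method translation, and the interface and class names are invented for the example.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// The target interface IB under which the stored object is delivered.
interface IB { double energy(); }

// The stored class A; it knows nothing about IB.
class A {
    double getE() { return 7.5; }
}

class Evolution {
    // Dress an instance of A as an IB, forwarding each IB call to the
    // corresponding A method according to a (here trivial) mapping.
    static IB dress(final A a) {
        InvocationHandler h = (proxy, method, args) -> {
            if (method.getName().equals("energy"))
                return a.getE();
            throw new UnsupportedOperationException(method.getName());
        };
        return (IB) Proxy.newProxyInstance(
            IB.class.getClassLoader(), new Class<?>[]{IB.class}, h);
    }
}
```

In the full scheme, the mapping inside the handler would be looked up in the mapper database rather than hard-coded.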
Foreign References
------------------
HEP data are often stored in sets of independent databases, each one managed independently. This architecture does not directly support references between objects from different databases (while references inside one database are managed directly by the JDO support for Persistence by Reachability). As in the case of Object Evolution, foreign references are usually resolved either on the lower level (i.e. all databases are managed by one storage manager and JDO operates on top) or on the higher level (for example by the EJB framework).
Another possibility is to use a similar Architecture as in the case of Object Evolution with Dynamic Proxy delivering foreign Objects.
Let’s suppose that a User reads an object A which contains a reference to another object B, which is actually stored in a different database (and thus managed by a different PersistenceManager). In this case the database with the object A doesn’t in fact contain the object B, but a DynamicProxy object. The object B can be transparently retrieved using three co-operating components (as shown on Fig \[References\]):
- When reference from an object A to an object B is requested, JDO delivers DynamicProxy instead.
- The DynamicProxy asks PersistenceManagerFactory for a PersistenceManager which handles the object B. It then uses that PersistenceManager to get the object B and casts itself into it.
- The PersistenceManagerFactory provides this information by interrogating the DBcatalog (possibly a Grid Service).
All those manipulations are of course hidden from the End User.
![Support for Foreign References.[]{data-label="References"}](References.eps){width="80mm"}
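A reduced model of this lazy resolution (our illustration; the catalog and the databases shrink to in-memory maps, and no JDO calls are made):

```java
import java.util.Map;

// A stand-in for the DynamicProxy stored in place of the foreign
// object B: it remembers only B's id, asks the "catalog" which
// database owns that id on first access, and caches the result.
class ForeignRef<T> {
    private final String id;
    private final Map<String, Map<String, T>> catalog; // db name -> (id -> object)
    private T resolved;

    ForeignRef(String id, Map<String, Map<String, T>> catalog) {
        this.id = id;
        this.catalog = catalog;
    }

    T get() {                         // transparent, lazy resolution
        if (resolved == null)
            for (Map<String, T> db : catalog.values())
                if (db.containsKey(id)) { resolved = db.get(id); break; }
        return resolved;
    }
}
```

In the full scheme the lookup goes through the PersistenceManagerFactory and the DBcatalog instead of iterating over local maps.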
Summary
=======
It has been shown that the JDO standard provides a suitable foundation for the persistence service of HEP applications.
Two major characteristics of persistence solutions based on JDO are:
- Non-intrusiveness.
- A wide range of available JDO implementations, both commercial and free, giving access to all major databases.
JDO profits from the native databases functionality and performance (SQL queries,...), but presents it to users in a native Java API.
[99]{}
A more detailed talk about JDO:\
[http://hrivnac.home.cern.ch/hrivnac/Activities/2002/June/JDO]{}
A more detailed talk about JDO and Indicium:\
[http://hrivnac.home.cern.ch/hrivnac/Activities/2002/November/Indicium]{}
Java Data Objects Standard:\
[http://java.sun.com/products/jdo]{}
Java Data Objects Portal:\
[http://www.jdocentral.com]{}
JDO Reference Implementation (JDORI):\
[http://access1.sun.com/jdo]{}
TJDO:\
[http://tjdo.sourceforge.net]{}
XORM:\
[http://xorm.sourceforge.net]{}
JORM:\
[http://jorm.objectweb.org]{}
OJB:\
[http://db.apache.org/ojb/]{}
Indicium:\
[http://hrivnac.home.cern.ch/hrivnac/Activities/Packages/Indicium]{}
AIDA:\
[http://aida.freehep.org]{}
FreeHEP Library:\
[http://java.freehep.org]{}
Minerva:\
[http://hrivnac.home.cern.ch/hrivnac/Activities/Packages/Minerva]{}
JACE:\
[http://reyelts.dyndns.org:8080/jace/release/docs/index.html]{}
Lightweight Scripting for Java (BeanShell):\
[http://www.beanshell.org]{}
InfoBus:\
[http://java.sun.com/products/javabeans/infobus/]{}
Java Analysis Studio (JAS):\
[http://jas.freehep.org]{}
Object Browser:\
[http://hrivnac.home.cern.ch/hrivnac/Activities/Packages/ObjectBrowser/]{}
AspectJ:\
[http://www.eclipse.org/aspectj/]{}
Enterprise Java Beans (EJB):\
[http://java.sun.com/products/ejb]{}
ATLAS C++ Framework (Athena):\
[http://atlas.web.cern.ch/ATLAS/GROUPS/SOFTWARE/OO/architecture/General/index.html]{}
LCG Computing Grid Project (LCG):\
[http://wenaus.home.cern.ch/wenaus/peb-app]{}
LCG Persistency Framework (Pool):\
[http://lcgapp.cern.ch/project/persist]{}
CIBC Poll: Nearly half of all Canadians with debt not making progress in paying it down
Many say they simply don't have the money, but may be missing
opportunities to get advice about how to reduce their debt
TORONTO, June 5, 2013 /CNW/ - A new CIBC (TSX: CM) (NYSE: CM) Poll conducted by Harris/Decima reveals that half of Canadians with debt say their debt level is the same or higher than it was a year ago, despite prior CIBC polls showing debt repayment as the top priority for Canadians in 2013.
Highlights of the poll include:
- 71 per cent of Canadians said they currently carry some form of debt, in line with the national average in a similar poll conducted last year (72 per cent)
- Among Canadians with debt, 21 per cent say their level of debt has increased in the last 12 months, while another 28 per cent say their debt level has stayed the same - which indicates nearly half (49 per cent) of Canadians with debt did not make progress towards paying it down in the past year
- The top reason cited for not making progress on debt reduction was not having the money to do so
- 50 per cent said they have reduced their debt in the last year
"Though Canadians have identified paying down debt as their top
financial priority for the past three years, our poll shows almost an
even split between those who are making strides and those who aren't,"
said Christina Kramer, Executive Vice President, Retail Distribution
and Channel Strategy, CIBC. "Today's historically low interest rates
represent a real opportunity to reduce your total debt level, however
to take advantage of these low rates it is critical that Canadians have
a plan to make that happen."
CIBC's annual Financial Priorities Poll, released in January 2013, found
that paying down debt was the top financial priority of Canadians for
the third consecutive year.
"Not Having the Money" Cited as Top Reason for not Making Progress
Among those Canadians who said they aren't making progress on debt
repayment, the top reason provided was they don't have the money to put
against what they owe (29 per cent), followed by unplanned expenses which affected their ability to pay
more towards their debt (12 per cent).
A CIBC study from earlier this year shows that despite being a financial
priority, debt is not top of mind when it comes to getting advice. When
Canadians were asked what topics come to mind about a conversation they
may have with an advisor, only 6 per cent cited debt.
"It can be challenging to find the money each month to put towards
reducing your debt, but our poll clearly shows that many Canadians are
doing just that despite having the same everyday financial pressures of
those who say they are not making progress," said Ms. Kramer.
She noted that with many Canadians avoiding conversations about debt
management, they are missing an opportunity to get personalized advice
and put a plan in place.
"You should talk with an advisor about your debt management goals the
same way you would talk to them about your goals for retirement,
because your finances are all connected," added Ms. Kramer. "A
conversation with an advisor can lead to a plan that puts you on
track to achieve your broader financial goals."
Advice on Managing Debt:
CIBC offers these tips to help Canadians take charge of their finances
and reduce debt as part of their long term financial plan.
- Make lump sum payments to higher interest debt first to reduce interest costs
- If you have debt, work with an advisor to structure it to minimize your overall interest costs by utilizing debt products that offer a lower interest rate and having a strategy to pay these balances down in a specific time frame
- While interest rates remain near historic lows, don't ignore the long term benefits of making small adjustments to your payment today. Setting your debt payment even slightly higher than your required payment can reduce your overall interest costs and help you become debt free faster
- Use free budgeting tools to help you stay on budget - CIBC CreditSmart, available to CIBC credit card holders, allows you to set customized budgets and receive spend alerts if you exceed your planned budget for the month, helping you stay on top of your everyday budgeting and saving
KEY POLL FINDINGS
Percentage of Canadians currently managing some form of debt, by region:

                               2013    2012
    National                    71%     72%
    Atlantic Canada             79%     78%
    Quebec                      71%     72%
    Ontario                     71%     69%
    Manitoba and Saskatchewan   73%     77%
    Alberta                     69%     75%
    B.C.                        64%     71%
Percentage of Canadians currently managing some form of debt, by age:

                   2013    2012
    National        71%     72%
    18-24           59%     51%
    25-34           82%     84%
    35-44           79%     83%
    45-54           78%     78%
    55-64           66%     67%
    65 and over     56%     56%
Among Canadians with debt, percentage of those that say they have increased their debt over the past 12 months, by region:

    National                    21%
    Atlantic Canada              8%
    Quebec                      24%
    Ontario                     23%
    Manitoba and Saskatchewan   24%
    Alberta                     18%
    British Columbia            21%
Among Canadians with debt, percentage of those that say their level of debt has stayed the same over the past 12 months, by region:

    National                    28%
    Atlantic Canada             32%
    Quebec                      33%
    Ontario                     26%
    Manitoba and Saskatchewan   23%
    Alberta                     24%
    British Columbia            31%
*Each week, Harris/Decima interviews just over 1000 Canadians through teleVox, the company's national telephone omnibus survey. These data were gathered in samples of 2002 Canadians between March 28 and April 7, 2013 and 1002 Canadians between April 25 and 28, 2013. Samples of this size have a margin of error of +/-2.2%, 19 times out of 20, and +/-3.1%, 19 times out of 20, respectively.
CIBC is a leading North American financial institution with over 11
million personal banking and business clients. CIBC offers a full range
of products and services through its comprehensive electronic banking
network, branches and offices across Canada, and has offices in the
United States and around the world. You can find other news releases
and information about CIBC in our Media Centre on our corporate website
at www.cibc.com.
[**The srank Conjecture on Schur’s $Q$-Functions**]{}
William Y. C. Chen$^{1}$, Donna Q. J. Dou$^2$,\
Robert L. Tang$^3$ and Arthur L. B. Yang$^{4}$\
Center for Combinatorics, LPMC-TJKLC\
Nankai University, Tianjin 300071, P. R. China\
$^{1}$[chen@nankai.edu.cn]{}, $^{2}$[qjdou@cfc.nankai.edu.cn]{}, $^{3}$[tangling@cfc.nankai.edu.cn]{}, $^{4}$[yang@nankai.edu.cn]{}
**Abstract.** We show that the shifted rank, or srank, of any partition $\lambda$ with distinct parts equals the lowest degree of the terms appearing in the expansion of Schur’s $Q_{\lambda}$ function in terms of power sum symmetric functions. This gives an affirmative answer to a conjecture of Clifford. As pointed out by Clifford, the notion of the srank can be naturally extended to a skew partition $\lambda/\mu$ as the minimum number of bars among the corresponding skew bar tableaux. While the srank conjecture is not valid for skew partitions, we give an algorithm to compute the srank.
**MSC2000 Subject Classification:** 05E05, 20C25
Introduction
============
The main objective of this paper is to answer two open problems raised by Clifford [@cliff2005] on sranks of partitions with distinct parts, skew partitions and Schur’s $Q$-functions. For any partition $\lambda$ with distinct parts, we give a proof of Clifford’s srank conjecture that the lowest degree of the terms in the power sum expansion of Schur’s $Q$-function $Q_{\lambda}$ is equal to the number of bars in a minimal bar tableau of shape $\lambda$. Clifford [@cliff2003; @cliff2005] also proposed the open problem of determining the minimum number of bars among bar tableaux of a skew shape $\lambda/\mu$. As noted by Clifford [@cliff2003], this minimum number can be naturally regarded as the shifted rank, or srank, of $\lambda/\mu$, denoted $\mathrm{srank}(\lambda/\mu)$. Starting from a given skew bar tableau, we present an algorithm that generates another skew bar tableau without increasing the number of bars; iterating this algorithm eventually leads to a bar tableau with the minimum number of bars.
Schur’s $Q$-functions arise in the study of the projective representations of symmetric groups [@schur1911], see also, Hoffman and Humphreys [@hofhum1992], Humphreys [@humphr1986], Józefiak [@jozef1989], Morris [@morri1962; @morri1979] and Nazarov [@nazar1988]. Shifted tableaux are closely related to Schur’s $Q$-functions analogous to the role of ordinary tableaux to the Schur functions. Sagan [@sagan1987] and Worley [@worley1984] have independently developed a combinatorial theory of shifted tableaux, which includes shifted versions of the Robinson-Schensted-Knuth correspondence, Knuth’s equivalence relations, Schützenberger’s jeu de taquin, etc. The connections between this combinatorial theory of shifted tableaux and the theory of projective representations of the symmetric groups are further explored by Stembridge [@stemb1989].
Clifford [@cliff2005] studied the srank of shifted diagrams for partitions with distinct parts. Recall that the rank of an ordinary partition is defined as the number of boxes on the main diagonal of the corresponding Young diagram. Nazarov and Tarasov [@naztar2002] found an important generalization of the rank of an ordinary partition to a skew partition in their study of tensor products of Yangian modules. A general theory of border strip decompositions and border strip tableaux of skew partitions is developed by Stanley [@stanl2002], and it has been shown that the rank of a skew partition is the least number of strips to construct a minimal border strip decomposition of the skew diagram. Motivated by Stanley’s theorem, Clifford [@cliff2005] generalized the rank of a partition to the rank of a shifted partition, called srank, in terms of the minimal bar tableaux.
On the other hand, Clifford has noticed that the srank is closely related to Schur’s $Q$-function, as suggested by the work of Stanley [@stanl2002] on the rank of a partition. Stanley introduced a degree operator by taking the degree of the power sum symmetric function $p_{\mu}$ as the number of nonzero parts of the indexing partition $\mu$. Furthermore, Clifford and Stanley [@clista2004] defined the bottom Schur functions to be the sum of the lowest degree terms in the expansion of the Schur functions in terms of the power sums. In [@cliff2005] Clifford studied the lowest degree terms in the expansion of Schur’s $Q$-functions in terms of power sum symmetric functions and conjectured that the lowest degree of the Schur’s $Q$-function $Q_{\lambda}$ is equal to the srank of $\lambda$. Our first result is a proof of this conjecture.
However, in general, the lowest degree of the terms, which appear in the expansion of the skew Schur’s $Q$-function $Q_{\lambda/\mu}$ in terms of the power sums, is not equal to the srank of the shifted skew diagram of $\lambda/\mu$. This is different from the case for ordinary skew partitions and skew Schur functions. Instead, we will take an algorithmic approach to the computation of the srank of a skew partition. It would be interesting to find an algebraic interpretation in terms of Schur’s $Q$-functions.
Shifted diagrams and bar tableaux {#sect2}
=================================
Throughout this paper we will adopt the notation and terminology on partitions and symmetric functions in [@macdon1995]. A *partition* $\lambda$ is a weakly decreasing sequence of positive integers $\lambda_1\geq \lambda_2\geq \ldots\geq \lambda_k$, denoted $\lambda=(\lambda_1, \lambda_2, \ldots, \lambda_k)$, and $k$ is called the *length* of $\lambda$, denoted $\ell(\lambda)$. For convenience we may add sufficient 0’s at the end of $\lambda$ if necessary. If $\sum_{i=1}^k\lambda_i=n$, we say that $\lambda$ is a partition of the integer $n$, denoted $\lambda\vdash n$. For each partition $\lambda$ there exists a geometric representation, known as the Young diagram, which is an array of squares in the plane justified from the top and left corner with $\ell(\lambda)$ rows and $\lambda_i$ squares in the $i$-th row. A partition is said to be *odd* (resp. even) if it has an odd (resp. even) number of even parts. Let $\mathcal{P}^o(n)$ denote the set of all partitions of $n$ with only odd parts. We will call a partition *strict* if all its parts are distinct. Let $\mathcal{D}(n)$ denote the set of all strict partitions of $n$. For each partition $\lambda\in \mathcal{D}(n)$, let $S(\lambda)$ be the shifted diagram of $\lambda$, which is obtained from the Young diagram by shifting the $i$-th row $(i-1)$ squares to the right for each $i>1$. For instance, Figure \[shifted diagram\] illustrates the shifted diagram of shape $(8,7,5,3,1)$.
[Figure \[shifted diagram\]: the shifted diagram $S(\lambda)$ for $\lambda=(8,7,5,3,1)$.]
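As a concrete illustration, the shifted diagram can be generated mechanically. The following Python sketch (the function name `shifted_diagram` is ours, not from the literature) lists the squares of $S(\lambda)$ as $(i,j)$ coordinates, with row $i$ shifted $i-1$ columns to the right.

```python
def shifted_diagram(la):
    """Squares of S(la): row i (1-indexed) occupies columns i, ..., i + la[i-1] - 1."""
    return {(i, j) for i, part in enumerate(la, start=1)
            for j in range(i, i + part)}

cells = shifted_diagram((8, 7, 5, 3, 1))
assert len(cells) == 8 + 7 + 5 + 3 + 1          # |S(la)| equals |la|
assert min(j for i, j in cells if i == 2) == 2  # row 2 starts one square to the right
```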
Given two partitions $\lambda$ and $\mu$, if for each $i$ we have $\lambda_i\geq \mu_i$, then the skew partition $\lambda/\mu$ is defined to be the diagram obtained from the diagram of $\lambda$ by removing the diagram of $\mu$ at the top-left corner. Similarly, the skew shifted diagram $S(\lambda/\mu)$ is defined as the set-theoretic difference of $S(\lambda)$ and $S(\mu)$.
Now we recall the definitions of bars and bar tableaux as given in Hoffman and Humphreys [@hofhum1992]. Let $\lambda\in \mathcal
{D}(n)$ be a partition with length $\ell(\lambda)=k$. Fixing an odd positive integer $r$, three subsets $I_{+}, I_{0}, I_{-}$ of integers between $1$ and $k$ are defined as follows: $$\begin{aligned}
I_{+}& = &\{i: \lambda_{j+1}<\lambda_i-r<\lambda_j\: \mbox{for
some }
j\leq k,\: \mbox{taking}\:\lambda_{k+1}=0\},\\[5pt]
I_{0} & = & \{i: \lambda_i=r\},\\[5pt]
I_{-} & = & \{i: r-\lambda_{i}=\lambda_j \:\mbox{for some} \:j\:
\mbox{with} \:i<j\leq k\}.\end{aligned}$$
Let $I(\lambda,r)=I_{+}\cup I_{0}\cup I_{-}$. For each $i\in
I(\lambda,r)$, we define a new strict partition $\lambda(i,r)$ of $\mathcal {D}(n-r)$ in the following way:
- If $i\in I_{+}$, then $\lambda_i>r$, and let $\lambda(i,r)$ be the partition obtained from $\lambda$ by removing $\lambda_i$ and inserting $\lambda_i-r$ between $\lambda_j$ and $\lambda_{j+1}$.
- If $i\in I_{0}$, let $\lambda(i,r)$ be the partition obtained from $\lambda$ by removing $\lambda_i$.
- If $i\in I_{-}$, then let $\lambda(i,r)$ be the partition obtained from $\lambda$ by removing both $\lambda_i$ and $\lambda_j$.
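These definitions transcribe directly into code. The sketch below (the helper names `bar_indices` and `remove_bar` are ours) computes the three index sets and the reduced strict partition $\lambda(i,r)$; partitions are tuples in strictly decreasing order.

```python
def bar_indices(la, r):
    """The sets I_+, I_0, I_- for the strict partition la and odd r."""
    k = len(la)
    ext = list(la) + [0]                       # la_{k+1} = 0
    I_plus = [i for i in range(1, k + 1)
              if any(ext[j] < la[i-1] - r < ext[j-1] for j in range(1, k + 1))]
    I_zero = [i for i in range(1, k + 1) if la[i-1] == r]
    I_minus = [i for i in range(1, k + 1)
               if any(r - la[i-1] == la[j-1] for j in range(i + 1, k + 1))]
    return I_plus, I_zero, I_minus

def remove_bar(la, i, r):
    """The strict partition la(i, r), for i in I(la, r)."""
    rest = [p for idx, p in enumerate(la, start=1) if idx != i]
    if la[i-1] == r:                           # i in I_0: delete la_i
        return tuple(rest)
    if la[i-1] > r:                            # i in I_+: replace la_i by la_i - r
        return tuple(sorted(rest + [la[i-1] - r], reverse=True))
    j = la.index(r - la[i-1]) + 1              # i in I_-: delete la_i and la_j
    return tuple(p for idx, p in enumerate(la, start=1) if idx not in (i, j))
```

For $\lambda=(9,7,6,3,1)$ and $r=7$ this gives $I_+=\{1\}$, $I_0=\{2\}$, $I_-=\{3\}$, matching the three bars in Figure \[bar tableau\].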
Meanwhile, for each $i\in I(\lambda,r)$, the associated $r$-bar is given as follows:
- If $i\in I_{+}$, the $r$-bar consists of the rightmost $r$ squares in the $i$-th row of $S(\lambda)$, and we say that the $r$-bar is of Type $1$.
- If $i\in I_{0}$, the $r$-bar consists of all the squares of the $i$-th row of $S(\lambda)$, and we say that the $r$-bar is of Type $2$.
- If $i\in I_{-}$, the $r$-bar consists of all the squares of the $i$-th and $j$-th rows, and we say that the $r$-bar is of Type $3$.
For example, as shown in Figure \[bar tableau\], the squares filled with $6$ are a $7$-bar of Type $1$, the squares filled with $4$ are a $3$-bar of Type $2$, and the squares filled with $3$ are a $7$-bar of Type $3$.
[Figure \[bar tableau\]: a bar tableau of shape $(9,7,6,3,1)$; its rows read $1\,1\,6\,6\,6\,6\,6\,6\,6$, $1\,2\,2\,2\,5\,5\,5$, $3\,3\,3\,3\,3\,3$, $4\,4\,4$, and $3$.]
A *bar tableau* of shape $\lambda$ is an array of positive integers of shape $S(\lambda)$ subject to the following conditions:
1.  It is weakly increasing in every row;

2.  The number of parts equal to $i$ is odd for each positive integer $i$;

3.  Each positive integer $i$ can appear in at most two rows, and if $i$ appears in two rows, then these two rows must begin with $i$;

4.  The composition obtained by removing all squares filled with integers larger than some $i$ has distinct parts.
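The four conditions can be verified mechanically. Below is a Python sketch (our own helper; rows are listed top to bottom) that checks a candidate filling against conditions (1)-(4):

```python
from collections import Counter

def is_bar_tableau(rows):
    """Check conditions (1)-(4) for a filling given as lists of row entries."""
    counts = Counter(v for r in rows for v in r)
    # (1) weakly increasing in every row
    if any(list(r) != sorted(r) for r in rows):
        return False
    # (2) every value occurs an odd number of times
    if any(m % 2 == 0 for m in counts.values()):
        return False
    # (3) a value appears in at most two rows; if in two, both rows start with it
    for v in counts:
        rs = [r for r in rows if v in r]
        if len(rs) > 2 or (len(rs) == 2 and any(r[0] != v for r in rs)):
            return False
    # (4) each truncation to entries <= v has distinct nonzero parts
    for v in counts:
        parts = [p for p in (sum(1 for x in r if x <= v) for r in rows) if p]
        if len(parts) != len(set(parts)):
            return False
    return True
```

On the tableau of Figure \[bar tableau\] the check succeeds, while a filling whose repeated value does not start both of its rows fails condition (3).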
We say that a bar tableau $T$ is of type $\rho=(\rho_1,\rho_2,\ldots)$ if the total number of $i$’s appearing in $T$ is $\rho_i$. For example, the bar tableau in Figure \[weight\] is of type $(3,1,1,1)$. For a bar tableau $T$ of shape $\lambda$, we define its weight $wt(T)$ recursively by the following procedure. If $T$ is empty, let $wt(T)=1$. Let $\varepsilon(\lambda)$ denote the parity of the partition $\lambda$, i.e., $\varepsilon(\lambda)=0$ if $\lambda$ has an even number of even parts; otherwise, $\varepsilon(\lambda)=1$. Suppose that the largest numbers in $T$ form an $r$-bar, which is associated with an index $i\in I(\lambda, r)$. Let $j$ be the integer that occurs in the definitions of $I_{+}$ and $I_{-}$. Let $T'$ be the bar tableau of shape $\lambda(i, r)$ obtained from $T$ by removing this $r$-bar. Now, let $$wt(T)=n_i\, wt(T'),$$ where $$n_i=\left\{\begin{array}{cc}
(-1)^{j-i}2^{1-\varepsilon(\lambda)},&
\mbox{if}\ i\in I_{+},\\[6pt]
(-1)^{\ell(\lambda)-i},& \mbox{if}\ i\in I_{0},\\[6pt]
(-1)^{j-i+\lambda_i}2^{1-\varepsilon(\lambda)},& \mbox{if}\ i\in
I_{-}.
\end{array}
\right.$$ For example, the weight of the bar tableau $T$ in Figure \[weight\] equals $$wt(T)=(-1)^{1-1}2^{1-0}\cdot(-1)^{1-1}2^{1-1}\cdot(-1)^{2-2}
\cdot(-1)^{1-1}=2.$$
[Figure \[weight\]: a bar tableau $T$ of shape $(5,1)$ and type $(3,1,1,1)$; its rows read $1\,1\,1\,3\,4$ and $2$.]
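The multiplier $n_i$ in the recursion is a short computation. In the sketch below (our naming), `bar_type` is $1$, $2$ or $3$ according to whether $i\in I_+$, $I_0$ or $I_-$, and `j` is the index from those definitions; the final lines reproduce the weight $wt(T)=2$ computed above.

```python
def eps(la):
    """epsilon(la): parity of the number of even parts of la."""
    return sum(1 for p in la if p % 2 == 0) % 2

def n_factor(la, i, j, bar_type):
    """The factor n_i in the weight recursion."""
    if bar_type == 1:                                        # i in I_+
        return (-1) ** (j - i) * 2 ** (1 - eps(la))
    if bar_type == 2:                                        # i in I_0
        return (-1) ** (len(la) - i)
    return (-1) ** (j - i + la[i - 1]) * 2 ** (1 - eps(la))  # i in I_-

# wt(T) for the tableau of shape (5,1) and type (3,1,1,1) in Figure [weight]:
w = (n_factor((5, 1), 1, 1, 1)        # remove the 4: a 1-bar, (5,1) -> (4,1)
     * n_factor((4, 1), 1, 1, 1)      # remove the 3: a 1-bar, (4,1) -> (3,1)
     * n_factor((3, 1), 2, None, 2)   # remove the 2: all of row 2, (3,1) -> (3)
     * n_factor((3,), 1, None, 2))    # remove the 1's: all of row 1
assert w == 2
```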
The following lemma will be used in Section 3 to determine whether certain terms will vanish in the power sum expansion of Schur’s $Q$-functions indexed by partitions with two distinct parts.
\[vanishbar\] Let $\lambda=(\lambda_1,\lambda_2)$ be a strict partition with the two parts $\lambda_1$ and $\lambda_2$ having the same parity. Given a partition $\sigma=(\sigma_1,\sigma_2)\in
\mathcal{P}^o(|\lambda|)$, if $\sigma_2<\lambda_2$, then among all bar tableaux of shape $\lambda$ there exist only two bar tableaux of type $\sigma$, say $T_1$ and $T_2$, and furthermore, we have $wt(T_1)+wt(T_2)=0$.
Suppose that both $\lambda_1$ and $\lambda_2$ are even. The case when $\lambda_1$ and $\lambda_2$ are odd numbers can be proved similarly. Note that $\sigma_2<\lambda_{2}<\lambda_{1}$. By putting $2$’s in the last $\sigma_2$ squares of the second row and then filling the remaining squares in the diagram with $1$’s, we obtain one tableau $T_1$. By putting $2$’s in the last $\sigma_2$ squares of the first row and then filling the remaining squares with $1$’s, we obtain another tableau $T_2$. Clearly, both $T_1$ and $T_2$ are bar tableaux of shape $\lambda$ and type $\sigma$, and they are the only two such bar tableaux. We notice that $$wt(T_1)=(-1)^{2-2}2^{1-0}\cdot (-1)^{2-1+\lambda_1} 2^{1-1}=-2.$$ For the weight of $T_2$, there are two cases to consider. If $\lambda_1-\sigma_2>\lambda_2$, then $$wt(T_2)=(-1)^{1-1}2^{1-0}\cdot
(-1)^{2-1+\lambda_1-\sigma_2}2^{1-1}=2.$$ If $\lambda_1-\sigma_2<\lambda_2$, then $$wt(T_2)=(-1)^{2-1}2^{1-0}\cdot (-1)^{2-1+\lambda_2}2^{1-1}=2.$$ Thus we have $wt(T_2)=2$ in either case, so the relation $wt(T_1)+wt(T_2)=0$ holds.
For example, taking $\lambda=(8,6)$ and $\sigma=(11,3)$, the two bar tableaux $T_1$ and $T_2$ in the above lemma are depicted as in Figure \[2-bar tableaux1\].
[Figure \[2-bar tableaux1\]: the two bar tableaux of shape $(8,6)$ and type $(11,3)$; $T_1$ has rows $1\,1\,1\,1\,1\,1\,1\,1$ and $1\,1\,1\,2\,2\,2$, while $T_2$ has rows $1\,1\,1\,1\,1\,2\,2\,2$ and $1\,1\,1\,1\,1\,1$.]
Clifford gave a natural generalization of bar tableaux to skew shapes [@cliff2005]. Formally, a *skew bar tableau* of shape $\lambda/\mu$ is an assignment of nonnegative integers to the squares of $S(\lambda)$ such that in addition to the above four conditions (1)-(4) we further impose the condition that
5.  the partition obtained by removing all squares filled with positive integers and reordering the remaining rows is $\mu$.
For example, taking the skew partition $(8,6,5,4,1)/(8,2,1)$, Figure \[skew bar tableau\] is a skew bar tableau of such shape.
[Figure \[skew bar tableau\]: a skew bar tableau of shape $(8,6,5,4,1)/(8,2,1)$, whose rows read $0^8$, $1\,1\,1\,3\,3\,3$, $0\,0\,2\,2\,2$, $1\,1\,1\,1$, and $0$, shown together with the successive shapes obtained by removing its bars one at a time.]
A bar tableau of shape $\lambda$ is said to be *minimal* if there does not exist a bar tableau with fewer bars. Motivated by Stanley’s results in [@stanl2002], Clifford defined the srank of a shifted partition $S(\lambda)$, denoted ${\rm srank}(\lambda)$, as the number of bars in a minimal bar tableau of shape $\lambda$ [@cliff2005]. Clifford also gave the following formula for ${\rm
srank}(\lambda)$.
\[min bar\] Given a strict partition $\lambda$, let $o$ be the number of odd parts of $\lambda$, and let $e$ be the number of even parts. Then ${\rm srank}(\lambda)=\max(o,e+(\ell(\lambda) \ \mathrm{mod}\ 2))$.
Next we consider the number of bars in a minimal skew bar tableau of shape $\lambda/\mu$. Note that the squares filled with $0$’s in the skew bar tableau give rise to a shifted diagram of shape $\mu$ by reordering the rows. Let $o_r$ (resp. $e_r$) be the number of nonempty rows of odd (resp. even) length with blank squares, and let $o_s$ (resp. $e_s$) be the number of rows of $\lambda$ with some squares filled with $0$’s and an odd (resp. even) number of blank squares. It is obvious that the number of bars in a minimal skew bar tableau is greater than or equal to $$o_s+2e_s+\max(o_r,e_r+((e_r+o_r)\ \mathrm{mod}\ 2)).$$ In fact the above quantity has been considered by Clifford [@cliff2003]. Observe that this quantity depends on the positions of the 0’s.
It should be remarked that a legal bar tableau of shape $\lambda/\mu$ may not exist once the positions of $0$’s are fixed. One open problem proposed by Clifford [@cliff2003] is to find a characterization of ${\rm srank}(\lambda/\mu)$. In Section 5 we will give an algorithm to compute the srank of a skew shape.
Clifford’s conjecture {#sect3}
=====================
In this section, we aim to show that the lowest degree of the power sum expansion of a Schur’s $Q$-function $Q_{\lambda}$ equals ${\rm
srank}(\lambda)$. Let us recall relevant terminology on Schur’s $Q$-functions. Let $x=(x_1,x_2,\ldots)$ be an infinite sequence of independent indeterminates. We define the symmetric functions $q_k=q_k(x)$ in $x_1,x_2,\ldots$ for all integers $k$ by the following expansion of the formal power series in $t$: $$\prod_{i\geq 1}\frac{1+x_it}{1-x_it}=\sum_{k}q_{k}(x)t^k.$$ In particular, $q_k=0$ for $k<0$ and $q_0=1$. It immediately follows that $$\label{eq-def}
\sum_{i+j=n}(-1)^iq_iq_j=0,$$ for all $n\geq 1$. Let $Q_{(a)}=q_a$ and $$Q_{(a,b)}=q_aq_b+2\sum_{m=1}^b(-1)^m q_{a+m}q_{b-m}.$$ From the relation $\sum_{i+j=n}(-1)^iq_iq_j=0$ we see that $Q_{(a,b)}=-Q_{(b,a)}$ and thus $Q_{(a,a)}=0$ for any $a,b$. In general, for any strict partition $\lambda$, the symmetric function $Q_{\lambda}$ is defined by the recurrence relations: $$\begin{aligned}
Q_{(\lambda_1,\ldots,\lambda_{2k+1})}&=& \sum_{m=1}^{2k+1}
(-1)^{m+1}
q_{\lambda_m}Q_{(\lambda_1,\ldots,\hat{\lambda}_m,\ldots,\lambda_{2k+1})},\\[5pt]
Q_{(\lambda_1,\ldots,\lambda_{2k})}&=& \sum_{m=2}^{2k} (-1)^{m}
Q_{(\lambda_1,\lambda_m)}Q_{(\lambda_2,\ldots,\hat{\lambda}_m,\ldots,\lambda_{2k})},\end{aligned}$$ where $\hat{}$ stands for a missing entry.
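These definitions are easy to check numerically. The sketch below (all names are ours) truncates to finitely many variables, expands $\prod_i(1+x_it)/(1-x_it)$ as a power series in $t$ to evaluate the $q_k$, and verifies the antisymmetry $Q_{(a,b)}=-Q_{(b,a)}$ at a sample point:

```python
from fractions import Fraction

def q_values(xs, N):
    """q_0, ..., q_N evaluated at the point x = (x_1, ..., x_n, 0, 0, ...)."""
    q = [Fraction(0)] * (N + 1)
    q[0] = Fraction(1)
    for x in xs:
        # multiply the series by (1 + x t), then by 1/(1 - x t)
        q = [q[k] + (x * q[k - 1] if k else 0) for k in range(N + 1)]
        for k in range(1, N + 1):
            q[k] += x * q[k - 1]
    return q

def Q2(a, b, q):
    """Q_(a,b) = q_a q_b + 2 sum_{m=1}^{b} (-1)^m q_{a+m} q_{b-m}."""
    return q[a] * q[b] + 2 * sum((-1) ** m * q[a + m] * q[b - m]
                                 for m in range(1, b + 1))

q = q_values([Fraction(1, 2), Fraction(1, 3), Fraction(1, 5)], 6)
assert Q2(3, 2, q) == -Q2(2, 3, q)   # Q_(a,b) = -Q_(b,a)
assert Q2(2, 2, q) == 0              # hence Q_(a,a) = 0
```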
It is known that $Q_{\lambda}$ can also be defined as the specialization at $t=-1$ of the Hall-Littlewood functions associated with $\lambda$ [@macdon1995]. Originally, these $Q_{\lambda}$ symmetric functions were introduced in order to express irreducible projective characters of the symmetric groups [@schur1911]. Note that the irreducible projective representations of $S_n$ are in one-to-one correspondence with partitions of $n$ with distinct parts, see [@jozef1989; @stemb1988; @stemb1989]. For any $\lambda\in \mathcal{D}(n)$, let $\langle\lambda\rangle$ denote the character of the irreducible projective or spin representation indexed by $\lambda$. Morris [@morri1965] found a combinatorial rule for calculating the characters, which is the projective analogue of the Murnaghan-Nakayama rule. In terms of bar tableaux, Morris’s theorem reads as follows:
\[mnrule\] Let $\lambda\in \mathcal{D}(n)$ and $\pi\in
\mathcal{P}^o(n)$. Then $$\label{mnruleeq}
\langle\lambda\rangle(\pi)=\sum_{T}wt(T)$$ where the sum ranges over all bar tableaux of shape $\lambda$ and type $\pi$.
The above theorem for projective characters implies the following formula, which will be used later in the proof of Lemma \[len2\].
\[2odd\] Let $\lambda$ be a strict partition of length $2$. Suppose that the two parts $\lambda_1,\lambda_2$ are both odd. Then we have $$\langle\lambda\rangle(\lambda)=-1.$$
Let $T$ be the bar tableau obtained by filling the last $\lambda_2$ squares in the first row of $S(\lambda)$ with $2$’s and the remaining squares with $1$’s, and let $T'$ be the bar tableau obtained by filling the first row of $S(\lambda)$ with $1$’s and the second row with $2$’s. Clearly, $T$ and $T'$ are of the same type $\lambda$. Let us first consider the weight of $T$. If $\lambda_1-\lambda_2<\lambda_2$, then $$wt(T)=(-1)^{2-1} 2^{1-0}\cdot (-1)^{2-1+\lambda_2}2^{1-1}=-2.$$ If $\lambda_1-\lambda_2>\lambda_2$, then $$wt(T)=(-1)^{1-1} 2^{1-0}\cdot
(-1)^{2-1+\lambda_1-\lambda_2}2^{1-1}=-2.$$ In both cases, the weight of $T'$ equals $$wt(T')=(-1)^{2-2}\cdot (-1)^{1-1}=1.$$ Since there are only two bar tableaux, $T$ and $T'$, of type $\lambda$, the corollary immediately follows from Theorem \[mnrule\].
Let $p_k(x)$ denote the $k$-th power sum symmetric functions, i.e., $p_k(x)=\sum_{i\geq 1}x_i^k$. For any partition $\lambda=(\lambda_1,\lambda_2,\cdots)$, let $p_{\lambda}=p_{\lambda_1}p_{\lambda_2}\cdots$. The fundamental connection between $Q_{\lambda}$ symmetric functions and the projective representations of the symmetric group is as follows.
\[conn\] Let $\lambda\in \mathcal{D}(n)$. Then we have $$Q_{\lambda}=\sum_{\pi\in \mathcal{P}^o(n)}
2^{[\ell(\lambda)+\ell(\pi)+\varepsilon(\lambda)]/2}
\langle\lambda\rangle(\pi)\frac{p_{\pi}}{z_{\pi}},$$ where $$z_{\pi}=1^{m_1}m_1!\cdot 2^{m_2}m_2!\cdot \cdots, \quad \mbox{if $\pi=\langle 1^{m_1}2^{m_2}\cdots \rangle$.}$$
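For concreteness, the normalizing constant $z_\pi$ is a small computation; a Python sketch (our helper name):

```python
from math import factorial
from collections import Counter

def z(pi):
    """z_pi = prod_i i^{m_i} * m_i!, where m_i is the multiplicity of i in pi."""
    result = 1
    for part, m in Counter(pi).items():
        result *= part ** m * factorial(m)
    return result

assert z((3, 1, 1, 1)) == 3 * factorial(3)   # = 18
assert z((5, 3, 1)) == 15
```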
Stanley [@stanl2002] introduced a degree operator on symmetric functions by defining $\deg(p_i)=1$, and so $\deg(p_{\nu})=\ell(\nu)$. Clifford [@cliff2005] applied this operator to Schur’s $Q$-functions and obtained the following lower bound from Theorem \[conn\].
\[atleast\] The terms of the lowest degree in $Q_{\lambda}$ have degree at least ${\rm srank}(\lambda)$.
The following conjecture was proposed by Clifford:
The terms of the lowest degree in $Q_{\lambda}$ have degree ${\rm
srank}(\lambda)$.
Our proof of the above conjecture depends on the Pfaffian formula for Schur’s $Q$-functions. Given a skew-symmetric matrix $A=(a_{i,j})$ of even size $2n\times 2n$, the *Pfaffian* of $A$, denoted ${\rm Pf}(A)$, is defined by $${\rm Pf}(A)=\sum_{\pi}(-1)^{{\rm cr}(\pi)} a_{i_1j_1}\cdots a_{i_nj_n},$$ where the sum ranges over all set partitions $\pi$ of $\{1,2,\cdots, 2n\}$ into two-element blocks $\{i_k,j_k\}$ with $i_k<j_k$, and ${\rm cr}(\pi)$ is the number of crossings of $\pi$, i.e., the number of pairs $h<k$ for which $i_h<i_k<j_h<j_k$.
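The combinatorial definition of the Pfaffian can be implemented verbatim (exponential in the matrix size, so only suitable for small matrices). The following Python sketch (helper names are ours) sums over perfect matchings with the crossing sign:

```python
from itertools import combinations
from math import prod

def matchings(elems):
    """All partitions of the list elems into pairs (i, j) with i < j."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for k in range(len(rest)):
        for m in matchings(rest[:k] + rest[k + 1:]):
            yield [(first, rest[k])] + m

def crossings(match):
    """Number of pairs of blocks that cross."""
    return sum(1 for (a, b), (c, d) in combinations(match, 2)
               if a < c < b < d or c < a < d < b)

def pfaffian(A):
    """Pf(A) for a skew-symmetric matrix A (nested lists, even size, 0-indexed)."""
    return sum((-1) ** crossings(m) * prod(A[i][j] for i, j in m)
               for m in matchings(list(range(len(A)))))

# 4x4 check: Pf = a01*a23 - a02*a13 + a03*a12
A = [[0, 1, 2, 3], [-1, 0, 4, 5], [-2, -4, 0, 6], [-3, -5, -6, 0]]
assert pfaffian(A) == 1 * 6 - 2 * 5 + 3 * 4   # = 8
```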
\[pfexp\] Given a strict partition $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_{2n})$ satisfying $\lambda_1>\ldots>\lambda_{2n}\geq 0$, let $M_{\lambda}=(Q_{(\lambda_i,\lambda_j)})$. Then we have $$Q_{\lambda}={\rm Pf}(M_{\lambda}).$$
We first prove that Clifford’s conjecture holds for strict partitions of length less than three. The proof for the general case relies on this special case.
\[len2\] Let $\lambda$ be a strict partition of length $\ell(\lambda)<3$. Then the terms of the lowest degree in $Q_{\lambda}$ have degree ${\rm srank}(\lambda)$.
In view of Theorem \[mnrule\] and Theorem \[conn\], if there exists a unique bar tableau of shape $\lambda$ and type $\pi$, then the coefficient of $p_{\pi}$ is nonzero in the expansion of $Q_{\lambda}$. There are five cases to consider.
1.  $\ell(\lambda)=1$ and $\lambda_1$ is odd. Clearly, we have ${\rm srank}(\lambda)=1$. Note that there exists a unique bar tableau $T$ of shape $\lambda$ and of type $\lambda$ with all squares of $S(\lambda)$ filled with $1$’s. Therefore, the coefficient of $p_{\lambda}$ in the power sum expansion of $Q_{\lambda}$ is nonzero and the lowest degree of $Q_{\lambda}$ is $1$.

2.  $\ell(\lambda)=1$ and $\lambda_1$ is even. We see that ${\rm srank}(\lambda)=2$. Since the bars are all of odd size, there does not exist any bar tableau of shape $\lambda$ and of type $\lambda$. But there is a unique bar tableau $T$ of shape $\lambda$ and of type $(\lambda_1-1,1)$, which is obtained by filling the rightmost square of $S(\lambda)$ with $2$ and the remaining squares with $1$’s. So the coefficient of $p_{(\lambda_1-1,1)}$ in the power sum expansion of $Q_{\lambda}$ is nonzero and the terms of the lowest degree in $Q_{\lambda}$ have degree $2$.

3.  $\ell(\lambda)=2$ and the two parts $\lambda_1,\lambda_2$ have different parity. In this case, we have ${\rm srank}(\lambda)=1$. Note that there exists a unique bar tableau $T$ of shape $\lambda$ and of type $(\lambda_1+\lambda_2)$, which is obtained by filling all the squares of $S(\lambda)$ with $1$’s. Thus, the coefficient of $p_{\lambda_1+\lambda_2}$ in the power sum expansion of $Q_{\lambda}$ is nonzero and the terms of lowest degree in $Q_{\lambda}$ have degree $1$.

4.  $\ell(\lambda)=2$ and the two parts $\lambda_1,\lambda_2$ are both even. It is easy to see that ${\rm srank}(\lambda)=2$. Since there exists a unique bar tableau $T$ of shape $\lambda$ and of type $(\lambda_1-1,\lambda_2+1)$, which is obtained by filling the rightmost $\lambda_2+1$ squares in the first row of $S(\lambda)$ with $2$’s and the remaining squares with $1$’s, the coefficient of $p_{(\lambda_1-1,\lambda_2+1)}$ in the power sum expansion of $Q_{\lambda}$ is nonzero; hence the lowest degree of $Q_{\lambda}$ is equal to $2$.

5.  $\ell(\lambda)=2$ and the two parts $\lambda_1,\lambda_2$ are both odd. In this case, we have ${\rm srank}(\lambda)=2$. By Corollary \[2odd\], the coefficient of $p_{\lambda}$ in the power sum expansion of $Q_{\lambda}$ is nonzero, and therefore the terms of the lowest degree in $Q_{\lambda}$ have degree $2$.
This completes the proof.
Given a strict partition $\lambda$, we consider the Pfaffian expansion of $Q_{\lambda}$ as shown in Theorem \[pfexp\]. To prove Clifford’s conjecture, we need to determine which terms may appear in the expansion of $Q_{\lambda}$ in terms of power sum symmetric functions. Suppose that the Pfaffian expansion of $Q_{\lambda}$ is as follows: $$\label{q-expand}
{\rm Pf}(M_{\lambda})=\sum_{\pi}(-1)^{{\rm cr}(\pi)}
Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots
Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})},$$ where the sum ranges over all set partitions $\pi$ of $\{1,2,\cdots,
2m\}$ into two element blocks $\{(\pi_1,\pi_2),\ldots,(\pi_{2m-1},\pi_{2m})\}$ with $\pi_1<\pi_3<\cdots<\pi_{2m-1}$ and $\pi_{2k-1}<\pi_{2k}$ for any $k$. For the above expansion of $Q_{\lambda}$, the following two lemmas will be used to choose certain lowest degree terms in the power sum expansion of $Q_{(\lambda_i,\lambda_j)}$ in the matrix $M_\lambda$.
\[lemma1\] Suppose that $\lambda$ has both odd parts and even parts. Let $\lambda_{i_1}$ (resp. $\lambda_{j_1}$) be the largest odd (resp. even) part of $\lambda$. If the power sum symmetric function $p_{\lambda_{i_1}+\lambda_{j_1}}$ appears in the terms of lowest degree arising from the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$ in the expansion above, then we have $(\pi_1,\pi_2)=(i_1,j_1)$.
Without loss of generality, we may assume that $\lambda_{i_1}>
\lambda_{j_1}$. By Lemma \[len2\], the term $p_{\lambda_{i_1}+\lambda_{j_1}}$ appears in $Q_{(\lambda_{i_1},
\lambda_{j_1})}$ with nonzero coefficients. Since $\lambda_{i_1},
\lambda_{j_1}$ are the largest odd and even parts, $p_{\lambda_{i_1}+\lambda_{j_1}}$ does not appear as a factor of any term of the lowest degree in the expansion of $Q_{(\lambda_{i_k},
\lambda_{j_k})}$, where $\lambda_{i_k}$ and $\lambda_{j_k}$ have different parity. Meanwhile, if $\lambda_{i_k}$ and $\lambda_{j_k}$ have the same parity, then we consider the bar tableaux of shape $(\lambda_{i_k}, \lambda_{j_k})$ and of type $(\lambda_{i_1}+\lambda_{j_1}, \lambda_{i_k}+
\lambda_{j_k}-\lambda_{i_1}-\lambda_{j_1})$. Observe that $\lambda_{i_k}+
\lambda_{j_k}-\lambda_{i_1}-\lambda_{j_1}<\lambda_{j_k}$. Since the lowest degree of $Q_{(\lambda_{i_k}, \lambda_{j_k})}$ is $2$, from Lemma \[vanishbar\] it follows that $p_{\lambda_{i_1}+\lambda_{j_1}}$ cannot be a factor of any term of lowest degree in the power sum expansion of $Q_{(\lambda_{i_k},
\lambda_{j_k})}$. This completes the proof.
\[lemma2\] Suppose that $\lambda$ only has even parts. Let $\lambda_1,
\lambda_2$ be the two largest parts of $\lambda$ (allowing $\lambda_2=0$). If the product $p_{\lambda_1-1}p_{\lambda_2+1}$ appears in the terms of the lowest degree given by the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$ in the expansion above, then we have $(\pi_1,\pi_2)=(1,2)$.
From Case (4) of the proof of Lemma \[len2\] it follows that $p_{\lambda_1-1}p_{\lambda_2+1}$ appears as a term of the lowest degree in the power sum expansion of $Q_{(\lambda_1,\lambda_2)}$. We next consider the power sum expansion of any other $Q_{(\lambda_i,\lambda_j)}$. First, we consider the case when $\lambda_i+\lambda_j>\lambda_2+1$ and $\lambda_i \leq\lambda_2$. Since $\lambda_i+\lambda_j-(\lambda_2+1)<\lambda_j$, by Lemma \[vanishbar\], the term $p_{\lambda_2+1}$ is not a factor of any term of the lowest degree in the power sum expansion of $Q_{(\lambda_i,\lambda_j)}$. Now we are left with the case when $\lambda_i+\lambda_j>\lambda_1-1$ and $\lambda_i\leq \lambda_1-2$. Since $\lambda_i+\lambda_j-(\lambda_1-1)<\lambda_j$, by Lemma \[vanishbar\] the term $p_{\lambda_1-1}$ does not appear as a factor in the terms of the lowest degree of $Q_{(\lambda_i,\lambda_j)}$. So we have shown that if either $p_{\lambda_2+1}$ or $p_{\lambda_1-1}$ appears as a factor of some lowest degree term for $Q_{(\lambda_i,\lambda_j)}$, then $\lambda_i=\lambda_1$. Moreover, if both $p_{\lambda_1-1}$ and $p_{\lambda_2+1}$ are factors of the lowest degree terms in the power sum expansion of $Q_{(\lambda_1,\lambda_j)}$, then we have $\lambda_j=\lambda_2$. The proof is complete.
We now present the main result of this paper.
For any $\lambda\in\mathcal{D}(n)$, the terms of the lowest degree in $Q_\lambda$ have degree ${\rm srank}(\lambda)$.
We write the strict partition $\lambda$ in the form $(\lambda_1,\lambda_2,\ldots,\lambda_{2m})$, where $\lambda_1>\ldots>\lambda_{2m}\geq 0$. Suppose that the partition $\lambda$ has $o$ odd parts and $e$ even parts (including $0$ as a part). For the sake of presentation, let $(\lambda_{i_1},\lambda_{i_2},\ldots,\lambda_{i_o})$ denote the sequence of odd parts in decreasing order, and let $(\lambda_{j_1},\lambda_{j_2},\ldots,\lambda_{j_e})$ denote the sequence of even parts in decreasing order.
We first consider the case $o\geq e$. In this case, it will be shown that ${\rm srank}(\lambda)=o$. By Theorem \[min bar\], if $\lambda_{2m}>0$, i.e., $\ell(\lambda)=2m$, then we have $${\rm srank}(\lambda)=\max(o,e+0)=o.$$ If $\lambda_{2m}=0$, i.e., $\ell(\lambda)=2m-1$, then we still have $${\rm srank}(\lambda)=\max(o,(e-1)+1)=o.$$
Let $$A=p_{\lambda_{i_1}+\lambda_{j_1}}\cdots
p_{\lambda_{i_e}+\lambda_{j_e}}p_{\lambda_{i_{e+1}}}p_{\lambda_{i_{e+2}}}\cdots
p_{\lambda_{i_o}}.$$ We claim that $A$ appears as a term of the lowest degree in the power sum expansion of $Q_{\lambda}$. For this purpose, we need to determine those matchings $\pi$ of $\{1,2,\ldots,2m\}$ in the expansion above for which the power sum expansion of the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots
Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$ contains $A$ as a term of the lowest degree.
By Lemma \[lemma1\], if $p_{\lambda_{i_1}+\lambda_{j_1}}$ appears as a factor in the lowest degree terms of the power sum expansion of $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots
Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then we have $\{\pi_1,\pi_2\}=\{i_1,j_1\}$. Iterating this argument, we see that if $p_{\lambda_{i_1}+\lambda_{j_1}}\cdots
p_{\lambda_{i_e}+\lambda_{j_e}}$ appears as a factor in the lowest degree terms of $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots
Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then we have $$\{\pi_1,\pi_2\}=\{i_1,j_1\},\ldots,\{\pi_{2e-1},\pi_{2e}\}=\{i_e,j_e\}.$$ It remains to determine the ordered pairs $$\{(\pi_{2e+1},\pi_{2e+2}),\ldots,(\pi_{2m-1},\pi_{2m})\}.$$ By the same argument as in Case (5) of the proof of Lemma \[len2\], for any $e+1\leq k<l\leq o$, the term $p_{\lambda_{i_{k}}}p_{\lambda_{i_{l}}}$ appears as a term of the lowest degree in the power sum expansion of $Q_{(\lambda_{i_k},\lambda_{i_l})}$. Moreover, if the power sum symmetric function $p_{\lambda_{i_{e+1}}}p_{\lambda_{i_{e+2}}}\cdots
p_{\lambda_{i_o}}$ appears as a term of the lowest degree in the power sum expansion of the product $Q_{(\lambda_{\pi_{2e+1}},\lambda_{\pi_{2e+2}})}\cdots
Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then the collection of pairs $\{(\pi_{2e+1},\pi_{2e+2}),\ldots,(\pi_{2m-1},\pi_{2m})\}$ can be any matching of $\{1,2,\ldots,2m\}\setminus\{i_1,j_1,\ldots,i_e,j_e\}$.
To summarize, there are $(2(m-e)-1)!!$ matchings $\pi$ such that $A$ appears as a term of the lowest degree in the power sum expansion of the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots
Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$. Combining Corollary \[2odd\] and Theorem \[conn\], we find that the coefficient of $p_{\lambda_{i_k}}p_{\lambda_{i_l}}$ $(e+1\leq k<l\leq o)$ in the power sum expansion of $Q_{(\lambda_{i_k}, \lambda_{i_l})}$ is $-\frac{4}{\lambda_{i_k}\lambda_{i_l}}$. It follows that the coefficient of $A$ in the expansion of the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots
Q_{(\lambda_{\pi_{2m-1}}, \lambda_{\pi_{2m}})}$ is independent of the choice of $\pi$. Since $(2(m-e)-1)!!$ is an odd number, the term $A$ will not vanish in the expansion of $Q_{\lambda}$. Note that the degree of $A$ is $e+(o-e)=o,$ which is equal to ${\rm
srank}(\lambda)$, as desired.
Similarly, we consider the case $e>o$. In this case, we aim to show that ${\rm srank}(\lambda)=e.$ By Theorem \[min bar\], if $\lambda_{2m}>0$, i.e., $\ell(\lambda)=2m$, then we have $${\rm srank}(\lambda)=\max(o,e+0)=e.$$ If $\lambda_{2m}=0$, i.e., $\ell(\lambda)=2m-1$, then we still have $${\rm srank}(\lambda)=\max(o,(e-1)+1)=e.$$
Let $$B=p_{\lambda_{i_1}+\lambda_{j_1}}\cdots
p_{\lambda_{i_o}+\lambda_{j_o}}p_{\lambda_{j_{o+1}}-1}p_{\lambda_{j_{o+2}}+1}\cdots
p_{\lambda_{j_{e-1}}-1}p_{\lambda_{j_e}+1}.$$ We proceed to prove that $B$ appears as a term of the lowest degree in the power sum expansion of $Q_{\lambda}$. Applying Lemma \[lemma1\] repeatedly, we deduce that if $p_{\lambda_{i_1}+\lambda_{j_1}}\cdots
p_{\lambda_{i_o}+\lambda_{j_o}}$ appears as a factor in the lowest degree terms of the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots
Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then $$\label{match1}
\{\pi_1,\pi_2\}=\{i_1,j_1\},\ldots,\{\pi_{2o-1},\pi_{2o}\}=\{i_o,j_o\}.$$ On the other hand, iteration of Lemma \[lemma2\] reveals that if the power sum symmetric function $p_{\lambda_{j_{o+1}}-1}p_{\lambda_{j_{o+2}}+1}\cdots
p_{\lambda_{j_{e-1}}-1}p_{\lambda_{j_e}+1}$ appears as a term of the lowest degree in the power sum expansion of $Q_{(\lambda_{\pi_{2o+1}},\lambda_{\pi_{2o+2}})}\cdots
Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then $$\label{match2}
\{\pi_{2o+1},\pi_{2o+2}\}=\{j_{o+1},j_{o+2}\},\ldots,\{\pi_{2m-1},\pi_{2m}\}=\{j_{e-1},j_e\}.$$ Therefore, if $B$ appears as a term of the lowest degree in the power sum expansion of $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots
Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then the matching $\pi$ is uniquely determined by the two displayed sets of pairs above. Note that the degree of $B$ is $e$, which coincides with ${\rm
srank}(\lambda)$.
Since there is always a term of degree ${\rm srank}(\lambda)$ in the power sum expansion of $Q_\lambda$, the theorem follows.
Skew Schur’s $Q$-functions
==========================
In this section, we show that ${\rm srank}(\lambda/\mu)$ is a lower bound for the lowest degree of the terms in the power sum expansion of the skew Schur’s $Q$-function $Q_{\lambda/\mu}$. Note that Clifford’s conjecture does not hold for skew shapes.
We first recall a definition of the skew Schur’s $Q$-function in terms of strip tableaux. The concept of strip tableaux was introduced by Stembridge [@stemb1988] to describe the Morris rule for the evaluation of irreducible spin characters. Given a skew partition $\lambda/\mu$, the *$j$-th diagonal* of the skew shifted diagram $S(\lambda/\mu)$ is defined as the set of squares $(1,j), (2, j+1), (3, j+2), \ldots$ in $S(\lambda/\mu)$. A skew diagram $S(\lambda/\mu)$ is called a *strip* if it is rookwise connected and each diagonal contains at most one square. The *height* $h$ of a strip is defined to be the number of rows it occupies. A *double strip* is a skew diagram formed by the union of two strips which both start on the diagonal consisting of squares $(j,j)$. The *depth* of a double strip is defined to be $\alpha+\beta$ if it has $\alpha$ diagonals of length two and its diagonals of length one occupy $\beta$ rows. A *strip tableau* of shape $\lambda/\mu$ and type $\pi=(\pi_1,\ldots,\pi_k)$ is defined to be a sequence of shifted diagrams $$S(\mu)=S(\lambda^0)\subseteq S(\lambda^1)\subseteq \cdots \subseteq S(\lambda^k)=S(\lambda)$$ with $|\lambda^i/\lambda^{i-1}|=\pi_i$ ($1\leq i\leq k$) such that each skew shifted diagram $S(\lambda^i/\lambda^{i-1})$ is either a strip or a double strip.
The skew Schur’s $Q$-function can be defined as the weight generating function of strip tableaux in the following way. For a strip of height $h$ we assign the weight $(-1)^{h-1}$, and for a double strip of depth $d$ we assign the weight $2(-1)^{d-1}$. The weight of a strip tableau $T$, denoted $wt(T)$, is the product of the weights of strips and double strips of which $T$ is composed. Then the skew Schur’s $Q$-function $Q_{\lambda/\mu}$ is given by $$Q_{\lambda/\mu}=\sum_{\pi\in \mathcal{P}^o(|\lambda/\mu|)}\sum_{T}
2^{\ell(\pi)}wt(T)\frac{p_{\pi}}{z_{\pi}},$$ where $T$ ranges over all strip tableaux $T$ of shape $\lambda/\mu$ and type $\pi$, see [@stemb1988 Theorem 5.1].
J$\rm{\acute{o}}$zefiak and Pragacz [@jozpra1991] obtained the following Pfaffian formula for the skew Schur’s $Q$-function.
\[skewpf\] Let $\lambda, \mu$ be strict partitions with $m=\ell(\lambda)$, $n=\ell(\mu)$, $\mu\subset \lambda$, and let $M(\lambda,\mu)$ denote the skew-symmetric matrix $$\begin{pmatrix}
A & B\\ -B^t & 0
\end{pmatrix},$$ where $A=(Q_{(\lambda_i,\lambda_j)})$ and $B=(Q_{(\lambda_i-\mu_{n+1-j})})$.
Then
- if $m+n$ is even, we have $Q_{\lambda/\mu}={\rm
Pf}(M(\lambda,\mu))$;
- if $m+n$ is odd, we have $Q_{\lambda/\mu}={\rm Pf}(M(\lambda,\mu^\prime))$, where $\mu^\prime=(\mu_1,\cdots,\mu_n,
0)$.
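As a quick numeric aid, the Pfaffian of a skew-symmetric matrix can be evaluated by expansion along the first row, ${\rm Pf}(M)=\sum_{j\geq 2}(-1)^{j}m_{1j}\,{\rm Pf}(M_{\hat{1}\hat{j}})$. The Python sketch below is a generic helper of our own, not part of the paper; it is exponential in the matrix size, but fine for the small matrices arising from Theorem \[skewpf\]:

```python
def pfaffian(M):
    """Pfaffian of a skew-symmetric matrix (list of lists), computed by
    recursive expansion along the first row."""
    n = len(M)
    if n == 0:
        return 1
    if n % 2 == 1:
        return 0  # odd-order skew-symmetric matrices have Pfaffian 0
    total = 0
    for j in range(1, n):
        # delete row/column 0 and j; the sign alternates along the first row
        rest = [r for r in range(n) if r != 0 and r != j]
        minor = [[M[a][b] for b in rest] for a in rest]
        total += (-1) ** (j - 1) * M[0][j] * pfaffian(minor)
    return total
```

For a $4\times 4$ matrix this reduces to the familiar $m_{12}m_{34}-m_{13}m_{24}+m_{14}m_{23}$, the shape of the Pfaffian appearing in the example at the end of this section.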
A combinatorial proof of the above theorem was given by Stembridge [@stemb1990] in terms of lattice paths, and later, Hamel [@hamel1996] gave an interesting generalization by using the border strip decompositions of the shifted diagram.
Given a skew partition $\lambda/\mu$, Clifford [@cliff2003] constructed a bijection between skew bar tableaux of shape $\lambda/\mu$ and skew strip tableaux of the same shape, which preserves the type of the tableau. Using this bijection, it is straightforward to derive the following result.
The terms of the lowest degree in $Q_{\lambda/\mu}$ have degree at least ${\rm srank}(\lambda/\mu)$.
Unlike the case of non-skew shapes, the lowest degree terms in $Q_{\lambda/\mu}$ do not in general have degree ${\rm srank}(\lambda/\mu)$. For example, take the skew partition $(4,3)/3$. It is easy to see that ${\rm srank}((4,3)/3)=2$. However, using Theorem \[skewpf\] and Stembridge’s SF Package for Maple [@stem2], we obtain that $$Q_{(4,3)/3}={\rm Pf}
\begin{pmatrix}0 & Q_{(4,3)} & Q_{(4)} & Q_{(1)}\\[5pt]
Q_{(3,4)} & 0 & Q_{(3)} & Q_{(0)}\\[5pt]
-Q_{(4)} & -Q_{(3)}& 0 & 0\\[5pt]
-Q_{(1)} & -Q_{(0)}& 0 & 0
\end{pmatrix}=2p_1^4.$$ This shows that the lowest degree of $Q_{(4,3)/3}$ equals 4, which is strictly greater than ${\rm srank}((4,3)/3)$.
The srank of skew partitions {#sect4}
============================
In this section, we present an algorithm to determine the srank of a skew partition $\lambda/\mu$. The algorithm produces a configuration of $0$’s in the shifted diagram. To obtain the srank of a skew partition, we need to minimize the number of bars by adjusting the positions of the $0$’s. Given a configuration $\mathcal{C}$ of $0$’s in the shifted diagram $S(\lambda)$, let $$\kappa(\mathcal{C})=o_s+2e_s+\max(o_r,e_r+((e_r+o_r)\ \mathrm{mod}\ 2)),$$ where $o_r$ (resp. $e_r$) counts the number of nonempty rows in which there are an odd (resp. even) number of squares and no squares are filled with $0$, and $o_s$ (resp. $e_s$) records the number of rows in which at least one square is filled with $0$ but there are an odd (resp. nonzero even) number of blank squares.
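Concretely, $\kappa(\mathcal{C})$ depends only on the length of each row and on how many of its squares hold $0$’s. A small Python sketch (the encoding of a configuration as a list of (length, zeros) pairs, one per nonempty row, is our own convention):

```python
def kappa(rows):
    """kappa(C) for a configuration encoded as (length, zeros) pairs."""
    o_r = sum(1 for n, z in rows if z == 0 and n % 2 == 1)  # odd rows without 0's
    e_r = sum(1 for n, z in rows if z == 0 and n % 2 == 0)  # even rows without 0's
    # rows with at least one 0 and an odd / nonzero even number of blanks
    o_s = sum(1 for n, z in rows if z > 0 and (n - z) % 2 == 1)
    e_s = sum(1 for n, z in rows if 0 < z < n and (n - z) % 2 == 0)
    return o_s + 2 * e_s + max(o_r, e_r + (e_r + o_r) % 2)
```

For instance, for the configuration of $(4,3)/3$ in which the second row is entirely filled with $0$’s, `kappa([(4, 0), (3, 3)])` returns $2$, matching ${\rm srank}((4,3)/3)=2$ from the example of the previous section.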
If there exists at least one bar tableau of type $\lambda/\mu$ under some configuration $\mathcal{C}$, we say that $\mathcal{C}$ is *admissible*. For a fixed configuration $\mathcal{C}$, each row is one of the following eight possible types:
- an even row bounded by an even number of $0$’s, denoted $(e,e)$,
- an odd row bounded by an even number of $0$’s, denoted $(e,o)$,
- an odd row bounded by an odd number of $0$’s, denoted $(o,e)$,
- an even row bounded by an odd number of $0$’s, denoted $(o,o)$,
- an even row without $0$’s, denoted $(\emptyset,e)$,
- an odd row without $0$’s, denoted $(\emptyset,o)$,
- an even row filled with $0$’s, denoted $(e,
\emptyset)$,
- an odd row filled with $0$’s, denoted $(o,
\emptyset)$.
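Equivalently, the type of a row is the pair (parity of the number of $0$’s, parity of the number of blank squares), with $\emptyset$ when the corresponding count is zero. A one-line classifier (Python; the encoding is our own, with the string `'empty'` standing for $\emptyset$):

```python
def row_type(n, z):
    """Type of a row with n squares whose leftmost z squares hold 0's:
    (parity of the 0's, parity of the blank squares)."""
    parity = lambda k: 'empty' if k == 0 else ('o' if k % 2 == 1 else 'e')
    return (parity(z), parity(n - z))
```

For example, `row_type(3, 2)` gives `('e', 'o')`, an odd row bounded by an even number of $0$’s.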
Given two rows with respective types $s$ and $s'$ for some configuration $\mathcal{C}$, if we can obtain a new configuration $\mathcal{C}'$ by exchanging the locations of $0$’s in these two rows such that their new types are $t$ and $t'$ respectively, then denote it by $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{s \atop
s'}}\right] \rightarrow \left[\tiny{{{t} \atop
{t'}}}\right]\right)$. Let $o_r,e_r,o_s,e_s$ be defined as above corresponding to configuration $\mathcal{C}$, and let $o_r',e_r',o_s',e_s'$ be those of $\mathcal{C}'$.
In the following we will show how the quantity $\kappa(\mathcal{C})$ changes when exchanging the locations of $0$’s in $\mathcal{C}$.
\[varyzero1-1\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{s \atop s'}}\right]
\rightarrow \left[\tiny{{s \atop s'}}\right]\right)$ or $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{s \atop s'}}\right]
\rightarrow \left[\tiny{{{s'} \atop s}}\right]\right)$, i.e., the types of the two involved rows are preserved or exchanged, where $s,s'$ are any two possible types, then $\kappa({\mathcal{C}'})=
\kappa({\mathcal{C}})$.
\[varyzero1-6\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(e,e)}
\atop\rule{0pt}{10pt} {(\emptyset,o)}}}\right] \rightarrow
\left[\tiny{{{(\emptyset,e)} \atop\rule{0pt}{10pt}
{(e,o)}}}\right]\right)$, then $\kappa({\mathcal{C}'})\leq
\kappa({\mathcal{C}})$.
In this case we have $$o_s^\prime=o_s+1, \quad e_s^\prime=e_s-1, \quad
o_r^\prime=o_r-1,\quad e_r^\prime=e_r+1.$$ Note that $o_r+e_r=\ell(\lambda)-\ell(\mu)$. Now there are two cases to consider.
**Case I.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 0\ (\mathrm{mod}\
2)$.
- If $o_r\leq e_r$, then $o_r^\prime\leq e_r^\prime$ and $$\begin{aligned}
\kappa(\mathcal{C})&=o_s+2e_s+e_r,\\
\kappa(\mathcal{C}')&=o_s+1+2(e_s-1)+e_r^\prime=o_s+2e_s+e_r=\kappa(\mathcal{C}).\end{aligned}$$
- If $o_r\geq e_r+2$, then $o_r^\prime=o_r-1\geq
e_r+1=e_r^\prime$ and $$\begin{aligned}
\kappa(\mathcal{C})&=o_s+2e_s+o_r,\\
\kappa(\mathcal{C}')&=o_s+2e_s-1+o_r^\prime=o_s+2e_s+o_r-2<\kappa(\mathcal{C}).\end{aligned}$$
**Case II.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 1\ (\mathrm{mod}\
2)$.
- If $o_r\leq e_r+1$, then $o_r^\prime<e_r^\prime$ and $$\begin{aligned}
\kappa(\mathcal{C})&=o_s+2e_s+e_r+1,\\
\kappa(\mathcal{C}')&=o_s+2e_s-1+e_r^\prime+1=o_s+2e_s+e_r+1=\kappa(\mathcal{C}).\end{aligned}$$
- If $o_r\geq e_r+3$, then $o_r^\prime=o_r-1\geq
e_r+2>e_r^\prime$ and $$\begin{aligned}
\kappa(\mathcal{C})&=o_s+2e_s+o_r,\\
\kappa(\mathcal{C}')&=o_s+2e_s-1+o_r^\prime=o_s+2e_s+o_r-2<\kappa(\mathcal{C}).\end{aligned}$$
Therefore, the inequality $\kappa({\mathcal{C}'})\leq
\kappa({\mathcal{C}})$ holds under the assumption.
\[varyzero2-6\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(o,e)}
\atop\rule{0pt}{10pt} {(\emptyset,e)}}}\right] \rightarrow
\left[\tiny{{{(\emptyset,o)} \atop\rule{0pt}{10pt}
{(o,o)}}}\right]\right)$, then $\kappa({\mathcal{C}'})\leq
\kappa({\mathcal{C}})$.
In this case we have $$o_s^\prime=o_s+1, \quad e_s^\prime=e_s-1, \quad
o_r^\prime=o_r+1,\quad e_r^\prime=e_r-1.$$ Now there are two possibilities.
**Case I.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 0\ (\mathrm{mod}\
2)$.
- If $o_r\leq e_r-2$, then $o_r^\prime\leq e_r^\prime$ and $$\begin{aligned}
\kappa(\mathcal{C})&=o_s+2e_s+e_r,\\
\kappa(\mathcal{C}')&=o_s+1+2(e_s-1)+e_r^\prime=o_s+2e_s+e_r-2<\kappa(\mathcal{C}).\end{aligned}$$
- If $o_r\geq e_r$, then $o_r^\prime=o_r+1>
e_r-1=e_r^\prime$ and $$\begin{aligned}
\kappa(\mathcal{C})&=o_s+2e_s+o_r,\\
\kappa(\mathcal{C}')&=o_s+2e_s-1+o_r^\prime=o_s+2e_s+o_r=\kappa(\mathcal{C}).\end{aligned}$$
**Case II.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 1\ (\mathrm{mod}\
2)$.
- If $o_r\leq e_r-3$, then $o_r^\prime<e_r^\prime$ and $$\begin{aligned}
\kappa(\mathcal{C})&=o_s+2e_s+e_r+1,\\
\kappa(\mathcal{C}')&=o_s+2e_s-1+e_r^\prime+1=o_s+2e_s+e_r-1<\kappa(\mathcal{C}).\end{aligned}$$
- If $o_r\geq e_r-1$, then $o_r^\prime=o_r+1>
e_r-1=e_r^\prime$ and $$\begin{aligned}
\kappa(\mathcal{C})&=o_s+2e_s+o_r,\\
\kappa(\mathcal{C}')&=o_s+2e_s-1+o_r^\prime=o_s+2e_s+o_r=\kappa(\mathcal{C}).\end{aligned}$$
In both cases we have $\kappa({\mathcal{C}'})\leq
\kappa({\mathcal{C}})$, as required.
\[varyzero1-4\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(e,e)}
\atop\rule{0pt}{10pt} {(o,e)}}}\right] \rightarrow
\left[\tiny{{{(o,o)} \atop\rule{0pt}{10pt}
{(e,o)}}}\right]\right)$, then $\kappa({\mathcal{C}'})<
\kappa({\mathcal{C}})$.
In this case, we have $$o_s^\prime=o_s+2, \quad e_s^\prime=e_s-2, \quad
o_r^\prime=o_r,\quad e_r^\prime=e_r.$$ Therefore, $$\kappa(\mathcal{C}')=o_s'+2e_s'+\max(o_r',e_r'+((e_r'+o_r')\
\mathrm{mod}\ 2))=\kappa(\mathcal{C})-2.$$ The desired inequality immediately follows.
\[varyzero3-5\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(e,o)}
\atop\rule{0pt}{10pt} {(\emptyset,e)}}}\right] \rightarrow
\left[\tiny{{{(\emptyset,o)} \atop\rule{0pt}{10pt}
{(e,\emptyset)}}}\right]\right)$, then $\kappa({\mathcal{C}'})\leq
\kappa({\mathcal{C}})$.
Under this transformation we have $$o_s^\prime=o_s-1, \quad e_s^\prime=e_s, \quad
o_r^\prime=o_r+1,\quad e_r^\prime=e_r-1.$$ Since $o_r+e_r=\ell(\lambda)-\ell(\mu)$ is invariant, there are two cases.
**Case I.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 0\ (\mathrm{mod}\
2)$.
- If $o_r\geq e_r$, then $o_r^\prime\geq e_r^\prime$ and $$\begin{aligned}
\kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+o_r^\prime
=o_s-1+2e_s+o_r+1 =\kappa(\mathcal{C}).\end{aligned}$$
- If $o_r\leq e_r-2$, then $o_r^\prime=o_r+1\leq
e_r-1=e_r^\prime$ and $$\begin{aligned}
\kappa(\mathcal{C}')=o_s-1+2e_s+e_r^\prime=o_s+2e_s+e_r-2<\kappa(\mathcal{C}).\end{aligned}$$
**Case II.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 1\ (\mathrm{mod}\
2)$.
- If $o_r\geq e_r+1$, then $o_r^\prime=o_r+1\geq
e_r+2>e_r^\prime+1$ and $$\begin{aligned}
\kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+o_r^\prime=o_s+2e_s+o_r=\kappa(\mathcal{C}).\end{aligned}$$
- If $o_r\leq e_r-1$, then $o_r^\prime=o_r+1\leq
e_r=e_r^\prime+1$ and $$\begin{aligned}
\kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+e_r^\prime+1=o_s+2e_s+e_r-1<\kappa(\mathcal{C}).\end{aligned}$$
Hence the proof is complete.
\[varyzero5-8\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(o,o)}
\atop\rule{0pt}{10pt} {(\emptyset,o)}}}\right] \rightarrow
\left[\tiny{{{(\emptyset,e)} \atop\rule{0pt}{10pt}
{(o,\emptyset)}}}\right]\right)$, then $\kappa({\mathcal{C}'})\leq
\kappa({\mathcal{C}})$.
In this case we have $$o_s^\prime=o_s-1, \quad e_s^\prime=e_s, \quad
o_r^\prime=o_r-1,\quad e_r^\prime=e_r+1.$$ There are two possibilities:
**Case I.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu)
\equiv 0\ (\mathrm{mod}\ 2)$.
- If $o_r\geq e_r+2$, then $o_r^\prime=o_r-1\geq
e_r+1=e_r^\prime$ and $$\begin{aligned}
\kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+o_r^\prime
=o_s-1+2e_s+o_r-1<\kappa(\mathcal{C}).\end{aligned}$$
- If $o_r\leq e_r$, then $o_r^\prime=o_r-1\leq
e_r-1<e_r^\prime$ and $$\begin{aligned}
\kappa(\mathcal{C}')=o_s-1+2e_s+e_r^\prime=o_s+2e_s+e_r=\kappa(\mathcal{C}).\end{aligned}$$
**Case II.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 1\ (\mathrm{mod}\
2)$.
- If $o_r\geq e_r+3$, then $o_r^\prime=o_r-1\geq
e_r+2=e_r^\prime+1$ and $$\begin{aligned}
\kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+o_r^\prime=o_s+2e_s+o_r-2<
\kappa(\mathcal{C}).\end{aligned}$$
- If $o_r\leq e_r+1$, then $o_r^\prime=o_r-1\leq
e_r<e_r^\prime+1$ and $$\begin{aligned}
\kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+e_r^\prime+1=o_s+2e_s+e_r+1=\kappa(\mathcal{C}).\end{aligned}$$
Therefore, in both cases we have $\kappa({\mathcal{C}'})\leq
\kappa({\mathcal{C}})$.
\[varyzero2-3\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(e,o)}
\atop\rule{0pt}{10pt} {(o,o)}}}\right] \rightarrow
\left[\tiny{{{(o,e)} \atop\rule{0pt}{10pt}
{(e,\emptyset)}}}\right]\right)$ or $\mathcal{C}'=\mathcal{C}\left(
\left[\tiny{{{(o,o)} \atop\rule{0pt}{10pt} {(e,o)}}}\right]
\rightarrow \left[\tiny{{{(e,e)} \atop\rule{0pt}{10pt}
{(o,\emptyset)}}}\right]\right)$, then $\kappa({\mathcal{C}'})=
\kappa({\mathcal{C}})$.
In each case we have $$o_s^\prime=o_s-2, \quad e_s^\prime=e_s+1, \quad
o_r^\prime=o_r,\quad e_r^\prime=e_r.$$ Therefore $$\kappa(\mathcal{C}')=o_s'+2e_s'+\max(o_r',e_r'+((e_r'+o_r')\
\mathrm{mod}\ 2))=\kappa(\mathcal{C}),$$ as desired.
\[varyzero1-7\] If $\mathcal{C}'$ is one of the following possible cases: $$\begin{array}{ccc}
\mathcal{C}\left( \left[\tiny{{{(e,e)} \atop \rule{0pt}{10pt}
{(e,e)}}}\right] \rightarrow \left[\tiny{{{(e,e)} \atop
\rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right), &
\mathcal{C}\left( \left[\tiny{{{(e,e)} \atop \rule{0pt}{10pt}
{(o,o)}}}\right] \rightarrow \left[\tiny{{{(o,o)}
\atop\rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right), &
\mathcal{C}\left( \left[\tiny{{{(e,o)} \atop \rule{0pt}{10pt}
{(e,e)}}}\right] \rightarrow \left[\tiny{{{(e,o)} \atop
\rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right),\\[10pt]
\mathcal{C}\left( \left[\tiny{{{(e,e)} \atop\rule{0pt}{10pt}
{(\emptyset,e)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,e)}
\atop\rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right), &
\mathcal{C}\left( \left[\tiny{{{(o,o)} \atop \rule{0pt}{10pt}
{(o,e)}}}\right] \rightarrow \left[\tiny{{{(o,o)} \atop
\rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right), &
\mathcal{C}\left( \left[\tiny{{ {(o,e)} \atop\rule{0pt}{10pt}
{(e,o)}}}\right] \rightarrow \left[\tiny{{{(e,o)}
\atop\rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right),\\[10pt]
\mathcal{C}\left( \left[\tiny{{{(o,e)} \atop\rule{0pt}{10pt}
{(o,e)}}}\right] \rightarrow \left[\tiny{{{(o,e)}
\atop\rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right), &
\mathcal{C}\left( \left[\tiny{{{(o,e)} \atop\rule{0pt}{10pt}
{(\emptyset,o)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,o)}
\atop \rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right),&
\end{array}$$ then $\kappa({\mathcal{C}'})< \kappa({\mathcal{C}})$.
In each case we have $$o_s^\prime=o_s, \quad e_s^\prime=e_s-1, \quad
o_r^\prime=o_r,\quad e_r^\prime=e_r.$$ Therefore $$\kappa(\mathcal{C}')=o_s'+2e_s'+\max(o_r',e_r'+((e_r'+o_r')\
\mathrm{mod}\ 2))<\kappa(\mathcal{C}),$$ as required.
Note that Lemmas \[varyzero1-1\]-\[varyzero1-7\] cover all possible transformations that exchange the locations of $0$’s between two rows. Lemmas \[varyzero1-6\]-\[varyzero1-4\] imply that, to minimize the number of bars, we should place the $0$’s in the skew shifted diagram so that as many rows as possible have their first several squares filled with $0$’s followed by an odd number of blank squares. Meanwhile, from Lemmas \[varyzero3-5\]-\[varyzero1-7\] we know that the number of rows completely filled with $0$’s should be as large as possible. Based on these observations, we have the following algorithm to determine the locations of the $0$’s for a given skew partition $\lambda/\mu$, where both $\lambda$ and $\mu$ are strict partitions. Using this algorithm we obtain a shifted diagram with some squares filled with $0$’s for which the quantity $\kappa(\mathcal{C})$ is minimized. This property allows us to determine the srank of $\lambda/\mu$.
[**The Algorithm for Determining the Locations of $0$’s:**]{}
- Let $\mathcal{C}_1=S(\lambda)$ be the initial configuration of $\lambda/\mu$ with blank squares. Set $i=1$ and $J=\{1,\ldots,\ell(\lambda)\}$.
- For $i\leq \ell(\mu)$, iterate the following procedure:
- If $\mu_i=\lambda_j$ for some $j\in J$, then we fill the $j$-th row of $\mathcal{C}_i$ with $0$.
- If $\mu_i\neq \lambda_j$ for any $j\in J$, then there are two possibilities.
- $\lambda_j-\mu_i$ is odd for some $j\in J$ and $\lambda_j>\mu_i$. Then we take the largest such $j$ and fill the leftmost $\mu_i$ squares with $0$ in the $j$-th row of $\mathcal{C}_i$.
- $\lambda_j-\mu_i$ is even for every $j\in J$ with $\lambda_j>\mu_i$. Then we take the largest such $j$ and fill the leftmost $\mu_i$ squares with $0$ in the $j$-th row of $\mathcal{C}_i$.
Denote the new configuration by $\mathcal{C}_{i+1}$. Set $J=J\backslash \{j\}$.
- Set $\mathcal{C}^{*}=\mathcal{C}_{i}$, and we get the desired configuration.
It should be emphasized that although the above algorithm does not necessarily generate a bar tableau, it is sufficient for the computation of the srank of a skew partition.
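The algorithm admits a direct transcription. The Python sketch below uses our own encoding (partitions as tuples of parts in decreasing order, a configuration as (length, zeros) pairs, with `kappa` restating the formula from the beginning of this section) to compute the configuration $\mathcal{C}^{*}$ and the resulting value $\kappa(\mathcal{C}^{*})$:

```python
def kappa(rows):
    """kappa(C) for a configuration encoded as (length, zeros) pairs."""
    o_r = sum(1 for n, z in rows if z == 0 and n % 2 == 1)
    e_r = sum(1 for n, z in rows if z == 0 and n % 2 == 0)
    o_s = sum(1 for n, z in rows if z > 0 and (n - z) % 2 == 1)
    e_s = sum(1 for n, z in rows if 0 < z < n and (n - z) % 2 == 0)
    return o_s + 2 * e_s + max(o_r, e_r + (e_r + o_r) % 2)

def place_zeros(lam, mu):
    """Run the algorithm: place each part of mu as a run of 0's in a row of S(lam)."""
    zeros = [0] * len(lam)
    J = set(range(len(lam)))              # indices of rows not yet used
    for m in mu:
        exact = [j for j in J if lam[j] == m]
        if exact:
            j = exact[0]                  # mu_i = lam_j: fill the whole row
        else:
            odd = [j for j in J if lam[j] > m and (lam[j] - m) % 2 == 1]
            cand = odd or [j for j in J if lam[j] > m]
            j = max(cand)                 # largest such j, i.e. the shortest admissible row
        zeros[j] = m                      # leftmost m squares of row j get 0's
        J.discard(j)
    return list(zip(lam, zeros))

def srank(lam, mu=()):
    return kappa(place_zeros(lam, mu))
```

With this, `srank((4, 3), (3,))` returns $2$, in agreement with the example for $Q_{(4,3)/3}$.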
Using the arguments in the proofs of Lemmas \[varyzero1-1\]-\[varyzero1-7\], we can derive the following crucial property of the configuration $\mathcal{C}^*$. The proof is omitted since it is tedious and straightforward.
\[prop-min\] For any configuration ${\mathcal{C}}$ of $0$’s in the skew shifted diagram of $\lambda/\mu$, we have $\kappa({\mathcal{C}^*})\leq
\kappa({\mathcal{C}})$.
\[number of skew\] Given a skew partition $\lambda/\mu$, let $\mathcal{C}^*$ be the configuration of $0$’s obtained by applying the algorithm described above. Then $$\label{srank}
{\rm srank}(\lambda/\mu)=\kappa({\mathcal{C}^*}).$$
Suppose that for the configuration ${\mathcal{C}^*}$ there are $o_r^*$ rows of odd size with blank squares, and there are $o_s^*$ rows with at least one square filled with $0$ and an odd number of squares filled with positive integers. Likewise we let $e_r^*$ and $e_s^*$ denote the number of remaining rows. Therefore, $$\kappa(\mathcal{C}^*)=o_s^*+2e_s^*+\max(o_r^*,e_r^*+((e_r^*+o_r^*)\ \mathrm{mod}\ 2)).$$ Since for each configuration $\mathcal{C}$ the number of bars in a minimal bar tableau is greater than or equal to $\kappa({\mathcal{C}})$, by Proposition \[prop-min\], it suffices to confirm the existence of a skew bar tableau, say $T$, with $\kappa({\mathcal{C}^*})$ bars.
Note that it is possible that the configuration ${\mathcal{C}^*}$ is not admissible. The key idea of our proof is to move $0$’s in the diagram such that the resulting configuration ${\mathcal{C}'}$ is admissible and $\kappa({\mathcal{C}'})=\kappa({\mathcal{C}^*})$. To achieve this goal, we will use the numbers $\{1,2,\ldots,\kappa({\mathcal{C}^*})\}$ to fill up the blank squares of $\mathcal{C}^*$ guided by the rule that the bars of Type $2$ or Type $3$ will occur before bars of Type $1$.
Let us consider the rows without $0$’s, and there are two possibilities: (A) $o_r^*\geq e_r^*$, (B) $o_r^*<e_r^*$.
In Case (A) we choose a row of even size and a row of odd size, and fill up these two rows with $\kappa({\mathcal{C}^*})$ to generate a bar of Type $3$. Then we continue to choose a row of even size and a row of odd size, and fill up these two rows with $\kappa({\mathcal{C}^*})-1$. Repeat this procedure until all even rows are filled up. Finally, we fill the remaining rows of odd size with $\kappa({\mathcal{C}^*})-e_r^*,
\kappa({\mathcal{C}^*})-e_r^*-1, \ldots,
\kappa({\mathcal{C}^*})-o_r^*+1$ to generate bars of Type $2$.
In Case (B) we choose the row with the $i$-th smallest even size and the row with the $i$-th smallest odd size and fill their squares with the number $\kappa({\mathcal{C}^*})-i+1$ for $i=1,\ldots,o_r^*$. In this way, we obtain $o_r^*$ bars of Type $3$. Now consider the remaining rows of even size without $0$’s. There are two subcases.
- The remaining diagram, obtained by removing the previous $o_r^*$ bars of Type $3$, does not contain any row with only one square. Under this assumption, it is possible to fill the squares of a row of even size with the number $\kappa({\mathcal{C}^*})-o_r^*$ except the leftmost square. This operation will result in a bar of Type $1$. After removing this bar from the diagram, we may combine the leftmost square of the current row with another row of even size, if it exists, to generate a bar of Type $3$. Repeating this procedure until there are no more rows of even size, we obtain a sequence of bars of Type $1$ and Type $3$. Evidently, there is a bar of Type $2$ with only one square. To summarize, we have $\max(o_r^*,e_r^*+((e_r^*+o_r^*)\
\mathrm{mod}\ 2))$ bars.
- The remaining diagram contains a row composed of the unique square filled with $0$. In this case, we will move this $0$ into the leftmost square of a row of even size, see Figure \[case2-2\]. Denote this new configuration by $\mathcal{C}^{\prime}$, and from Lemma \[varyzero5-8\] we see that $\kappa({\mathcal{C}^*})=\kappa({\mathcal{C}^{\prime}})$. If we start with ${\mathcal{C}'}$ instead of ${\mathcal{C}^*}$, by a similar construction, we get $\max(o_r',e_r'+((e_r'+o_r')\
\mathrm{mod}\ 2))$ bars, occupying the rows without $0$’s in the diagram.
(Figure \[case2-2\]: the $0$ occupying a single-square row is moved into the leftmost square of a row of even size.)
Without loss of generality, we may assume that for the configuration ${\mathcal{C}^*}$ the rows without $0$’s in the diagram have been occupied by the bars with the first $\max(o_r^*,e_r^*+((e_r^*+o_r^*)\ \mathrm{mod}\ 2))$ positive integers in the decreasing order, namely, $(\kappa({\mathcal{C}^*}),
\ldots, 2, 1, 0)$. By removing these bars and reordering the remaining rows, we may get a shifted diagram with which we can continue the above procedure to construct a bar tableau.
At this point, it is necessary to show that it is possible to use $o_s^*+2e_s^*$ bars to fill this diagram. In doing so, we process the rows from bottom to top. If the bottom row has an odd number of blank squares, then we simply assign the symbol $o_s^*+2e_s^*$ to these squares to produce a bar of Type $1$. If the bottom row is completely filled with $0$’s, then we continue to deal with the row above the bottom row. Otherwise, we fill the rightmost square of the bottom row with $o_s^*+2e_s^*$ and the remaining squares with $o_s^*+2e_s^*-1$. Suppose that we have filled $i$ rows from the bottom and all the involved bars have been removed from the diagram. Then we consider the $(i+1)$-th row from the bottom. Let $t$ denote the largest number not greater than $o_s^*+2e_s^*$ which has not been used before. If all squares in the $(i+1)$-th row are filled with $0$’s, then we continue to deal with the $(i+2)$-th row. If the number of blank squares in the $(i+1)$-th row is odd, then we fill these squares with $t$. If the number of blank squares in the $(i+1)$-th row is even, then we are left with two cases:
- The rows of the diagram obtained by removing the rightmost square of the $(i+1)$-th row have distinct lengths. In this case, we fill the rightmost square with $t$ and the remaining blank squares of the $(i+1)$-th row with $t-1$.
- The removal of the rightmost square of the $(i+1)$-th row does not result in a bar tableau. Suppose that the $(i+1)$-th row has $m$ squares in total. It can only happen that the row underneath the $(i+1)$-th row has $m-1$ squares and all these squares are filled with $0$’s. By interchanging the location of $0$’s in these two rows, we get a new configuration $\mathcal{C}^{\prime}$, see Figure \[case2’\]. From Lemma \[varyzero2-3\] we deduce that $\kappa({\mathcal{C}^*})=\kappa({\mathcal{C}^{\prime}})$. So we can transform ${\mathcal{C}^*}$ to ${\mathcal{C}'}$ and continue to fill up the $(i+1)$-th row.
(Figure \[case2’\]: the locations of the $0$’s in the two rows are interchanged.)
Finally, we arrive at a shifted diagram whose rows are all filled up. Clearly, for those rows containing at least one $0$ there are $o_s^*+2e_s^*$ bars that are generated in the construction, and for those rows containing no $0$’s there are $\max(o_r^*,e_r^*+((e_r^*+o_r^*)\ \mathrm{mod}\ 2))$ bars that are generated. It has been shown that during the procedure of filling the diagram with nonnegative numbers if the configuration ${\mathcal{C}^*}$ is transformed to another configuration ${\mathcal{C}^{\prime}}$, then $\kappa({\mathcal{C}^\prime})$ remains equal to $\kappa({\mathcal{C}^*})$. Hence the above procedure leads to a skew bar tableau of shape $\lambda/\mu$ with $\kappa({\mathcal{C}^*})$ bars. This completes the proof.
[**Acknowledgments.**]{} This work was supported by the 973 Project, the PCSIRT Project of the Ministry of Education, the Ministry of Science and Technology, and the National Science Foundation of China.
[19]{}
P. Clifford, Algebraic and combinatorial properties of minimal border strip tableaux, Ph.D. Thesis, M.I.T., 2003.
P. Clifford, Minimal bar tableaux, *Ann. Combin.* **9** (2005), 281–291.
P. Clifford and R. P. Stanley, Bottom Schur functions, *Electron. J. Combin.* **11** (2004), Research Paper 67, 16 pp.
A. M. Hamel, Pfaffians and determinants for Schur Q-functions, *J. Combin. Theory Ser. A* **75** (1996), 328–340.
P. Hoffman and J. F. Humphreys, Projective Representations of the Symmetric Groups, Oxford University Press, Oxford, 1992.
J. F. Humphreys, Blocks of projective representations of the symmetric groups, *J. London Math. Soc.* **33** (1986), 441–452.
T. J$\rm{\acute{o}}$zefiak, Characters of projective representations of symmetric groups, *Exposition. Math.* **7** (1989), 193–247.
T. J$\rm{\acute{o}}$zefiak and P. Pragacz, A determinantal formula for skew Schur $Q$-functions, *J. London Math. Soc.* **43** (1991), 76–90.
I. G. Macdonald, Symmetric Functions and Hall Polynomials, 2nd Edition, Oxford University Press, Oxford, 1995.
A. O. Morris, The spin representation of the symmetric group, *Proc. London Math. Soc.* **12** (1962), 55–76.
A. O. Morris, The spin representation of the symmetric group. *Canad. J. Math.* **17** (1965), 543–549.
A. O. Morris, The projective characters of the symmetric group—an alternative proof, *J. London Math. Soc.* **19** (1979), 57–58.
M. L. Nazarov, An orthogonal basis in irreducible projective representations of the symmetric group, *Funct. Anal. Appl.* **22** (1988), 66–68.
M. Nazarov and V. Tarasov, On irreducibility of tensor products of Yangian modules associated with skew Young diagrams, *Duke Math. J.* **112** (2002), 343–378.
B. E. Sagan, Shifted tableaux, Schur $Q$-functions, and a conjecture of R. Stanley, *J. Combin. Theory Ser. A* **45** (1987), 62–103.
I. Schur, Über die Darstellung der symmetrischen und der alternierenden Gruppe durch gebrochene lineare Substitutionen, *J. Reine Angew. Math.* **139** (1911), 155–250.
R. P. Stanley, The rank and minimal border strip decompositions of a skew partition, *J. Combin. Theory Ser. A* **100** (2002), 349–375.
J. R. Stembridge, On symmetric functions and the spin characters of $S_n$, Topics in Algebra, Part 2 (Warsaw, 1988), 433–453, Banach Center Publ., 26, Part 2, PWN, Warsaw, 1990.
J. R. Stembridge, Shifted tableaux and the projective representations of symmetric groups, *Adv. Math.* **74** (1989), 87–134.
J. R. Stembridge, Nonintersecting paths, Pfaffians and plane partitions, *Adv. Math.* **83** (1990), 96–131.
J. R. Stembridge, The SF Package for Maple, http://www.math.lsa.umich.edu/\~jrs/maple.html \#SF.
D. Worley, A theory of shifted Young tableaux, Ph.D. Thesis, Massachusetts Inst. Tech., Cambridge, Mass., 1984.
package org.jetbrains.dokka.base.transformers.documentables
import org.jetbrains.dokka.model.*
import org.jetbrains.dokka.plugability.DokkaContext
import org.jetbrains.dokka.transformers.documentation.PreMergeDocumentableTransformer
import org.jetbrains.dokka.transformers.documentation.perPackageOptions
import org.jetbrains.dokka.transformers.documentation.source
import org.jetbrains.dokka.transformers.documentation.sourceSet
import java.io.File
/**
 * Drops documentables that are suppressed: packages whose per-package options
 * set `suppress = true`, and declarations whose source file lies under one of
 * the source set's `suppressedFiles` paths.
 */
class SuppressedDocumentableFilterTransformer(val context: DokkaContext) : PreMergeDocumentableTransformer {
override fun invoke(modules: List<DModule>): List<DModule> {
return modules.mapNotNull(::filterModule)
}
private fun filterModule(module: DModule): DModule? {
val packages = module.packages.mapNotNull { pkg -> filterPackage(pkg) }
return when {
packages == module.packages -> module
packages.isEmpty() -> null
else -> module.copy(packages = packages)
}
}
private fun filterPackage(pkg: DPackage): DPackage? {
val options = perPackageOptions(pkg)
if (options?.suppress == true) {
return null
}
val filteredChildren = pkg.children.filterNot(::isSuppressed)
return when {
filteredChildren == pkg.children -> pkg
filteredChildren.isEmpty() -> null
else -> pkg.copy(
functions = filteredChildren.filterIsInstance<DFunction>(),
classlikes = filteredChildren.filterIsInstance<DClasslike>(),
typealiases = filteredChildren.filterIsInstance<DTypeAlias>(),
properties = filteredChildren.filterIsInstance<DProperty>()
)
}
}
private fun isSuppressed(documentable: Documentable): Boolean {
if (documentable !is WithSources) return false
val sourceFile = File(source(documentable).path).absoluteFile
return sourceSet(documentable).suppressedFiles.any { suppressedFile ->
sourceFile.startsWith(suppressedFile.absoluteFile)
}
}
}
---
abstract: 'The $f(R)$ gravity models formulated in Einstein conformal frame are equivalent to Einstein gravity together with a minimally coupled scalar field. We shall explore phantom behavior of $f(R)$ models in this frame and compare the results with those of the usual notion of phantom scalar field.'
---
${\bf Yousef~Bisabr}$[^1]
Introduction
============
There is strong observational evidence that the expansion of the universe is accelerating. This evidence is based on type Ia supernovae [@super], cosmic microwave background radiation [@cmbr], large scale structure surveys [@ls] and weak lensing [@wl]. There are two classes of models that aim at explaining this phenomenon: In the first class, one modifies the laws of gravity whereby a late-time acceleration is produced. A family of these modified gravity models is obtained by replacing the Ricci scalar $R$ in the usual Einstein-Hilbert Lagrangian density by some function $f(R)$ [@carro] [@sm]. In the second class, one invokes a new matter component usually referred to as dark energy. This component is described by an equation of state parameter $\omega \equiv \frac{p}{\rho}$, namely the ratio of the homogeneous dark energy pressure to the energy density. For a cosmic speed-up, one should have $\omega < -\frac{1}{3}$, which corresponds to an exotic pressure $p<-\rho/3$. Recent analyses of the latest and most reliable dataset (the Gold dataset [@gold]) have indicated that significantly better fits are obtained by allowing a redshift dependent equation of state parameter [@data]. In particular, these observations favor models that allow the equation of state parameter to cross the line $\omega=-1$, the phantom divide line (PDL), in the near past. It is therefore important to construct dynamical models that provide a redshift dependent equation of state parameter and allow for crossing the phantom barrier.\
Most simple models of this kind employ a scalar field coupled minimally to curvature with negative kinetic energy, which is referred to as a phantom field [@ph] [@caldwell]. In contrast to these models, one may consider models which exhibit phantom behavior due to curvature corrections to the gravitational equations rather than introducing exotic matter systems. Recently, there have been a number of attempts to find phantom behavior in $f(R)$ gravity models. It is shown that one may realize crossing the PDL in this framework without recourse to any extra component relating to matter degrees of freedom with exotic behavior [@o] [@n]. Following these attempts, we intend to explore phantom behavior in some $f(R)$ gravity models which have a viable cosmology, i.e. a matter-dominated epoch followed by a late-time acceleration. In contrast to [@n], we shall consider $f(R)$ gravity models in the Einstein conformal frame. It should be noted that the mathematical equivalence of the Jordan and Einstein conformal frames does not generally imply that they are also physically equivalent. In fact, it has been shown that some physical systems can be interpreted differently in different conformal frames [@soko] [@no]. The physical status of the two conformal frames is an open question which we are not going to address here. Our motivation for working in the Einstein conformal frame is that in this frame, $f(R)$ models consist of Einstein gravity plus an additional dynamical degree of freedom, the scalar partner of the metric tensor. This suggests that it is this scalar degree of freedom which drives late-time acceleration in cosmologically viable $f(R)$ models. We compare this scalar degree of freedom with the usual notion of a phantom scalar field. We shall show that the behavior of this scalar field in $f(R)$ models which allow crossing the PDL is similar to that of a quintessence field with a negative potential rather than a phantom field with a wrong-sign kinetic term.
Phantom as a Minimally Coupled Scalar Field
===========================================
The simplest class of models that provides a redshift dependent equation of state parameter is a scalar field minimally coupled to gravity whose dynamics is determined by a properly chosen potential function $V(\varphi)$. Such models are described by the Lagrangian density [^2] $$L=\frac{1}{2}\sqrt{-g}(R-\alpha ~g^{\mu\nu}\partial_{\mu}\varphi
\partial_{\nu}\varphi-2V(\varphi))
\label{a1}$$ where $\alpha=+1$ for quintessence and $\alpha=-1$ for a phantom. The distinguishing feature of the phantom field is that its kinetic term enters (\[a1\]) with the opposite sign compared to quintessence or ordinary matter. The Einstein field equations which follow from (\[a1\]) are $$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=T_{\mu\nu}
\label{a2}$$ with $$T_{\mu\nu}=\alpha~\partial_{\mu}\varphi
\partial_{\nu}\varphi-\frac{1}{2}\alpha~g_{\mu\nu}
\partial_{\gamma}\varphi \partial^{\gamma}\varphi-g_{\mu\nu} V(\varphi)
\label{a3}$$ In a homogeneous and isotropic spacetime, $\varphi$ is a function of time alone. In this case, one may compare (\[a3\]) with the stress tensor of a perfect fluid with energy density $\rho_{\varphi}$ and pressure $p_{\varphi}$. This leads to the following identifications $$\rho_{\varphi}=\frac{1}{2}\alpha
\dot{\varphi}^2+V(\varphi)~,~~~~~p_{\varphi}=\frac{1}{2}\alpha
\dot{\varphi}^2-V(\varphi) \label{a4}$$ The equation of state parameter is then given by $$\omega_{\varphi}=\frac{\frac{1}{2}\alpha
\dot{\varphi}^2-V(\varphi)}{\frac{1}{2}\alpha
\dot{\varphi}^2+V(\varphi)} \label{a5}$$ In the case of a quintessence (phantom) field with $V(\varphi)>0$ ($V(\varphi)<0$) the equation of state parameter remains in the range $-1<\omega_{\varphi}<1$. In the limit of small kinetic term (slow-roll potentials [@slow]), it approaches $\omega_{\varphi}=-1$ but does not cross this line. The phantom barrier can be crossed by either a phantom field ($\alpha<0$) with $V(\varphi)>0$ or a quintessence field ($\alpha>0$) with $V(\varphi)<0$, when we have $2|V(\varphi)|>\dot{\varphi}^2$. This situation corresponds to $$\rho_{\varphi}>0~~~~~,~~~~~p_{\varphi}<0~~~~~,~~~~~V(\varphi)>0~~~~~~~~~~~~~~~phantom
\label{a51}$$ $$\rho_{\varphi}<0~~~~~,~~~~~p_{\varphi}>0~~~~~,~~~~~V(\varphi)<0~~~~~~~~~~quintessence\label{a52}$$ Here it is assumed that the scalar field has a canonical kinetic term $\pm \frac{1}{2}\dot{\varphi}^2$. It is shown [@vik] that any minimally coupled scalar field with a generalized kinetic term (k-essence Lagrangian [@k]) cannot lead to crossing the PDL through a stable trajectory. However, there are models that employ Lagrangians containing multiple fields [@multi] or scalar fields with non-minimal coupling [@non] which in principle can achieve crossing the barrier.\
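These crossing conditions can be checked with a few lines of arithmetic. The sketch below (illustrative only; the field values are arbitrary) evaluates the equation of state parameter (\[a5\]) for the three cases discussed above:

```python
# Equation of state parameter of a minimally coupled scalar field, eq. (a5):
# omega = (alpha/2*phidot^2 - V) / (alpha/2*phidot^2 + V),
# with alpha = +1 for quintessence and alpha = -1 for a phantom.
def omega_phi(phidot, V, alpha=+1):
    kinetic = 0.5 * alpha * phidot**2
    return (kinetic - V) / (kinetic + V)

# Quintessence with V > 0 stays inside -1 < omega < 1:
assert -1 < omega_phi(phidot=0.3, V=1.0, alpha=+1) < 1

# Phantom (alpha = -1) with V > 0 and 2V > phidot^2 crosses the PDL, case (a51):
assert omega_phi(phidot=0.3, V=1.0, alpha=-1) < -1

# Quintessence with V < 0 and 2|V| > phidot^2 also crosses the PDL, case (a52):
assert omega_phi(phidot=0.3, V=-1.0, alpha=+1) < -1
```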
Some remarks are in order with respect to the negative potential $V(\varphi)<0$ appearing in (\[a52\]). In fact, the role of negative potentials in cosmological dynamics has recently been investigated by several authors [@neg]. One of the important points about cosmological models containing such potentials is that they predict that the universe may end in a singularity even if it is not closed. To clarify this, consider a model containing different kinds of energy densities such as matter, radiation, scalar fields and so on. The Friedmann equation in a flat universe is $H^2 \propto
\rho_{t}$ with $\rho_{t}=\Sigma_{i}\rho_{i}$ being the sum of all energy densities. It is clear that the universe expands forever if $\rho_{t}>0$. However, if the contribution of some kind of energy is negative so that $\rho_{i}<0$, then it is possible to have $H^2=0$ at finite time and the size of the universe starts to decrease [^3]. We will return to this issue in the context of $f(R)$ gravity models in the next section.\
The possible existence of a fluid with supernegative pressure ($\omega<-1$) leads to problems such as vacuum instability and violation of energy conditions [@carroll]. For a perfect fluid with energy density $\rho$ and pressure $p$, the weak energy condition requires that $\rho\geq 0$ and $\rho+p \geq 0$. These state that the energy density is positive and the pressure is not too large compared to the energy density. The null energy condition $\rho+p\geq 0$ is a special case of the latter and implies that the energy density can be negative if there is a compensating positive pressure. The strong energy condition, as a hallmark of general relativity, states that $\rho+p \geq 0$ and $\rho+3p\geq 0$. It implies the null energy condition and excludes excessively large negative pressures. The null dominant energy condition is the statement that $\rho\geq |p|$. The physical motivation of this condition is to prevent vacuum instability or propagation of energy outside the light cone. Applied to an equation of state $p=\omega \rho$ with constant $\omega$, it means that $\omega \geq -1$. Violation of all these reasonable constraints by a phantom gives an unusual feature to this principal energy component of the universe. There are, however, some remarks concerning how these unusual features may be circumvented [@carroll] [@mc].
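For reference, the four conditions just listed can be collected into a small helper (a minimal sketch; the sample fluids below are arbitrary illustrative values):

```python
# The four energy conditions listed above, for a perfect fluid (rho, p).
def energy_conditions(rho, p):
    return {
        "weak":          rho >= 0 and rho + p >= 0,
        "null":          rho + p >= 0,
        "strong":        rho + p >= 0 and rho + 3 * p >= 0,
        "null_dominant": rho >= abs(p),
    }

# A phantom fluid (omega = p/rho < -1) violates all four conditions:
assert not any(energy_conditions(1.0, -1.2).values())

# Ordinary dust (p = 0) satisfies all of them:
assert all(energy_conditions(1.0, 0.0).values())
```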
$f(R)$ Gravity
==============
Let us consider an $f(R)$ gravity model described by the action $$S=\frac{1}{2} \int d^{4}x \sqrt{-g}~ f(R) + S_{m}(g_{\mu\nu},
\psi)\label{b1}$$ where $g$ is the determinant of $g_{\mu\nu}$, $f(R)$ is an unknown function of the scalar curvature $R$ and $S_{m}$ is the matter action depending on the metric $g_{\mu\nu}$ and some matter field $\psi$. It is well-known that these models are equivalent to a scalar field minimally coupled to gravity with an appropriate potential function. In fact, we may use a new set of variables $$\bar{g}_{\mu\nu} =p~ g_{\mu\nu} \label{b2}$$ $$\phi = \frac{1}{2\beta} \ln p
\label{b3}$$ where $p\equiv\frac{df}{dR}=f^{'}(R)$ and $\beta=\sqrt{\frac{1}{6}}$. This is indeed a conformal transformation which transforms the above action in the Jordan frame to the Einstein frame [@soko] [@maeda] [@wands] $$S=\frac{1}{2} \int d^{4}x \sqrt{-g}~\{ \bar{R}-\bar{g}^{\mu\nu}
\partial_{\mu} \phi~ \partial_{\nu} \phi -2V(\phi)\} + S_{m}(\bar{g}_{\mu\nu}
e^{2\beta \phi}, \psi) \label{b4}$$ In the Einstein frame, $\phi$ is a minimally coupled scalar field with a self-interacting potential which is given by $$V(\phi(R))=\frac{Rf'(R)-f(R)}{2f'^2(R)} \label{b5}$$ Note that the conformal transformation induces the coupling of the scalar field $\phi$ with the matter sector. The strength of this coupling, $\beta$, is fixed to be $\sqrt{\frac{1}{6}}$ and is the same for all types of matter fields.\
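As a quick numerical check of (\[b5\]), one can verify that for a power-law choice $f(R)=R+\lambda R^{n}$ the numerator reduces to $\lambda(n-1)R^{n}$. The values $\lambda=-1$, $n=0.5$ below are arbitrary sample parameters used only for the check:

```python
# Einstein-frame potential, eq. (b5): V(phi(R)) = (R f'(R) - f(R)) / (2 f'(R)^2),
# evaluated for the sample power-law model f(R) = R + lam*R**n.
lam, n = -1.0, 0.5

def f(R):  return R + lam * R**n
def fp(R): return 1 + lam * n * R**(n - 1)          # f'(R)

def V(R):
    return (R * fp(R) - f(R)) / (2 * fp(R)**2)

# For this f(R) the numerator is R f'(R) - f(R) = lam*(n-1)*R**n:
for R in (1.0, 5.0, 20.0):
    expected = lam * (n - 1) * R**n / (2 * fp(R)**2)
    assert abs(V(R) - expected) < 1e-12
```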
Variation of the action (\[b4\]) with respect to $\bar{g}_{\mu\nu}$, gives the gravitational field equations $$\bar{G}_{\mu\nu}=T^{\phi}_{\mu\nu}+\bar{T}^{m}_{\mu\nu}
\label{b6}$$ where $$\bar{T}^{m}_{\mu\nu}=\frac{-2}{\sqrt{-g}}\frac{\delta
S_{m}}{\delta \bar{g}^{\mu\nu}}\label{b7}$$ $$T^{\phi}_{\mu\nu}=\partial_{\mu} \phi~\partial_{\nu} \phi
-\frac{1}{2}\bar{g}_{\mu\nu} \partial_{\gamma}
\phi~\partial^{\gamma} \phi-V(\phi) \bar{g}_{\mu\nu}
\label{b8}$$ Here $\bar{T}^{m}_{\mu\nu}$ and $T^{\phi}_{\mu\nu}$ are stress tensors of the matter system and the minimally coupled scalar field $\phi$, respectively. Comparing (\[a3\]) and (\[b8\]) indicates that $\alpha=1$ and $\phi$ appears as a normal scalar field. Thus the equation of state parameter which corresponds to $\phi$ is given by $$\omega_{\phi} \equiv
\frac{p_{\phi}}{\rho_{\phi}}=\frac{\frac{1}{2}
\dot{\phi}^2-V(\phi)}{\frac{1}{2} \dot{\phi}^2+V(\phi)}
\label{b9}$$ Inspection of (\[b9\]) reveals that for $\omega_{\phi}<-1$, we should have $V(\phi)<0$ and $|V(\phi)|>\frac{1}{2}\dot{\phi}^2$ which corresponds to (\[a52\]). In explicit terms, crossing the PDL in this case requires that $\phi$ appear as a quintessence (rather than a phantom) field with a negative potential.\
Here the scalar field $\phi$ has a geometric nature and is related to the curvature scalar by (\[b3\]). One may therefore use (\[b3\]) and (\[b5\]) in the expression (\[b9\]) to obtain $$\omega_{\phi}=\frac{3\dot{R}^2
f''^2(R)-\frac{1}{2}(Rf'(R)-f(R))}{3\dot{R}^2
f''^2(R)+\frac{1}{2}(Rf'(R)-f(R))} \label{b10}$$ which is an expression relating $\omega_{\phi}$ to the function $f(R)$. It is now possible to use (\[b10\]) to find the functional forms of $f(R)$ that fulfill $\omega_{\phi}<-1$. In general, to find such $f(R)$ gravity models one may start with a particular $f(R)$ function in the action (\[b1\]) and solve the corresponding field equations to find the form of $H(z)$. One can then use this function in (\[b10\]) to obtain $\omega_{\phi}(z)$. However, this approach is not efficient in view of the complexity of the field equations. An alternative approach is to start from the best fit parametrization $H(z)$ obtained directly from data and use this $H(z)$ for a particular $f(R)$ function in (\[b10\]) to find $\omega_{\phi}(z)$. We will follow the latter approach to find $f(R)$ models that provide crossing of the phantom barrier.\
We begin with the Hubble parameter $H\equiv \frac{\dot{a}}{a}$. Its derivative with respect to cosmic time $t$ is $$\dot{H}=\frac{\ddot{a}}{a}-(\frac{\dot{a}}{a})^2
\label{b11}$$ where $a(t)$ is the scale factor of the Friedman-Robertson-Walker (FRW) metric. Combining this with the definition of the deceleration parameter $$q(t)=-\frac{\ddot{a}}{aH^2} \label{b12}$$ gives $$\dot{H}=-(q+1)H^2 \label{b13}$$ One may use $z=\frac{a(t_{0})}{a(t)}-1$ with $z$ being the redshift, and the relation (\[b12\]) to write (\[b13\]) in its integration form
$$H(z)=H_{0}~exp~[\int_{0}^{z} (1+q(u))d\ln(1+u)]
\label{b14}$$
where the subscript “0” indicates the present value of a quantity. Now if a function $q(z)$ is given, then we can find the evolution of the Hubble parameter. Here we use a two-parameter reconstruction function characterizing $q(z)$ [@wang] [@q], $$q(z)=\frac{1}{2}+\frac{q_{1}z+q_{2}}{(1+z)^2}
\label{b15}$$ where fitting this model to the Gold data set gives $q_{1}=1.47^{+1.89}_{-1.82}$ and $q_{2}=-1.46\pm
0.43$ [@q]. Using this in (\[b14\]) yields $$H(z)=H_{0}(1+z)^{3/2}exp[\frac{q_{2}}{2}+\frac{q_{1}z^2-q_{2}}{2(z+1)^2}]
\label{b16}$$ In a spatially flat FRW spacetime $R=6(\dot{H}+2H^2)$ and therefore $\dot{R}=6(\ddot{H}+4\dot{H}H)$. In terms of the deceleration parameter we have $$R=6(1-q)H^2 \label{b17}$$and $$\dot{R}=6H^3 \{2(q^2-1)-\frac{\dot{q}}{H}\}
\label{b18}$$ the latter being equivalent to $$\dot{R}=6H^3 \{2(q^2-1)+(1+z)\frac{dq}{dz}\}
\label{b19}$$ It is now possible to use (\[b15\]) and (\[b16\]) for finding $R$ and $\dot{R}$ in terms of the redshift. Then for a given $f(R)$ function, the relation (\[b10\]) determines the evolution of the equation of state parameter $\omega_{\phi}(z)$.\
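The reconstruction pipeline just described, (\[b15\]) $\rightarrow$ (\[b16\]) $\rightarrow$ (\[b17\]), (\[b19\]) $\rightarrow$ (\[b10\]), can be sketched numerically as follows. We work in units with $H_{0}=1$ and take the central Gold-dataset values $q_{1}=1.47$, $q_{2}=-1.46$ quoted in the text; the derivative functions `fp`, `fpp` must be supplied for the chosen $f(R)$:

```python
import numpy as np

# Best-fit deceleration parameter, eq. (b15), with the Gold-dataset
# central values quoted in the text.
q1, q2 = 1.47, -1.46

def q(z):
    return 0.5 + (q1 * z + q2) / (1 + z)**2

def dq_dz(z):
    # derivative of (b15) with respect to z
    return (q1 * (1 + z) - 2 * (q1 * z + q2)) / (1 + z)**3

def H(z):
    # closed-form Hubble parameter, eq. (b16), in units with H0 = 1
    return (1 + z)**1.5 * np.exp(q2 / 2 + (q1 * z**2 - q2) / (2 * (1 + z)**2))

def R(z):
    return 6 * (1 - q(z)) * H(z)**2                                   # eq. (b17)

def Rdot(z):
    return 6 * H(z)**3 * (2 * (q(z)**2 - 1) + (1 + z) * dq_dz(z))     # eq. (b19)

def omega_phi(z, f, fp, fpp):
    """Equation of state parameter, eq. (b10), for a given f(R)."""
    r, rd = R(z), Rdot(z)
    kin = 3 * rd**2 * fpp(r)**2
    pot = 0.5 * (r * fp(r) - f(r))
    return (kin - pot) / (kin + pot)

# Example: the power-law model f(R) = R + lam*R**n with lam = -1, n = 0.5
lam, n = -1.0, 0.5
w0 = omega_phi(0.0,
               f=lambda r: r + lam * r**n,
               fp=lambda r: 1 + lam * n * r**(n - 1),
               fpp=lambda r: lam * n * (n - 1) * r**(n - 2))
```

For these particular sample parameters, $H(0)=1$ by construction and $\omega_{\phi}(0)$ comes out negative but above $-1$, i.e. no phantom crossing for this choice.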
As an illustration we apply this procedure to some $f(R)$ functions. Let us first consider the model [@cap] [@A] $$f(R)=R+\lambda R^n \label{b20}$$ in which $\lambda$ and $n$ are constant parameters. In terms of the values attributed to these parameters, the model (\[b20\]) is divided into three cases [@A]. Firstly, when $n>1$ there is a stable matter-dominated era which is not followed by an asymptotically accelerated regime. In this case, $n = 2$ corresponds to Starobinsky’s inflation and the accelerated phase exists in the asymptotic past rather than in the future. Secondly, when $0<n<1$ there is a stable matter-dominated era followed by an accelerated phase only for $\lambda<0$. Finally, in the case $n<0$ there are no accelerated and matter-dominated phases for $\lambda>0$ and $\lambda<0$, respectively. Thus the model (\[b20\]) is cosmologically viable in the region of the parameter space given by $\lambda<0$ and $0<n<1$.\
Due to the complexity of the resulting $\omega_{\phi}(z)$ function, we do not write it explicitly here and only plot it in Fig.1a for some parameters. As the figure shows, there is no phantom behavior and $\omega_{\phi}(z)$ remains near the line of the cosmological constant $\omega_{\phi}=-1$. We also plot $\omega_{\phi}$ in terms of $n$ and $\lambda$ for $z=1$ in Fig.1b. The figure shows that $\omega_{\phi}$ remains near unity except for a small region in which $-1\leq \omega_{\phi}<0$, and therefore the PDL is never crossed.\
Now we consider the models presented by Starobinsky [@star] $$f(R)=R-\gamma R_{c} \{1-[1+(\frac{R}{R_{c}})^2]^{-m}\}
\label{b21}$$ and Hu-Sawicki [@hs] $$f(R)=R-\gamma
R_{c}\{\frac{(\frac{R}{R_{c}})^m}{1+(\frac{R}{R_{c}})^m}\}
\label{b22}$$ where $\gamma$, $m$ and $R_{c}$ are positive constants with $R_{c}$ being of the order of the presently observed effective cosmological constant. Using the same procedure, we can obtain the evolution of the equation of state parameter for both models (\[b21\]) and (\[b22\]). We plot the resulting functions in Fig.2. The figures show that while the model (\[b22\]) allows crossing the PDL for the given values of the parameters, in the model (\[b21\]) the equation of state parameter remains near $\omega_{\phi}=-1$. To explore the behavior of the models in a wider range of the parameters, we also plot $\omega_{\phi}$ at redshift $z=1$ in Fig.3.\
It is interesting to consider violation of the energy conditions for the model (\[b22\]), which can exhibit phantom behavior. In Fig.4, we plot some expressions corresponding to the null, weak and strong energy conditions. As indicated in the figures, the model violates the weak and strong energy conditions while it respects the null energy condition for a period of the evolution of the universe. Moreover, Fig.4a indicates that $\rho_{\phi}<0$ for some parameters in terms of which the PDL is crossed. This is in accord with (\[a52\]) and (\[b9\]), which require that, in order to cross the PDL, $\phi$ should be a quintessence field with a negative potential function.
Concluding Remarks
==================
We have studied phantom behavior for some $f(R)$ gravity models in which the late-time acceleration of the universe is realized. Working in the Einstein conformal frame, we separate the scalar degree of freedom which is responsible for the late-time acceleration. Comparing this scalar field with the phantom field, we have made our first observation that the former appears as a minimally coupled quintessence whose dynamics is characterized by a negative potential. The impact of such a negative potential on cosmological dynamics is that it leads to a collapsing universe or a big crunch [@neg]. As a consequence, the $f(R)$ gravity models in which crossing the phantom barrier is realized predict that the universe stops expanding and eventually collapses. This is in contrast to phantom scalar fields, for which the final stage of the universe is a divergence of the scale factor at a finite time, or a big rip [@ph] [@caldwell].\
We have used the reconstruction functions $q(z)$ and $H(z)$ fitted to the Gold data set to find the evolution of the equation of state parameter $\omega_{\phi}(z)$ for some cosmologically viable $f(R)$ models. We obtained the following results:\
\
1) The model (\[b20\]) does not provide crossing of the PDL. It does, however, allow $\omega_{\phi}$ to be negative in a small region of the parameter space. For $n=0$, the expression (\[b20\]) appears as Einstein gravity plus a cosmological constant. This state is indicated in Fig.1b, where the equation of state parameter experiences a sharp decrease to $\omega_{\phi}=-1$.\
\
2) We also do not observe phantom behavior in Starobinsky’s model (\[b21\]). In the region of the parameter space corresponding to $m>0.5$, the equation of state parameter decreases to $\omega_{\phi}=-1$ and the model effectively appears as $\Lambda$CDM.\
\
3) The same analysis is carried out for Hu-Sawicki’s model (\[b22\]). This model exhibits phantom crossing in a small region of the parameter space, as indicated in Fig.2b and Fig.3b. Due to crossing of the PDL in this case, we also examine the energy conditions. We find that, in contrast to the weak and strong energy conditions which are violated, the null energy condition holds for a period of the evolution.\
Although the properties of $\phi$ differ from those of the phantom due to the sign of its kinetic term, violation of energy conditions remains a consequence of crossing the PDL in both cases. However, the scalar field $\phi$ in our case should not be interpreted as exotic matter since it has a geometric nature characterized by (\[b3\]). In fact, taking $\omega_{\phi}<-1$ as a condition in (\[b10\]) just leads to some algebraic relations constraining the explicit form of the $f(R)$ function.
[99]{} A. G. Riess et al., Astron. J. [**116**]{}, 1009 (1998)\
S. Perlmutter et al., Bull. Am. Astron. Soc., [**29**]{}, 1351 (1997)\
S. Perlmutter et al., Astrophys. J., [**517**]{} 565 (1997) L. Melchiorri et al., Astrophys. J. Letts., [**536**]{}, L63 (2000)\
C. B. Netterfield et al., Astrophys. J., [**571**]{}, 604 (2002)\
N. W. Halverson et al., Astrophys. J., [**568**]{}, 38 (2002)\
A. E. Lange et al., Phys. Rev. D [**63**]{}, 042001 (2001)\
A. H. Jaffe et al., Phys. Rev. Lett. [**86**]{}, 3475 (2001) M. Tegmark et al., Phys. Rev. D [**69**]{}, 103501 (2004)\
U. Seljak et al., Phys. Rev. D [**71**]{}, 103515 (2005) B. Jain and A. Taylor, Phys. Rev. Lett. [**91**]{}, 141302 (2003) S. M. Carroll, V. Duvvuri, M. Trodden, M. S. Turner, Phys. Rev. D [**70**]{}, 043528 (2004) S. M. Carroll, A. De Felice, V. Duvvuri, D. A. Easson, M. Trodden and M. S. Turner, Phys. Rev. D [**71**]{}, 063513 (2005)\
G. Allemandi, A. Browiec and M. Francaviglia, Phys. Rev. D [**70**]{}, 103503 (2004)\
X. Meng and P. Wang, Class. Quant. Grav. [**21**]{}, 951 (2004)\
M. E. Soussa and R. P. Woodard, Gen. Rel. Grav. [**36**]{}, 855 (2004)\
S. Nojiri and S. D. Odintsov, Phys. Rev. D [**68**]{}, 123512 (2003)\
P. F. Gonzalez-Diaz, Phys. Lett. B [**481**]{}, 353 (2000)\
K. A. Milton, Grav. Cosmol. [**9**]{}, 66 (2003) A. G. Riess et al., Astrophys. J. [**607**]{}, 665 (2004) U. Alam, V. Sahni, T. D. Saini and A. A. Starobinsky, Mon. Not. Roy. Astron. Soc. [**354**]{}, 275 (2004)\
S. Nesseris and L. Perivolaropoulos, Phys. Rev. D [**70**]{}, 043531 (2004) R. R. Caldwell, Phys. Lett. B [**545**]{}, 23 (2002) R. R. Caldwell, M. Kamionkowski and N. N. Weinberg Phys. Rev. Lett. [**91**]{}, 071301 (2003) K. Bamba, C. Geng, S. Nojiri, S. D. Odintsov, Phys. Rev. D [**79**]{}, 083014 (2009) K. Nozari and T. Azizi, Phys. Lett. B [**680**]{}, 205 (2009) G. Magnano and L. M. Sokolowski, Phys. Rev. D [**50**]{}, 5039 (1994) Y. M. Cho, Class. Quantum Grav. [**14**]{}, 2963 (1997)\
E. Elizalde, S. Nojiri and S. D. Odintsov, Phys. Rev. D [**70**]{}, 043539 (2004)\
S. Nojiri and S. D. Odintsov, Phys. Rev. D [**74**]{}, 086005 (2006)\
S. Capozziello, S. Nojiri, S. D. Odintsov and A. Troisi, Phys. Lett. B [**639**]{}, 135 (2006)\
K. Bamba, C. Q. Geng, S. Nojiri and S. D. Odintsov, Phys. Rev. D [ **79**]{}, 083014 (2009)\
K. Nozari and S. D. Sadatian, Mod. Phys. Lett. A [**24**]{}, 3143 (2009) R. J. Scherrer and A. A. Sen, Phys. Rev. D [**77**]{}, 083515 (2008)\
R. J. Scherrer and A. A. Sen, Phys. Rev. D [**78**]{}, 067303 (2008)\
S. Dutta, E. N. Saridakis and R. J. Scherrer, Phys. Rev. D [**79**]{}, 103005 (2009) A. Vikman, Phys. Rev. D [**71**]{}, 023515 (2005) C. Armendariz-Picon, V. Mukhanov and P. J. Steinhardt, Phys. Rev. D [**63**]{}, 103510 (2001)\
A. Melchiorri, L. Mersini, C. J. Odman and M. Trodden, Phys. Rev. D [**68**]{}, 043509 (2003) R. R. Caldwell and M. Doran, Phys. Rev. D [**72**]{}, 043527 (2005)\
W. Hu, Phys. Rev. D [**71**]{}, 047301 (2005)\
Z. K. Guo, Y. S. Piao, X. M. Zhang and Y. Z. Zhang, Phys. Lett. B [**608**]{}, 177 (2005)\
B. Feng, X. L. Wang and X. M. Zhang, Phys. Lett. B [**607**]{}, 35 (2005)\
B. Feng, M. Li, Y. S. Piao and X. Zhang, Phys. Lett. B [**634**]{}, 101 (2006) L. Perivolaropoulos, JCAP 0510, 001 (2005) A. Linde, JHEP 0111, 052 (2001)\
J. Khoury, B. A. Ovrut, P. J. Steinhardt and N. Turok, Phys. Rev. D [**64**]{}, 123522 (2001)\
P. J. Steinhardt and N. Turok, Phys. Rev. D [**65**]{}, 126003 (2002)\
N. Felder, A.V. Frolov, L. Kofman and A. V. Linde, Phys. Rev. D [**66**]{}, 023507 (2002) A. de la Macorra and G. German, Int. J. Mod. Phys. D [ **13**]{}, 1939 (2004) S. M. Carroll, M. Hoffman and M. Trodden Phys. Rev. D [**68**]{}, 023509 (2004) B. McInnes, JHEP 0208, 029 (2002) K. Maeda, Phys. Rev. D [**39**]{}, 3159 (1989) D. Wands, Class. Quant. Grav. [**11**]{}, 269 (1994) Y.G. Gong and A. Wang, Phys. Rev. D [**73**]{}, 083506 (2006) Y. Gong and A. Wang, Phys. Rev. D [**75**]{}, 043520 (2007) S. Capozziello, V. F. Cardone, S. Carloni and A. Troisi, Int. J. Mod. Phys. D [**12**]{}, 1969 (2003) L. Amendola, R. Gannouji, D. Polarski and S. Tsujikawa, Phys. Rev. D [**75**]{}, 083504 (2007) A. A. Starobinsky, JETP. Lett. [**86**]{}, 157 (2007) W. Hu and I. Sawicki, Phys. Rev. D [**76**]{}, 064004 (2007)
[Figure 1: (a) evolution of $\omega_{\phi}(z)$ for the model $f(R)=R+\lambda R^n$; (b) $\omega_{\phi}$ in terms of $n$ and $\lambda$ at $z=1$.]
[Figure 2: evolution of $\omega_{\phi}(z)$ for (a) Starobinsky's model and (b) Hu-Sawicki's model.]
[Figure 3: $\omega_{\phi}$ at $z=1$ over the parameter space of (a) Starobinsky's model and (b) Hu-Sawicki's model.]
[Figure 4: expressions corresponding to the null, weak and strong energy conditions for Hu-Sawicki's model.]
[^1]: e-mail: y-bisabr@srttu.edu.
[^2]: We use the unit system $8\pi
G=\hbar=c=1$ and the metric signature $(-,+,+,+)$.
[^3]: For a more detailed discussion see, e.g., [@mac].
I've learned the nitrogen vacancies used in Memristors are for "switching" between excited states and inhibited states, akin to our neurons' and SYNAPSES' abilities to generate EPSPs and IPSPs. This is the entire point of Memristors and DARPA's SyNAPSE program: emulating Neurons..
So in the memristor, NVs (which are truly Ancillas),
return to "resting states", just like Neurons do, hence Inhibitory states versus excited states, when a neuron reaches an action potential and fires..
So the ancillas use prepared/ known states, and are the equivalent of the ancillas ground state, which is equal to a neurons resting potential...
So by weakly measuring certain aspects of living neurons, it is possible to superbroadcast/ teleport the wavefunction non-classically to the memristors vacancies, correlating each memristor with its neuron statistical ensemble counterpart, sharing the quantum state of the resting potential.
the ground state of the ancilla.
The type of measurement determines which property is shown. However, the single- and double-slit experiments and others show that some wave and particle effects can be measured in a single measurement.
Hence Mach-Zehnder interferometry, which also involves ANCILLAS
Quote:
When for example measuring a photon using a Mach-Zehnder interferometer, the photon acts as a wave if the second beam-splitter is inserted, but as a particle if this beam-splitter is omitted. The decision of whether or not to insert this beam-splitter can be made after the photon has entered the interferometer, as in Wheeler’s famous delayed-choice thought experiment. In recent quantum versions of this experiment, this decision is controlled by a quantum ancilla, while the beam splitter is itself still a classical object.
and the no-cloning theorem is about pure states..
But an ensemble of particles in a neuron would make it a mixed state..
The no-cloning theorem is normally stated and proven for pure states; the no-broadcast theorem generalizes this result to mixed states.
And thats why PHASE works for quantum metrology and its ability to harness non classical states
Apparently, worrying about measuring both position and momentum works differently for particles than it does for waves.
It may actually be possible using phase.
Quote:
Niels Bohr apparently conceived of the principle of complementarity during a skiing vacation in Norway in February and March 1927, during which he received a letter from Werner Heisenberg regarding the latter's newly discovered (and not yet published) uncertainty principle. Upon returning from his vacation, by which time Heisenberg had already submitted his paper on the uncertainty principle for publication, he convinced Heisenberg that the uncertainty principle was a manifestation of the deeper concept of complementarity.[6] Heisenberg duly appended a note to this effect to his paper on the uncertainty principle, before its publication, stating:
Quote:
Bohr has brought to my attention [that] the uncertainty in our observation does not arise exclusively from the occurrence of discontinuities, but is tied directly to the demand that we ascribe equal validity to the quite different experiments which show up in the [particulate] theory on one hand, and in the wave theory on the other hand.
And "quadratures" is about position and momentum..
Which are apparently always orthogonal to each other.
There is obviously something to all of this.
Counterfactual Communication was recently used to transmit information without sending any PARTICLES.
the information was sent in the phase.. of a wavefunction?
and it used Mach-Zehnder Interferometry..
which is part of Quantum Metrology and its ability to harness non-classical states..
and all of this can teleport non-classical light..
and it all uses ANCILLAS... which store VALUES, and WAVEFUNCTIONS.. because they are Qubits/ Nitrogen vacancies..
and are used in WEAK MEASUREMENT... which was used to measure a wavefunction.. something most would argue is impossible.. because of the uncertainty principle..
Quote:
An interpretation of quantum mechanics can be said to involve the use of counterfactual definiteness if it includes in the statistical population of measurement results, any measurements that are counterfactual because they are excluded by the quantum mechanical impossibility of simultaneous measurement of conjugate pairs of properties.
For example, the Heisenberg uncertainty principle states that one cannot simultaneously know, with arbitrarily high precision, both the position and momentum of a particle
Quote:
The word "counterfactual" does not mean "characterized by being opposed to fact." Instead, it characterizes values that could have been measured but, for one reason or another, were not
and its the Ancillas that store values.. and may or may not be part of the measurement apparatus... / interferometer..
In 2015, Counterfactual Quantum Computation was demonstrated in the experimental context of "spins of a negatively charged Nitrogen-vacancy color center in a diamond".[5] Previously suspected limits of efficiency were exceeded, achieving counterfactual computational efficiency of 85% with the higher efficiency foreseen in principle
Quote:
The quantum computer may be physically implemented in arbitrary ways but the common apparatus considered to date features a Mach–Zehnder interferometer. The quantum computer is set in a superposition of "not running" and "running" states by means such as the Quantum Zeno Effect. Those state histories are quantum interfered. After many repetitions of very rapid projective measurements, the "not running" state evolves to a final value imprinted into the properties of the quantum computer. Measuring that value allows for learning the result of some types of computations such as Grover's algorithm even though the result was derived from the non-running state of the quantum computer.
NV CENTERS can also be used as QUANTUM SPIN PROBES, QUBITS, & AS ANCILLAS
in devices such as
BIOMEMs scanners
QUANTUM REPEATERS
PHOTONIC NETWORKING
and..
MEMRISTORS.. where the vacancies are used for switching between inhibited and excited states, thus simulating NEURONS
MEMRISTORS utilize wavefunctions.
Wavefunctions can be weakly measured by ANCILLAS
ANCILLAS hold "values" ie : wavefunctions
and have GROUND STATES
which measured particles are "cooled" into for measurement techniques. a literal form of "photon counting"..
"This de-excitation is called ‘fluorescence’, and it is characterized by a
lifetime of a few nanoseconds of the lowest vibrational level of the first excited state.
De-excitation from the excited singlet state to the ground state also occurs by other mechanisms, such as non-radiant thermal decay or ‘phosphorescence’. In the latter case, the chromophore undergoes a forbidden transition from the excited singlet state into the triplet state (intersystem crossing, ISC, Fig 2.4), which has a non-zero probability, for example because of spin orbit coupling of the electrons’ magnetic moments"
it's a type of INTERSYSTEM CROSSING
doing a search for Intersystem crossing, memristor brings up this link..
A composite optical microcavity, in which nitrogen vacancy (NV) centers in a diamond nanopillar are coupled to whispering gallery modes in a silica microsphere, is demonstrated. Nanopillars with a diameter as small as 200 nm are fabricated from a bulk diamond crystal by reactive ion etching and are positioned with nanometer precision near the equator of a silica microsphere. The composite nanopillar-microsphere system overcomes the poor controllability of a nanocrystal-based microcavity system and takes full advantage of the exceptional spin properties of NV centers and the ultrahigh quality factor of silica microspheres.
We investigate the construction of two universal three-qubit quantum gates in a hybrid system. The designed system consists of a flying photon and a stationary negatively charged nitrogen-vacancy (NV) center fixed on the periphery of a whispering-gallery-mode (WGM) microresonator, with the WGM cavity coupled to tapered fibers functioning as an add-drop structure. These gate operations are accomplished by encoding the information both on the spin degree of freedom of the electron confined in the NV center and on the polarization and spatial-mode states of the flying photon, respectively
Now Somewhere in this is evidence of a memristor holding a wavefunction
The shown SPICE implementation (macro model) for a charge-controlled memristor model exactly reproduces the results from [2]. However, these simulation results do not comply well - not even qualitatively - with the characteristic form of the I/V curves of manufactured devices. Therefore the following equations (3) to (9) try to approach memristor modeling from a different point of view to get a closer match to the measured curves from [2],[6],[7],[8],[10] or [11], even with a simple linear drift of w.

Besides the charge-steering mechanism of a memristor modelled in [2], [1] also defined a functional relationship for a memristor which explains the memristive behavior in dependence on its magnetic flux:

    i(t) = W(φ(t)) · v(t)    (3)

The variable W(φ) represents the memductance, which is the reciprocal of the memristance M. Here a mechanism is demanded that maps the magnetic flux as the input signal to the current that is flowing through the memristor. The magnetic flux φ is the integral of the voltage v(t) over time: φ = ∫ v(t) dt.

We can assume that an external voltage applied to the previously described two-layer structure has an influence on the movable 2+ dopants over time. The width w(t) of the semiconductor layer depends on the velocity of the dopants v_D(t) via the time integral:

    w(t) = w_0 + ∫_0^t v_D(τ) dτ    (4)

The drift velocity v_D in an electric field E is defined via its mobility µ_D:

    v_D(t) = µ_D · E(t)    (5)

and the electric field E is connected with the voltage via

    E(t) = v(t) / D    (6)

with D denoting the total thickness of the two-layer structure (D = t_OX + t_SEMI). Due to the good conductance of the semiconductor layer, the electric field drops mostly across the time-dependent thickness t_OX of the insulator layer (since v(l) = ∫ E dl); however, this was neglected for reasons of simplification. Combining (4), (5) and (6), we obtain:

    w(t) = w_0 + (µ_D / D) · ∫_0^t v(τ) dτ = w_0 + (µ_D / D) · φ(t)    (7)

This equation shows a proportional dependence of the width w on the magnetic flux φ. Since the thickness of the insulator layer is in the low nanometer region, a tunnel current or an equivalent mechanism is possible. The magnetic flux slightly decreases the thickness of the insulator layer, which is the barrier for the tunnel current. This current rises exponentially with a reduction of the width t_OX(φ) (the exponential dependence is deducible from the quantum mechanical wave function)
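For what it's worth, the linear-drift picture in equations (3)-(7) quoted above can be simulated in a few lines. This is a rough sketch of an HP-style linear dopant-drift memristor; the parameter values are orders of magnitude I picked for illustration, not numbers from the quoted paper:

```python
import numpy as np

# Linear dopant-drift memristor in the spirit of eqs. (3)-(7) above.
# State w in [0, D] is the doped-layer width; the device resistance is
#   M(w) = Ron*(w/D) + Roff*(1 - w/D),   dw/dt = (mu_D*Ron/D) * i(t)
Ron, Roff = 100.0, 16e3        # ohm
D, mu_D   = 10e-9, 1e-14       # m, m^2/(V*s)

def simulate(v, t):
    """Euler-integrate the memristor state under a voltage waveform v(t)."""
    w = 0.5 * D
    dt = t[1] - t[0]
    i_out = np.empty_like(t)
    for k, tk in enumerate(t):
        M = Ron * (w / D) + Roff * (1 - w / D)   # instantaneous memristance
        i_out[k] = v(tk) / M
        w = np.clip(w + mu_D * Ron / D * i_out[k] * dt, 0.0, D)
    return i_out

# Two periods of a 1 Hz sine drive; plotting i against v(t) would show the
# pinched hysteresis loop that is the memristor's fingerprint.
t = np.linspace(0.0, 2.0, 20000)
i = simulate(lambda tk: np.sin(2 * np.pi * tk), t)
```

The "memory" shows up as the state w drifting with the applied flux, so the same voltage produces different currents depending on the drive history.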
which must become the GROUND STATE of the ANCILLA upon non-classical correlation..
because a wavefunction is essentially the "master equation" (which describes wave equations)
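The linear-drift relations quoted above (Eqs. (4)-(7)) can be sketched numerically. All parameter values below are illustrative assumptions, not taken from the cited papers, and the memductance W(φ) is modeled with the standard linear-drift interpolation between two limiting resistances — an assumption for the sketch, not something stated in the quote:

```python
import numpy as np

# Flux-controlled memristor sketch following Eqs. (3)-(7).
# All numbers are illustrative assumptions.
D = 10e-9        # total two-layer thickness D = tOX + tSEMI [m]
w0 = 1e-9        # initial semiconductor-layer width w0 [m]
mu_D = 1e-16     # dopant mobility mu_D [m^2 V^-1 s^-1]
R_on, R_off = 100.0, 16e3   # assumed limiting resistances [ohm]

t = np.linspace(0.0, 1.0, 10001)
dt = t[1] - t[0]
v = np.sin(2.0 * np.pi * t)      # applied voltage v(t) [V]
phi = np.cumsum(v) * dt          # flux: phi = integral of v(t) dt

w = w0 + (mu_D / D) * phi        # Eq. (7): width shifts in proportion to flux
M = R_on * (w / D) + R_off * (1.0 - w / D)   # assumed memristance model M(w)
i = v / M                        # Eq. (3): i(t) = W(phi) * v(t), with W = 1/M
```

The (v, i) trajectory traced by this sketch is a pinched hysteresis loop: whenever v passes through zero, i does too, which is the fingerprint memristor papers look for in measured curves.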
We investigate theoretically how the spectroscopy of an ancillary qubit can probe cavity (circuit) QED ground states containing photons. We consider three classes of systems (Dicke, Tavis-Cummings and Hopfield-like models), where non-trivial vacua are the result of ultrastrong coupling between N two-level systems and a single-mode bosonic field. An ancillary qubit detuned with respect to the boson frequency is shown to reveal distinct spectral signatures depending on the type of vacua. In particular, the Lamb shift of the ancilla is sensitive to both ground state photon population and correlations. Back-action of the ancilla on the cavity ground state is investigated, taking into account the dissipation via a consistent master equation for the ultrastrong coupling regime. The conditions for high-fidelity measurements are determined.
\\
Notice BACK-ACTION, which goes right back to DARPA's Nanodiamond Biosensors and their ability to overcome the standard quantum limit, because of the known/prepared states in the ancillas/NITROGEN VACANCIES
Quote:
(Quantum) back action refers (in the regime of Quantum systems) to the effect of a detector on the measurement itself, as if the detector is not just making the measurement but also affecting the measured or observed system under a perturbing effect.
Back action has important consequences on the measurement process and is a significant factor in measurements near the quantum limit, such as measurements approaching the Standard Quantum Limit (SQL).
Back action is an actively sought-after area of interest in present times. There have been experiments in recent times, with nanomechanical systems, where back action was evaded in making measurements, such as in the following paper :
When performing continuous measurements of position with sensitivity approaching quantum mechanical limits, one must confront the fundamental effects of detector back-action. Back-action forces are responsible for the ultimate limit on continuous position detection, can also be harnessed to cool the observed structure [1,2,3,4], and are expected to generate quantum entanglement.
Back-action can also be evaded, allowing measurements with sensitivities that exceed the standard quantum limit, and potentially allowing for the generation of quantum
squeezed states.
So the NV centers are used as ancillas in the measurement process.. which weakly measure wavefunctions of particles in neurons, most likely singlet and triplet states occurring in ATP and phosphate...
then those same wavefunctions are transferred and produce a correlation at the ground state..
where the ancilla takes on the new value/wavefunction.. and here we find all these ideas..
minus the switching which I can explain
Memristors use NV centers to switch between inhibited and excited states
singlet and triplet states
thus producing/simulating/EMULATING living neurons and action potentials
and it may just BE the network and its computing speed, that even allows the wavefunction to be "found"
Artificial Neural Network. A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14]. A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by ...
While there are lots of things that artificial intelligence can't do yet—science being one of them—neural networks are proving themselves increasingly adept at a huge variety of pattern recognition ... That's due in part to the description of a quantum system called its wavefunction. ... Neural network chip built using memristors.
https://books.google.ca/books?isbn=9814434809 (Andrew Adamatzky, Guanrong Chen - 2013 - Computers)
Global and local symmetries In quantum physics, all the properties of a system can be derived from the state or wave function associated with that system. The absolute phase of a wave function cannot be measured, and has no practical meaning, as it cancels out the calculations of the probability distribution. Only relative ...
The Las Vegas shooting left 58 INNOCENT PEOPLE DEAD.
The gunman's brother was later arrested for possession of child porn.
This technology was developed to defend against terrorism and child abuse.
Connect the dots.
I bet the brothers were sharing files and one of them ended up a "targeted individual"
So he began to stockpile weapons and plan the only way out of his nightmare.
There has been no mention of him "hearing voices"
But the fact his brother was later arrested for such a crime paints a picture worth looking into.
Those vibrations are the result of this assumed BIOMEMS "deployable biosensor" and its use of excitation techniques made to single out single neurons to measure the WAVEFUNCTIONS during a tomographic scan.
which makes possible such headlines as "Quantum-assisted Nano-imaging of Living Organism Is a First"
Quote:
“In QuASAR we are building sensors that capitalize on the extreme precision and control of atomic physics. We hope these novel measurement tools can provide new capabilities to the broader scientific and operational communities,” said Jamil Abo-Shaeer, DARPA program manager. “The work these teams are doing to apply quantum-assisted measurement to biological imaging could benefit DoD’s efforts to develop specialized drugs and therapies, and potentially support DARPA’s work to better understand how the human brain functions.”
"Nuclear spin imaging at the atomic level is essential for the under-standing of fundamental biological phenomena and for applicationssuch as drug discovery. The advent of novel nano-scale sensors hasgiven hope of achieving the long-standing goal of single-protein, highspatial-resolution structure determination in their natural environ-ment and ambient conditions. In particular, quantum sensors basedon the spin-dependent photoluminescence of Nitrogen Vacancy (NV)centers in diamond have recently been used to detect nanoscale en-sembles of external nuclear spins. While NV sensitivity is approachingsingle-spin levels, extracting relevant information from a very com-plex structure is a further challenge, since it requires not only theability to sense the magnetic field of an isolated nuclear spin, butalso to achieve atomic-scale spatial resolution. Here we propose amethod that, by exploiting the coupling of the NV center to an intrin-sic quantum memory associated with the Nitrogen nuclear spin, canreach a tenfold improvement in spatial resolution, down to atomic
scales."
So what it's all doing, essentially, is mapping the phase of atoms/SINGLETS in ATP onto an NV-center-based CCD
and at the singlet level, correlations occur.. creating entanglement
so the particles in the neuron are being correlated with the ancillas, the nitrogen vacancies, where they take on the "target" state..
not only is the above imaging done to obtain a correlation to living neurons, via the singlet states within, but once the connection is established, the MEMRISTOR NETWORK itself can be used to RECONSTRUCT VISION IN REAL TIME
Now add the above method, a direct connection using correlated states shared from neurons TO Memristors... and imagine the reconstruction aided by the AI within the memristor network as it does so.. (note, this example is done MERELY using fMRI information)
now Imagine statistical ensembles being observed in real time via non-classical entanglement
But what I'm trying to show is how it's this assumed entanglement-based BCI technology, plus the memristor network it is coupled to, that is responsible for the TI community's complaints that "they (the government) can see through my own eyes"
The nitrogen vacancies in the scanners hold values, wavefunctions, which are prepared states aka Ancilla bits, and are the time domain/reference frequency, which carries the "quantum event/wavefunction" which causes the singlet pairs to form up in the scanned biology..
and correlates with them at the ground state as the relaxation occurs..
Quote:
It is important to realize that particles in singlet states need not be locally bound to each other. For example, when the spin states of two electrons are correlated by their emission from a single quantum event that conserves angular momentum, the resulting electrons remain in a shared singlet state even as their separation in space increases indefinitely over time, provided only that their angular momentum states remain unperturbed
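The textbook property described in that quote can be checked with a few lines of linear algebra. This is just the standard two-spin singlet state — nothing here is specific to NV centers, ATP, or any of the systems discussed above:

```python
import numpy as np

# Two-spin singlet |psi> = (|01> - |10>)/sqrt(2),
# basis ordering |00>, |01>, |10>, |11>.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)

sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X
sz = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli Z

def corr(op):
    """Expectation value <psi| op (x) op |psi>: both spins measured along op's axis."""
    return float(singlet @ np.kron(op, op) @ singlet)

# Perfect anticorrelation along any common measurement axis,
# independent of the spatial separation of the two particles.
print(corr(sz), corr(sx))   # both approximately -1.0
```

The -1 correlation along every common axis is exactly the "shared singlet state" behavior the quote describes: the math does not change as the separation grows, as long as the angular momentum states stay unperturbed.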
and that weakly measured value, the wavefunction, is sent through the optical cavity, teleported to identical nitrogen vacancies in memristors.. so the ground states in both systems are correlated and thus the neural activity can be monitored in real time in the memristors
|
{
"pile_set_name": "pile-cc"
}
|
require_relative '../../../spec_helper'
require 'cgi'
describe "CGI::QueryExtension#from" do
before :each do
ENV['REQUEST_METHOD'], @old_request_method = "GET", ENV['REQUEST_METHOD']
@cgi = CGI.new
end
after :each do
ENV['REQUEST_METHOD'] = @old_request_method
end
it "returns ENV['HTTP_FROM']" do
old_value, ENV['HTTP_FROM'] = ENV['HTTP_FROM'], "googlebot(at)googlebot.com"
begin
@cgi.from.should == "googlebot(at)googlebot.com"
ensure
ENV['HTTP_FROM'] = old_value
end
end
end
|
{
"pile_set_name": "github"
}
|
@comment $NetBSD: PLIST,v 1.5 2017/06/21 08:28:43 markd Exp $
share/texmf-dist/scripts/luaotfload/luaotfload-tool.lua
share/texmf-dist/scripts/luaotfload/mkcharacters
share/texmf-dist/scripts/luaotfload/mkglyphlist
share/texmf-dist/scripts/luaotfload/mkimport
share/texmf-dist/scripts/luaotfload/mkstatus
share/texmf-dist/scripts/luaotfload/mktests
share/texmf-dist/tex/luatex/luaotfload/fontloader-2017-02-11.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-basics-gen.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-basics-nod.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-basics.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-data-con.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-afk.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-cff.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-cid.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-con.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-def.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-dsp.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-gbn.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ini.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-lua.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-map.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ocl.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-one.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-onr.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-osd.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ota.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-otc.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-oti.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-otj.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-otl.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-oto.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-otr.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ots.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-oup.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-tfm.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ttf.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts-demo-vf-1.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts-enc.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts-ext.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts-syn.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-boolean.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-file.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-function.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-io.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-lpeg.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-lua.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-math.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-string.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-table.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-languages.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-languages.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-math.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-math.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-mplib.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-mplib.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-plain.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-preprocessor-test.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-preprocessor.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-preprocessor.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-reference.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-swiglib-test.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-swiglib-test.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-swiglib.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-swiglib.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-test.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-util-fil.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-util-str.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-auxiliary.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-blacklist.cnf
share/texmf-dist/tex/luatex/luaotfload/luaotfload-characters.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-colors.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-configuration.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-database.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-diagnostics.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-features.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-glyphlist.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-init.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-letterspace.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-loaders.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-log.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-main.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-parsers.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-resolvers.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-status.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload.sty
|
{
"pile_set_name": "github"
}
|
---
abstract: |
Background
: Measurements of $\beta$ decay provide important nuclear structure information that can be used to probe isospin asymmetries and inform nuclear astrophysics studies.
Purpose
: To measure the $\beta$-delayed $\gamma$ decay of $^{26}$P and compare the results with previous experimental results and shell-model calculations.
Method
: A $^{26}$P fast beam produced using nuclear fragmentation was implanted into a planar germanium detector. Its $\beta$-delayed $\gamma$-ray emission was measured with an array of 16 high-purity germanium detectors. Positrons emitted in the decay were detected in coincidence to reduce the background.
Results
: The absolute intensities of $^{26}$P $\beta$-delayed $\gamma$-rays were determined. A total of six new $\beta$-decay branches and 15 new $\gamma$-ray lines have been observed for the first time in $^{26}$P $\beta$-decay. A complete $\beta$-decay scheme was built for the allowed transitions to bound excited states of $^{26}$Si. $ft$ values and Gamow-Teller strengths were also determined for these transitions and compared with shell model calculations and the mirror $\beta$-decay of $^{26}$Na, revealing significant mirror asymmetries.
Conclusions
: A very good agreement with theoretical predictions based on the USDB shell model is observed. The significant mirror asymmetry observed for the transition to the first excited state ($\delta=51(10)\%$) may be evidence for a proton halo in $^{26}$P.
author:
- 'D. Pérez-Loureiro'
- 'C. Wrede'
- 'M. B. Bennett'
- 'S. N. Liddick'
- 'A. Bowe'
- 'B. A. Brown'
- 'A. A. Chen'
- 'K. A. Chipps'
- 'N. Cooper'
- 'D. Irvine'
- 'E. McNeice'
- 'F. Montes'
- 'F. Naqvi'
- 'R. Ortez'
- 'S. D. Pain'
- 'J. Pereira'
- 'C. J. Prokop'
- 'J. Quaglia'
- 'S. J. Quinn'
- 'J. Sakstrup'
- 'M. Santia'
- 'S. B. Schwartz'
- 'S. Shanab'
- 'A. Simon'
- 'A. Spyrou'
- 'E. Thiagalingam'
title: '${\bm \beta}$-delayed $\gamma$ decay of $\bm{^{26}\mathrm{P}}$: Possible evidence of a proton halo'
---
Introduction\[sec:intro\]
=========================
The detailed study of unstable nuclei has been a major subject in nuclear physics in recent decades. $\beta$ decay measurements provide not only important information on the structure of the daughter and parent nuclei, but can also be used to inform nuclear astrophysics studies and probe fundamental subatomic symmetries [@Hardy2015]. The link between experimental results and theory is given by the reduced transition probabilities, $ft$. Experimental $ft$ values involve three measured quantities: the half-life, $t_{1/2}$, the $Q$ value of the transition, which determines the statistical phase space factor $f$, and the branching ratio associated with that transition, $BR$.
In the standard $\mathcal{V\!\!-\!\!A}$ description of $\beta$ decay, $ft$ values are related to the fundamental constants of the weak interaction and the matrix elements through this equation:
$$ft=\frac{\mathcal{K}}{g_V^2|\langle f|\tau|i\rangle|^2+g_A^2|\langle f|\sigma\tau|i\rangle|^2} ,
\label{eq:theo_ft}$$
where $\mathcal{K}$ is a constant and $g_{V(A)}$ are the vector (axial) coupling constants of the weak interaction; $\sigma$ and $\tau$ are the spin and isospin operators, respectively. Thus, a comparison of the experimental $ft$ values with the theoretical ones obtained from the calculated matrix elements is a good test of the nuclear wave functions obtained with model calculations. However, to reproduce the $ft$ values measured experimentally, the axial-vector coupling constant $g_A$ involved in Gamow-Teller transitions has to be renormalized [@Wilkinson1973; @WILKINSON1973_2]. The effective coupling constant $g'_A=q\times g_A$ is deduced empirically from experimental results and depends on the mass of the nucleus: the quenching factor is $q=0.820(15)$ in the $p$ shell [@Chou1993], $q=0.77(2)$ in the $sd$ shell [@Wildenthal1983], and $q=0.744(15)$ in the $pf$ shell [@Martinez1996]. Despite several theoretical approaches attempting to reveal the origin of the quenching factor, it is still not fully understood [@Brown2005].
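To make the size of this renormalization concrete (an illustrative consequence, using the $sd$-shell value quoted above): since $g_A$ enters the expression for $ft$ quadratically, a quenching factor $q$ scales the predicted Gamow-Teller strength by $q^{2}$ and lengthens the predicted $ft$ values of pure Gamow-Teller transitions by $1/q^{2}$:

```latex
B(\mathrm{GT})_{\mathrm{eff}} = q^{2}\, B(\mathrm{GT}),
\qquad q = 0.77 \;\Rightarrow\; q^{2} \approx 0.59,
\qquad ft \;\longrightarrow\; ft/q^{2} \approx 1.7\, ft .
```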
Another phenomenon which shows the limitations of our theoretical models is the so-called *$\beta$-decay mirror asymmetry*. If we assume that the nuclear interaction is independent of isospin, the theoretical description of $\beta$ decay is identical for the decay of a proton ($\beta^+$) or a neutron ($\beta^-$) inside a nucleus. Therefore, the $ft$ values corresponding to analog transitions should be identical. Any potential asymmetries are quantified by the asymmetry parameter $\delta=ft^{+}/ft^{-}-1$, where $ft^{\pm}$ refers to the $\beta^\pm$ decays in the mirror nuclei. The average value of this parameter is $(4.8\pm0.4)\%$ for $p$ and $sd$ shell nuclei [@Thomas2004]. From a theoretical point of view the mirror asymmetry can have two origins: (a) the possible existence of exotic *second-class currents* [@Wilkinson1970447; @PhysRevLett.38.321; @WilkinsonEPJ], which are not allowed within the framework of the standard $\mathcal{V\!\!-\!\!A}$ model of the weak interaction and (b) the breaking of the isospin symmetry between the initial or final nuclear states. Shell-model calculations were performed to test the isospin non-conserving part of the interaction in $\beta$ decay [@Smirnova2003441]. The main contribution to the mirror asymmetry from the nuclear structure was found to be from the difference in the matrix elements of the Gamow-Teller operator ($|\langle f|\sigma\tau|i\rangle|^2$), because of isospin mixing and/or differences in the radial wave functions.
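As a worked example of the asymmetry parameter (with hypothetical $ft$ values chosen purely for illustration), a mirror pair with $ft^{+}=3020$ s and $ft^{-}=2000$ s would give

```latex
\delta = \frac{ft^{+}}{ft^{-}} - 1
       = \frac{3020\ \mathrm{s}}{2000\ \mathrm{s}} - 1
       = 0.51 = 51\%,
```

an asymmetry of the same magnitude as the one reported for the first excited state in the abstract.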
Large mirror asymmetries have been reported for transitions involving halo states [@Tanihata2013]. For example, the asymmetry parameter for the $A=17$ mirror decays $^{17}$Ne$\rightarrow^{17}$F and $^{17}$N$\rightarrow^{17}$O to the first excited states of the respective daughters was measured to be $\delta=(-55\pm9)\%$ and $\delta=(-60\pm1)\%$ in two independent experiments [@Borge1993; @Ozawa1998]. This result was interpreted as evidence for a proton halo in the first excited state of $^{17}$F assuming that the fraction of the $2s_{1/2}$ component of the valence nucleons remains the same in $^{17}$Ne and $^{17}$N. However, a different interpretation was also given in terms of charge dependent effects which increase the $2s_{1/2}$ fraction in $^{17}$Ne by about 50% [@PhysRevC.55.R1633]. The latter result is also consistent with the high cross section obtained in the fragmentation of $^{17}$Ne [@Ozawa199418; @Ozawa199663], suggesting the existence of a halo in $^{17}$Ne. More recently Kanungo *et al.* reported the possibility of a two-proton halo in $^{17}$Ne [@Kanungo200321]. An extremely large mirror asymmetry was also observed in the mirror decay of $A=9$ isobars $^{9}$Li$\rightarrow^{9}$Be and $^{9}$C$\rightarrow^{9}$B. A value of $\delta=(340\pm70)\%$ was reported for the $^{9}$Li and $^{9}$C $\beta$-decay transitions to the 11.8 and 12.2 MeV levels of their respective daughters, which is the largest ever measured [@Bergmann2001427; @Prezado2003]. Despite the low experimental interaction cross sections measured with various targets in attempts to establish the halo nature of $^{9}$C [@Ozawa199663; @Blank1997242], recent results at intermediate energies [@Nishimura2006], together with the anomalous magnetic moment [@Matsuta1995c153] and theoretical predictions [@0256-307X-27-9-092101; @PhysRevC.52.3013; @Gupta2002], make $^{9}$C a proton halo candidate. The potential relationship between large mirror asymmetries and halos is therefore clear.
Precision measurements of mirror asymmetries in states involved in strong, isolated, $\beta$-decay transitions might provide a technique to probe halo nuclei that is complementary to total interaction cross section and momentum distribution measurements in knockout reactions [@Tanihata2013].
Moreover, $\beta$ decay of proton-rich nuclei can be used for nuclear astrophysics studies. Large $Q_\beta$-values of these nuclei not only allow the population of the bound excited states of the daughter, but also open particle emission channels. Some of these levels correspond to astrophysically significant resonances which cannot be measured directly because of limited radioactive beam intensities. For example, the $^{25}\mathrm{Al}(p,\gamma)^{26}\mathrm{Si}$ reaction [@Wrede_2009] plays an important role in the abundance of the cosmic $\gamma$-ray emitter $^{26}\mathrm{Al}$. The effect of this reaction is to reduce the amount of ground state $^{26}\mathrm{Al}$, which is bypassed by the sequence $^{25}\mathrm{Al}(p,\gamma)^{26}\mathrm{Si}(\beta\nu)^{26m}\mathrm{Al}$, reducing therefore the intensity of the 1809-keV $\gamma$-ray line characteristic of the $^{26}\mathrm{Al}$ $\beta$ decay [@Iliadis_96]. Thus it is important to constrain the $^{25}\mathrm{Al}(p,\gamma)^{26}\mathrm{Si}$ reaction rate.
$^{26}$P is the most proton-rich bound phosphorus isotope. With a half-life of $43.7(6)$ ms and a $Q_{EC}$ value of $18258(90)$ keV [@Thomas2004] the $\beta$ decay can be studied over a wide energy interval. $\beta$-delayed $\gamma$-rays and protons from excited levels of $^{26}$Si below and above the proton separation energy of $5513.8(5)$ keV [@AME2012] were observed directly in previous experiments [@Thomas2004; @Cable_83; @Cable_84] and, more recently, indirectly from the Doppler broadening of peaks in the $\beta$-delayed proton-$\gamma$ spectrum [@Schwartz2015]. The contribution of novae to the abundance of $^{26}\mathrm{Al}$ in the galaxy was recently constrained by using experimental data on the $\beta$ decay of $^{26}$P [@Bennett2013].
In addition, $^{26}\mathrm{P}$ is a candidate to have a proton halo [@Brown1996; @Ren1996; @Gupta2002; @Liang2009]. Phosphorus isotopes are the lightest nuclei expected to have a ground state with a dominant contribution of a $\pi s_{1/2}$ orbital. Low orbital angular momentum orbitals enhance the halo effect, because higher $\ell$-values give rise to a confining centrifugal barrier. The low separation energy of $^{26}$P (143(200) keV [@AME2012], 0(90) keV[@Thomas2004]), together with the narrow momentum distribution and enhanced cross section observed in proton-knockout reactions [@Navin1998] give some experimental evidence for the existence of a proton halo in $^{26}$P.
In this paper, we present a comprehensive summary of the $\beta$-delayed $\gamma$ decay of $^{26}$P measured at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University during a fruitful experiment for which selected results have already been reported in two separate shorter papers [@Bennett2013; @Schwartz2015]. In the present work, the Gamow-Teller strength, $B(GT)$, and the experimental $ft$ values are compared to theoretical calculations and to the decay of the mirror nucleus $^{26}$Na to investigate the Gamow-Teller strength and mirror asymmetry, respectively. A potential relationship between the mirror asymmetry and the existence of a proton halo in $^{26}$P is also discussed. Finally, in the last section, the calculated thermonuclear $^{25}$Al$(p,\gamma)^{26}$Si reaction rate, which was used in Ref. [@Bennett2013] to estimate the contribution of novae to the abundance of galactic $^{26}$Al, is tabulated for completeness.
Experimental procedure\[sec:experiment\]
========================================
![\[fig:Setup\] Schematic view of the experimental setup. The thick arrow indicates the beam direction. One of the 16 SeGA detectors was removed to show the placement of the GeDSSD.](Fig1.eps){width="45.00000%"}
The experiment was carried out at the National Superconducting Cyclotron Laboratory (NSCL). A 150 MeV/u 75 pnA primary beam of $^{36}\mathrm{Ar}$ was delivered from the Coupled Cyclotron Facility and impinged upon a 1.55 g/cm$^2$ Be target. The $^{26}\mathrm{P}$ ions were in-flight separated from other fragmentation products according to their magnetic rigidity by the A1900 fragment separator [@Morrissey200390]. The Radio-Frequency Fragment Separator (RFFS) [@Bazin2009314] provided a further increase in beam purity before the beam was implanted into a 9-cm diameter, 1-cm thickness planar germanium double-sided strip detector (GeDSSD) [@Larson201359]. To detect signals produced by both the implanted ions and the $\beta$ particles emitted during the decay, the GeDSSD was connected to two parallel amplification chains. This allowed the different amounts of energy deposited in implantations (low gain) and decays (high gain) to be detected in the GeDSSD. The GeDSSD was surrounded by the high purity germanium detector array SeGA [@Mueller2001492] in its barrel configuration which was used to measure the $\beta$-delayed $\gamma$ rays (see Fig.\[fig:Setup\]).
![\[fig:PID\] Particle identification plot obtained for a selection of runs during the early portion of the experiment, before the beam tune was fully optimized. The energy loss was obtained from one of the PIN detectors and the time of flight between the same detector and the scintillator placed at the focal plane of the A1900 separator. A low-gain energy signal in the GeDSSD condition was used. The color scale corresponds to the number of ions.](Fig2.eps){width=".5\textwidth"}
The identification of the incoming beam ions was accomplished using time-of-flight and energy loss signals. The energy loss signals were provided by a pair of silicon PIN detectors placed slightly upstream of the decay station. The time of flight was measured between one of these PINs and a plastic scintillator placed 25 m upstream, at the A1900 focal plane. Figure \[fig:PID\] shows a two-dimensional cluster plot of the energy loss versus the time of flight for the incoming beam taken prior to a re-tune that improved the beam purity substantially for the majority of the experiment. A coincidence condition requiring a low-gain signal in the GeDSSD was applied to ensure the ions were implanted in the detector. It shows that the main contaminant in our beam was the radioactive isotone $^{24}\mathrm{Al}$ ($\sim$13%). During the early portion of the experiment, a small component of $^{25}\mathrm{Si}$ was also present in the beam. Its fraction was estimated to be 2.1% on average, diluted to 0.5% once the data acquired after the re-tune were included. Small traces of lighter isotones like $^{22}\mathrm{Na}$ and $^{20}\mathrm{F}$ were also present ($\sim$2.5%). The total secondary beam rate was on average 80 ions/s and the overall purity of the implanted beam was 84%. This value of the beam purity differs from the previously reported values in Ref. [@Bennett2013], in which the implant condition was not applied. The $^{26}\mathrm{P}$ component was composed of the ground state and the known 164.4(1) keV isomeric state [@Nishimura2014; @DPL2016]. Because of the short half-life of the isomer \[120(9) ns\] [@Nishimura2014] and the fact that it decays completely to the ground state of $^{26}\mathrm{P}$, our $\beta$-decay measurements were not affected by it.
The data were collected event-by-event using the NSCL digital acquisition system [@Prokop2014]. Each channel provided its own time-stamp signal, which allowed coincidence gates to be built between the different detectors. To select $\beta$-$\gamma$ coincidence events, the high-gain energy signals from the GeDSSD were used to indicate that a $\beta$ decay occurred. The subsequent $\gamma$ rays emitted from excited states of the daughter nuclei were selected by setting a 1.5-$\mu$s coincidence window. The 16 spectra obtained by each of the elements of SeGA were then added together after they were gain matched run-by-run to account for possible gain drifts during the course of the experiment.
Data Analysis and Experimental Results \[sec:data\]
===================================================
As mentioned in Sec. \[sec:intro\], the data presented in this paper are from the same experiment described in Refs. [@Bennett2013; @Schwartz2015], but independent sorting and analysis routines were developed and employed. The values extracted are therefore slightly different, but consistent within uncertainties. New values derived in the present work are not intended to supersede those from Refs. [@Bennett2013; @Schwartz2015], but rather to complement them. In this section, the analysis procedure is described in detail and the experimental results are presented.
Figure \[fig:spec\] shows the cumulative $\gamma$-ray spectrum observed in all the detectors of the SeGA array in coincidence with a $\beta$-decay signal in the GeDSSD. We have identified 48 photopeaks, of which 30 are directly related to the decay of $^{26}$P. Most of the other peaks were assigned to the $\beta$ decay of the main contaminant of the beam, $^{24}$Al. Peaks in the spectrum have been labeled by the $\gamma$-ray emitting nuclide. Twenty-two of the peaks correspond to $^{26}$Si, while eight of them correspond to $\beta$-delayed proton decays to excited states of $^{25}$Al followed by $\gamma$-ray emission. In this work we will focus on the decay to levels of $^{26}$Si as the $^{25}$Al levels have already been discussed in Ref. [@Schwartz2015].
![\[fig:calibr\] (Upper panel) Energy calibration of SeGA $\gamma$-ray spectra using the $\beta$-delayed $\gamma$ rays emitted by $^{24}\mathrm{Al}$. The solid line is the result of a second-degree polynomial fit. Energies and uncertainties are taken from Ref. [@Firestone20072319]. (Lower panel) Residuals of the calibration points with respect to the calibration line.](Fig4.eps){width=".5\textwidth"}
$\bm{\gamma}$-ray Energy Calibration
------------------------------------
The energies of the $\gamma$ rays emitted during the experiment were determined from a calibration of the SeGA array. As mentioned in Sec. \[sec:experiment\] and in Refs. [@Schwartz2015; @Bennett2013], a gain-matching procedure was performed to align the signals from the 16 detectors comprising the array. This alignment was done with the strongest background peaks, namely the 1460.8-keV line (from $^{40}\mathrm{K}$ decay) and the 2614.5-keV line (from $^{208}\mathrm{Tl}$ decay). The gain-matched cumulative spectrum was then absolutely calibrated *in situ* using the well-known energies of the $^{24}$Al $\beta$-delayed $\gamma$ rays emitted by $^{24}\mathrm{Mg}$, which cover a wide energy range from 511 keV to almost 10 MeV [@Firestone20072319]. To account for possible non-linearities in the response of the germanium detectors, a second-degree polynomial was used as the calibration function. Results of the calibration are shown in Fig. \[fig:calibr\]. The standard deviation of this fit is 0.3 keV, which includes the literature uncertainties of the $^{24}\mathrm{Mg}$ line energies. The systematic uncertainty was estimated from the residuals of room-background peaks not included in the fit. The lower panel of Fig. \[fig:calibr\] shows that these deviations are below 0.6 keV, with an average of 0.2 keV. Based on this, the systematic uncertainty was estimated to be 0.3 keV.
![\[fig:eff\]SeGA photopeak efficiency. (Top panel) Results of a [Geant4]{} simulation \[solid line (red)\] compared to the efficiency measured with absolutely calibrated sources (black circles) and the known $^{24}\mathrm{Mg}$ lines (empty squares). The simulation and the $^{24}\mathrm{Mg}$ data have been scaled to match the source measurements. (Bottom panel) Ratio between the simulation and the experimental data. The shaded area (yellow) shows the adopted uncertainties.](Fig5.eps){width=".5\textwidth"}
Efficiencies
------------
### $\beta$-particle Efficiency \[sec:betaeff\]
The $\beta$-particle detection efficiency of the GeDSSD can be determined from the ratio of the number of counts in a given photopeak in the $\beta$-gated $\gamma$-ray singles spectrum to that in the ungated one. In principle, the $\beta$ efficiency depends on $Q_\beta$. To investigate this effect, we calculated the gated-to-ungated ratios for all the $^{24}\mathrm{Mg}$ peaks, which follow $\beta$ branches with different end-point energies, and found the ratio to be independent of the end-point energy of the $\beta$ particles, with an average value of $\varepsilon_\beta(^{24}\mathrm{Mg})=(38.6\pm0.9)\%$. Because of the different implantation depths of $^{24}\mathrm{Al}$ and $^{26}\mathrm{P}$ ($^{24}\mathrm{Al}$ barely penetrates into the GeDSSD), we also calculated the gated-to-ungated ratios for the strongest peaks of $^{26}\mathrm{Si}$ (1797 keV) and its daughter $^{26}\mathrm{Al}$ (829 keV), obtaining a consistent average efficiency of $\varepsilon_\beta=(65.2\pm0.7)\%$. The single value shared by $^{26}\mathrm{Si}$ and $^{26}\mathrm{Al}$ is explained by their common decay location in the GeDSSD.
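The gated-to-ungated ratio and its statistical uncertainty can be sketched as follows; the count numbers are made up for illustration, and a binomial uncertainty is assumed since the gated counts are a subset of the ungated ones:

```python
import math

def beta_efficiency(n_gated, n_ungated):
    """Beta-detection efficiency as the ratio of beta-gated to ungated
    photopeak counts, with a binomial uncertainty estimate."""
    eff = n_gated / n_ungated
    # binomial error: the gated spectrum is a subset of the ungated one
    err = math.sqrt(eff * (1.0 - eff) / n_ungated)
    return eff, err

eff, err = beta_efficiency(65200, 100000)  # illustrative counts only
print(f"eps_beta = {100 * eff:.1f} +/- {100 * err:.1f} %")
```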
### $\gamma$-ray Efficiency
To obtain precise measurements of the $\gamma$-ray intensities, we determined the photopeak efficiency of SeGA over a wide energy range between 400 keV and 8 MeV. The results of a [Geant4]{} [@Agostinelli2003250] Monte Carlo simulation were compared with the relative intensities of the well-known $^{24}\mathrm{Mg}$ lines also used in the energy calibration. The high-energy lines of this beam contaminant made it possible to benchmark the simulation at energies higher than those reachable with standard sources. In addition, the comparison of the simulation to data taken offline with absolutely calibrated $^{154,155}\mathrm{Eu}$ and $^{56}\mathrm{Co}$ sources allowed us to scale the simulation to determine the efficiency at any energy. The scaling factor was 0.91, and its statistical uncertainty was inflated by $\sqrt{\chi^2/\nu}$, yielding an uncertainty of 1.5%, which was propagated into the efficiency. The magnitude of this factor is consistent with [Geant4]{} simulations of the scatter associated with coincidence-summing effects [@Semkow1990]. Figure \[fig:eff\] shows the adopted efficiency curve compared to the source data and the $^{24}\mathrm{Mg}$ peak intensities. The accuracy of this photopeak efficiency was estimated to be $\delta\varepsilon/\varepsilon=1.5\%$ for energies below 2800 keV and 5% above that energy.
$\bm{\gamma}$-ray intensities \[subsec:intensities\]
----------------------------------------------------
![\[fig:fit\] (Top panel) Example of a typical fit to the 1960-keV peak, using the function of Eq. (\[eq:EMG\]). The dashed line corresponds to the background component of the fit. (Bottom panel) Residuals of the fit in terms of the standard deviation $\sigma$.](Fig6.eps){width=".5\textwidth"}
The intensities of the $\gamma$ rays emitted in the $\beta$ decay of $^{26}\mathrm{P}$ were obtained from the areas of the photopeaks shown in the spectrum of Fig. \[fig:spec\]. We used an exponentially modified Gaussian (EMG) function to describe the peak shape together with a linear function to model the local background:
$$F=B+\frac{N}{2\tau}\exp\left[\frac{1}{2\tau}\left (2\mu+\frac{\sigma^2}{\tau} -2x \right )\right ]\mathrm{erfc}\!\left[\frac{ \sigma^2+\tau(\mu-x)}{\sqrt{2}\sigma\tau}\right ],
\label{eq:EMG}$$
where $B$ is a linear background, $N$ is the area below the curve, $\mu$ and $\sigma$ are the centroid and width of the Gaussian, respectively, $\tau$ is the decay constant of the exponential, and erfc is the complementary error function. The parameters describing the width of the Gaussian ($\sigma$) and the exponential constant ($\tau$) were determined by fitting narrow, isolated peaks at various energies. The centroids and the areas below the peaks were obtained from the fits. When multiple peaks were close together, a multi-peak fitting function was applied using the same $\tau$ and $\sigma$ values for all the peaks in the region. In general the fits were very good, with reduced chi-squared ($\chi^2/\nu$) close to unity. In those cases where $\chi^2/\nu$ was greater than unity, the statistical uncertainties were inflated by multiplying them by $\sqrt{\chi^2/\nu}$. Figure \[fig:fit\] shows an example of the fit to the 1960-keV peak.
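The fit function of Eq. (\[eq:EMG\]) can be sketched numerically as follows; the parameter values in the sanity check are illustrative, not fit results from this work:

```python
import numpy as np
from scipy.special import erfc

def emg(x, area, mu, sigma, tau, b0=0.0, b1=0.0):
    """Exponentially modified Gaussian of Eq. (EMG) on a linear background."""
    gauss_exp = np.exp((2.0 * mu + sigma**2 / tau - 2.0 * x) / (2.0 * tau))
    tail = erfc((sigma**2 + tau * (mu - x)) / (np.sqrt(2.0) * sigma * tau))
    return b0 + b1 * x + (area / (2.0 * tau)) * gauss_exp * tail

# sanity check: with no background the curve integrates to the peak area N
x = np.arange(1940.0, 1990.0, 0.01)
y = emg(x, 1000.0, 1960.0, 1.5, 1.0)
print(float(np.sum(y) * 0.01))  # ~1000
```

In practice, `scipy.optimize.curve_fit` can be used to fit `emg` (or a sum of such terms for multiplets) to the spectrum, sharing $\sigma$ and $\tau$ among nearby peaks as described above.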
### Absolute normalization
The total number of $^{26}\mathrm{P}$ ions implanted and subsequently decaying in the GeDSSD is, in principle, needed to obtain an absolute normalization of the $\gamma$-ray intensities, and hence the $\beta$ branchings of $^{26}\mathrm{Si}$ levels. The number of $\gamma$ rays observed at energy $E$ is: $$N_\gamma(E)=N_0 \times \varepsilon_{\gamma}(E)\times \varepsilon_{\beta}(E) \times I_{\gamma}(E)
\label{eq:abs_intensity}$$
[d d c c d d]{} & & $_iJ_n^\pi$ & $_fJ_n^\pi$ & &\
1797.1(3) & & $2_1^+$ & $0_1^+$ & 1797.1(3) &\
2786.4(3) & <0.39 & $2_2^+$ & $2_1^+$ & 989.0(3) & 5.7(3)\
& & & $0_1^+$ & 2786.5(4) & 3.4(2)\
3756.8(3) & 1.9(2) & $3_1^+$ & $2_2^+$ & 970.3(3) & 1.15(9)\
& & & $2_1^+$ & 1959.8(4) & 1.7(1)\
4138.6(4) & 6.2(4) & $2_3^+$ & $2_2^+$ & 1352.2(4)& 0.48(7)\
& & & $2_1^+$ & 2341.2(4) & 4.7(3)\
& & & $0_1^+$ & 4138.0(5)& 1.0(1)\
4187.6(4) & 4.4(3) & $3_2^+$ & $2_2^+$ & 1401.3(3) & 3.8(2)\
& & & $2_1^+$ & 2390.1(4)& 2.2(1)\
4445.1(4) & 0.8(2)& $4_1^+$ & $2_2^+$ & & 0.08(6)\
& & & $2_1^+$ & 2647.7(5)& 1.7(1)\
4796.4(5) & 0.56(9)& $4_2^+$ & $2_2^+$ & 2999.1(5)& 0.56(9)\
4810.4(4) & 3.1(2)& $2_4^+$ & $2_2^+$ & 2023.9(3)& 3.1(2)\
5146.5(6) & 0.18(5)& $2_5^+$ & $2_2^+$ & 2360.0(6)& 0.18(5)\
5288.9(4) & 0.76(7)& $4_3^+$ & $4_1^+$ & 842.9(3)& 0.33(7)\
& & & $3_1^+$ & 1532.1(5)& 0.43(7)\
& & & $2_1^+$ & &<0.12\
5517.3(3) & 2.7(2)& $4_4^+$ & $4_1^+$ & 1072.1(5)& 0.69(9)\
& & & $3_2^+$ & 1329.9(3)& 1.4(1)\
& & & $3_1^+$ & 1759.7(5)& 0.47(6)\
& & & $2_2^+$ & 2729.9(5)& 0.29(5)\
5929.3(6) & 0.15(5) & $3_3^+$ & $3_2^+$ & 1741.7(9) & 0.15(5)\
where $N_0$ is the total number of ions decaying, $\varepsilon_{\gamma(\beta)}$ are the efficiencies to detect $\gamma$ rays ($\beta$ particles), and $I_{\gamma}$ is the absolute $\gamma$-ray intensity. To circumvent the uncertainty associated with the total number of ions decaying, we used the fraction of $^{26}\mathrm{P}$ $\beta$ decays that proceed to $^{26}\mathrm{Si}$ \[$61(2)\%$\] [@Thomas2004], together with the absolute intensity of the 829-keV $\gamma$ ray emitted in the $\beta$ decay of $^{26}\mathrm{Si}$ \[$21.9(5)\%$\] [@Endt19981], to calculate the intensity of the 1797-keV line, which is the most intense $\gamma$ ray emitted in the decay of $^{26}\mathrm{P}$ (see Table \[tab:levels\]). To do so, we applied Eq. (\[eq:abs\_intensity\]) to these two $\gamma$ rays:
$$\label{eq:intensity_Al}
N_\gamma(829) = N_{^{26}\mathrm{Si}} \varepsilon_{\gamma}(829) \varepsilon_{\beta}(829) I_{\gamma}(829)$$
$$N_\gamma(1797) = N_{^{26}\mathrm{P}} \varepsilon_{\gamma}(1797) \varepsilon_{\beta}(1797) I_{\gamma}(1797)
\label{eq:intensity_Si}$$
By taking the ratio of Eqs. (\[eq:intensity\_Al\]) and (\[eq:intensity\_Si\]), the only unknown is the intensity of the 1797-keV $\gamma$ ray, because the $\beta$ efficiencies can be obtained from the $\beta$-gated to ungated ratios discussed in Sec. \[sec:betaeff\]. The value obtained for the intensity of the 1797-keV $\gamma$ ray is 58(3)%, which agrees with, and is more precise than, the value of 52(11)% reported in Ref. [@Thomas2004]. The rest of the $\gamma$-ray intensities were determined relative to this value by employing the efficiency curve, and they are presented in Table \[tab:levels\]. We also report an upper limit on the intensity of one $\gamma$ ray that was expected to be near the threshold of our sensitivity given the intensity predicted by theory.
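The normalization can be reproduced by taking the ratio of the two equations; in the sketch below the count numbers and the $\gamma$-efficiency ratio are placeholders, and only the literature intensities 61(2)% and 21.9(5)% come from the text:

```python
def i_gamma_1797(n_1797, n_829, eff_g_ratio, frac_si=0.61, i_829=0.219):
    """Absolute intensity of the 1797-keV line from the ratio of
    Eqs. (intensity_Al) and (intensity_Si). The beta efficiencies cancel
    because 26Si and 26Al decay at the same location in the GeDSSD.
    eff_g_ratio = eps_gamma(829 keV) / eps_gamma(1797 keV)."""
    return (n_1797 / n_829) * frac_si * eff_g_ratio * i_829

# placeholder counts and efficiency ratio, chosen only for illustration
print(i_gamma_1797(24100.0, 10000.0, 1.8))  # ~0.58, cf. 58(3)% in the text
```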
$\bm{\beta}$-$\bm{\gamma}$-$\bm{\gamma}$ coincidences \[subsec:coincidences\]
-----------------------------------------------------------------------------
![\[fig:Coincidences2\] (Color online) $\beta$-$\gamma$-$\gamma$ coincidence spectrum gating on the 1797 keV $\gamma$-rays (blue). The hatched histogram (green) shows coincidences with continuum background in a relatively broad region above the peak gate. The background bins are 16 keV wide and are normalized to the expected background per 2 keV from random coincidences. The strongest peaks corresponding to $\gamma$ rays emitted in coincidence are indicated.](Fig7.eps){width=".5\textwidth"}
The 16-fold granularity of SeGA allowed us to obtain $\beta$-$\gamma$-$\gamma$ coincidence spectra, which helped to interpret the $^{26}\mathrm{P}$ decay scheme. Fig. \[fig:Coincidences2\] shows the $\gamma$-$\gamma$ coincidence spectrum gated on the 1797-keV peak, in which several peaks corresponding to $\gamma$ rays detected in coincidence can be seen. To estimate the background from random coincidences, we created another histogram gated on the background close to the peak and normalized it to the number of counts within the gated regions. At some energies the background estimate is too high because of a contribution from real $\gamma$-$\gamma$ coincidences involving Compton background, which should not be normalized according to the random-coincidence assumption.
{width=".75\textwidth"}
Fig. \[fig:Coincidences\] presents a sample of peaks observed in coincidence when gating on some other intense $\gamma$ rays observed. From this sample we can see that the coincidence technique helps to cross-check the decay scheme. For example Fig. \[fig:Coincidences\](a) shows clearly that the 1401-keV $\gamma$ ray is emitted in coincidence with the 989-keV $\gamma$ ray, indicating that the former $\gamma$ ray comes from a higher-lying level. In the same way, we can see in Fig. \[fig:Coincidences\](b) that the 1330-keV $\gamma$-ray is emitted from a level higher than the 4187-keV level. From the gated spectra, some information can also be extracted from the missing peaks. As Fig. \[fig:Coincidences\](c) shows, by gating on the 2024-keV $\gamma$ ray the 970-keV peak disappears, displaying only the 989-keV peak, which means that the 970-keV $\gamma$ ray comes from a level which is not connected with these two levels by any $\gamma$-ray cascade. Fig. \[fig:Coincidences\](d) shows clearly the coincidence between the $\gamma$ ray emitted from the first $2^+$ state at 1797 keV to the ground state of $^{26}$Si and the 2341-keV $\gamma$ ray from the third $2^+$ state to the first excited state.
These coincidence procedures were systematically applied to all possible combinations of $\gamma$ rays, and the results are summarized in Table \[tab:coincidence\] in the form of a 2D matrix, where a checkmark (✓) means the $\gamma$ rays were detected in coincidence. The condition for a $\gamma$ ray to be listed in coincidence with another is that it be at least 3$\sigma$ above the estimated random-coincidence background. It is worth noting that this background estimate is somewhat conservative; the significance of some of the peaks is therefore underestimated.
843 970 989 1072 1330 1352 1401 1532 1660 1742 1760 1797 1960 2024 2341 2360 2390 2648 2730 2787 2999 4138
------ ----- ----- ----- ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------
843 - - - - - - - - - - - - - - - - -
970 - - - - - - - - - - - - - - - - -
989 - - - - - - -
1072 - - - - - - - - - - - - - - - -
1330 - - - - - - - - - - - - - - - - -
1352 - - - - - - - - - - - - - - - - - -
1401 - - - - - - - - - - - - -
1532 - - - - - - - - - - - - - - - -
1660 - - - - - - - - - - - - - - - - -
1742 - - - - - - - - - - - - - - - - - - - -
1760 - - - - - - - - - - - - - - - - -
1797 - - - - - - - - - - - - - -
1960 - - - - - - - - - - - - - - - -
2024 - - - - - - - - - - - - - - - - -
2341 - - - - - - - - - - - - - - - - - - -
2360 - - - - - - - - - - - - - - - - - - -
2390 - - - - - - - - - - - - - - - - - - -
2648 - - - - - - - - - - - - - - - - - -
2730 - - - - - - - - - - - - - - - - - - -
2787 - - - - - - - - - - - - - -
2999 - - - - - - - - - - - - - - - - - - - -
4138 - - - - - - - - - - - - - - - - - - - - -
Decay scheme of $\bm{^{26}\mathrm{P}}$
--------------------------------------
Fig. \[fig:decay\] displays the $^{26}\mathrm{P}$ $\beta$-decay scheme deduced from the results obtained in this experiment. Only those levels populated in the $\beta$ decay are represented. This level scheme was built in a self-consistent way by taking into account the $\gamma$-ray energies and intensities observed in the singles spectrum of Fig. \[fig:spec\] and the $\beta$-$\gamma$-$\gamma$ coincidence spectra described in Sec. \[subsec:coincidences\].
{width="\textwidth"}
The excitation energies of $^{26}\mathrm{Si}$ bound levels, their $\beta$-feedings, the energies of the $\gamma$ rays, and the absolute intensities measured in this work are shown in Table \[tab:levels\].
### $^{26}\mathrm{Si}$ level energies, spins and parities
Level energies of $^{26}\mathrm{Si}$ populated in the $\beta$-delayed $\gamma$ decay of $^{26}\mathrm{P}$ were obtained from the measured $\gamma$-ray energies, including a correction for the nuclear recoil. The excitation energies listed in Table \[tab:levels\] were obtained from the weighted average of all the possible $\gamma$-ray cascades de-exciting each level. To assign spins and parities, we compared the deduced level scheme with USDB shell-model calculations and took into account $\beta$-decay angular momentum selection rules, finding a one-to-one correspondence for all the levels populated by allowed transitions, with fair agreement in the level energies within theoretical uncertainties of a few hundred keV (see Fig. \[fig:decay\]).
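The recoil correction mentioned above is small but not negligible at the quoted precision. A minimal sketch (the $^{26}$Si mass is an approximate literature value, not a result of this work):

```python
AMU_KEV = 931494.10              # 1 u in keV/c^2
M_SI26_KEV = 25.99233 * AMU_KEV  # approximate 26Si mass (assumed value)

def excitation_energy(e_gamma_kev, e_final_kev=0.0, m_kev=M_SI26_KEV):
    """Initial-level energy from a gamma-ray energy, adding the nuclear
    recoil correction E_g^2 / (2 M c^2) and the final-level energy."""
    return e_final_kev + e_gamma_kev + e_gamma_kev**2 / (2.0 * m_kev)

# 2_1+ -> g.s. transition: the recoil adds about 0.07 keV
print(excitation_energy(1797.1))
```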
### $\beta$-feedings
The $\beta$ branching ratio to the $i$-th excited energy level can be determined from the $\gamma$-ray intensities:
$$\label{eq:BR}
BR_i = I_{i,\text{out}}-I_{i,\text{in}},$$
where $I_{i,\text{out}}$ ($I_{i,\text{in}}$) represents the total $\gamma$-ray intensity observed decaying out of (into) the $i$-th level. The $\beta$-decay branches deduced from this experiment are given in Table \[tab:BR\], where they are also compared to previous measurements of $^{26}\mathrm{P}$ $\beta$ decay [@Thomas2004]. To investigate possible missing intensity from the Pandemonium effect [@Hardy1977], we used a shell-model calculation to estimate the $\gamma$-ray intensities of all possible transitions from bound states feeding each particular level, and found them to be comparable to the uncertainty or (usually) much smaller.
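Equation (\[eq:BR\]) amounts to simple intensity bookkeeping. A sketch using the Table \[tab:levels\] intensities for the 3757-keV level (listing only the transitions into and out of that level):

```python
# gamma-ray intensities (%) from Table [tab:levels]; key: (from, to) level
transitions = {
    (3757, 2786): 1.15, (3757, 1797): 1.7,   # out of the 3757-keV level
    (5289, 3757): 0.43, (5517, 3757): 0.47,  # feeding it from above
}

def beta_branch(level, trans):
    """Beta branching ratio of Eq. (BR): intensity out minus intensity in."""
    out = sum(i for (f, t), i in trans.items() if f == level)
    into = sum(i for (f, t), i in trans.items() if t == level)
    return out - into

print(beta_branch(3757, transitions))  # ~1.95 %, cf. 1.9(2)% in Table
```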
[l d d d d d d]{} & &\
& & & & & &\
1797 & 41(3) & 44(12) & 47.22 & 4.89(3) & 4.89(17) &4.81\
2786 &<0.39 & 3.3(20) & 0.37 & & 5.87(72) &6.77\
3757 & 1.9(2) & 2.68(68)& 1.17 & 5.94(4) & 5.81(15) & 6.135\
3842 & & 1.68(47)& & & 6.00(17) &\
4139 & 6.2(4) & 1.78(75)& 2.97 & 5.37(3) & 5.93(32) & 5.634\
4188 & 4.4(3) & 2.91(71)& 8.88 & 5.51(3) & 5.71(14) & 5.182\
4445 & 0.8(2) && 1.11 & 6.23(8) & & 6.071\
4796 & 0.56(9) && 0.06 & 6.31(7) & & 7.274\
4810 & 3.1(2) && 4.45 & 5.57(3) & & 5.934\
5147 & 0.18(5) && 0.03 & 6.7(1) & &7.474\
5289 & 0.76(7) && 0.60 & 6.09(6) & &6.158\
5517 & 2.7(2) && 3.96 & 5.51(4) & &5.262\
5929 & 0.15(5) & 17.96(90)&10.08 & 6.7(1)& 4.60(3)&4.810\
Discussion
===========
Comparison to previous values of $\bm{^{26}\mathrm{Si}}$ level energies
-----------------------------------------------------------------------
We compare in Table \[tab:energies\] the energies, spins, and parities deduced in this work with previous values available in the literature [@Thomas2004; @PhysRevC.75.062801; @Komatsubara2014; @Doherty2015]. The results of Ref. [@Thomas2004] correspond to $\beta$ decay, so the same levels are expected to be populated. We observed six levels of $^{26}\mathrm{Si}$ for the first time in the $\beta$ decay of $^{26}\mathrm{P}$; these six levels had previously been reported using nuclear reactions to populate them [@PhysRevC.75.062801; @Komatsubara2014; @Doherty2015], and the previously reported energies are in good agreement with the results obtained in this work. However, it is worth mentioning a significant discrepancy (up to 6 keV) with the energies obtained in Refs. [@PhysRevC.75.062801; @Doherty2015] for the two $\gamma$ rays emitted from the $4_4^+$ state to the $3_1^+$ and $2_2^+$ states (1759.7 and 2729.9 keV, respectively). Despite these discrepancies in the $\gamma$-ray energies, the reported excitation energy of the level is in excellent agreement with our results. It should be noted, however, that the $\gamma$-ray branching ratios are inconsistent for the 1759.7-keV transition.
The 3842-keV level reported in Ref. [@Thomas2004] was not observed in the present work. In agreement with Refs. [@PhysRevC.75.062801; @Komatsubara2014; @Doherty2015], we conclude that this level does not exist, as the 2045-keV $\gamma$ ray that would be emitted from it to the first excited state is seen neither in the spectrum of Fig. \[fig:spec\] nor in the coincidence spectrum gated on the 1797-keV peak (Fig. \[fig:Coincidences2\]).
The 4810-keV level had previously been tentatively assigned as a $2^+$ state, but the assignment was uncertain because of its proximity to another level at 4830 keV assigned as $0^+$. The fact that the 2024-keV line appears in the spectrum confirms that the spin-parity is $2^+$, $3^+$, or $4^+$: if this level were $0^+$, the $\beta$-decay transition populating it would be second forbidden ($\Delta J=3$, $\Delta\pi=0$) and highly suppressed.
We also observed two levels located just above the proton separation energy ($S_p=5513.8$ keV). The first is a $4^+$ state at 5517 keV, which was also reported in Refs. [@PhysRevC.75.062801; @Komatsubara2014]. The second, at 5929 keV, was previously observed in $\beta$-delayed proton emission by Thomas *et al.* [@Thomas2004] and more recently reported in our previous paper describing the present experiment [@Bennett2013]. The results presented here, based on the same data set but an independent analysis, confirm the observation of a $\gamma$ ray emitted from that level.
[c d c d c d c d c d c c]{} & & & & &\
&&& &&\
$J_n^\pi$ & &$J_n^\pi$ & &$J_n^\pi$ & & $J_n^\pi$ && $J_n^\pi$ && $J_n^\pi$ &\
$2_1^+$ & 1797.1(3) & $2_1^+$ & 1795.9(2) & $2_1^+$ & 1797.3(1) & $2_1^+$ & 1797.4(4)&$2_1^+$ &1797.3(1) &$2_1^+$ &1887\
$2_2^+$ & 2786.4(3) & $2_2^+$ & 2783.5(4) & $2_2^+$ & 2786.4(2) & $2_2^+$ & 2786.8(6)&$2_2^+$ &2786.4(2) &$2_2^+$ &2948\
&& && $0_2^+$ & 3336.4(6) & $0_2^+$ & 3335.3(4)&$0_2^+$ & 3336.4(2)&&\
$3_1^+$ & 3756.8(3) & $(3_1^+)$ & 3756(2) & $3_1^+$ & 3756.9(2) & $3_1^+$ & 3756.9(4)& $3_1^+$ & 3757.1(3)&$3_1^+$ &3784\
&& $(4_1^+)$ & 3842(2) &&&&&&&&\
$2_3^+$ & 4138.6(4) & $2_3^+$ & 4138(1) & $2_3^+$ & 4139.3(7) & $2_3^+$ & 4138.6(4)& $2_3^+$ & 4138.8(13) &$2_3^+$ &4401\
$3_2^+$ & 4187.6(4) & $3_2^+$ & 4184(1) & $3_2^+$ & 4187.1(3) & $3_2^+$ & 4187.4(4) & $3_2^+$ & 4187.2(4) &$3_2^+$ &4256\
$4_1^+$ & 4445.1(4) & & & $4_1^+$ & 4446.2(4) & $4_1^+$ & 4445.2(4)& $4_1^+$ & 4445.5(12) &$4_1^+$ &4346\
$4_2^+$ & 4796.4(5) & & & $4_2^+$ & 4798.5(5) & $4_2^+$ & 4795.6(4) & $4_2^+$ & 4796.7(4) &$4_2^+$ &4893\
$2_4^+$ & 4810.4(4) & & & $(2_4^+)$ & 4810.7(6) &$(2_4^+)$ & 4808.8(4)&$2_4^+$ & 4811.9(4)&$2_4^+$ &4853\
&& && $(0_3^+)$ & 4831.4(10) & $(0_3^+)$ & 4830.5(7) & $0_3^+$ & 4832.1(4)&&\
$2_5^+$ & 5146.5(6) & & & $2_5^+$ & 5146.7(9) & $2_5^+$ & 5144.5(4) & $2_5^+$ & 5147.4(8)&$2_5^+$ &5303\
$4_3^+$ & 5288.9(4) & & & $4_3^+$ & 5288.2(5) & $4_3^+$ & 5285.4(7)& $4_3^+$ & 5288.5(7) &$4_3^+$ &5418\
$4_4^+$ & 5517.3(3) & & & $4_4^+$ & 5517.2(5) & $4_4^+$ & 5517.8(11)& $4_4^+$ & 5517.0(5)&$4_4^+$ &5837\
&& && $1_1^+$ & 5677.0(17)& $1_1^+$ & 5673.6(10) & $1_1^+$ & 5675.9(11)&&\
&& &&& & $0_4^+$ & 5890.0(10)& $0_4^+$ & 5890.1(6)& &\
$3_3^+$ & 5929.3(6) & $3_1^+$ & 5929(5)[^1] & &&&&&&$3_3^+$ &6083\
$\bm{ft}$ values and Gamow-Teller strength
------------------------------------------
As mentioned in Sec. \[sec:intro\], the calculation of the experimental $ft$ values requires the measurement of three fundamental quantities: (a) the half-life, (b) the branching ratio, and (c) the $Q$ value of the decay. The experimental half-life, $t_{1/2}=43.7(6)$ ms, and the semiempirical $Q$ value, $Q_{EC}=18250(90)$ keV, were both taken from Ref. [@Thomas2004]. The branching ratios from the present work are listed in Table \[tab:levels\]. The partial half-lives $t_i$ are thus calculated as: $$\label{eq:partial_half_life}
t_i = \frac{t_{1/2}}{BR_i}(1+P_{EC}),$$
where $BR_i$ is the $\beta$-branching ratio of the $i$-th level and $P_{EC}$ is the electron-capture fraction, which can be neglected for a nuclide as light as $^{26}$P. The statistical phase-space factors $f$ were calculated with the parametrization reported in Ref. [@Wilkinson197458], including additional radiative [@WILKINSON1973_2] and diffuseness corrections [@PhysRevC.18.401]. The uncertainty associated with this calculation is 0.1%, which is added in quadrature to the uncertainty derived from the 0.5% uncertainty of the $Q_{EC}$ value. Table \[tab:BR\] shows the $\beta$ branches and $\log\!ft$ values for the transitions to excited levels of $^{26}$Si compared to the previous values reported in Ref. [@Thomas2004]. For the first excited state, our estimate of the $\beta$ feeding is consistent with the previous result. For the second excited state, the previous value is one order of magnitude larger than our upper limit; the difference arises because the feeding from the newly observed levels is now accounted for. The large branching ratios observed for the $2_3^+$ and $3_2^+$ states compared to previous results, 6.2(4)% and 4.4(3)%, respectively, are noteworthy. The reason for the difference is the observation of new $\gamma$ rays emitted by those levels, which have now been accounted for. The new levels, together with the unobserved state at 3842 keV, explain all the discrepancies between the results reported here and the literature values [@Thomas2004]. As far as the $\log\! ft$ values are concerned, the agreement for the first excited state is very good, but at higher energies the discrepancies in the $\log\! ft$ values are directly related to those in the branching ratios.
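Combining Eq. (\[eq:partial\_half\_life\]) with a phase-space factor $f$ gives $\log ft$ directly. In the sketch below the $f$ value is illustrative (the actual values come from the Wilkinson parametrization, which is not reproduced here); only $t_{1/2}$ and the 41% branching are from the text:

```python
import math

T_HALF = 43.7e-3  # s, half-life of 26P from Thomas et al.

def log_ft(branching, f_value, p_ec=0.0):
    """log(ft) from the beta branching ratio (as a fraction), the
    statistical rate function f (computed elsewhere), and the EC fraction."""
    t_partial = (T_HALF / branching) * (1.0 + p_ec)  # Eq. (partial_half_life)
    return math.log10(f_value * t_partial)

# 1797-keV level: BR = 41%, with an illustrative f ~ 7.3e5
print(f"log ft = {log_ft(0.41, 7.3e5):.2f}")  # ~4.89
```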
### Comparison to theory
Theoretical calculations were also performed with a shell-model code: wave functions of $^{26}$P and $^{26}$Si were obtained in the full $sd$ shell using the USDB interaction, and the corresponding $\beta$-decay transitions from $^{26}$P to $^{26}$Si levels were computed.
Fig. \[fig:decay\] compares the $^{26}\mathrm{Si}$ level energies deduced in this $^{26}$P $\beta$-decay work with the same levels predicted by the calculation. We observe fair agreement in the level energies, although the theoretical values are systematically higher; the r.m.s. and maximum deviations between theory and experiment are 109 and 320 keV, respectively. From a direct comparison we also see that we have measured all the states populated in the allowed transitions predicted by the shell-model calculation.
The experimental $\log\! ft$ values presented in Table \[tab:BR\] were determined from the measured branching ratios combined with the known values of $Q_{EC}$ and the half-life [@Thomas2004]. Theoretical Gamow-Teller strengths were obtained from the matrix elements of the transitions to states of $^{26}$Si populated in the $\beta$ decay of $^{26}$P. For comparison, the experimental $B(GT)$ values were calculated from the $ft$ values through the expression
$$\label{eq:BGT}
B(GT)=\frac{2\mathcal{F}t}{ft},$$
[d d c d d ]{} & &\
& & $I_n^\pi$ & &\
1797 & 0.048(3) &$2_1^+$ &1887 &0.0606\
2786 &<0.0007 &$2_2^+$ &2948 &0.0007\
3757 & 0.0044(4) &$3_1^+$ &3784 & 0.0029\
4139 & 0.016(1) &$2_3^+$ &4401 & 0.009\
4188 & 0.0117(1) &$3_2^+$ &4256 & 0.0256\
4445 & 0.0023(4) &$4_1^+$ &4346 & 0.0033\
4796 & 0.0018(3) &$4_2^+$ &4893 & 0.0002\
4810 & 0.0103(7) &$2_4^+$ &4853 & 0.0161\
5147 & 0.0007(2) &$2_5^+$ &5303 &0.0001\
5289 & 0.0031(4) &$4_3^+$ &5418 & 0.0027\
5517 & 0.012(1) &$4_4^+$ &5837 & 0.0213\
where $\mathcal{F}t= 3072.27\pm0.62$ s [@Hardy2015] is the average corrected $ft$ value from $T=1$ $0^+\rightarrow 0^+$ superallowed Fermi $\beta$ decays. Table \[tab:bgt\] shows the comparison between the experimental and theoretical $B(GT)$ values. A quenching factor $q=0.77$ ($q^2=0.6$) was applied to the shell-model calculation [@Wildenthal1983]. Theoretical predictions overestimate the experimental values for the transitions to the $2^+_1$, $3^+_2$, $4^+_1$, $2^+_4$, and $4^+_4$ states, and slightly underestimate them for the rest of the states up to 5.9 MeV. The most significant differences are for the $4_2^+$ and $2_5^+$ levels, for which the predicted $B(GT)$ values differ from the experimental ones by almost one order of magnitude. A possible explanation for this difference is mixing between nearby levels.
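The conversion from $\log ft$ to $B(GT)$ can be sketched as follows; here we assume the conventional axial-to-vector coupling ratio $\lambda = |g_A/g_V| \approx 1.2701$ is folded into the conversion, an assumption that is consistent with the tabulated value for the first excited state:

```python
FT_SUPERALLOWED = 3072.27  # s, average corrected Ft value [Hardy 2015]
LAMBDA = 1.2701            # |gA/gV|; assumed convention, see lead-in

def b_gt(log_ft_value):
    """Experimental Gamow-Teller strength from a log(ft) value:
    B(GT) = 2*Ft / (ft * lambda^2)."""
    return 2.0 * FT_SUPERALLOWED / (10.0**log_ft_value * LAMBDA**2)

print(f"{b_gt(4.89):.3f}")  # ~0.049, cf. 0.048(3) for the 1797-keV level
```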
![\[fig:BGT\] Summed Gamow-Teller strength distribution of the $\beta$ decay of $^{26}$P up to 5.9 MeV excitation energy. The results of the present experiment are compared to previous results [@Thomas2004] and Shell-Model calculations. A quenching factor $q^2=0.6$ was used in the theoretical calculation.](Fig10.eps){width=".45\textwidth"}
[d c c d c d d]{} & &&\
& & $I_n^\pi$& & & &\
1797 & 7.9(5)$\times 10^4$ & $2_1^+$ &1809 &5.23(2)$\times 10^4$ & 51(10) &50(60)\
3757 & 8.7(8)$\times 10^5$ & $3_1^+$ &3941 &7.5(2)$\times 10^5$ &16(11)&10(40)\
4139 & 2.4(2)$\times 10^5$ & $2_3^+$ &4332 &4.22(9)$\times 10^5$ &-43(5)&110(160)\
4188 & 3.2(2)$\times 10^5$ & $3_2^+$ &4350 &2.16(4)$\times 10^5$ & 50(10) &110(70)\
4445 & 1.7(7)$\times 10^6$ & $4_1^+$ &4319 &1.43(3)$\times 10^6$ &20(50)\
4796 & 2.1(3)$\times 10^6$ & $4_2^+$ &4901 &1.63(7)$\times 10^6$ & 29(18)\
4810 & 3.7(3)$\times 10^5$ & $2_4^+$ &4835 &1.85(2)$\times 10^5$ & 100(16)\
5147 & 5.6(20)$\times 10^6$ & $2_5^+$ &5291 &2.0(3)$\times 10^7$ & -72(11)\
5289 & 1.2(2)$\times 10^6$ & $4_3^+$ &5476 &7.9(40)$\times 10^7$ & -98(1)\
5517 & 3.2(3)$\times 10^5$ & $4_4^+$ &5716& 1.71(3)$\times 10^5$&87(18)\
Fig. \[fig:BGT\] shows the summed Gamow-Teller strength distribution of the decay of $^{26}$P for bound levels up to 5517 keV. In this figure we compare the results obtained in this work with the previous results and the shell-model calculation. We can see that the agreement with the previous experimental results is good for the first excited state, with a small difference that is consistent within uncertainties. As the energy increases the differences become more significant, with our results slightly below the previous ones until the contribution of the new levels is added. For energies above 4.1 MeV, the results from the previous experiment are clearly below our results. If we compare the present data with the theoretical prediction using the typical quenching factor of $q^2=0.6$, we see that the theoretical prediction overestimates the summed Gamow-Teller strength in the excitation energy region below 5.9 MeV. If a quenching factor of 0.47 were applied to the shell model calculations instead, the agreement would be almost perfect in this energy region. However, this does not necessarily imply that the value of $q^2=0.6$ is inapplicable because only a small energy range was considered for the normalization. In fact, most of the Gamow-Teller strength is to unbound states which have not been measured in the present work. Furthermore, according to shell model calculations, only $\sim$21% of the total Gamow-Teller strength is in the $Q$-value window.
Mirror asymmetry and $\bm{^{26}\mathrm{P}}$ proton halo
-------------------------------------------------------
The high-precision data on the $\beta$ decay of the mirror nucleus $^{26}$Na from Ref. [@PhysRevC.71.044309], together with the results obtained in the present work, made it possible to calculate finite values of the mirror asymmetry for $\beta$-decay transitions from the $A=26$, $T_z=\pm2$ mirror nuclei to low-lying states of their respective daughters. Table \[tab:mirror\] shows the $ft$ values obtained for the $\beta$ decay of $^{26}$P and its mirror nucleus, and the corresponding asymmetry parameter, compared to the previous experimental results reported in Ref. [@Thomas2004]. For the low-lying states, the agreement between the previous data and our results is good, but our results are more precise, yielding the first finite values for this system. For the higher-energy states, we report the first values of the mirror asymmetry. We observe large and significant mirror asymmetries, with values ranging from $-98\%$ up to $+100\%$. As mentioned in Sec. \[sec:intro\], mirror asymmetries can be related to isospin mixing and/or differences in the radial wave functions. It has also been shown that halo states produce significant mirror asymmetries. The $51(10)\%$ asymmetry observed for the transition to the first excited state could therefore be further evidence for a proton halo in $^{26}$P [@Navin1998]. Higher-lying states are less useful in this respect because of possible mixing between nearby states.
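With the asymmetry parameter defined as $\delta = ft^+/ft^- - 1$ (the convention consistent with the values in Table \[tab:mirror\]), the quoted numbers follow directly:

```python
def mirror_asymmetry(ft_plus, ft_minus):
    """Mirror asymmetry delta = ft+/ft- - 1, in percent."""
    return 100.0 * (ft_plus / ft_minus - 1.0)

# 2_1+ mirror pair: ft(26P) = 7.9(5)e4 s, ft(26Na) = 5.23(2)e4 s
print(f"{mirror_asymmetry(7.9e4, 5.23e4):.0f} %")  # ~51 %
```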
To investigate this effect more quantitatively, we performed two shell-model calculations with the USDA and USDB interactions. For the transition to the first excited state, these interactions predict mirror asymmetries of 3% and 2.5%, respectively, far from the experimental result. If we lower the energy of the $2s_{1/2}$ proton orbital by 1 MeV to account for the low proton separation energy of $^{26}$P, the mirror asymmetries obtained for the first excited state are 60% and 50% for the USDA and USDB interactions, respectively, in agreement with the experimental result and supporting the hypothesis of a halo state [@Brown1996]. Before firm conclusions can be drawn, however, more detailed calculations are needed to evaluate the contributions of the other effects that may produce mirror asymmetries.
$\bm{^{25}\mathrm{Al}(\mathrm{p},\gamma)^{26}\mathrm{Si}}$ Reaction rate calculation
=====================================================================================
As reported in Ref. [@Wrede_2009], the $\beta$ decay of $^{26}$P to $^{26}$Si provides a convenient means for determining parameters of the reaction $^{25}$Al$(p,\gamma)^{26}$Si, which is astrophysically relevant in novae. In these stellar environments, the nuclei are assumed to have a Maxwell-Boltzmann distribution of energies characterized by the temperature $T$, from which the resonant reaction rate can be written as a sum over the resonances:
$$\label{eq:Reaction rate}
\langle \sigma v\rangle=\left (\frac{2\pi}{\mu kT}\right )^{3/2}\hbar^2\sum_r(\omega\gamma)_re^{-E_r/kT},$$
where $\hbar$ is the reduced Planck constant, $k$ is the Boltzmann constant, $\mu$ is the reduced mass, and $E_r$ is the energy of the resonance in the center-of-mass frame. $(\omega\gamma)_r$ is the resonance strength, which is defined as
$$\label{eq:Res_stength}
(\omega\gamma)_r=\frac{(2J_r+1)}{(2J_p+1)(2J_{\text{Al}}+1)}\left(\frac{\Gamma_p\Gamma_\gamma}{\Gamma} \right )_r.$$
   T (GK)    Rate (low)    Rate (median)    Rate (high)
  --------- ------------- ---------------- -------------
0.01 & 1.10E-37 & 1.57E-37 & 2.04E-37\
0.015 & 7.00E-32 & 1.00E-31 & 1.30E-31\
0.02 & 3.19E-28 & 4.56E-28 & 5.93E-28\
0.03 & 1.23E-23 & 1.75E-23 & 2.28E-23\
0.04 & 9.42E-21 & 1.34E-20 & 1.75E-20\
0.05 & 1.40E-18 & 1.93E-18 & 2.88E-18\
0.06 & 1.16E-16 & 2.42E-16 & 6.17E-16\
0.07 & 5.64E-15 & 1.50E-14 & 4.30E-14\
0.08 & 1.27E-13 & 3.59E-13 & 1.06E-12\
0.09 & 1.46E-12 & 4.23E-12 & 1.25E-11\
0.1 & 1.03E-11 & 3.01E-11 & 8.95E-11\
0.11 & 5.06E-11 & 1.48E-10 & 4.40E-10\
0.12 & 1.99E-10 & 5.53E-10 & 1.64E-09\
0.13 & 5.80E-10 & 1.68E-09 & 4.98E-09\
0.14 & 1.55E-09 & 4.36E-09 & 1.28E-08\
0.15 & 4.04E-09 & 1.03E-08 & 2.92E-08\
0.16 & 1.14E-08 & 2.43E-08 & 6.24E-08\
0.17 & 3.46E-08 & 6.23E-08 & 1.34E-07\
0.18 & 1.02E-07 & 1.79E-07 & 3.14E-07\
0.19 & 2.84E-07 & 5.41E-07 & 8.44E-07\
0.2 & 7.80E-07 & 1.60E-06 & 2.42E-06\
0.21 & 2.07E-06 & 4.47E-06 & 6.75E-06\
0.22 & 5.21E-06 & 1.15E-05 & 1.75E-05\
0.23 & 1.23E-05 & 2.76E-05 & 4.21E-05\
0.24 & 2.72E-05 & 6.17E-05 & 9.40E-05\
0.25 & 5.67E-05 & 1.29E-04 & 1.97E-04\
0.26 & 1.12E-04 & 2.55E-04 & 3.89E-04\
0.27 & 2.09E-04 & 4.78E-04 & 7.30E-04\
0.28 & 3.74E-04 & 8.55E-04 & 1.31E-03\
0.29 & 6.42E-04 & 1.47E-03 & 2.24E-03\
0.3 & 1.06E-03 & 2.43E-03 & 3.71E-03\
0.31 & 1.70E-03 & 3.88E-03 & 5.93E-03\
0.32 & 2.63E-03 & 6.01E-03 & 9.19E-03\
0.33 & 3.96E-03 & 9.06E-03 & 1.39E-02\
0.34 & 5.82E-03 & 1.33E-02 & 2.04E-02\
0.35 & 8.36E-03 & 1.91E-02 & 2.92E-02\
0.36 & 1.18E-02 & 2.69E-02 & 4.10E-02\
0.37 & 1.62E-02 & 3.70E-02 & 5.66E-02\
0.38 & 2.19E-02 & 5.01E-02 & 7.66E-02\
0.39 & 2.92E-02 & 6.67E-02 & 1.02E-01\
0.4 & 3.83E-02 & 8.75E-02 & 1.34E-01\
0.42 & 6.32E-02 & 1.44E-01 & 2.21E-01\
0.44 & 9.94E-02 & 2.27E-01 & 3.47E-01\
0.46 & 1.50E-01 & 3.42E-01 & 5.22E-01\
0.48 & 2.17E-01 & 4.96E-01 & 7.58E-01\
0.5 & 3.06E-01 & 6.97E-01 & 1.06E+00\
$J_{r(p,\mathrm{Al})}$ are the spins of the resonance (reactants), $\Gamma_{p(\gamma)}$ are the proton ($\gamma$-ray) partial widths of the resonance and $\Gamma=\Gamma_p+\Gamma_\gamma$ is the total width. It was previously predicted [@Iliadis_96] that the levels corresponding to significant resonances at nova temperatures in the $^{25}$Al$(p,\gamma)^{26}$Si reaction are the $J^\pi = 1_1^+,4_4^+,0_4^+$, and $3_3^+$ levels. In our previous work [@Bennett2013] we reported the first evidence for the observation of $\gamma$ rays emitted from the $3_3^+$ level. The determination of the strength of the $3_3^+$ resonance in $^{25}$Al$(p,\gamma)^{26}$Si based on the experimental measurements of the partial proton width ($\Gamma_p$) [@Peplowski2009] and the $\gamma$-ray branching ratio ($\Gamma_\gamma/\Gamma$) [@Bennett2013] was also performed and used to determine the amount of $^{26}$Al ejected in novae. In this work, we have confirmed the evidence for the 1742-keV $\gamma$ ray emitted from the $3_3^+$ level to the $3_2^+$ level in $^{26}$Si with an intensity of $0.15(5)\%$. To some extent, the present paper is a follow-up of our previous work, thus we present here (see Table \[tab:rate\]) for completeness the results of the full reaction rate calculation used to obtain the astrophysical results published in [@Bennett2013]. The table shows the total thermonuclear $^{25}$Al$(p,\gamma)^{26}$Si reaction rate as a function of temperature including contributions from the relevant resonances, namely $1_1^+,0_4^+$, and $3_3^+$ and the direct capture. For the $1^+$ and $0^+$ resonances and the direct capture, values are adopted from Ref. [@Wrede_2009]. Our table includes the rate limits calculated from a 1 standard deviation variation of the parameters.
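Equations (\[eq:Reaction rate\]) and (\[eq:Res\_stength\]) can be evaluated numerically; the sketch below uses the standard numerical form of the narrow-resonance sum for $N_A\langle\sigma v\rangle$ (in cm$^3$ s$^{-1}$ mol$^{-1}$, with energies and strengths in MeV). The resonance parameters in the example are placeholders, not the values adopted in Table \[tab:rate\]:

```python
import math

def resonance_strength(J_r, J_p, J_T, Gamma_p, Gamma_g):
    """(omega*gamma)_r: spin factor times Gamma_p*Gamma_gamma/Gamma,
    in whatever energy unit the partial widths use."""
    omega = (2 * J_r + 1) / ((2 * J_p + 1) * (2 * J_T + 1))
    return omega * Gamma_p * Gamma_g / (Gamma_p + Gamma_g)

def narrow_resonance_rate(T9, resonances, mu):
    """N_A<sigma*v> in cm^3 s^-1 mol^-1 for narrow resonances.

    T9: temperature in GK; mu: reduced mass in amu;
    resonances: iterable of (E_r [MeV], omega_gamma [MeV]).
    Standard numerical form of the Maxwell-Boltzmann-averaged sum:
    1.5399e11 * (mu*T9)^(-3/2) * sum wg * exp(-11.605 E_r / T9)."""
    pref = 1.5399e11 / (mu * T9) ** 1.5
    return pref * sum(wg * math.exp(-11.605 * e_r / T9)
                      for e_r, wg in resonances)

# Placeholder example: one resonance at E_r = 0.4 MeV with
# omega*gamma = 1e-8 MeV, and mu ~ 25/26 amu for 25Al + p.
rate_03 = narrow_resonance_rate(0.3, [(0.4, 1.0e-8)], 25.0 / 26.0)
```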
Conclusions
===========
We have measured the absolute $\gamma$-ray intensities and deduced the $\beta$-decay branches for the decay of $^{26}$P to bound states and low-lying resonances of $^{26}$Si. We have observed six new $\beta$-decay branches and 15 $\gamma$-ray lines never observed before in $^{26}$P $\beta$ decay, likely corresponding to most of the allowed Gamow-Teller transitions between the ground state and 5.9 MeV. The energies measured for the excited states show good agreement with previous results obtained using various nuclear reactions to populate these states. We have calculated the $\log\! ft$ values of all these new transitions and compared them to USDB shell-model calculations. The reported values show good agreement with the theoretical calculations. In addition, the Gamow-Teller strength function was calculated and compared to theoretical values, showing that the summed Gamow-Teller strength is locally overestimated with the standard $sd$-shell quenching of 0.6. The mirror asymmetry was also investigated by calculating the $\beta$-decay asymmetry parameter $\delta$ for 10 transitions. The significant asymmetries observed, particularly for the transition to the first excited states of $^{26}$Si and its mirror $^{26}$Mg ($\delta=(51\pm10)\%$), might be further evidence for the existence of a proton halo in $^{26}$P. Finally, we have tabulated the total $^{25}$Al$(p,\gamma)^{26}$Si reaction rate at nova temperatures used to estimate the galactic production of $^{26}$Al in novae in Ref. [@Bennett2013].
The authors gratefully acknowledge the contributions of the NSCL staff. This work is supported by the U.S. National Science Foundation under grants PHY-1102511, PHY-0822648, PHY-1350234, PHY-1404442, the U.S. Department of Energy under contract No. DE-FG02-97ER41020, the U.S. National Nuclear Security Administration under contract No. DE-NA0000979, and the Natural Sciences and Engineering Research Council of Canada.
References
==========

1. doi:10.1103/PhysRevC.91.025501
2. doi:10.1103/PhysRevC.7.930
3. doi:10.1016/0375-9474(73)90840-3
4. doi:10.1103/PhysRevC.47.163
5. doi:10.1103/PhysRevC.28.1343
6. doi:10.1103/PhysRevC.53.R2602
7. http://stacks.iop.org/1742-6596/20/i=1/a=025
8. doi:10.1140/epja/i2003-10218-8
9. doi:10.1016/0370-2693(70)90150-4
10. doi:10.1103/PhysRevLett.38.321
11. doi:10.1007/s100500050397
12. doi:10.1016/S0375-9474(02)01392-1
13. (unresolved entry)
14. doi:10.1016/0370-2693(93)91564-4
15. http://stacks.iop.org/0954-3899/24/i=1/a=018
16. doi:10.1103/PhysRevC.55.R1633
17. doi:10.1016/0370-2693(94)90585-1
18. doi:10.1016/0375-9474(96)00241-2
19. doi:10.1016/j.physletb.2003.07.050
20. doi:10.1016/S0375-9474(01)00650-9
21. doi:10.1016/j.physletb.2003.09.073
22. doi:10.1016/S0375-9474(97)81837-4
23. (unresolved entry)
24. doi:10.1016/0375-9474(95)00115-H
25. http://stacks.iop.org/0256-307X/27/i=9/a=092101
26. doi:10.1103/PhysRevC.52.3013
27. (unresolved entry)
28. doi:10.1103/PhysRevC.79.035803
29. doi:10.1103/PhysRevC.53.475
30. doi:10.1088/1674-1137/36/12/003
31. doi:10.1016/0370-2693(83)90950-4
32. doi:10.1103/PhysRevC.30.1276
33. doi:10.1103/PhysRevC.92.031302
34. doi:10.1103/PhysRevLett.111.232503
35. doi:10.1016/0370-2693(96)00634-X
36. doi:10.1103/PhysRevC.53.R572
37. http://stacks.iop.org/0256-307X/26/i=3/a=032102
38. doi:10.1103/PhysRevLett.81.5089
39. doi:10.1016/S0168-583X(02)01895-5
40. doi:10.1016/j.nima.2009.05.100
41. doi:10.1016/j.nima.2013.06.027
42. doi:10.1016/S0168-9002(01)00257-1
43. doi:10.1051/epjconf/20146602072
44. (unresolved entry)
45. doi:10.1016/j.nima.2013.12.044
46. doi:10.1016/j.nds.2007.10.001
47. doi:10.1016/S0168-9002(03)01368-8
48. doi:10.1016/0168-9002(90)90561-J
49. doi:10.1016/S0375-9474(97)00613-1
50. doi:10.1016/0370-2693(77)90223-4
51. doi:10.1103/PhysRevC.75.062801
52. (unresolved entry)
53. doi:10.1103/PhysRevC.92.035808
54. doi:10.1016/0375-9474(74)90645-9
55. doi:10.1103/PhysRevC.18.401
56. doi:10.1103/PhysRevC.71.044309
57. http://link.aps.org/doi/10.1103/PhysRevC.79.032801
[^1]: $^{26}\mathrm{P}(\beta\mathrm{p})$.
---
abstract: 'We present a bilateral teleoperation system for task learning and robot motion generation. Our system includes a bilateral teleoperation platform and deep learning software. Human demonstrations performed on the teleoperation platform provide paired visual images and robot encoder values, and the deep learning software leverages these datasets to learn the inter-modal correspondence between visual images and robot motion. In detail, the software uses a combination of Deep Convolutional Auto-Encoders (DCAE) over image regions and a Recurrent Neural Network with Long Short-Term Memory units (LSTM-RNN) over robot motor angles to learn the motion taught by human teleoperation. The learnt models are used to predict new motion trajectories for similar tasks. Experimental results show that our system has the adaptivity to generate motion for similar scooping tasks. Detailed analysis is performed on the failure cases of the experimental results, and some insights about the capabilities and limitations of the system are summarized.'
author:
- 'Hitoe Ochi, Weiwei Wan, Yajue Yang, Natsuki Yamanobe, Jia Pan, and Kensuke Harada [^1]'
bibliography:
- 'reference.bib'
title: Deep Learning Scooping Motion using Bilateral Teleoperations
---
Introduction
============
Common household tasks require robots to act intelligently and adaptively in various unstructured environments, which makes it difficult to model control policies with explicit objectives and reward functions. One popular solution [@argall2009survey][@yang2017repeatable] is to circumvent these difficulties by learning from demonstration (LfD). LfD allows robots to learn skills from successful demonstrations performed by manual teaching. To take advantage of LfD, we develop a system which enables human operators to demonstrate with ease and enables robots to learn dexterous manipulation skills from multi-modal sensed data. Fig.\[teaser\] shows the system. The hardware platform of the system is a bilateral teleoperation system composed of two identical robot manipulators. The software of the system is a deep neural network made of a Deep Convolutional Auto-Encoder (DCAE) and a Recurrent Neural Network with Long Short-Term Memory units (LSTM-RNN). The deep neural network leverages datasets of images and robot encoder readings to learn the inter-modal correspondence between visual images and robot motion, and the proposed system uses the learnt model to generate new motion trajectories for similar tasks.
![The bilateral teleoperation system for task learning and robotic motion generation. The hardware platform of the system is a bilateral teleoperation system composed of two identical robot manipulators. The software of the system is a deep neural network made of a Deep Convolutional Auto-Encoder (DCAE) and a Recurrent Neural Network with Long Short-Term Memory units (LSTM-RNN). The deep learning models are trained by human demonstration and used to generate new motion trajectories for similar tasks.[]{data-label="teaser"}](imgs/teaser.png){width=".47\textwidth"}
Using the system, we conduct experiments in which the robot learns different basic behaviours using deep learning algorithms. In particular, we focus on a scooping task which is common in home kitchens. We demonstrate that our system has the adaptivity to predict motion for a broad range of scooping tasks. Meanwhile, we examine the ability of the deep learning algorithms with target objects placed at different positions and prepared in different conditions. We carry out detailed analysis of the results and examine the reasons that limit the ability of the proposed deep learning system. We reach the conclusion that although LfD using deep learning is applicable to a wide range of objects, it still requires a large amount of data to adapt to large variations. Mixed learning and planning is suggested as a better approach.
The paper is organized as follows. Section 2 reviews related work. Section 3 presents the entire LfD system, including the demonstration method and the learning algorithm. Section 4 explains how the robot performs a task after learning. Section 5 describes and analyzes the experimental setups and results. Section 6 draws conclusions and discusses possible methods to improve system performance.
Related Work
============
The learning methods we use are DCAE and RNN. This section reviews their origins and state-of-the-art applications.
Auto-encoders were initially introduced by Rumelhart et al. [@rumelhart1985learning] to address the problem of back propagation without a teacher; the input data themselves are used as the teacher data to minimize reconstruction errors [@olshausen1997sparse]. Auto-encoders were embedded into deep neural networks as DCAE to explore deep features in [@hinton2006fast][@salakhutdinov2009semantic][@bengio2007greedy][@torralba2008small], etc. DCAE helps to learn multiple levels of representation of high-dimensional data.
RNN is the feedback version of the conventional feed-forward neural network: it allows the output of a neuron at time $t_i$ to be the input of a neuron at time $t_{i+1}$. RNN may date back to the Hopfield network [@hopfield1982neural]. RNN is well suited to learning and predicting sequential data. Some successful applications of RNN include handwriting recognition [@graves2009novel], speech recognition [@graves2013speech], and visual tracking [@dequaire2017deep]. RNN has advantages over conventional mathematical models for sequential data, such as the Hidden Markov Model (HMM) [@wan2007hybrid], in that it retains historical information in a scalable way and is applicable to sequences of varying lengths.
A variation of RNN is the Multiple Timescale RNN (MTRNN), which was initially proposed by Yamashita and Tani [@yamashita2008emergence] to learn motion primitives and predict new actions by combining the learnt primitives. The MTRNN is composed of multiple Continuous-Time RNN (CTRNN) layers that are allowed to have different timescale activation speeds, which enables scalability over time. Arie et al. [@arie2009creating] and Jeong et al. [@jeong2012neuro] are other studies that used MTRNN to generate robot motion.
RNN-based methods suffer from the vanishing gradient problem [@hochreiter2001gradient]. To overcome this problem, Hochreiter and Schmidhuber [@hochreiter1997long] developed the Long Short-Term Memory (LSTM) network. The advantage of LSTM is that each unit has an input gate, an output gate, and a forget gate, which allow the cells to store and access information over long periods of time.
The recurrent neural network used in this paper is an RNN with LSTM units. RNNs with LSTM units have proved effective and scalable in long-range sequence learning [@greff2017lstm][^2]. By introducing into each LSTM unit a memory cell which can maintain its state over time, the LSTM network is able to overcome the vanishing gradient problem. LSTM is especially suitable for applications involving long-term dependencies [@karpathy2015visualizing].
Together with the DCAE, we build a system that predicts robot trajectories for diverse tasks using vision. The system uses bilateral teleoperation to collect data from human operators, like a general LfD system. It trains a DCAE together with an LSTM-RNN model, and uses the models to learn robot motions for performing similar tasks. We performed experiments focusing on a scooping task that is common in home kitchens. Several previous studies, e.g. [@yang2017repeatable][@mayer2008system][@rahmatizadeh2017vision][@liu2017imitation], also studied learning similar robotic tasks using deep learning models. Compared with them, we not only demonstrate the generalization of deep models in robotic task learning, but also carry out detailed analysis of the results and examine the reasons that limit the ability of the proposed deep learning system. Readers are encouraged to refer to the experiments and analysis section for details.
The system for LfD using deep learning
======================================
The bilateral teleoperation platform
------------------------------------
Our LfD system utilizes bilateral teleoperation to allow a human operator to adaptively control the robot based on sensed feedback. Conventionally, teleoperation is done in master-slave mode using a joystick [@rahmatizadeh2017vision], a haptic device [@abi2016visual], or a virtual environment [@liu2017imitation] as the master device. Unlike the conventional methods, we use a robot manipulator as the master. As Figure 2 shows, our system is composed of two identical robot systems, each comprising a Universal Robot 1 arm at the same joint configuration and a force-torque sensor attached to the arm’s end-effector. The human operator drags the master by its end-effector, and the controller calculates 6-dimensional Cartesian velocity commands for the robots to follow the operator’s guidance. This dual-arm bilateral teleoperation system provides similar operation spaces for the master and slave, which makes it convenient for the human operator to drag the master in a natural manner. In addition, we install a Microsoft Kinect 1 above the slave manipulator to capture depth and RGB images of the environment.
The bilateral teleoperation platform provides the human operator with a virtual sense of the contact force to improve LfD [@hokayem2006bilateral]. While the human operator works on the master manipulator, the slave robot senses contact forces with a force-torque sensor installed at its wrist. A controller computes robot motions considering both the force exerted by the human and the force feedback from the force sensor. Specifically, when the slave is not in contact with the environment, both the master and slave follow the human motion. When the slave receives contact feedback, the master and slave react considering the impedance computed from the force feedback. The human operator, meanwhile, feels the impedance through the device he or she is working on (namely the master device) and reacts accordingly.
The deep learning software
--------------------------
LSTM-RNN supports input and output sequences of variable length, which means that one network may be suitable for varied tasks with different durations. Fig.\[lstmrnn\] illustrates an LSTM recurrent network which outputs predictions.
![LSTM-RNN: The subscripts in $Input_i$ and $Predict_i$ indicate the time of inputs and for which predictions are made. An LSTM unit receives both current input data and hidden states provided by previous LSTM units as inputs to predict the next step.[]{data-label="lstmrnn"}](imgs/lstmrnn.png){width=".47\textwidth"}
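The recurrence in Fig.\[lstmrnn\] can be made concrete with a minimal NumPy LSTM cell. The gate equations are the standard ones; the layer sizes below are arbitrary and only for illustration, not our trained configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

class LSTMCell:
    """Single LSTM unit: input (i), forget (f), and output (o) gates plus
    a candidate (g) feeding a memory cell that persists across steps."""

    def __init__(self, n_in, n_hid):
        scale = 1.0 / np.sqrt(n_in + n_hid)
        # the four gate pre-activations share one stacked weight matrix
        self.W = rng.uniform(-scale, scale, (4 * n_hid, n_in + n_hid))
        self.b = np.zeros(4 * n_hid)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        sig = lambda v: 1.0 / (1.0 + np.exp(-v))
        c_new = sig(f) * c + sig(i) * np.tanh(g)  # gated memory update
        h_new = sig(o) * np.tanh(c_new)           # hidden state / output
        return h_new, c_new

# Feed a 15-step sequence of 4-dimensional inputs through the cell;
# the hidden state h carries information forward in time.
cell = LSTMCell(n_in=4, n_hid=8)
h, c = np.zeros(8), np.zeros(8)
for x in rng.normal(size=(15, 4)):
    h, c = cell.step(x, h, c)
```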
The data of our LfD system may include an image of the environment, force/torque data sensed by an F/T sensor installed at the slave’s end-effector, robot joint positions, etc. These data have high dimensionality, which makes direct computation infeasible. To avoid the curse of dimensionality, we use DCAE to represent the data with automatically selected features. DCAE encodes the input data with an encoder and reconstructs the data from the encoded values with a decoder. Both the encoder and decoder can be multi-layer convolutional networks, as shown in Fig.\[dcae\]. DCAE is able to properly encode complex data through reconstruction, extracting data features and reducing the data dimension.
![DCAE encodes the input data with an encoder and reconstructs the data from the encoded values with a decoder. The output of DCAE (the intermediate layer) is the extracted data features.[]{data-label="dcae"}](imgs/dcae.png){width=".35\textwidth"}
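As a toy illustration of the bottleneck idea (a linear, fully-connected stand-in for the convolutional DCAE, with a 10-dimensional feature layer as in our setup), the following sketch compresses and reconstructs random 64-dimensional inputs by gradient descent on the reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(1)

X = rng.normal(size=(200, 64))                # 200 toy "images"
W_enc = rng.normal(scale=0.1, size=(64, 10))  # encoder to 10 features
W_dec = rng.normal(scale=0.1, size=(10, 64))  # decoder back to 64 dims

def recon_loss(X, We, Wd):
    return np.mean((X @ We @ Wd - X) ** 2)

loss_before = recon_loss(X, W_enc, W_dec)
lr = 0.02
for _ in range(300):
    Z = X @ W_enc                 # encode
    R = Z @ W_dec                 # decode
    G = 2.0 * (R - X) / X.size    # d(loss)/dR
    W_dec -= lr * Z.T @ G
    W_enc -= lr * X.T @ (G @ W_dec.T)
loss_after = recon_loss(X, W_enc, W_dec)
```

Training reduces the reconstruction error while every sample passes through the 10-dimensional bottleneck, which is exactly the property the DCAE features exploit.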
The software of the LfD system is a deep learning architecture composed of DCAE and LSTM-RNN. The LSTM-RNN model is fed with the image features computed by the encoder together with other data such as joint positions, and predicts the next motion and the surrounding situation. The combination of DCAE and LSTM-RNN is shown in Fig.\[arch\].
![The entire learning architecture. OD means Other Data. It could be joint positions, force/torque values, etc. Although drawn as a single box, the model may contain multiple LSTM layers.[]{data-label="arch"}](imgs/arch.png){width=".49\textwidth"}
Learning and predicting motions
===============================
Data collection and training
----------------------------
The data used to train DCAE and LSTM-RNN are collected by bilateral teleoperation. The components of the bilateral teleoperation platform and the control diagram of LfD using the platform are shown in Fig.\[bicontrol\]. A human operator controls the master arm and performs a given task while considering the force feedback ($F_h\bigoplus F_e$ in the figure) from the slave side. As the human operator moves the master arm, the Kinect camera installed between the two arms takes a sequence of snapshots as the training images for DCAE. The motor encoders installed at each joint of the robot record a sequence of 6D joint angles as the training data for LSTM-RNN. The snapshots and changing joint angles are shown in Fig.\[sequences\]. Here, the left part shows three sequences of snapshots. Each sequence is taken with the bowl placed at a different position (denoted by $pos1$, $pos2$, and $pos3$ in the figure). The right part shows a sequence of changing joint angles taught by the human operator.
![Bilateral Teleoperation Diagram.[]{data-label="bicontrol"}](imgs/bicontrol.png){width=".49\textwidth"}
![The data used to train DCAE and LSTM-RNN. The left part shows the snapshot sequences taken by the Kinect camera. They are used to train DCAE. The right part shows the changing joint angles taught by human operators. They are used to further train LSTM-RNN.[]{data-label="sequences"}](imgs/sequences.png){width=".49\textwidth"}
Generating robot motion
-----------------------
After training the DCAE and the LSTM-RNN, the models are used online to generate robot motions for similar tasks. Trajectory generation involves a real-time loop with three phases: (1) sensor data collection, (2) motion prediction, and (3) execution. At each iteration, the current environment information and robot state are collected, processed, and appended to the sequence of previous data. The sequence is fed to the pre-trained LSTM-RNN model to predict the next motion for the manipulator to execute. To ensure computational efficiency, we keep each input sequence in a queue of fixed length.
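The loop can be sketched as follows; `sense`, `encode`, `predict`, and `execute` are placeholders for the camera/encoder readout, the DCAE encoder, the LSTM-RNN model, and the robot command interface, and the window length is an assumed value:

```python
from collections import deque

WINDOW = 20  # assumed fixed length of the LSTM-RNN input sequence

def control_loop(sense, encode, predict, execute, steps):
    """(1) collect sensor data, (2) predict the next motion from the
    recent history, (3) execute it; old entries fall out of the queue."""
    history = deque(maxlen=WINDOW)
    for _ in range(steps):
        image, joints = sense()
        history.append(encode(image) + joints)  # feature + joint vector
        command = predict(list(history))        # next joint motion
        execute(command)

# Dummy stand-ins just to exercise the loop structure.
seen_lengths = []
sense = lambda: ([0.0] * 5, [0.0] * 6)
encode = lambda image: [0.0] * 10
def predict(seq):
    seen_lengths.append(len(seq))
    return [0.0] * 6
execute = lambda cmd: None

control_loop(sense, encode, predict, execute, steps=30)
```

The `deque` with `maxlen` implements the fixed-length queue: the input sequence grows to the window size and then stays bounded regardless of how long the task runs.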
The process of training and prediction is shown in Fig.\[teaser\]. Using the pre-trained DCAE and LSTM-RNN, the system is able to generate motion sequences to perform similar tasks.
Experiments and analysis
========================
We use the developed system to learn scooping tasks. The goal of this task is to scoop material out of a bowl placed on a table in front of the robot (see Fig.\[teaser\]). Two different bowls filled with different amounts of barley are used in the experiments: a yellow bowl and a green bowl. The volume of barley is set to “high” or “low” for variation. In total there are 2$\times$2=4 combinations, namely {“yellow” bowl-“low” barley, “yellow” bowl-“high” barley, “green” bowl-“low” barley, and “green” bowl-“high” barley}. Fig.\[expbowls\](a) shows the barley, the bowls, and the different volume settings. During the experiments, a human operator performs teleoperated scooping, sensing the collision between the spoon and the bowl after the spoon is inserted into the material. Although used for control, the F/T data is not fed into the learning system, which means the control policy is learned only from robot states and 2D images.
![(a) Two different bowls filled with different amount of barley are used in experiments. In total, there are 2$\times$2=4 combinations. (b) One sequence of scooping motion.[]{data-label="expbowls"}](imgs/expbowls.png){width=".35\textwidth"}
The images used to train DCAE are cropped with a 130$\times$130 window to lower the computational cost. The encoder of the DCAE has 2 convolutional layers with filter sizes of 32 and 16, followed by 2 fully-connected layers of sizes 100 and 10. The decoder mirrors this structure. The LeakyReLU activation function is used for all layers. Dropout is applied to prevent overfitting. The computation is performed on a Dell T5810 workstation with an Nvidia GTX980 GPU.
Experiment 1: Same position with RGB/Depth images
-------------------------------------------------
In the first group of experiments, we place the bowl at the same position and test different bowls with different amounts of content. In all, we collect 20 sequences of data, 5 for each bowl-content combination. Fig.\[expbowls\](b) shows one sequence of the scooping motion. We use 19 of the 20 sequences to train DCAE and LSTM-RNN and use the remaining sequence to test the performance.
The parameters of DCAE are as follows: optimization function: Adam; dropout rate: 0.4; batch size: 32; epochs: 50. We use both RGB and depth images to train DCAE; the pre-trained models are named RGB-DCAE and Depth-DCAE, respectively. The parameters of LSTM-RNN are: optimization function: Adam; batch size: 32; iterations: 3000.
The results of DCAE are shown in Fig.\[dcaeresults\](a). The trained model is able to reconstruct the training data with high precision; readers may compare the first and second rows of Fig.\[dcaeresults\](a.1) for details. Meanwhile, the trained model reconstructs the test data with satisfactory performance; readers may compare the first and second rows of Fig.\[dcaeresults\](a.2) to see the difference. Although there is some noise in the second rows, it is acceptable.
{width=".99\textwidth"}
The results of LSTM-RNN show that the robot is able to perform scooping for similar tasks given the RGB-DCAE. However, it cannot precisely distinguish between “high” and “low” volumes. The results of LSTM-RNN using Depth-DCAE are unstable: we failed to observe a single successful execution. The instability of the depth data is probably due to the low resolution of the Kinect’s depth sensor. The vision system cannot tell whether the spoon is in a pre-scooping or post-scooping state, which makes it hard for the robot to predict the next motion.
Experiment 2: Different positions
---------------------------------
In the second group of experiments, we place the bowl at different positions to further examine the generalization ability of the trained models.
Similar to experiment 1, we use bowls of two different colors (“yellow” and “green”) and two different volumes of content (“high” and “low”). The bowls are placed at 7 different positions. At each position, we collect 20 sequences of data, 5 for each bowl-barley combination. In total, we collect 140 sequences of data; 139 of them are used to train DCAE and LSTM-RNN, and the remaining sequence is used for testing. The parameters of DCAE and LSTM-RNN are the same as in experiment 1.
The results of DCAE are shown in Fig.\[dcaeresults\](b). The trained model is able to reconstruct the training data with high precision; readers may compare the first and second rows of Fig.\[dcaeresults\](b.1) for details. In contrast, the reconstructed images show significant differences for the test data: the model fails to reconstruct it. Readers may compare the first and second rows of Fig.\[dcaeresults\](b.2) to see the difference. Especially in the first column of Fig.\[dcaeresults\](b.2), the bowl is wrongly reconstructed at a totally different position.
The LSTM-RNN model is not able to generate scooping motion for either the training data or the test data. The motion changes randomly from time to time and does not follow any pre-taught sequence. The reason is probably the poor reconstruction performance of DCAE: the system fails to correctly locate the bowls using the encoded features. Based on this analysis, we increase the training data of DCAE in experiment 3 to improve its reconstruction.
Experiment 3: Increasing the training data of DCAE
--------------------------------------------------
The third group of experiments has exactly the same scenario and parameter settings as experiment 2, except that we use planning algorithms to generate scooping motion and collect more scooping images.
The new scooping images are collected following the workflow shown in Fig.\[sample\]. We divide the workspace into around 100 grid cells, place the bowl at these positions, and sample arm positions and orientations at each cell. In total, we additionally generate 100$\times$45$\times$3=13500 (12726 exactly) extra training images for DCAE. Here, “100” indicates the 100 grid positions; “45” and “3” indicate the 45 arm positions and 3 arm rotation angles sampled at each grid cell.
![Increase the training data of DCAE by automatically generating motions across a 10$\times$10 grids. In all, 100$\times$45$\times$3=13500 extra training images were generated. Here, “100” indicates the 100 grid positions. At each grid, 45 arm positions and 3 arm rotation angles are sampled.[]{data-label="sample"}](imgs/sample.png){width=".43\textwidth"}
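The sampling scheme amounts to a Cartesian product of grid cells, arm positions, and rotation angles. A sketch of the enumeration (before removing the unreachable configurations that reduce the count to 12726):

```python
from itertools import product

GRID = [(ix, iy) for ix in range(10) for iy in range(10)]  # 10x10 bowl grid
ARM_POSITIONS = range(45)  # arm positions sampled per grid cell
ROTATIONS = range(3)       # arm rotation angles per arm position

# Every (grid cell, arm position, rotation) triple yields one image.
samples = list(product(GRID, ARM_POSITIONS, ROTATIONS))
```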
The DCAE model is trained with the 140 sequences of data from experiment 2 (that is, 140$\times$120=16800 images, 17714 exactly), together with the 13500 extra images collected using the planner. The parameters of DCAE are exactly the same as in experiments 1 and 2. The results of DCAE are shown in Fig.\[dcaeresults2\]. Compared with experiment 2, DCAE is more stable: it reconstructs both the training and test images with satisfactory performance, although the reconstructed spoon positions in the sixth and seventh columns of Fig.\[dcaeresults2\](b) have relatively large offsets from the original images.
{width=".99\textwidth"}
The trained DCAE model is used together with LSTM-RNN to predict motions. The LSTM-RNN model is trained using different data to compare the performance. The results are shown in Table \[conf0\]. Here, $A1$-$A7r\_s1$ indicates the data used to train DCAE and LSTM-RNN. The left side of “\_” shows the data used to train DCAE: $A1$-$A7$ means all the sequences collected at the seven bowl positions in experiment 2 are used; $r$ means the additional data collected in experiment 3 is also used. The right side of “\_” shows the data used to train LSTM-RNN: $s1$ means only the sequences at bowl position $s1$ are used; $s1s4$ means the sequences at both bowl positions $s1$ and $s4$ are used.
The results show that the DCAE trained in Experiment 3 is able to predict motions for bowls at the same positions. For example, (row position $s1$, column $A1$-$A7r\_s1$) is $\bigcirc$, and (row position $s1$, column $A1$-$A7r\_s1s4$) is also $\bigcirc$. The results, however, are unstable. For example, (row position $s2$, column $A1$-$A7r\_s1s4$) is $\times$, and (row position $s4$, column $A1$-$A7r\_s4$) is also $\times$. The last three columns of the table show previous results: the $A1\_s1$ and $A4\_s4$ columns correspond to the results of Experiment 1, and the $A1A4\_s1s4$ column corresponds to the result of Experiment 2.
A1-A7r\_s1 A1-A7r\_s1s4 A1-A7r\_s4 A1\_s1 A4\_s4 A1A4\_s1s4
--------------- ------------ -------------- ------------ ------------ ------------ ------------
position $s1$ $\bigcirc$ $\bigcirc$ - $\bigcirc$ - $\times$
position $s4$ - $\times$ $\times$ - $\bigcirc$ $\times$
Results of the three experiments show that the proposed model depends heavily on the training data. It can predict motions for different objects at the same positions, but cannot adapt to objects at different positions. The small amount of training data is the main factor impairing the generalization of the trained models to different bowl positions: small training sets lead to poor predictions, while large training sets yield good ones.
Conclusions and future work
===========================
This paper presented a bilateral teleoperation system for task learning and robotic motion generation. It trained DCAE and LSTM-RNN to learn scooping motions using data collected by human demonstration on the bilateral teleoperation system. The results showed that the data collected using the bilateral teleoperation system was suitable for training deep learning models. The trained model was able to predict scooping motions for different objects at the same positions, showing some ability to generalize. The results also showed that the amount of data is an important issue that affects the training of good deep learning models.
One way to improve performance is to increase the training data. However, increasing training data is not trivial for LfD applications, since they require human operators to repeatedly work on teaching devices. Another method is to use a mixed learning-and-planning model: practitioners may use planning to collect data and use learning to generalize the planned results. This mixed method is our future direction.
Acknowledgment {#acknowledgment .unnumbered}
==============
The paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
[^1]: Hitoe Ochi, Weiwei Wan, and Kensuke Harada are with the Graduate School of Engineering Science, Osaka University, Japan. Natsuki Yamanobe, Weiwei Wan, and Kensuke Harada are also affiliated with the National Institute of Advanced Industrial Science and Technology (AIST), Japan. Yajue Yang and Jia Pan are with the City University of Hong Kong, China. E-mail: [wan@sys.es.osaka-u.ac.jp]{}
[^2]: There is some work, like [@yu2017continuous], that used MTRNN with LSTM units to enable multiple-timescale scalability.
Carabus albrechti awashimae
Carabus albrechti awashimae is a subspecies of ground beetle in the subfamily Carabinae that is endemic to Japan.
References
albrechti awashimae
Category:Beetles described in 1996
Category:Endemic fauna of Japan