Dymas In Greek mythology, Dymas (Ancient Greek: Δύμας) is the name attributed to the following individuals: Dymas, a Mariandynian who warned the Argonauts about the cruelty of Amycus, king of the Bebrycians. Both the Mariandynians and the Bebrycians lived in northwestern Asia Minor. Dymas, a soldier who fought on the side of the Seven Against Thebes. He took part in the foot-race at Opheltes' funeral games in Nemea. Dymas was wounded in battle and killed himself when the enemy started questioning him. Dymas, a Dorian and the ancestor of the Dymanes. His father, Aegimius, adopted Heracles' son, Hyllus. Dymas and his brother, Pamphylus, submitted to Hyllus. Dymas, king of Phrygia and father of Hecuba. Dymas, perhaps the same as the first. According to Quintus Smyrnaeus, this Dymas was the father of Meges, a Trojan whose sons fought at Troy. Dymas, an Aulian warrior who came to fight at Troy under the leadership of Archesilaus. He died at the hands of Aeneas. Dymas, a Trojan soldier who fought with Aeneas and was killed at Troy. Dymas, mentioned in Homer's Odyssey as a Phaeacian captain whose daughter was a friend of the princess Nausicaa. References Category:Kings of Phrygia Category:Characters in Greek mythology Category:Dorian mythology
{ "pile_set_name": "wikipedia_en" }
Sand Ridge State Forest Sand Ridge State Forest is a conservation area in the U.S. state of Illinois and the largest state forest in the state. It is located in northern Mason County. The nearest town is Manito, Illinois, and the nearest numbered highway is U.S. Highway 136. It sits on a low bluff, or "sand ridge", overlooking the Illinois River, hence the name. The sand ridge is believed to be an artifact of the post-glacial Kankakee Torrent. The Sand Ridge State Forest largely dates back to 1939, when the state of Illinois purchased parcels of submarginal sandy farmland for conservation purposes. The Civilian Conservation Corps planted pine trees on much of the land. Today, the state forest contains dryland oak-hickory woodlands, pine woodlands, and open fields and sand prairies. Native species include the prickly pear cactus, Opuntia, more familiar to residents of Mexico and the U.S. Southwest. The Sand Ridge State Forest contains the Clear Lake Site, an archeological site listed on the National Register of Historic Places. Current status As of the 2010s, Sand Ridge is managed by the Illinois Department of Natural Resources (IDNR) as open space for active recreational purposes, especially whitetail deer hunting. Revis Hill Prairie, also located within Mason County, is operated by IDNR as a disjunct area of Sand Ridge State Forest. In early 2012, Sand Ridge State Forest lost acreage to a fire started by a man burning brush in high winds. External links Illinois DNR Sand Ridge State Forest site Category:1939 establishments in Illinois Category:Civilian Conservation Corps in Illinois Category:Illinois River Category:Illinois state forests Category:Protected areas established in 1939 Category:Protected areas of Mason County, Illinois
{ "pile_set_name": "wikipedia_en" }
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX ns: <http://example.org/ns#>

SELECT ?title ?price
{
  ?x ns:price ?p .
  ?x ns:discount ?discount
  BIND (?p*(1-?discount) AS ?price)
  FILTER (?price < 20)
  ?x dc:title ?title .
}
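The arithmetic performed by the query's `BIND`/`FILTER` pair can be sketched in plain Python over hypothetical records (the titles, prices, and discounts below are invented for illustration; they are not data from any real store):

```python
# Hypothetical rows standing in for the ?x ns:price / ns:discount bindings.
books = [
    {"title": "SPARQL Tutorial", "price": 42.0, "discount": 0.9},
    {"title": "The Semantic Web", "price": 23.0, "discount": 0.0},
]

# BIND (?p*(1-?discount) AS ?price) computes the discounted price;
# FILTER (?price < 20) keeps only solutions where that bound value is below 20.
results = [
    (b["title"], b["price"] * (1 - b["discount"]))
    for b in books
    if b["price"] * (1 - b["discount"]) < 20
]
print(results)
```

Only the first book survives the filter here (42.0 × 0.1 = 4.2 < 20, while 23.0 with no discount does not), mirroring how `FILTER` constrains the variable introduced by `BIND`.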
{ "pile_set_name": "github" }
#include <bits/stdc++.h>
#define sd(x) scanf("%d",&x)
#define sd2(x,y) scanf("%d%d",&x,&y)
#define sd3(x,y,z) scanf("%d%d%d",&x,&y,&z)
#define fi first
#define se second
#define pb(x) push_back(x)
#define mp(x,y) make_pair(x,y)
#define LET(x, a) __typeof(a) x(a)
#define foreach(it, v) for(LET(it, v.begin()); it != v.end(); it++)
#define _ ios_base::sync_with_stdio(false);cin.tie(NULL);cout.tie(NULL);
#define __ freopen("input.txt","r",stdin);freopen("output.txt","w",stdout);
#define func __FUNCTION__
#define line __LINE__

using namespace std;

template<typename S, typename T>
ostream& operator<<(ostream& out, pair<S, T> const& p){ out << '(' << p.fi << ", " << p.se << ')'; return out; }

template<typename T>
ostream& operator<<(ostream& out, vector<T> const& v){
    int l = v.size();
    for(int i = 0; i < l-1; i++) out << v[i] << ' ';
    if(l > 0) out << v[l-1];
    return out;
}

void tr(){ cout << endl; }
template<typename S, typename ... Strings>
void tr(S x, const Strings&... rest){ cout << x << ' '; tr(rest...); }

const int N = 100100;

int n, p;
int l[N], r[N];

int main(){
    sd2(n, p);
    for(int i = 0; i < n; i++){
        sd2(l[i], r[i]);
    }
    // wrap around: pair interval n with interval 0 again
    l[n] = l[0]; r[n] = r[0];
    long double res = 0;
    for(int i = 1; i <= n; i++){
        // v1, v2: counts of multiples of p in [l[i], r[i]] and [l[i-1], r[i-1]]
        long long v1 = (r[i]/p) - ((l[i]-1)/p);
        long long v2 = (r[i-1]/p) - ((l[i-1]-1)/p);
        long long l1 = r[i]-l[i]+1;
        long long l2 = r[i-1]-l[i-1]+1;
        // t / (l1*l2) = probability that neither pick is a multiple of p
        long long t = (l1-v1)*(l2-v2);
        long double prob = (long double) t / (long double) (l1*l2);
        prob = 1.0f - prob;   // at least one of the two picks is a multiple of p
        res += prob * 2000;   // expected contribution of this adjacent pair
    }
    printf("%.9lf\n", (double)res);
    return 0;
}
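The key arithmetic in the loop above is the closed-form count of multiples of `p` in an integer range `[l, r]`, namely `r//p - (l-1)//p`. A minimal Python sketch (not the contest solution itself) cross-checks that formula against brute force and derives the per-pair probability the C++ code uses:

```python
def multiples_in_range(l, r, p):
    """Count of integers in [l, r] divisible by p (closed form)."""
    return r // p - (l - 1) // p

def brute(l, r, p):
    """Brute-force reference count for small ranges."""
    return sum(1 for x in range(l, r + 1) if x % p == 0)

# cross-check on a few hypothetical small cases
for l, r, p in [(1, 10, 3), (5, 5, 5), (7, 29, 4)]:
    assert multiples_in_range(l, r, p) == brute(l, r, p)

def prob_no_multiple(l, r, p):
    """P(a uniform pick from [l, r] is NOT a multiple of p)."""
    total = r - l + 1
    return (total - multiples_in_range(l, r, p)) / total
```

The C++ code's `1 - t/(l1*l2)` is then `1 - prob_no_multiple(...) * prob_no_multiple(...)` for the two independent picks.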
{ "pile_set_name": "github" }
log.level=${log.level}
log.path=${log.path}
dubbo.registry.address=${dubbo.registry.address}
dubbo.protocal.port=${dubbo.protocal.port}
dubbo.service.version=${dubbo.service.version}
ws.connect.path=${ws.connect.path}
ws.connect.port=${ws.connect.port}
ws.connect.bus.port=${ws.connect.bus.port}
service.name=ws_server
service.version=1.0
service.bus.name=bus_ws_server
service.bus.version=1.0
consul.host=${consul.host}
consul.port=${consul.port}
{ "pile_set_name": "github" }
**A subdiffusive behaviour of recurrent random walk**
**in random environment on a regular tree**

by Yueyun Hu $\;$and$\;$ Zhan Shi

*Université Paris XIII & Université Paris VI*

This version: March 11, 2006

[***Summary.***]{} We are interested in the random walk in random environment on an infinite tree. Lyons and Pemantle [@lyons-pemantle] give a precise recurrence/transience criterion. Our paper focuses on the almost sure asymptotic behaviours of a recurrent random walk $(X_n)$ in random environment on a regular tree, which is closely related to Mandelbrot [@mandelbrot]’s multiplicative cascade. We prove, under some general assumptions upon the distribution of the environment, the existence of a new exponent $\nu\in (0, {1\over 2}]$ such that $\max_{0\le i \le n} |X_i|$ behaves asymptotically like $n^{\nu}$. The value of $\nu$ is explicitly formulated in terms of the distribution of the environment.

[***Keywords.***]{} Random walk, random environment, tree, Mandelbrot’s multiplicative cascade.

[***2000 Mathematics Subject Classification.***]{} 60K37, 60G50.

Introduction {#s:intro}
============

Random walk in random environment (RWRE) is a fundamental object in the study of random phenomena in random media. RWRE on $\z$ exhibits rich regimes in the transient case (Kesten, Kozlov and Spitzer [@kesten-kozlov-spitzer]), as well as a slow logarithmic movement in the recurrent case (Sinai [@sinai]). On $\z^d$ (for $d\ge 2$), the study of RWRE remains a big challenge to mathematicians (Sznitman [@sznitman], Zeitouni [@zeitouni]). The present paper focuses on RWRE on a regular rooted tree, which can be viewed as an infinite-dimensional RWRE. 
Our main result reveals a rich regime à la Kesten–Kozlov–Spitzer, but this time even in the recurrent case; it also strongly suggests the existence of a slow logarithmic regime à la Sinai. Let $\T$ be a $\deg$-ary tree ($\deg\ge 2$) rooted at $e$. For any vertex $x\in \T \backslash \{ e\}$, let ${\buildrel \leftarrow \over x}$ denote the first vertex on the shortest path from $x$ to the root $e$, and $|x|$ the number of edges on this path (notation: $|e|:= 0$). Thus, each vertex $x\in \T \backslash \{ e\}$ has one parent ${\buildrel \leftarrow \over x}$ and $\deg$ children, whereas the root $e$ has $\deg$ children but no parent. We also write ${\buildrel \Leftarrow \over x}$ for the parent of ${\buildrel \leftarrow \over x}$ (for $x\in \T$ such that $|x|\ge 2$). Let $\omega:= (\omega(x,y), \, x,y\in \T)$ be a family of non-negative random variables such that $\sum_{y\in \T} \omega(x,y)=1$ for any $x\in \T$. Given a realization of $\omega$, we define a Markov chain $X:= (X_n, \, n\ge 0)$ on $\T$ by $X_0 =e$, and whose transition probabilities are $$P_\omega(X_{n+1}= y \, | \, X_n =x) = \omega(x, y) .$$ Let $\P$ denote the distribution of $\omega$, and let $\p (\cdot) := \int P_\omega (\cdot) \P(\! \d \omega)$. The process $X$ is a $\T$-valued RWRE. (By informally taking $\deg=1$, $X$ would become a usual RWRE on the half-line $\z_+$.) For general properties of tree-valued processes, we refer to Peres [@peres] and Lyons and Peres [@lyons-peres]. See also Duquesne and Le Gall [@duquesne-le-gall] and Le Gall [@le-gall] for continuous random trees. For a list of motivations to study RWRE on a tree, see Pemantle and Peres [@pemantle-peres1], p. 106. We define $$A(x) := {\omega({\buildrel \leftarrow \over x}, x) \over \omega({\buildrel \leftarrow \over x}, {\buildrel \Leftarrow \over x})} , \qquad x\in \T, \; |x|\ge 2. 
\label{A}$$ Following Lyons and Pemantle [@lyons-pemantle], we assume throughout the paper that $(\omega(x,\bullet))_{x\in \T\backslash \{ e\} }$ is a family of i.i.d. [*non-degenerate*]{} random vectors and that $(A(x), \; x\in \T, \; |x|\ge 2)$ are identically distributed. We also assume the existence of $\varepsilon_0>0$ such that $\omega(x,y) \ge \varepsilon_0$ if either $x= {\buildrel \leftarrow \over y}$ or $y= {\buildrel \leftarrow \over x}$, and $\omega(x,y) =0$ otherwise; in words, $(X_n)$ is a nearest-neighbour walk, satisfying an ellipticity condition. Let $A$ denote a generic random variable having the common distribution of $A(x)$ (for $|x| \ge 2$). Define $$p := \inf_{t\in [0,1]} \E (A^t) . \label{p}$$ We recall a recurrence/transience criterion from Lyons and Pemantle ([@lyons-pemantle], Theorem 1 and Proposition 2). [**Theorem A (Lyons and Pemantle [@lyons-pemantle])**]{} [*With $\p$-probability one, the walk $(X_n)$ is recurrent or transient, according to whether $p\le {1\over \deg}$ or $p>{1\over \deg}$. It is, moreover, positive recurrent if $p<{1\over \deg}$.*]{} We study the recurrent case $p\le {1\over \deg}$ in this paper. Our first result, which is not deep, concerns the positive recurrent case $p< {1\over \deg}$. \[t:posrec\] If $p<{1\over \deg}$, then $$\lim_{n\to \infty} \, {1\over \log n} \, \max_{0\le i\le n} |X_i| = {1\over \log[1/(q\deg)]}, \qquad \hbox{\rm $\p$-a.s.}, \label{posrec}$$ where the constant $q$ is defined in $(\ref{q})$, and lies in $(0, {1\over \deg})$ when $p<{1\over \deg}$. Despite the warning of Pemantle [@pemantle] (“there are many papers proving results on trees as a somewhat unmotivated alternative …to Euclidean space"), it seems to be of particular interest to study the more delicate situation $p={1\over \deg}$ that turns out to possess rich regimes. 
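Before turning to the results, the nearest-neighbour walk defined above can be illustrated with a toy simulation. Everything below is purely illustrative and not part of the paper's argument: the tree is binary ($\deg=2$) and truncated at a finite depth, and the environment weights are an arbitrary hypothetical choice (bounded below, in the spirit of the ellipticity condition):

```python
import random

random.seed(0)

D = 8   # truncation depth for the toy tree (the paper's tree is infinite)

def children(x):
    # a vertex is a tuple of 0/1 child choices; the root is ()
    return [x + (0,), x + (1,)] if len(x) < D else []

env = {}   # environment: vertex -> normalised weights over its neighbours

def step(x):
    nbrs = ([x[:-1]] if x else []) + children(x)   # parent (if any) + children
    if x not in env:
        w = [random.uniform(0.05, 1.0) for _ in nbrs]  # crude ellipticity floor
        s = sum(w)
        env[x] = [wi / s for wi in w]                  # quenched, fixed per vertex
    return random.choices(nbrs, weights=env[x])[0]

X = ()
depths = [0]
for _ in range(2000):
    X = step(X)
    depths.append(len(X))

# nearest-neighbour property: |X_{n+1}| - |X_n| = +-1 at every step
assert all(abs(a - b) == 1 for a, b in zip(depths, depths[1:]))
print(max(depths))
```

The quantity `max(depths)` is the toy analogue of $\max_{0\le i\le n}|X_i|$, whose growth rate is the object of the theorems below; the truncation at depth $D$ is only to keep the sketch finite.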
We prove that, similarly to the Kesten–Kozlov–Spitzer theorem for [*transient*]{} RWRE on the line, $(X_n)$ enjoys, even in the recurrent case, an interesting subdiffusive behaviour. To state our main result, we define $$\begin{aligned} \kappa &:=& \inf\left\{ t>1: \; \E(A^t) = {1\over \deg} \right\} \in (1, \infty], \qquad (\inf \emptyset=\infty) \label{kappa} \\ \psi(t) &:=& \log \E \left( A^t \right) , \qquad t\ge 0. \label{psi}\end{aligned}$$ We use the notation $a_n \approx b_n$ to denote $\lim_{n\to \infty} \, {\log a_n \over \log b_n} =1$. \[t:nullrec\] If $p={1\over \deg}$ and if $\psi'(1)<0$, then $$\max_{0\le i\le n} |X_i| \; \approx\; n^\nu, \qquad \hbox{\rm $\p$-a.s.}, \label{nullrec}$$ where $\nu=\nu(\kappa)$ is defined by $$\nu := 1- {1\over \min\{ \kappa, 2\} } = \left\{ \begin{array}{ll} (\kappa-1)/\kappa, & \mbox{if $\;\kappa \in (1,2]$}, \\ \\ 1/2 & \mbox{if $\;\kappa \in (2, \infty].$} \end{array} \right. \label{theta}$$ [**Remark.**]{} (i) It is known (Menshikov and Petritis [@menshikov-petritis]) that if $p={1\over \deg}$ and $\psi'(1)<0$, then for $\P$-almost all environment $\omega$, $(X_n)$ is null recurrent. \(ii) For the value of $\kappa$, see Figure 1. Under the assumptions $p={1\over \deg}$ and $\psi'(1)<0$, the value of $\kappa$ lies in $(2, \infty]$ if and only if $\E (A^2) < {1\over \deg}$; and $\kappa=\infty$ if moreover $\hbox{ess sup}(A) \le 1$. \(iii) Since the walk is recurrent, $\max_{0\le i\le n} |X_i|$ cannot be replaced by $|X_n|$ in (\[posrec\]) and (\[nullrec\]). \(iv) Theorem \[t:nullrec\], which could be considered as a (weaker) analogue of the Kesten–Kozlov–Spitzer theorem, shows that tree-valued RWRE has even richer regimes than RWRE on $\z$. In fact, recurrent RWRE on $\z$ is of order of magnitude $(\log n)^2$, and has no $n^a$ (for $0<a<1$) regime. \(v) The case $\psi'(1)\ge 0$ leads to a phenomenon similar to Sinai’s slow movement, and is studied in a forthcoming paper. The rest of the paper is organized as follows. 
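As a quick numerical companion to the definition of $\nu$ in (\[theta\]) (not part of the paper's argument), the piecewise formula $\nu = 1 - 1/\min\{\kappa, 2\}$ can be checked directly:

```python
import math

def nu(kappa):
    """Exponent nu = 1 - 1/min(kappa, 2) from (theta); kappa may be math.inf."""
    return 1.0 - 1.0 / min(kappa, 2.0)

# kappa in (1, 2]: nu = (kappa - 1)/kappa, strictly subdiffusive
assert abs(nu(1.5) - (0.5 / 1.5)) < 1e-12
# kappa in (2, inf]: nu saturates at the diffusive-looking exponent 1/2
assert nu(4.0) == 0.5
assert nu(math.inf) == 0.5
```

The saturation at $1/2$ for $\kappa\in(2,\infty]$ matches remark (ii): $\kappa$ exceeds $2$ exactly when $\E(A^2) < 1/\deg$.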
Section \[s:posrec\] is devoted to the proof of Theorem \[t:posrec\]. In Section \[s:proba\], we collect some elementary inequalities, which will be of frequent use later on. Theorem \[t:nullrec\] is proved in Section \[s:nullrec\], by means of a result (Proposition \[p:beta-gamma\]) concerning the solution of a recurrence equation which is closely related to Mandelbrot’s multiplicative cascade. We prove Proposition \[p:beta-gamma\] in Section \[s:beta-gamma\]. Throughout the paper, $c$ (possibly with a subscript) denotes a finite and positive constant; we write $c(\omega)$ instead of $c$ when the value of $c$ depends on the environment $\omega$. Proof of Theorem \[t:posrec\] {#s:posrec} ============================= We first introduce the constant $q$ in the statement of Theorem \[t:posrec\], which is defined without the assumption $p< {1\over \deg}$. Let $$\varrho(r) := \inf_{t\ge 0} \left\{ r^{-t} \, \E(A^t) \right\} , \qquad r>0.$$ Let $\underline{r} >0$ be such that $$\log \underline{r} = \E(\log A) .$$ We mention that $\varrho(r)=1$ for $r\in (0, \underline{r}]$, and that $\varrho(\cdot)$ is continuous and (strictly) decreasing on $[\underline{r}, \, \Theta)$ (where $\Theta:= \hbox{ess sup}(A) < \infty$), and $\varrho(\Theta) = \P (A= \Theta)$. Moreover, $\varrho(r)=0$ for $r> \Theta$. See Chernoff [@chernoff]. We define $$\overline{r} := \inf\left\{ r>0: \; \varrho(r) \le {1\over \deg} \right\}.$$ Clearly, $\underline{r} < \overline{r}$. We define $$q:= \sup_{r\in [\underline{r}, \, \overline{r}]} r \varrho(r). \label{q}$$ The following elementary lemma tells us that, instead of $p$, we can also use $q$ in the recurrence/transience criterion of Lyons and Pemantle. \[l:pq\] We have $q>{1\over \deg}$ $($resp., $q={1\over \deg}$, $q<{1\over \deg})$ if and only if $p>{1\over \deg}$ $($resp., $p={1\over \deg}$, $p<{1\over \deg})$. [*Proof of Lemma \[l:pq\].*]{} By Lyons and Pemantle ([@lyons-pemantle], p. 129), $p= \sup_{r\in (0, \, 1]} r \varrho (r)$. 
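The Chernoff-type rate $\varrho(r) = \inf_{t\ge 0} r^{-t}\,\E(A^t)$ just introduced can be approximated numerically by a grid search over $t$. The sketch below uses a hypothetical two-point distribution, $A = 1/4$ or $A = 1$ each with probability $1/2$ (chosen only so the stated properties of $\varrho$ are easy to verify, not a distribution from the paper):

```python
def E_At(t):
    # E(A^t) for the hypothetical A in {1/4, 1}, each with probability 1/2
    return 0.5 * (0.25 ** t + 1.0)

def rho(r, t_max=60.0, steps=6001):
    """Grid-search approximation of varrho(r) = inf_{t >= 0} r^{-t} E(A^t)."""
    return min(r ** (-i * t_max / (steps - 1)) * E_At(i * t_max / (steps - 1))
               for i in range(steps))

# Here E(log A) = -(log 4)/2 = -log 2, so underline{r} = 1/2 and rho(r) = 1 there
assert abs(rho(0.5) - 1.0) < 1e-12
# At Theta = ess sup(A) = 1: rho(Theta) = P(A = Theta) = 1/2, as stated in the text
assert abs(rho(1.0) - 0.5) < 1e-6
```

Between $\underline{r}$ and $\Theta$ the same grid search exhibits the continuous, strictly decreasing behaviour of $\varrho(\cdot)$ cited from Chernoff [@chernoff]; $q$ would then be approximated by maximising $r\,\varrho(r)$ over $[\underline{r}, \overline{r}]$.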
Since $\varrho(r) =1$ for $r\in (0, \, \underline{r}]$, there exists $\min\{\underline{r}, 1\}\le r^* \le 1$ such that $p= r^* \varrho (r^*)$. \(i) Assume $p<{1\over \deg}$. Then $\varrho (1) \le \sup_{r\in (0, \, 1]} r \varrho (r) = p < {1\over \deg}$, which, by definition of $\overline{r}$, implies $\overline{r} < 1$. Therefore, $q \le p <{1\over \deg}$. \(ii) Assume $p\ge {1\over \deg}$. We have $\varrho (r^*) \ge p \ge {1\over \deg}$, which yields $r^* \le \overline{r}$. If $\underline{r} \le 1$, then $r^*\ge \underline{r}$, and thus $p=r^* \varrho (r^*) \le q$. If $\underline{r} > 1$, then $p=1$, and thus $q\ge \underline{r}\, \varrho (\underline{r}) = \underline{r} > 1=p$. We have therefore proved that $p\ge {1\over \deg}$ implies $q\ge p$. If moreover $p>{1\over \deg}$, then $q \ge p>{1\over \deg}$. \(iii) Assume $p={1\over \deg}$. We already know from (ii) that $q \ge p$. On the other hand, $\varrho (1) \le \sup_{r\in (0, \, 1]} r \varrho (r) = p = {1\over \deg}$, implying $\overline{r} \le 1$. Thus $q \le p$. As a consequence, $q=p={1\over \deg}$.$\Box$ Having defined $q$, the next step in the proof of Theorem \[t:posrec\] is to compute invariant measures $\pi$ for $(X_n)$. We first introduce some notation on the tree. For any $m\ge 0$, let $$\T_m := \left\{x \in \T: \; |x| = m \right\} .$$ For any $x\in \T$, let $\{ x_i \}_{1\le i\le \deg}$ be the set of children of $x$. If $\pi$ is an invariant measure, then $$\pi (x) = {\omega ({\buildrel \leftarrow \over x}, x) \over \omega (x, {\buildrel \leftarrow \over x})} \, \pi({\buildrel \leftarrow \over x}), \qquad \forall \, x\in \T \backslash \{ e\}.$$ By induction, this leads to (recalling $A$ from (\[A\])): for $x\in \T_m$ ($m\ge 1$), $$\pi (x) = {\pi(e)\over \omega (x, {\buildrel \leftarrow \over x})} {\omega (e, x^{(1)}) \over A(x^{(1)})} \exp\left( \, \sum_{z\in ]\! ] e, x]\! ]} \log A(z) \right) ,$$ where $]\! ] e, x]\! 
]$ denotes the shortest path $x^{(1)}$, $x^{(2)}$, $\cdots$, $x^{(m)} =: x$ from the root $e$ (but excluded) to the vertex $x$. The identity holds for [*any*]{} choice of $(A(e_i), \, 1\le i\le \deg)$. We choose $(A(e_i), \, 1\le i\le \deg)$ to be a random vector independent of $(\omega(x,y), \, |x|\ge 1, \, y\in \T)$, and distributed as $(A(x_i), \, 1\le i\le \deg)$, for any $x\in \T_m$ with $m\ge 1$. By the ellipticity condition on the environment, we can take $\pi(e)$ to be sufficiently small so that for some $c_0\in (0, 1]$, $$c_0\, \exp\left( \, \sum_{z\in ]\! ] e, x]\! ]} \log A(z) \right) \le \pi (x) \le \exp\left( \, \sum_{z\in ]\! ] e, x]\! ]} \log A(z) \right) . \label{pi}$$ By Chebyshev’s inequality, for any $r>\underline{r}$, $$\max_{x\in \T_n} \P \left\{ \pi (x) \ge r^n\right\} \le \varrho(r)^n. \label{chernoff}$$ Since $\# \T_n = \deg^n$, this gives $\E (\#\{ x\in \T_n: \; \pi (x)\ge r^n \} ) \le \deg^n \varrho(r)^n$. By Chebyshev’s inequality and the Borel–Cantelli lemma, for any $r>\underline{r}$ and $\P$-almost surely for all large $n$, $$\#\left\{ x\in \T_n: \; \pi (x) \ge r^n \right\} \le n^2 \deg^n \varrho(r)^n. \label{Jn-ub1}$$ On the other hand, by (\[chernoff\]), $$\P \left\{ \exists x\in \T_n: \pi (x) \ge r^n\right\} \le \deg^n \varrho (r)^n.$$ For $r> \overline{r}$, the expression on the right-hand side is summable in $n$. By the Borel–Cantelli lemma, for any $r>\overline{r}$ and $\P$-almost surely for all large $n$, $$\max_{x\in \T_n} \pi (x) < r^n. \label{Jn-ub}$$ [*Proof of Theorem \[t:posrec\]: upper bound.*]{} Fix $\varepsilon>0$ such that $q+ 3\varepsilon < {1\over \deg}$. We follow the strategy given in Liggett ([@liggett], p. 
103) by introducing a positive recurrent birth-and-death chain $(\widetilde{X_j}, \, j\ge 0)$, starting from $0$, with transition probability from $i$ to $i+1$ (for $i\ge 1$) equal to $${1\over \widetilde{\pi} (i)} \, \sum_{x\in \T_i} \pi(x) (1- \omega(x, {\buildrel \leftarrow \over x})) ,$$ where $\widetilde{\pi} (i) := \sum_{x\in \T_i} \pi(x)$. We note that $\widetilde{\pi}$ is a finite invariant measure for $(\widetilde{X_j})$. Let $$\tau_n := \inf \left\{ i\ge 1: \, X_i \in \T_n\right\}, \qquad n\ge 0.$$ By Liggett ([@liggett], Theorem II.6.10), for any $n\ge 1$, $$P_\omega (\tau_n< \tau_0) \le \widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0),$$ where $\widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0)$ is the probability that $(\widetilde{X_j})$ hits $n$ before returning to $0$. According to Hoel et al. ([@hoel-port-stone], p. 32, Formula (61)), $$\widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0) = c_1(\omega) \left( \, \sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x) (1- \omega(x, {\buildrel \leftarrow \over x}))} \right)^{\! \! -1} ,$$ where $c_1(\omega)\in (0, \infty)$ depends on $\omega$. We arrive at the following estimate: for any $n\ge 1$, $$P_\omega (\tau_n< \tau_0) \le c_1(\omega) \, \left( \, \sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)} \right)^{\! \! -1} . \label{liggett}$$ We now estimate $\sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)}$. For any fixed $0=r_0< \underline{r} < r_1 < \cdots < r_\ell = \overline{r} <r_{\ell +1}$, $$\sum_{x\in \T_i} \pi(x) \le \sum_{j=1}^{\ell+1} (r_j)^i \# \left\{ x\in \T_i: \pi(x) \ge (r_{j-1})^i \right\} + \sum_{x\in \T_i: \, \pi(x) \ge (r_{\ell +1})^i} \pi(x).$$ By (\[Jn-ub\]), $\sum_{x\in \T_i: \, \pi(x) \ge (r_{\ell +1})^i} \pi(x) =0$ $\P$-almost surely for all large $i$. 
It follows from (\[Jn-ub1\]) that $\P$-almost surely, for all large $i$, $$\sum_{x\in \T_i} \pi(x) \le (r_1)^i \deg^i + \sum_{j=2}^{\ell+1} (r_j)^i i^2 \, \deg^i \varrho (r_{j-1})^i.$$ Recall that $q= \sup_{r\in [\underline{r}, \, \overline{r}] } r \, \varrho(r) \ge \underline{r} \, \varrho (\underline{r}) = \underline{r}$. We choose $r_1:= \underline{r} + \varepsilon \le q+\varepsilon$. We also choose $\ell$ sufficiently large and $(r_j)$ sufficiently close to each other so that $r_j \, \varrho(r_{j-1}) < q+\varepsilon$ for all $2\le j\le \ell+1$. Thus, $\P$-almost surely for all large $i$, $$\sum_{x\in \T_i} \pi(x) \le (r_1)^i \deg^i + \sum_{j=2}^{\ell+1} i^2 \, \deg^i (q+\varepsilon)^i = (r_1)^i \deg^i + \ell \, i^2 \, \deg^i (q+\varepsilon)^i,$$ which implies (recall: $\deg(q+\varepsilon)<1$) that $\sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)} \ge {c_2\over n^2\, \deg^n (q+\varepsilon)^n}$. Plugging this into (\[liggett\]) yields that, $\P$-almost surely for all large $n$, $$P_\omega (\tau_n< \tau_0) \le c_3(\omega)\, n^2\, \deg^n (q+\varepsilon)^n \le [(q+2\varepsilon)\deg]^n.$$ In particular, by writing $L(\tau_n):= \# \{ 1\le i \le \tau_n: \, X_i = e\}$, we obtain: $$P_\omega \left\{ L(\tau_n) \ge j \right\} = \left[ P_\omega (\tau_n> \tau_0) \right]^j \ge \left\{ 1- [(q+2\varepsilon)\deg]^n \right\}^j ,$$ which, by the Borel–Cantelli lemma, yields that, $\P$-almost surely for all large $n$, $$L(\tau_n) \ge {1\over [(q+3\varepsilon) \deg]^n} , \qquad \hbox{\rm $P_\omega$-a.s.}$$ Since $\{ L(\tau_n) \ge j \} \subset \{ \max_{0\le k \le 2j} |X_k| < n\}$, and since $\varepsilon$ can be as close to 0 as possible, we obtain the upper bound in Theorem \[t:posrec\].$\Box$ [*Proof of Theorem \[t:posrec\]: lower bound.*]{} Assume $p< {1\over \deg}$. Recall that in this case, we have $\overline{r}<1$. Let $\varepsilon>0$ be small. 
Let $r \in (\underline{r}, \, \overline{r})$ be such that $\varrho(r) > {1\over \deg} \ee^\varepsilon$ and that $r\varrho(r) \ge q\ee^{-\varepsilon}$. Let $L$ be a large integer with $\deg^{-1/L} \ge \ee^{-\varepsilon}$ and satisfying (\[GW\]) below. We start by constructing a Galton–Watson tree $\G$, which is a certain subtree of $\T$. The first generation of $\G$, denoted by $\G_1$ and defined below, consists of vertices $x\in \T_L$ satisfying a certain property. The second generation of $\G$ is formed by applying the same procedure to each element of $\G_1$, and so on. To be precise, $$\G_1 = \G_1 (L,r) := \left\{ x\in \T_L: \, \min_{z\in ]\! ] e, \, x ]\! ]} \prod_{y\in ]\! ] e, \, z]\! ]} A(y) \ge r^L \right\} ,$$ where $]\! ]e, \, x ]\! ]$ denotes as before the set of vertices (excluding $e$) lying on the shortest path relating $e$ and $x$. More generally, if $\G_i$ denotes the $i$-th generation of $\G$, then $$\G_{n+1} := \bigcup_{u\in \G_n } \left\{ x\in \T_{(n+1)L}: \, \min_{z\in ]\! ] u, \, x ]\! ]} \prod_{y\in ]\! ] u, \, z]\! ]} A(y) \ge r^L \right\} , \qquad n=1,2, \dots$$ We claim that it is possible to choose $L$ sufficiently large such that $$\E(\# \G_1) \ge \ee^{-\varepsilon L} \deg^L \varrho(r)^L . \label{GW}$$ Note that $\ee^{-\varepsilon L} \deg^L \varrho(r)^L>1$, since $\varrho(r) > {1\over \deg} \ee^\varepsilon$. We admit (\[GW\]) for the moment, which implies that $\G$ is super-critical. By theory of branching processes (Harris [@harris], p. 13), when $n$ goes to infinity, ${\# \G_{n/L} \over [\E(\# \G_1)]^{n/L} }$ converges almost surely (and in $L^2$) to a limit $W$ with $\P(W>0)>0$. Therefore, on the event $\{ W>0\}$, for all large $n$, $$\# (\G_{n/L}) \ge c_4(\omega) [\E(\# \G_1)]^{n/L}. \label{GnL}$$ (For notational simplification, we only write our argument for the case when $n$ is a multiple of $L$. It is clear that our final conclusion holds for all large $n$.) 
Recall that according to the Dirichlet principle (Griffeath and Liggett [@griffeath-liggett]), $$\begin{aligned} 2\pi(e) P_\omega \left\{ \tau_n < \tau_0 \right\} &=&\inf_{h: \, h(e)=1, \, h(z)=0, \, \forall |z| \ge n} \sum_{x,y\in \T} \pi(x) \omega(x,y) (h(x)- h(y))^2 \nonumber \\ &\ge& c_5\, \inf_{h: \, h(e)=1, \, h(z)=0, \, \forall z\in \T_n} \sum_{|x|<n} \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2, \label{durrett}\end{aligned}$$ the last inequality following from ellipticity condition on the environment. Clearly, $$\begin{aligned} \sum_{|x|<n} \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2 &=&\sum_{i=0}^{(n/L)-1} \sum_{x: \, iL \le |x| < (i+1) L} \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2 \\ &:=&\sum_{i=0}^{(n/L)-1} I_i,\end{aligned}$$ with obvious notation. For any $i$, $$I_i \ge \deg^{-L} \sum_{v\in \G_{i+1}} \, \sum_{x\in [\! [ v^\uparrow, v[\! [} \, \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2,$$ where $v^\uparrow \in \G_i$ denotes the unique element of $\G_i$ lying on the path $[ \! [ e, v ]\! ]$ (in words, $v^\uparrow$ is the parent of $v$ in the Galton–Watson tree $\G$), and the factor $\deg^{-L}$ comes from the fact that each term $\pi(x) (h(x)- h(y))^2$ is counted at most $\deg^L$ times in the sum on the right-hand side. By (\[pi\]), for $x\in [\! [ v^\uparrow, v[\! [$, $\pi(x) \ge c_0 \, \prod_{u\in ]\! ]e, x]\! ]} A(u)$, which, by the definition of $\G$, is at least $c_0 \, r^{(i+1)L}$. Therefore, $$\begin{aligned} I_i &\ge& c_0 \, \deg^{-L} \sum_{v\in \G_{i+1}} \, \sum_{x\in [\! [ v^\uparrow, v[\! [} \, \sum_{y: \, x= {\buildrel \leftarrow \over y}} r^{(i+1)L} (h(x)- h(y))^2 \\ &\ge&c_0 \, \deg^{-L} r^{(i+1)L} \sum_{v\in \G_{i+1}} \, \sum_{y\in ]\! ] v^\uparrow, v]\! ]} (h({\buildrel \leftarrow \over y})- h(y))^2 .\end{aligned}$$ By the Cauchy–Schwarz inequality, $\sum_{y\in ]\! ] v^\uparrow, v]\! 
]} (h({\buildrel \leftarrow \over y})- h(y))^2 \ge {1\over L} (h(v^\uparrow)-h(v))^2$. Accordingly, $$I_i \ge c_0 \, {\deg^{-L} r^{(i+1)L}\over L} \sum_{v\in \G_{i+1}} (h(v^\uparrow)-h(v))^2 ,$$ which yields $$\begin{aligned} \sum_{i=0}^{(n/L)-1} I_i &\ge& c_0 \, {\deg^{-L}\over L} \sum_{i=0}^{(n/L)-1} r^{(i+1)L} \sum_{v\in \G_{i+1}} (h(v^\uparrow)- h(v))^2 \\ &\ge& c_0 \, {\deg^{-L}\over L} \deg^{-n/L} \sum_{v\in \G_{n/L}} \sum_{i=0}^{(n/L)-1} r^{(i+1)L} (h(v^{(i)})- h(v^{(i+1)}))^2 ,\end{aligned}$$ where, $e=: v^{(0)}$, $v^{(1)}$, $v^{(2)}$, $\cdots$, $v^{(n/L)} := v$, is the shortest path (in $\G$) from $e$ to $v$, and the factor $\deg^{-n/L}$ results from the fact that each term $r^{(i+1)L} (h(v^{(i)})- h(v^{(i+1)}))^2$ is counted at most $\deg^{n/L}$ times in the sum on the right-hand side. By the Cauchy–Schwarz inequality, for all $h: \T\to \r$ with $h(e)=1$ and $h(z)=0$ ($\forall z\in \T_n$), we have $$\begin{aligned} \sum_{i=0}^{(n/L)-1} r^{(i+1)L} (h(v^{(i)})- h(v^{(i+1)}))^2 &\ge&{1\over \sum_{i=0}^{(n/L)-1} r^{-(i+1)L}} \, \left( \sum_{i=0}^{(n/L)-1} (h(v^{(i)})- h(v^{(i+1)})) \right)^{\! \! 2} \\ &=&{1\over \sum_{i=0}^{(n/L)-1} r^{-(i+1)L}} \ge c_6 \, r^n.\end{aligned}$$ Therefore, $$\sum_{i=0}^{(n/L)-1} I_i \ge c_0c_6 \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \# (\G_{n/L}) \ge c_0 c_6 c_4(\omega) \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \, [\E (\# \G_1)]^{n/L}\, {\bf 1}_{ \{ W>0 \} },$$ the last inequality following from (\[GnL\]). Plugging this into (\[durrett\]) yields that for all large $n$, $$P_\omega \left\{ \tau_n < \tau_0 \right\} \ge c_7(\omega) \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \, [\E (\# \G_1)]^{n/L}\, {\bf 1}_{ \{ W>0 \} } .$$ Recall from (\[GW\]) that $\E(\# \G_1) \ge \ee^{-\varepsilon L} \deg^L \varrho(r)^L$. 
Therefore, on $\{W>0\}$, for all large $n$, $P_\omega \{ \tau_n < \tau_0 \} \ge c_8(\omega) (\ee^{-\varepsilon} \deg^{-1/L} \deg r \varrho(r))^n$, which is no smaller than $c_8(\omega) (\ee^{-3\varepsilon} q \deg)^n$ (since $\deg^{-1/L} \ge \ee^{-\varepsilon}$ and $r \varrho(r) \ge q \ee^{-\varepsilon}$ by assumption). Thus, by writing $L(\tau_n) := \#\{ 1\le i\le \tau_n: \; X_i = e \}$ as before, we have, on $\{ W>0 \}$, $$P_\omega \left\{ L(\tau_n) \ge j \right\} = \left[ P_\omega (\tau_n> \tau_0) \right]^j \le [1- c_8(\omega) (\ee^{-3\varepsilon} q \deg)^n ]^j.$$ By the Borel–Cantelli lemma, for $\P$-almost all $\omega$, on $\{W>0\}$, we have, $P_\omega$-almost surely for all large $n$, $L(\tau_n) \le 1/(\ee^{-4\varepsilon} q \deg)^n$, i.e., $$\max_{0\le k\le \tau_0(\lfloor 1/(\ee^{-4\varepsilon} q \deg)^n\rfloor )} |X_k| \ge n ,$$ where $0<\tau_0(1)<\tau_0(2)<\cdots$ are the successive return times to the root $e$ by the walk (thus $\tau_0(1) = \tau_0$). Since the walk is positive recurrent, $\tau_0(\lfloor 1/(\ee^{-4\varepsilon} q \deg)^n\rfloor ) \sim {1\over (\ee^{-4\varepsilon} q \deg)^n} E_\omega [\tau_0]$ (for $n\to \infty$), $P_\omega$-almost surely ($a_n \sim b_n$ meaning $\lim_{n\to \infty} {a_n \over b_n} =1$). Therefore, for $\P$-almost all $\omega \in \{ W>0\}$, $$\liminf_{n\to \infty} {\max_{0\le k\le n} |X_k| \over \log n} \ge {1\over \log[1/(q\deg)]}, \qquad \hbox{\rm $P_\omega$-a.s.}$$ Recall that $\P\{ W>0\}>0$. Since modifying a finite number of transition probabilities does not change the value of $\liminf_{n\to \infty} {\max_{0\le k\le n} |X_k| \over \log n}$, we obtain the lower bound in Theorem \[t:posrec\]. It remains to prove (\[GW\]). Let $(A^{(i)})_{i\ge 1}$ be an i.i.d. sequence of random variables distributed as $A$. 
Clearly, for any $\delta\in (0,1)$, $$\begin{aligned} \E( \# \G_1) &=& \deg^L \, \P\left( \, \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\right) \\ &\ge& \deg^L \, \P \left( \, (1-\delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\right) .\end{aligned}$$ We define a new probability $\Q$ by $${\mathrm{d} \Q \over \mathrm{d}\P} := {\ee^{t \log A} \over \E(\ee^{t \log A})} = {A^t \over \E(A^t)},$$ for some $t\ge 0$. Then $$\begin{aligned} \E(\# \G_1) &\ge& \deg^L \, \E_\Q \left[ \, {[\E(A^t)]^L \over \exp\{ t \sum_{i=1}^L \log A^{(i)}\} }\, {\bf 1}_{\{ (1-\delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\} } \right] \\ &\ge& \deg^L \, {[\E(A^t)]^L \over r^{t (1- \delta) L} } \, \Q \left( (1- \delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L \right).\end{aligned}$$ To choose an optimal value of $t$, we fix $\widetilde{r}\in (r, \, \overline{r})$ with $\widetilde{r} < r^{1-\delta}$. Our choice of $t=t^*$ is such that $\varrho(\widetilde{r}) = \inf_{t\ge 0} \{ \widetilde{r}^{-t} \E(A^t)\} = \widetilde{r}^{-t^*} \E(A^{t^*})$. With this choice, we have $\E_\Q(\log A)=\log \widetilde{r}$, so that $\Q \{ (1- \delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\} \ge c_9$. Consequently, $$\E(\# \G_1) \ge c_9 \, \deg^L \, {[\E(A^{t^*})]^L \over r^{t^* (1- \delta) L} }= c_9 \, \deg^L \, {[ \widetilde{r}^{\,t^*} \varrho(\widetilde{r})]^L \over r^{t^* (1- \delta) L} } \ge c_9 \, r^{\delta t^* L} \deg^L \varrho(\widetilde{r})^L .$$ Since $\delta>0$ can be as close to $0$ as possible, the continuity of $\varrho(\cdot)$ on $[\underline{r}, \, \overline{r})$ yields (\[GW\]), and thus completes the proof of Theorem \[t:posrec\].$\Box$ Some elementary inequalities {#s:proba} ============================ We collect some elementary inequalities in this section. 
They will be of use in the next sections, in the study of the null recurrence case. \[l:exp\] Let $\xi\ge 0$ be a random variable. [(i)]{} Assume that $\e(\xi^a)<\infty$ for some $a>1$. Then for any $x\ge 0$, $${\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a} \le {\e (\xi^a) \over [\e \xi]^a} . \label{RSD}$$ [(ii)]{} If $\e (\xi) < \infty$, then for any $0 \le \lambda \le 1$ and $t \ge 0$, $$\e \left\{ \exp \left( - t\, { (\lambda+\xi)/ (1+\xi) \over \e [(\lambda+\xi)/ (1+\xi)] } \right) \right\} \le \e \left\{ \exp\left( - t\, { \xi \over \e (\xi)} \right) \right\} . \label{exp}$$ [**Remark.**]{} When $a=2$, (\[RSD\]) is a special case of Lemma 6.4 of Pemantle and Peres [@pemantle-peres2]. [*Proof of Lemma \[l:exp\].*]{} We actually prove a very general result, stated as follows. Let $\varphi : (0, \infty) \to \r$ be a convex ${\cal C}^1$-function. Let $x_0 \in \r$ and let $I$ be an open interval containing $x_0$. Assume that $\xi$ takes values in a Borel set $J \subset \r$ (for the moment, we do not assume $\xi\ge 0$). Let $h: I \times J \to (0, \infty)$ and ${\partial h\over \partial x}: I \times J \to \r$ be measurable functions such that - $\e \{ h(x_0, \xi)\} <\infty$ and $\e \{ |\varphi ({ h(x_0,\xi) \over \e h(x_0, \xi)} )| \} < \infty$; - $\e[\sup_{x\in I} \{ | {\partial h\over \partial x} (x, \xi)| + |\varphi' ({h(x, \xi) \over \e h(x, \xi)} ) | \, ({| {\partial h\over \partial x} (x, \xi) | \over \e \{ h(x, \xi)\} } + {h(x, \xi) \over [\e \{ h(x, \xi)\}]^2 } | \e \{ {\partial h\over \partial x} (x, \xi) \} | )\} ] < \infty$; - both $y \to h(x_0, y)$ and $y \to { \partial \over \partial x} \log h(x,y)|_{x=x_0}$ are monotone on $J$. Then $${\d \over \d x} \e \left\{ \varphi\left({ h(x,\xi) \over \e h(x, \xi)}\right) \right\} \Big|_{x=x_0} \ge 0, \qquad \hbox{\rm or}\qquad \le 0, \label{monotonie}$$ depending on whether $h(x_0, \cdot)$ and ${\partial \over \partial x} \log h(x_0,\cdot)$ have the same monotonicity. 
To prove (\[monotonie\]), we observe that by the integrability assumptions, $$\begin{aligned} & &{\d \over \d x} \e \left\{ \varphi\left({ h(x,\xi) \over \e h(x,\xi)}\right) \right\} \Big|_{x=x_0} \\ &=&{1 \over ( \e h(x_0, \xi))^2}\, \e \left( \varphi'\left( { h(x_0, \xi) \over \e h(x_0, \xi)} \right) \left[ {\partial h \over \partial x} (x_0, \xi) \e h(x_0, \xi) - h(x_0, \xi) \e {\partial h \over \partial x} (x_0, \xi) \right] \right) .\end{aligned}$$ Let $\widetilde \xi$ be an independent copy of $\xi$. The expectation expression $\e(\varphi'( { h(x_0, \xi) \over \e h(x_0, \xi)} ) [\cdots])$ on the right-hand side is $$\begin{aligned} &=& \e \left( \varphi'\left( { h(x_0, \xi) \over \e h(x_0, \xi)} \right) \left[ {\partial h \over \partial x} (x_0, \xi) h(x_0, \widetilde\xi) - h(x_0, \xi) {\partial h \over \partial x} (x_0, \widetilde\xi) \right] \right) \\ &=& {1 \over 2}\, \e \left( \left[ \varphi'\left( { h(x_0, \xi) \over \e h(x_0, \xi)} \right) - \varphi'\left( { h(x_0, \widetilde\xi) \over \e h(x_0, \xi)} \right)\right] \left[ {\partial h \over \partial x} (x_0, \xi) h(x_0, \widetilde\xi) - h(x_0, \xi) {\partial h \over \partial x} (x_0, \widetilde\xi) \right] \right) \\ &=& {1 \over 2}\, \e \left( h(x_0, \xi) h(x_0, \widetilde \xi) \, \eta \right) ,\end{aligned}$$ where $$\eta := \left[ \varphi'\left( { h(x_0, \xi) \over \e h(x_0, \xi)} \right) - \varphi'\left( { h(x_0, \widetilde\xi) \over \e h(x_0, \xi)} \right) \right] \, \left[ {\partial \log h \over \partial x} (x_0, \xi) - {\partial \log h \over \partial x} (x_0, \widetilde\xi) \right] .$$ Therefore, $${\d \over \d x} \e \left\{ \varphi\left({ h(x,\xi) \over \e h(x,\xi)}\right) \right\} \Big|_{x=x_0} \; = \; {1 \over 2( \e h(x_0, \xi))^2}\, \e \left( h(x_0, \xi) h(x_0, \widetilde \xi) \, \eta \right) .$$ Since $\varphi'$ is non-decreasing ($\varphi$ being convex), $\eta \ge 0$ or $\le 0$ depending on whether $h(x_0, \cdot)$ and ${\partial \over \partial x} \log h(x_0,\cdot)$ have the same monotonicity, and this yields (\[monotonie\]). 
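The monotonicity behind (\[RSD\]) can be seen at work on a toy distribution. The following numerical sketch (ours, not part of the argument) takes $\xi$ uniform on $\{1,3\}$ and $a=2$, evaluates the ratio $\e[(\xi/(x+\xi))^a]/[\e(\xi/(x+\xi))]^a$ by exact enumeration, and checks that it is non-decreasing in $x$ and bounded by the limiting value $\e(\xi^a)/[\e\xi]^a = 5/4$.

```python
# Exact check of the ratio appearing in (RSD) for a two-point distribution:
# xi takes values 1 and 3 with probability 1/2 each, and a = 2.
# ratio(x) = E[(xi/(x+xi))^a] / (E[xi/(x+xi)])^a should be non-decreasing
# in x and bounded above by E(xi^a)/(E xi)^a.

SUPPORT = [(1.0, 0.5), (3.0, 0.5)]  # (value, probability)
A = 2.0

def ratio(x):
    """E[(xi/(x+xi))^a] / (E[xi/(x+xi)])^a, computed exactly."""
    num = sum(p * (v / (x + v)) ** A for v, p in SUPPORT)
    den = sum(p * (v / (x + v)) for v, p in SUPPORT) ** A
    return num / den

# Limiting value E(xi^a)/(E xi)^a = ((1 + 9)/2) / 2**2 = 5/4.
limit = sum(p * v ** A for v, p in SUPPORT) / sum(p * v for v, p in SUPPORT) ** A

ratios = [ratio(x) for x in (0.5, 1.0, 2.0, 4.0, 8.0, 32.0)]
```

The values increase towards the limit $5/4$ as $x\to\infty$, exactly as the lemma predicts.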
To prove (\[RSD\]) in Lemma \[l:exp\], we take $x_0\in (0,\, \infty)$, $J= \r_+$, $I$ a finite open interval containing $x_0$ and bounded away from $0$, $\varphi(z)= z^a$, and $h(x,y)= { y \over x+ y}$, to see that the function $x\mapsto {\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a}$ is non-decreasing on $(0, \infty)$. By dominated convergence, $$\lim_{x \to\infty} {\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a}= \lim_{x \to\infty} {\e[({\xi\over 1+\xi/x})^a] \over [\e ( {\xi\over 1+\xi/x})]^a} = {\e (\xi^a) \over [\e \xi]^a} ,$$ yielding (\[RSD\]). The proof of (\[exp\]) is similar. Indeed, applying (\[monotonie\]) to the functions $\varphi(z)= \ee^{-t z}$ and $ h(x, y) = {x + y \over 1+ y}$ with $x\in (0,1)$, we get that the function $x \mapsto \e \{ \exp ( - t { (x+\xi)/(1+\xi) \over \e [(x+\xi)/(1+\xi)]} )\}$ is non-increasing on $(0,1)$; hence for $\lambda \in [0,\, 1]$, $$\e \left\{ \exp \left( - t { (\lambda+\xi)/(1+\xi) \over \e [(\lambda+\xi)/(1+\xi)] } \right) \right\} \le \e \left\{ \exp \left( - t { \xi /(1+\xi) \over \e [\xi/(1+\xi)] } \right) \right\}.$$ On the other hand, we take $\varphi(z)= \ee^{-t z}$ and $h(x,y) = {y \over 1+ xy}$ (for $x\in (0, 1)$) in (\[monotonie\]) to see that $x \mapsto \e \{ \exp ( - t { \xi /(1+x \xi) \over \e [\xi /(1+x\xi)] } ) \}$ is non-increasing on $(0,1)$. Therefore, $$\e \left\{ \exp \left( - t { \xi /(1+\xi) \over \e [\xi/(1+\xi)] } \right) \right\} \le \e \left\{ \exp\left( - t \, { \xi \over \e (\xi)}\right) \right\} ,$$ which implies (\[exp\]).$\Box$ \[l:moment\] Let $\xi_1$, $\cdots$, $\xi_k$ be independent non-negative random variables such that for some $a\in [1,\, 2]$, $\e(\xi_i^a)<\infty$ $(1\le i\le k)$. Then $$\e \left[ (\xi_1 + \cdots + \xi_k)^a \right] \le \sum_{i=1}^k \e(\xi_i^a) + (k-1) \left( \sum_{i=1}^k \e \xi_i \right)^a.$$ [*Proof.*]{} By induction on $k$, we only need to prove the lemma in case $k=2$. 
Let $$h(t) := \e \left[ (\xi_1 + t\xi_2)^a \right] - \e(\xi_1^a) - t^a \e(\xi_2^a) - (\e \xi_1 + t \e \xi_2)^a, \qquad t\in [0,1].$$ Clearly, $h(0) = - (\e \xi_1)^a \le 0$. Moreover, $$h'(t) = a \e \left[ (\xi_1 + t\xi_2)^{a-1} \xi_2 \right] - a t^{a-1} \e(\xi_2^a) - a(\e \xi_1 + t \e \xi_2)^{a-1} \e(\xi_2) .$$ Since $(x+y)^{a-1} \le x^{a-1} + y^{a-1}$ (for $1\le a\le 2$), we have $$\begin{aligned} h'(t) &\le& a \e \left[ (\xi_1^{a-1} + t^{a-1}\xi_2^{a -1}) \xi_2 \right] - a t^{a-1} \e(\xi_2^a) - a(\e \xi_1)^{a-1} \e(\xi_2) \\ &=& a \e (\xi_1^{a-1}) \e(\xi_2) - a(\e \xi_1)^{a -1} \e(\xi_2) \le 0,\end{aligned}$$ by Jensen’s inequality (for $1\le a\le 2$). Therefore, $h \le 0$ on $[0,1]$. In particular, $h(1) \le 0$, which implies Lemma \[l:moment\].$\Box$ The following inequality, borrowed from page 82 of Petrov [@petrov], will be of frequent use. \[f:petrov\] Let $\xi_1$, $\cdots$, $\xi_k$ be independent random variables. We assume that for any $i$, $\e(\xi_i)=0$ and $\e(|\xi_i|^a) <\infty$, where $1\le a\le 2$. Then $$\e \left( \, \left| \sum_{i=1}^k \xi_i \right| ^a \, \right) \le 2 \sum_{i=1}^k \e( |\xi_i|^a).$$ \[l:abc\] Fix $a >1$. Let $(u_j)_{j\ge 1}$ be a sequence of positive numbers, and let $(\lambda_j)_{j\ge 1}$ be a sequence of non-negative numbers. [(i)]{} If there exists some constant $c_{10}>0$ such that for all $n\ge 2$, $$u_{j+1} \le \lambda_n + u_j - c_{10}\, u_j^{a}, \qquad \forall 1\le j \le n-1,$$ then we can find a constant $c_{11}>0$ independent of $n$ and $(\lambda_j)_{j\ge 1}$, such that $$u_n \le c_{11} \, ( \lambda_n^{1/a} + n^{- 1/(a-1)}), \qquad \forall n\ge 1.$$ [(ii)]{} Fix $K>0$. Assume that $\lim_{j\to\infty} u_j=0$ and that $\lambda_n \in [0, \, {K\over n}]$ for all $n\ge 1$. 
If there exist $c_{12}>0$ and $c_{13}>0$ such that for all $n\ge 2$, $$u_{j+1} \ge \lambda_n + (1- c_{12} \lambda_n) u_j - c_{13} \, u_j^a , \qquad \forall 1 \le j \le n-1,$$ then for some $c_{14}>0$ independent of $n$ and $(\lambda_j)_{j\ge 1}$ $(c_{14}$ may depend on $K)$, $$u_n \ge c_{14} \, ( \lambda_n^{1/a} + n^{- 1/(a-1)} ), \qquad \forall n\ge 1.$$ [*Proof.*]{} (i) Put $\ell = \ell(n) := \min\{n, \, \lambda_n^{- (a-1)/a} \}$. There are two possible situations. First situation: there exists some $j_0 \in [n- \ell, n-1]$ such that $u_{j_0} \le ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a}$. Since $u_{j+1} \le \lambda_n + u_j$ for all $j\in [j_0, n-1]$, we have $$u_n \le (n-j_0 ) \lambda_n + u_{j_0} \le \ell \lambda_n + ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a} \le (1+ ({2 \over c_{10}})^{1/a})\, \lambda_n^{1/a},$$ which implies the desired upper bound. Second situation: $u_j > ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a}$, $\forall \, j \in [n- \ell, n-1]$. Then $c_{10}\, u_j^{a} > 2\lambda_n$, which yields $$u_{j+1} \le u_j - {c_{10} \over 2} u_j^a, \qquad \forall \, j \in [n- \ell, n-1].$$ Since $a>1$ and $(1-y)^{1-a} \ge 1+ (a-1) y$ (for $0< y< 1$), this yields, for $j \in [n- \ell, n-1]$, $$u_{j+1}^{1-a} \ge u_j^{1-a} \, \left( 1 - {c_{10} \over 2} u_j^{a-1} \right)^{ 1-a} \ge u_j^{ 1-a} \, \left( 1 + {c_{10} \over 2} (a-1)\, u_j^{a-1} \right) = u_j^{1-a} + {c_{10} \over 2} (a-1) .$$ Therefore, $u_n^{1-a} \ge c_{15}\, \ell$ with $c_{15}:= {c_{10} \over 2} (a-1)$. As a consequence, $u_n \le (c_{15}\, \ell)^{- 1/(a-1)} \le (c_{15})^{- 1/(a-1)} \, ( n^{- 1/(a-1)} + \lambda_n^{1/a} )$, as desired. \(ii) Let us first prove: $$\label{c7} u_n \ge c_{16}\, n^{- 1/(a-1)}.$$ To this end, let $n$ be large and define $v_j := u_j \, (1- c_{12} \lambda_n)^{ -j} $ for $1 \le j \le n$. 
Since $u_{j+1} \ge (1- c_{12} \lambda_n) u_j - c_{13} u_j^a $ and $\lambda_n \le K/n$, we get $$v_{j+1} \ge v_j - c_{13} (1- c_{12} \lambda_n)^{(a-1)j-1}\, v_j^a\ge v_j - c_{17} \, v_j^a, \qquad \forall\, 1\le j \le n-1.$$ Since $u_j \to 0$, there exists some $j_0>0$ such that for all $n>j \ge j_0$, we have $c_{17} \, v_j^{a-1} < 1/2$, and $$v_{j+1}^{1-a} \le v_j^{1-a}\, \left( 1- c_{17} \, v_j^{a-1}\right)^{1-a} \le v_j^{1-a}\, \left( 1+ c_{18} \, v_j^{a-1}\right) = v_j^{1-a} + c_{18}.$$ It follows that $v_n^{1-a} \le c_{18}\, (n-j_0) + v_{j_0}^{1-a}$, which implies (\[c7\]). It remains to show that $u_n \ge c_{19} \, \lambda_n^{1/a}$. Consider a large $n$. The function $h(x):= \lambda_n + (1- c_{12} \lambda_n) x - c_{13} x^a$ is increasing on $[0, c_{20}]$ for some fixed constant $c_{20}>0$. Since $u_j \to 0$, there exists $j_0$ such that $u_j \le c_{20}$ for all $j \ge j_0$. We claim there exists $j \in [j_0, n-1]$ such that $u_j > ({\lambda_n\over 2c_{13}})^{1/a}$: otherwise, we would have $c_{13}\, u_j^a \le {\lambda_n\over 2} \le \lambda_n$ for all $j \in [j_0, n-1]$, and thus $$u_{j+1} \ge (1- c_{12}\, \lambda_n) u_j \ge \cdots \ge (1- c_{12}\,\lambda_n)^{j-j_0} \, u_{j_0} ;$$ in particular, $u_n \ge (1- c_{12}\, \lambda_n)^{n-j_0} \, u_{j_0}$ which would contradict the assumption $u_n \to 0$ (since $\lambda_n \le K/n$). Therefore, $u_j > ({\lambda_n\over 2c_{13}})^{1/a}$ for some $j\ge j_0$. By monotonicity of $h(\cdot)$ on $[0, c_{20}]$, $$u_{j+1} \ge h(u_j) \ge h\left(({\lambda_n\over 2 c_{13}})^{1/a}\right) \ge ({\lambda_n\over 2 c_{13}})^{1/a},$$ the last inequality being elementary. This leads to: $u_{j+2} \ge h(u_{j+1}) \ge h(({\lambda_n\over 2 c_{13}})^{1/a} ) \ge ({\lambda_n\over 2 c_{13}})^{1/a}$. 
Iterating the procedure, we obtain: $u_n \ge ({\lambda_n\over 2 c_{13}})^{1/a}$ for all $n> j_0$, which completes the proof of the Lemma.$\Box$ Proof of Theorem \[t:nullrec\] {#s:nullrec} ============================== Let $n\ge 2$, and let as before $$\tau_n := \inf\left\{ i\ge 1: X_i \in \T_n \right\} .$$ We start with a characterization of the distribution of $\tau_n$ via its Laplace transform $\e ( \ee^{- \lambda \tau_n} )$, for $\lambda \ge 0$. To state the result, we define $\alpha_{n,\lambda}(\cdot)$, $\beta_{n,\lambda}(\cdot)$ and $\gamma_n(\cdot)$ by $\alpha_{n,\lambda}(x) = \beta_{n,\lambda} (x) = 1$ and $\gamma_n(x)=0$ (for $x\in \T_n$), and $$\begin{aligned} \alpha_{n,\lambda}(x) &=& \ee^{-\lambda} \, {\sum_{i=1}^\deg A(x_i) \alpha_{n,\lambda} (x_i) \over 1+ \sum_{i=1}^\deg A(x_i) \beta_{n,\lambda} (x_i)}, \label{alpha} \\ \beta_{n,\lambda}(x) &=& {(1-\ee^{-2\lambda}) + \sum_{i=1}^\deg A(x_i) \beta_{n,\lambda} (x_i) \over 1+ \sum_{i=1}^\deg A(x_i) \beta_{n,\lambda} (x_i)}, \label{beta} \\ \gamma_n(x) &=& {[1/\omega(x, {\buildrel \leftarrow \over x} )] + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i) \over 1+ \sum_{i=1}^\deg A(x_i) \beta_n(x_i)} , \qquad 1\le |x| < n, \label{gamma}\end{aligned}$$ where $\beta_n(\cdot) := \beta_{n,0}(\cdot)$, and for any $x\in \T$, $\{x_i\}_{1\le i\le \deg}$ stands as before for the set of children of $x$. \[p:tau\] We have, for $n\ge 2$, $$\begin{aligned} E_\omega\left( \ee^{- \lambda \tau_n} \right) &=&\ee^{-\lambda} \, {\sum_{i=1}^\deg \omega (e, e_i) \alpha_{n,\lambda} (e_i) \over \sum_{i=1}^\deg \omega (e, e_i) \beta_{n,\lambda} (e_i)}, \qquad \forall \lambda \ge 0, \label{Laplace-tau} \\ E_\omega(\tau_n) &=& {1+ \sum_{i=1}^\deg \omega(e,e_i) \gamma_n (e_i) \over \sum_{i=1}^\deg \omega(e,e_i) \beta_n(e_i)}. \label{E(tau)} \end{aligned}$$ [*Proof of Proposition \[p:tau\].*]{} Identity (\[E(tau)\]) can be found in Rozikov [@rozikov]. The proof of (\[Laplace-tau\]) is along similar lines; so we feel free to give an outline only. 
Let $g_{n, \lambda}(x) := E_\omega (\ee^{- \lambda \tau_n} \, | \, X_0=x)$. By the Markov property, $g_{n, \lambda}(x) = \ee^{-\lambda} \sum_{i=1}^\deg \omega(x, x_i)g_{n, \lambda}(x_i) + \ee^{-\lambda} \omega(x, {\buildrel \leftarrow \over x}) g_{n, \lambda}({\buildrel \leftarrow \over x})$, for $|x| < n$. By induction on $|x|$ (such that $1\le |x| \le n-1$), we obtain: $g_{n, \lambda}(x) = \ee^\lambda (1- \beta_{n, \lambda} (x)) g_{n, \lambda}({\buildrel \leftarrow \over x}) + \alpha_{n, \lambda} (x)$, from which (\[Laplace-tau\]) follows. Probabilistic interpretation: for $1\le |x| <n$, if $T_{\buildrel \leftarrow \over x} := \inf \{ k\ge 0: X_k= {\buildrel \leftarrow \over x} \}$, then $\alpha_{n, \lambda} (x) = E_\omega [ \ee^{-\lambda \tau_n} {\bf 1}_{ \{ \tau_n < T_{\buildrel \leftarrow \over x} \} } \, | \, X_0=x]$, $\beta_{n, \lambda} (x) = 1- E_\omega [ \ee^{-\lambda (1+ T_{\buildrel \leftarrow \over x}) } {\bf 1}_{ \{ \tau_n > T_{\buildrel \leftarrow \over x} \} } \, | \, X_0=x]$, and $\gamma_n (x) = E_\omega [ (\tau_n \wedge T_{\buildrel \leftarrow \over x}) \, | \, X_0=x]$. We do not use these identities in the paper.$\Box$ It turns out that $\beta_{n,\lambda}(\cdot)$ is closely related to Mandelbrot’s multiplicative cascade [@mandelbrot]. Let $$M_n := \sum_{x\in \T_n} \prod_{y\in ] \! ] e, \, x] \! ] } A(y) , \qquad n\ge 1, \label{Mn}$$ where $] \! ] e, \,x] \! ]$ denotes as before the shortest path relating $e$ to $x$. We mention that $(A(e_i), \, 1\le i\le \deg)$ is a random vector independent of $(\omega(x,y), \, |x|\ge 1, \, y\in \T)$, and is distributed as $(A(x_i), \, 1\le i\le \deg)$, for any $x\in \T_m$ with $m\ge 1$. 
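The recursive structure (\[defMei1\]) lends itself to a quick numerical sanity check (ours, not part of the argument). The sketch below takes $\deg=2$ and lets the $A$'s be i.i.d.\ uniform on $\{0.4, 0.6\}$, an arbitrary choice satisfying $\deg\,\E(A)=1$; it propagates the exact law of $M_n$ through $M_n \,{\buildrel law \over =}\, \sum_{i} A_i M_{n-1}^{(i)}$ and confirms that $\E(M_n)=1$ for each $n$, as the martingale property requires.

```python
# Exact law of Mandelbrot's cascade M_n on a binary tree (deg = 2),
# with the A's i.i.d. uniform on {0.4, 0.6}, so that deg * E(A) = 1.
# The law is propagated through M_n =law= A_1 M_{n-1}^(1) + A_2 M_{n-1}^(2),
# all variables independent; laws are stored as dicts value -> probability.

A_LAW = {0.4: 0.5, 0.6: 0.5}

def product_law(law1, law2):
    """Law of X*Y for independent X ~ law1, Y ~ law2."""
    out = {}
    for x, p in law1.items():
        for y, q in law2.items():
            v = round(x * y, 12)
            out[v] = out.get(v, 0.0) + p * q
    return out

def sum_law(law1, law2):
    """Law of X+Y for independent X ~ law1, Y ~ law2."""
    out = {}
    for x, p in law1.items():
        for y, q in law2.items():
            v = round(x + y, 12)
            out[v] = out.get(v, 0.0) + p * q
    return out

def cascade_law(n):
    """Exact law of M_n, obtained by iterating the cascade recursion."""
    law = sum_law(A_LAW, A_LAW)        # M_1 = A(e_1) + A(e_2)
    for _ in range(n - 1):
        term = product_law(A_LAW, law)  # one summand A_i * M_{j}^{(i)}
        law = sum_law(term, term)       # sum over the two children
    return law

def mean(law):
    return sum(v * p for v, p in law.items())

means = [mean(cascade_law(n)) for n in (1, 2, 3)]
```

Each computed mean equals $1$ up to rounding, matching $\E(M_n) = (\deg\,\E A)^n = 1$.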
Let us recall some properties of $(M_n)$ from Theorem 2.2 of Liu [@liu00] and Theorem 2.5 of Liu [@liu01]: under the conditions $p={1\over \deg}$ and $\psi'(1)<0$, $(M_n)$ is a martingale, bounded in $L^a$ for any $a\in [1, \kappa)$; in particular, $$M_\infty := \lim_{n\to \infty} M_n \in (0, \infty), \label{cvg-M}$$ exists $\P$-almost surely and in $L^a(\P)$, and $$\E\left( \ee^{-s M_\infty} \right) \le \exp\left( - c_{21} \, s^{c_{22}}\right), \qquad \forall s\ge 1; \label{M-lowertail}$$ furthermore, if $1<\kappa< \infty$, then we also have $${c_{23}\over x^\kappa} \le \P\left( M_\infty > x\right) \le {c_{24}\over x^\kappa}, \qquad x\ge 1. \label{M-tail}$$ We now summarize the asymptotic properties of $\beta_{n,\lambda}(\cdot)$ which will be needed later on. \[p:beta-gamma\] Assume $p= {1\over \deg}$ and $\psi'(1)<0$. [(i)]{} For any $1\le i\le \deg$, $n\ge 2$, $t\ge 0$ and $\lambda \in [0, \, 1]$, we have $$\E \left\{ \exp \left[ -t \, {\beta_{n, \lambda} (e_i) \over \E[\beta_{n, \lambda} (e_i)]} \right] \right\} \le \left\{\E \left( \ee^{-t\, M_n/\Theta} \right) \right\}^{1/\deg} , \label{comp-Laplace}$$ where, as before, $\Theta:= \hbox{\rm ess sup}(A) < \infty$. [(ii)]{} If $\kappa\in (2, \infty]$, then for any $1\le i\le \deg$ and all $n\ge 2$ and $\lambda \in [0, \, {1\over n}]$, $$c_{25} \left( \sqrt {\lambda} + {1\over n} \right) \le \E[\beta_{n, \lambda}(e_i)] \le c_{26} \left( \sqrt {\lambda} + {1\over n} \right). \label{E(beta):kappa>2}$$ [(iii)]{} If $\kappa\in (1,2]$, then for any $1\le i\le \deg$, when $n\to \infty$ and uniformly in $\lambda \in [0, {1\over n}]$, $$\E[\beta_{n, \lambda}(e_i)] \; \approx \; \lambda^{1/\kappa} + {1\over n^{1/(\kappa-1)}} , \label{E(beta):kappa<2}$$ where $a_n \approx b_n$ denotes as before $\lim_{n\to \infty} \, {\log a_n \over \log b_n} =1$. The proof of Proposition \[p:beta-gamma\] is postponed until Section \[s:beta-gamma\]. By admitting it for the moment, we are able to prove Theorem \[t:nullrec\]. 
[*Proof of Theorem \[t:nullrec\].*]{} Assume $p= {1\over \deg}$ and $\psi'(1)<0$. Let $\pi$ be an invariant measure. By (\[pi\]) and the definition of $(M_n)$, $\sum_{x\in \T_n} \pi(x) \ge c_0 \, M_n$. Therefore by (\[cvg-M\]), we have $\sum_{x\in \T} \pi(x) =\infty$, $\P$-a.s., implying that $(X_n)$ is null recurrent. We proceed to prove the lower bound in (\[nullrec\]). By (\[gamma\]) and the ellipticity condition on the environment, $\gamma_n (x) \le {1\over \omega(x, {\buildrel \leftarrow \over x} )} + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i) \le c_{27} + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i)$. Iterating the argument yields $$\gamma_n (e_i) \le c_{27} \left( 1+ \sum_{j=2}^{n-1} M_j^{(e_i)}\right), \qquad n\ge 3,$$ where $$M_j^{(e_i)} := \sum_{x\in \T_j} \prod_{y\in ] \! ] e_i, x] \! ]} A(y).$$ For future use, we also observe that $$\label{defMei1} M_n= \sum_{i=1}^\deg \, A(e_i) \, M^{(e_i)}_n, \qquad n\ge 2.$$ Let $1\le i\le \deg$. Since $(M_j^{(e_i)}, \, j\ge 2)$ is distributed as $(M_{j-1}, \, j\ge 2)$, it follows from (\[cvg-M\]) that $M_j^{(e_i)}$ converges (when $j\to \infty$) almost surely, which implies $\gamma_n (e_i) \le c_{28}(\omega) \, n$. Plugging this into (\[E(tau)\]), we see that for all $n\ge 3$, $$E_\omega \left( \tau_n \right) \le {c_{29}(\omega) \, n \over \sum_{i=1}^\deg \omega(e,e_i) \beta_n(e_i)} \le {c_{30}(\omega) \, n \over \beta_n(e_1)}, \label{toto2}$$ the last inequality following from the ellipticity assumption on the environment. We now bound $\beta_n(e_1)$ from below (for large $n$). Let $1\le i\le \deg$. By (\[comp-Laplace\]), for $\lambda \in [0,\, 1]$ and $s\ge 0$, $$\E \left\{ \exp \left[ -s \, {\beta_{n, \lambda} (e_i) \over \E [\beta_{n, \lambda} (e_i)]} \right] \right\} \le \left\{ \E \left( \ee^{-s \, M_n/\Theta} \right) \right\}^{1/\deg} \le \left\{ \E \left(\ee^{-s \, M_\infty/\Theta} \right) \right\}^{1/\deg} ,$$ where, in the last inequality, we used the fact that $(M_n)$ is a uniformly integrable martingale. 
Let $\varepsilon>0$. Applying (\[M-lowertail\]) to $s:= n^{\varepsilon}$, we see that $$\sum_n \E \left\{ \exp \left[ -n^{\varepsilon} {\beta_{n, \lambda} (e_i) \over \E[\beta_{n, \lambda} (e_i)]} \right] \right\} <\infty . \label{toto3}$$ In particular, $\sum_n \exp [ -n^{\varepsilon} {\beta_n (e_1) \over \E [\beta_n (e_1)]} ]$ is $\P$-almost surely finite (by taking $\lambda=0$; recalling that $\beta_n (\cdot) := \beta_{n, 0} (\cdot)$). Thus, for $\P$-almost all $\omega$ and all sufficiently large $n$, $\beta_n (e_1) \ge n^{-\varepsilon} \, \E [\beta_n (e_1)]$. Going back to (\[toto2\]), we see that for $\P$-almost all $\omega$ and all sufficiently large $n$, $$E_\omega \left( \tau_n \right) \le {c_{30}(\omega) \, n^{1+\varepsilon} \over \E [\beta_n (e_1)]}.$$ Let $m(n):= \lfloor {n^{1+2\varepsilon} \over \E [\beta_n (e_1)]} \rfloor$. By Chebyshev’s inequality, for $\P$-almost all $\omega$ and all sufficiently large $n$, $P_\omega ( \tau_n \ge m(n) ) \le c_{31}(\omega) \, n^{-\varepsilon}$. Considering the subsequence $n_k:= \lfloor k^{2/\varepsilon}\rfloor$, we see that $\sum_k P_\omega ( \tau_{n_k} \ge m(n_k) )< \infty$, $\P$-a.s. By the Borel–Cantelli lemma, for $\P$-almost all $\omega$, we have $P_\omega$-almost surely $\tau_{n_k} < m(n_k)$ for all sufficiently large $k$, which implies that for $n\in [n_{k-1}, n_k]$ and large $k$, we have $\tau_n < m(n_k) \le {n_k^{1+2\varepsilon} \over \E [\beta_{n_k} (e_1)]} \le {n^{1+3\varepsilon} \over \E [\beta_n(e_1)]}$ (the last inequality following from the estimate of $\E [\beta_n(e_1)]$ in Proposition \[p:beta-gamma\]). In view of Proposition \[p:beta-gamma\], and since $\varepsilon$ can be taken arbitrarily small, this gives the lower bound in (\[nullrec\]) of Theorem \[t:nullrec\]. To prove the upper bound, we note that $\alpha_{n,\lambda}(x) \le \beta_n(x)$ for any $\lambda\ge 0$ and any $0<|x|\le n$ (this is easily checked by induction on $|x|$). 
Thus, by (\[Laplace-tau\]), for any $\lambda\ge 0$, $$E_\omega\left( \ee^{- \lambda \tau_n} \right) \le {\sum_{i=1}^\deg \omega (e, e_i) \beta_n (e_i) \over \sum_{i=1}^\deg \omega (e, e_i) \beta_{n,\lambda} (e_i)} \le \sum_{i=1}^\deg {\beta_n (e_i) \over \beta_{n,\lambda} (e_i)}.$$ We now fix $r\in (1, \, {1\over \nu})$, where $\nu:= 1- {1\over \min\{ \kappa, \, 2\} }$ is defined in (\[theta\]). It is possible to choose a small $\varepsilon>0$ such that $${1\over \kappa -1} - {r\over \kappa}> 3\varepsilon \quad \hbox{if }\kappa \in (1, \, 2], \qquad 1 - {r\over 2}> 3\varepsilon \quad \hbox{if }\kappa \in (2, \, \infty].$$ Let $\lambda = \lambda(n) := n^{-r}$. By (\[toto3\]), we have $\beta_{n,n^{-r}} (e_i) \ge n^{-\varepsilon}\, \E [\beta_{n,n^{-r}} (e_i)]$ for $\P$-almost all $\omega$ and all sufficiently large $n$, which yields $$E_\omega\left( \ee^{- n^{-r} \tau_n} \right) \le n^\varepsilon \sum_{i=1}^\deg {\beta_n (e_i) \over \E [\beta_{n, n^{-r}} (e_i)]} .$$ It is easy to bound $\beta_n (e_i)$. For any given $x\in \T \backslash \{ e\}$ with $|x|\le n$, $n\mapsto \beta_n (x)$ is non-increasing (this is easily checked by induction on $|x|$). Chebyshev’s inequality, together with the Borel–Cantelli lemma (applied to a subsequence, as we did in the proof of the lower bound) and the monotonicity of $n\mapsto \beta_n(e_i)$, readily yields $\beta_n (e_i) \le n^\varepsilon \, \E [\beta_n (e_i)]$ for almost all $\omega$ and all sufficiently large $n$. 
As a consequence, for $\P$-almost all $\omega$ and all sufficiently large $n$, $$E_\omega\left( \ee^{- n^{-r} \tau_n} \right) \le n^{2\varepsilon} \sum_{i=1}^\deg {\E [\beta_n (e_i)] \over \E [\beta_{n, n^{-r}} (e_i)]} .$$ By Proposition \[p:beta-gamma\], this yields $E_\omega ( \ee^{- n^{-r} \tau_n} ) \le n^{-\varepsilon}$ (for $\P$-almost all $\omega$ and all sufficiently large $n$; this is where we use ${1\over \kappa -1} - {r\over \kappa}> 3\varepsilon$ if $\kappa \in (1, \, 2]$, and $1 - {r\over 2}> 3\varepsilon$ if $\kappa \in (2, \, \infty]$). In particular, for $n_k:= \lfloor k^{2/\varepsilon} \rfloor$, we have $\P$-almost surely, $E_\omega ( \sum_k \ee^{- n_k^{-r} \tau_{n_k}} ) < \infty$, which implies that, $\p$-almost surely for all sufficiently large $k$, $\tau_{n_k} \ge n_k^r$. This implies that $\p$-almost surely for all sufficiently large $n$, $\tau_n \ge {1\over 2}\, n^r$. The upper bound in (\[nullrec\]) of Theorem \[t:nullrec\] follows.$\Box$ Proposition \[p:beta-gamma\] is proved in Section \[s:beta-gamma\]. Proof of Proposition \[p:beta-gamma\] {#s:beta-gamma} ===================================== Let $\theta \in [0,\, 1]$. Let $(Z_{n,\theta})$ be a sequence of random variables, such that $Z_{1,\theta} \; \buildrel law \over = \; \sum_{i=1}^\deg A_i$, where $(A_i, \, 1\le i\le \deg)$ is distributed as $(A(x_i), \, 1\le i\le \deg)$ (for any $x\in \T$), and that $$Z_{j+1,\theta} \; \buildrel law \over = \; \sum_{i=1}^\deg A_i {\theta + Z_{j,\theta}^{(i)} \over 1+ Z_{j,\theta}^{(i)} } , \qquad \forall\, j\ge 1, \label{ZW}$$ where $Z_{j,\theta}^{(i)}$ (for $1\le i \le \deg$) are independent copies of $Z_{j,\theta}$, and are independent of the random vector $(A_i, \, 1\le i\le \deg)$. 
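In the degenerate environment $A \equiv 1/\deg$ (which satisfies $p = \E(A) = 1/\deg$, $\psi'(1)<0$, and falls under $\kappa=\infty$), the copies $Z_{j,\theta}^{(i)}$ carry no randomness and (\[ZW\]) collapses to the scalar iteration $z_{j+1} = (\theta + z_j)/(1+z_j)$ with $z_1=1$. This toy case already displays the two regimes of Proposition \[p:beta-gamma\](ii): $z_n = 1/n$ exactly when $\theta=0$, and $z_n \to \sqrt{\theta}$ when $\theta>0$, matching the $\sqrt{\lambda} + 1/n$ behaviour. A short sketch (ours, for illustration only):

```python
# Scalar version of the recursion (ZW) when A is deterministic, A = 1/deg:
# every Z_{j,theta} is then a constant z_j, with z_1 = 1 and
#     z_{j+1} = (theta + z_j) / (1 + z_j).
# For theta = 0 this gives z_n = 1/n exactly (induction: (1/j)/(1+1/j) = 1/(j+1));
# for theta > 0 it converges to the fixed point sqrt(theta).

def z_iter(n, theta):
    """n-th iterate of z -> (theta + z)/(1 + z), started from z_1 = 1."""
    z = 1.0
    for _ in range(n - 1):
        z = (theta + z) / (1.0 + z)
    return z
```

For instance, `z_iter(50, 0.0)` returns $1/50$ up to rounding, while `z_iter(400, 0.01)` is essentially the fixed point $\sqrt{0.01} = 0.1$.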
Then, for any given $n\ge 1$ and $\lambda\ge 0$, $$Z_{n, 1-\ee^{-2\lambda}} \; \buildrel law \over = \; \sum_{i=1}^\deg A_i\, \beta_{n, \lambda}(e_i) , \label{Z=beta}$$ provided $(A_i, \, 1\le i\le \deg)$ and $(\beta_{n, \lambda}(e_i), \, 1\le i\le \deg)$ are independent. \[p:concentration\] Assume $p={1\over \deg}$ and $\psi'(1)<0$. Let $\kappa$ be as in $(\ref{kappa})$. For all $a\in (1, \kappa) \cap (1, 2]$, we have $$\sup_{\theta \in [0,1]} \sup_{j\ge 1} {[\E (Z_{j,\theta} )^a ] \over (\E Z_{j,\theta})^a} < \infty.$$ [*Proof of Proposition \[p:concentration\].*]{} Let $a\in (1,2]$. Conditioning on $A_1$, $\dots$, $A_\deg$, we can apply Lemma \[l:moment\] to see that $$\begin{aligned} &&\E \left[ \left( \, \sum_{i=1}^\deg A_i {\theta+ Z_{j,\theta}^{(i)} \over 1+ Z_{j,\theta}^{(i)} } \right)^a \Big| A_1, \dots, A_\deg \right] \\ &\le& \sum_{i=1}^\deg A_i^a \, \E \left[ \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} }\right)^a \; \right] + (\deg-1) \left[ \sum_{i=1}^\deg A_i\, \E \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} } \right) \right]^a \\ &\le& \sum_{i=1}^\deg A_i^a \, \E \left[ \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} }\right)^a \; \right] + c_{32} \left[ \E \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} } \right) \right]^a,\end{aligned}$$ where $c_{32}$ depends on $a$, $\deg$ and the bound of $A$ (recalling that $A$ is bounded away from 0 and infinity). 
Taking expectation on both sides, and in view of (\[ZW\]), we obtain: $$\E[(Z_{j+1,\theta})^a] \le \deg \E(A^a) \E \left[ \left( {\theta+ Z_{j,\theta}\over 1+ Z_{j,\theta} }\right)^a \; \right] + c_{32} \left[ \E \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} } \right) \right]^a.$$ We divide by $(\E Z_{j+1,\theta})^a = [ \E({\theta+Z_{j,\theta}\over 1+ Z_{j,\theta} })]^a$ on both sides, to see that $${\E[(Z_{j+1,\theta})^a]\over (\E Z_{j+1,\theta})^a} \le \deg \E(A^a) {\E[ ({\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} })^a] \over [\E ({\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} })]^a } + c_{32}.$$ Put $\xi = \theta+ Z_{j,\theta}$. By (\[RSD\]), we have $${\E[ ({\theta+Z_{j,\theta} \over 1+Z_{j,\theta} })^a] \over [\E ({\theta+Z_{j,\theta} \over 1+ Z_{j,\theta} })]^a } = {\E[ ({\xi \over 1- \theta+ \xi })^a] \over [\E ({ \xi \over 1- \theta+ \xi })]^a } \le {\E[\xi^a] \over [\E \xi ]^a } .$$ Applying Lemma \[l:moment\] to $k=2$ yields that $\E[\xi^a] = \E[( \theta+ Z_{j,\theta} )^a] \le \theta^a + \E[( Z_{j,\theta} )^a] + (\theta + \E( Z_{j,\theta} ))^a $. It follows that ${\E[ \xi^a] \over [\E \xi ]^a } \le {\E[ (Z_{j,\theta})^a] \over [\E Z_{j,\theta}]^a } +2$, which implies that for $j\ge 1$, $${\E[(Z_{j+1,\theta})^a]\over (\E Z_{j+1,\theta})^a} \le \deg \E(A^a) {\E[(Z_{j,\theta})^a]\over (\E Z_{j,\theta})^a} + (2 \deg \E(A^a)+ c_{32}).$$ Thus, if $\deg \E(A^a)<1$ (which is the case if $1<a<\kappa$), then $$\sup_{j\ge 1} {\E[ (Z_{j,\theta})^a] \over (\E Z_{j,\theta})^a} < \infty,$$ uniformly in $\theta \in [0, \, 1]$.$\Box$ We now turn to the proof of Proposition \[p:beta-gamma\]. For the sake of clarity, the proofs of (\[comp-Laplace\]), (\[E(beta):kappa>2\]) and (\[E(beta):kappa<2\]) are presented in three distinct parts. 
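The conclusion of Proposition \[p:concentration\] can be observed numerically. The Monte Carlo sketch below (ours; the parameters $\deg=2$, $A$ i.i.d.\ uniform on $\{0.4, 0.6\}$, $a=3/2$ and the sample size are arbitrary choices) simulates $Z_{j,\theta}$ through the full recursion (\[ZW\]) and checks that the normalized moment $\E[(Z_{j,\theta})^a]/(\E Z_{j,\theta})^a$ stays of moderate size while $\E(Z_{j,\theta})$ decays in $j$.

```python
import random

# Monte Carlo illustration of the uniform moment bound of Proposition
# [p:concentration], along the recursion (ZW) with theta = 0, deg = 2,
# and the A's drawn i.i.d. uniform on {0.4, 0.6} (so deg * E(A) = 1).

rng = random.Random(0)
DEG = 2
THETA = 0.0
A_VALUES = (0.4, 0.6)

def sample_Z(j):
    """One sample of Z_{j,theta}, drawn from the full recursion (ZW)."""
    if j == 1:
        return sum(rng.choice(A_VALUES) for _ in range(DEG))
    total = 0.0
    for _ in range(DEG):
        z = sample_Z(j - 1)  # independent copy Z_{j-1}^{(i)}
        total += rng.choice(A_VALUES) * (THETA + z) / (1.0 + z)
    return total

def normalized_moment(j, a=1.5, n_samples=3000):
    """Empirical E[Z_j^a] / (E Z_j)^a, together with the empirical mean."""
    zs = [sample_Z(j) for _ in range(n_samples)]
    m1 = sum(zs) / len(zs)
    ma = sum(z ** a for z in zs) / len(zs)
    return ma / m1 ** a, m1

ratio8, mean8 = normalized_moment(8)
ratio2, mean2 = normalized_moment(2)
```

The empirical ratio is always at least $1$ (Jensen's inequality applied to the empirical law) and remains of order a few units from $j=2$ to $j=8$, while the empirical mean roughly halves and halves again, consistent with the $1/n$ decay of (\[53\]).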
Proof of (\[comp-Laplace\]) {#subs:beta} --------------------------- By (\[exp\]) and (\[ZW\]), we have, for all $\theta\in [0, \, 1]$ and $j\ge 1$, $$\E \left\{ \exp\left( - t \, { Z_{j+1, \theta} \over \E (Z_{j+1, \theta})}\right) \right\} \le \E \left\{ \exp\left( - t \sum_{i=1}^\deg A_i { Z^{(i)}_{j, \theta} \over \E (Z^{(i)}_{j, \theta}) }\right) \right\}, \qquad t\ge 0.$$ Let $f_j(t) := \E \{ \exp ( - t { Z_{j, \theta} \over \E Z_{j, \theta}} )\}$ and $g_j(t):= \E (\ee^{ -t\, M_j})$ (for $j\ge 1$). We have $$f_{j+1}(t) \le \E \left( \prod_{i=1}^\deg f_j(t A_i) \right), \quad j\ge 1.$$ On the other hand, by (\[defMei1\]), $$g_{j+1}(t) = \E \left\{ \exp\left( - t \sum_{i=1}^\deg A(e_i) M^{(e_i)}_{j+1} \right) \right\} = \E \left( \prod_{i=1}^\deg g_j(t A_i) \right), \qquad j\ge 1.$$ Since $f_1(\cdot)= g_1(\cdot)$, it follows by induction on $j$ that for all $j\ge 1$, $f_j(t) \le g_j(t)$; in particular, $f_n(t) \le g_n(t)$. We take $\theta = 1- \ee^{-2\lambda}$. In view of (\[Z=beta\]), we have proved that $$\E \left\{ \exp\left( - t \sum_{i=1}^\deg A(e_i) {\beta_{n, \lambda}(e_i) \over \E [\beta_{n, \lambda}(e_i)] }\right) \right\} \le \E \left\{ \ee^{- t \, M_n} \right\} , \label{beta_n(e)}$$ which yields (\[comp-Laplace\]).$\Box$ [**Remark.**]{} Let $$\beta_{n,\lambda}(e) := {(1-\ee^{-2\lambda})+ \sum_{i=1}^\deg A(e_i) \beta_{n,\lambda}(e_i) \over 1+ \sum_{i=1}^\deg A(e_i) \beta_{n,\lambda}(e_i)}.$$ By (\[beta\_n(e)\]) and (\[exp\]), if $\E(A)= {1\over \deg}$, then for $\lambda\ge 0$, $n\ge 1$ and $t\ge 0$, $$\E \left\{ \exp\left( - t {\beta_{n, \lambda}(e) \over \E [\beta_{n, \lambda}(e)] }\right) \right\} \le \E \left\{ \ee^{- t \, M_n} \right\} .$$ Proof of (\[E(beta):kappa>2\]) {#subs:kappa>2} --------------------------------- Assume $p={1\over \deg}$ and $\psi'(1)<0$. 
Since $Z_{j, \theta}$ is bounded uniformly in $j$, we have, by (\[ZW\]), for $1\le j \le n-1$, $$\begin{aligned} \E(Z_{j+1, \theta}) &=& \E\left( {\theta+Z_{j, \theta} \over 1+Z_{j, \theta} } \right) \nonumber \\ &\le& \E\left[(\theta+ Z_{j, \theta} )(1 - c_{33}\, Z_{j, \theta} )\right] \nonumber \\ &\le & \theta + \E(Z_{j, \theta}) - c_{33}\, \E\left[(Z_{j, \theta})^2\right] \label{E(Z2)} \\ &\le & \theta + \E(Z_{j, \theta}) - c_{33}\, \left[ \E Z_{j, \theta} \right]^2. \nonumber\end{aligned}$$ By Lemma \[l:abc\], we have, for any $K>0$ and uniformly in $\theta\in [0, \, {K\over n}]$, $$\label{53} \E (Z_{n, \theta}) \le c_{34} \left( \sqrt {\theta} + {1\over n} \right) \le {c_{35} \over \sqrt{n}}.$$ We mention that this holds for all $\kappa \in (1, \, \infty]$. In view of (\[Z=beta\]), this yields the upper bound in (\[E(beta):kappa>2\]). To prove the lower bound, we observe that $$\E(Z_{j+1, \theta}) \ge \E\left[(\theta+ Z_{j, \theta} )(1 - Z_{j, \theta} )\right] = \theta+ (1-\theta) \E(Z_{j, \theta}) - \E\left[(Z_{j, \theta})^2\right] . \label{51}$$ If furthermore $\kappa \in (2, \infty]$, then $\E [(Z_{j, \theta})^2 ] \le c_{36}\, (\E Z_{j, \theta})^2$ (see Proposition \[p:concentration\]). Thus, for all $1\le j\le n-1$, $$\E(Z_{j+1, \theta}) \ge \theta+ (1-\theta) \E(Z_{j, \theta}) - c_{36}\, (\E Z_{j,\theta})^2 .$$ By (\[53\]), $\E (Z_{n, \theta}) \to 0$ uniformly in $\theta\in [0, \, {K\over n}]$ (for any given $K>0$). An application of (\[Z=beta\]) and Lemma \[l:abc\] readily yields the lower bound in (\[E(beta):kappa>2\]).$\Box$ Proof of (\[E(beta):kappa<2\]) {#subs:kappa<2} --------------------------------- We assume in this part $p={1\over \deg}$, $\psi'(1)<0$ and $1<\kappa \le 2$. Let $\varepsilon>0$ be small. 
Since $(Z_{j, \theta})$ is bounded, we have $\E[(Z_{j, \theta})^2] \le c_{37} \, \E [(Z_{j, \theta})^{\kappa-\varepsilon}]$, which, by Proposition \[p:concentration\], implies $$\E\left[ (Z_{j, \theta})^2 \right] \le c_{38} \, \left( \E Z_{j, \theta} \right)^{\kappa- \varepsilon} . \label{c38}$$ Therefore, (\[51\]) yields that $$\E(Z_{j+1, \theta}) \ge \theta+ (1-\theta) \E(Z_{j, \theta}) - c_{38} \, (\E Z_{j, \theta})^{\kappa-\varepsilon} .$$ By (\[53\]), $\E (Z_{n, \theta}) \to 0$ uniformly in $\theta\in [0, \, {K\over n}]$ (for any given $K>0$). An application of Lemma \[l:abc\] implies that for any $K>0$, $$\E (Z_{\ell, \theta}) \ge c_{14} \left( \theta^{1/(\kappa-\varepsilon)} + {1\over \ell^{1/(\kappa -1 - \varepsilon)}} \right), \qquad \forall \, \theta\in [0, \, {K\over n}], \; \; \forall \, 1\le \ell \le n. \label{ell}$$ The lower bound in (\[E(beta):kappa<2\]) follows from (\[Z=beta\]). It remains to prove the upper bound. Define $$Y_{j, \theta} := {Z_{j, \theta} \over \E(Z_{j, \theta})} , \qquad 1\le j\le n.$$ We take $Z_{j-1, \theta}^{(x)}$ (for $x\in \T_1$) to be independent copies of $Z_{j-1, \theta}$, and independent of $(A(x), \; x\in \T_1)$. 
By (\[ZW\]), for $2\le j\le n$, $$\begin{aligned} Y_{j, \theta} &\; {\buildrel law \over =} \;& \sum_{x\in \T_1} A(x) {(\theta+ Z_{j-1, \theta}^{(x)} )/ (1+ Z_{j-1, \theta}^{(x)}) \over \E [(\theta+ Z_{j-1, \theta}^{(x)} )/ (1+ Z_{j-1, \theta}^{(x)}) ]} \ge \sum_{x\in \T_1} A(x) {Z_{j-1, \theta}^{(x)} / (1+ Z_{j-1, \theta}^{(x)}) \over \theta+ \E [Z_{j-1, \theta}]} \\ &=& { \E [Z_{j-1, \theta}]\over \theta+ \E [Z_{j-1, \theta}]} \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} - { \E [Z_{j-1, \theta}]\over \theta+ \E [Z_{j-1, \theta}]} \sum_{x\in \T_1} A(x) {(Z_{j-1, \theta}^{(x)})^2/\E(Z_{j-1, \theta}) \over 1+Z_{j-1, \theta}^{(x)}} \\ &\ge& \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} - \Delta_{j-1, \theta} \; ,\end{aligned}$$ where $$\begin{aligned} Y_{j-1, \theta}^{(x)} &:=&{Z_{j-1, \theta}^{(x)} \over \E(Z_{j-1, \theta})} , \\ \Delta_{j-1, \theta} &:=&{\theta\over \theta+ \E [Z_{j-1, \theta}]} \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} + \sum_{x\in \T_1} A(x) {(Z_{j-1, \theta}^{(x)})^2 \over \E(Z_{j-1, \theta})} .\end{aligned}$$ By (\[c38\]), $\E[ {(Z_{j-1, \theta}^{(x)})^2 \over \E(Z_{j-1, \theta})}]\le c_{38}\, (\E Z_{j-1, \theta})^{\kappa-1-\varepsilon}$. On the other hand, by (\[ell\]), $\E(Z_{j-1, \theta}) \ge c_{14}\, \theta^{1/(\kappa-\varepsilon)}$ for $2\le j \le n$, and thus ${\theta\over \theta+ \E [Z_{j-1, \theta}]} \le c_{39}\, (\E Z_{j-1, \theta})^{\kappa-1- \varepsilon}$. As a consequence, $\E( \Delta_{j-1, \theta} ) \le c_{40}\, (\E Z_{j-1, \theta})^{\kappa-1-\varepsilon}$. If we write $\xi \; {\buildrel st. \over \ge} \; \eta$ to denote that $\xi$ is stochastically greater than or equal to $\eta$, then we have proved that $Y_{j, \theta} \; {\buildrel st. \over \ge} \; \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} - \Delta_{j-1, \theta}$. Applying the same argument to each of $(Y_{j-1, \theta}^{(x)}, \, x\in \T_1)$, we see that, for $3\le j\le n$, $$Y_{j, \theta} \; {\buildrel st. 
\over \ge} \; \sum_{u\in \T_1} A(u) \sum_{v\in \T_2: \; u={\buildrel \leftarrow \over v}} A(v) Y_{j-2, \theta}^{(v)} - \left( \Delta_{j-1, \theta}+ \sum_{u\in \T_1} A(u) \Delta_{j-2, \theta}^{(u)} \right) ,$$ where $Y_{j-2, \theta}^{(v)}$ (for $v\in \T_2$) are independent copies of $Y_{j-2, \theta}$, and are independent of $(A(w), \, w\in \T_1 \cup \T_2)$, and $(\Delta_{j-2, \theta}^{(u)}, \, u\in \T_1)$ are independent of $(A(u), \, u\in \T_1)$ and are such that $\e[\Delta_{j-2, \theta}^{(u)}] \le c_{40}\, (\E Z_{j-2, \theta})^{\kappa-1-\varepsilon}$. By induction, we arrive at: for $j>m \ge 1$, $$Y_{j, \theta} \; {\buildrel st. \over \ge}\; \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) Y_{j-m, \theta}^{(x)} - \Lambda_{j,m,\theta}, \label{Yn>}$$ where $Y_{j-m, \theta}^{(x)}$ (for $x\in \T_m$) are independent copies of $Y_{j-m, \theta}$, and are independent of the random vector $(A(w), \, 1\le |w| \le m)$, and $\E(\Lambda_{j,m,\theta}) \le c_{40}\, \sum_{\ell=1}^m (\E Z_{j-\ell, \theta})^{\kappa-1-\varepsilon} $. Since $\E(Z_{i, \theta}) = \E({\theta+ Z_{i-1, \theta} \over 1+ Z_{i-1, \theta}}) \ge \E(Z_{i-1, \theta}) - \E[(Z_{i-1, \theta})^2] \ge \E(Z_{i-1, \theta}) - c_{38}\, [\E Z_{i-1, \theta} ]^{\kappa-\varepsilon}$ (by (\[c38\])), we have, for all $j\in (j_0, n]$ (with a large but fixed integer $j_0$) and $1\le \ell \le j-j_0$, $$\begin{aligned} \E(Z_{j, \theta}) &\ge&\E(Z_{j-\ell, \theta}) \prod_{i=1}^\ell \left\{ 1- c_{38}\, [\E Z_{j-i, \theta} ]^{\kappa-1-\varepsilon}\right\} \\ &\ge&\E(Z_{j-\ell, \theta}) \prod_{i=1}^\ell \left\{ 1- c_{41}\, (j-i)^{-(\kappa-1- \varepsilon)/2}\right\} ,\end{aligned}$$ the last inequality being a consequence of (\[53\]). Thus, for $j\in (j_0, n]$ and $1\le \ell \le j^{(\kappa-1-\varepsilon)/2}$, $\E(Z_{j, \theta}) \ge c_{42}\, \E(Z_{j-\ell, \theta})$, which implies that for all $m\le j^{(\kappa-1-\varepsilon)/2}$, $\E(\Lambda_{j,m, \theta}) \le c_{43} \, m (\E Z_{j, \theta})^{\kappa-1-\varepsilon}$. 
By Chebyshev’s inequality, for $j\in (j_0, n]$, $m\le j^{(\kappa-1-\varepsilon)/2}$ and $r>0$, $$\P\left\{ \Lambda_{j,m, \theta} > \varepsilon r\right\} \le {c_{43} \, m (\E Z_{j, \theta})^{\kappa -1-\varepsilon} \over \varepsilon r}. \label{toto4}$$ Let us go back to (\[Yn>\]), and study the behaviour of $\sum_{x\in \T_m} ( \prod_{y\in ]\! ] e, x ]\! ]} A(y) ) Y_{j-m, \theta}^{(x)}$. Let $M^{(x)}$ (for $x\in \T_m$) be independent copies of $M_\infty$ and independent of all other random variables. Since $\E(Y_{j-m, \theta}^{(x)})= \E(M^{(x)})=1$, we have, by Fact \[f:petrov\], for any $a\in (1, \, \kappa)$, $$\begin{aligned} &&\E \left\{ \left| \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) (Y_{j- m, \theta}^{(x)} - M^{(x)}) \right|^a \right\} \\ &\le&2 \E \left\{ \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y)^a \right) \, \E\left( | Y_{j-m, \theta}^{(x)} - M^{(x)}|^a \right) \right\}.\end{aligned}$$ By Proposition \[p:concentration\] and the fact that $(M_n)$ is a martingale bounded in $L^a$, we have $\E ( | Y_{j-m, \theta}^{(x)} - M^{(x)}|^a ) \le c_{44}$. Thus, $$\begin{aligned} \E \left\{ \left| \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) (Y_{j- m, \theta}^{(x)} - M^{(x)}) \right|^a \right\} &\le& 2c_{44} \E \left\{ \sum_{x\in \T_m} \prod_{y\in ]\! ] e, x ]\! ]} A(y)^a \right\} \\ &=& 2c_{44} \, \deg^m \, [\E(A^a)]^m.\end{aligned}$$ By Chebyshev’s inequality, $$\P \left\{ \left| \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) (Y_{j- m, \theta}^{(x)} - M^{(x)}) \right| > \varepsilon r\right\} \le {2c_{44} \, \deg^m [\E(A^a)]^m \over \varepsilon^a r^a}. \label{toto6}$$ Clearly, $\sum_{x\in \T_m} (\prod_{y\in ]\! ] e, x ]\! ]} A(y) ) M^{(x)}$ is distributed as $M_\infty$. 
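The moment inequality invoked here (Fact \[f:petrov\]) is of von Bahr–Esseen type: for independent centred summands and $a\in(1,2]$, $\E|\sum_i X_i|^a \le 2\sum_i \E|X_i|^a$. A quick Monte Carlo sanity check of this kind of bound can be sketched as follows; the uniform law and the value of $a$ are arbitrary illustrative choices.

```python
import random
import statistics

def moment_bound_check(a=1.5, n_terms=50, n_mc=20000, seed=1):
    """Compare E|sum X_i|^a against 2 * sum E|X_i|^a for independent,
    centred X_i and a in (1, 2).  The uniform(-1, 1) law is an arbitrary
    illustrative choice; the bound holds for any centred integrable terms.
    """
    rng = random.Random(seed)
    lhs_samples = []
    term_sums = [0.0] * n_terms
    for _ in range(n_mc):
        xs = [rng.uniform(-1.0, 1.0) for _ in range(n_terms)]
        lhs_samples.append(abs(sum(xs)) ** a)
        for i, x in enumerate(xs):
            term_sums[i] += abs(x) ** a
    lhs = statistics.fmean(lhs_samples)           # estimate of E|sum X_i|^a
    rhs = 2.0 * sum(s / n_mc for s in term_sums)  # estimate of 2*sum E|X_i|^a
    return lhs, rhs

lhs, rhs = moment_bound_check()
```

Empirically the left-hand side stays well below the right-hand side, consistent with the inequality; in the proof the role of the $X_i$ is played by the weighted differences $Y_{j-m,\theta}^{(x)} - M^{(x)}$.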
We can thus plug (\[toto6\]) and (\[toto4\]) into (\[Yn>\]), to see that for $j\in [j_0, n]$, $m\le j^{(\kappa-1-\varepsilon)/2}$ and $r>0$, $$\P \left\{ Y_{j, \theta} > (1-2\varepsilon) r\right\} \ge \P \left\{ M_\infty > r\right\} - {c_{43}\, m (\E Z_{j, \theta})^{\kappa-1- \varepsilon} \over \varepsilon r} - {2c_{44} \, \deg^m [\E(A^a)]^m \over \varepsilon^a r^a} . \label{Yn-lb}$$ We choose $m:= \lfloor j^\varepsilon \rfloor$. Since $a\in (1, \, \kappa)$, we have $\deg \E(A^a) <1$, so that $\deg^m [\E(A^a)]^m \le \exp( - j^{\varepsilon/2})$ for all large $j$. We choose $r= {1\over (\E Z_{j, \theta})^{1- \delta}}$, with $\delta := {4\kappa \varepsilon \over \kappa -1}$. In view of (\[M-tail\]), we obtain: for $j\in [j_0, n]$, $$\P \left\{ Y_{j, \theta} > {1-2\varepsilon\over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge c_{23} \, (\E Z_{j, \theta})^{(1- \delta) \kappa} - {c_{43}\over \varepsilon} \, j^\varepsilon\, (\E Z_{j, \theta})^{\kappa-\varepsilon-\delta} - {2c_{44} \, (\E Z_{j, \theta})^{(1- \delta)a} \over \varepsilon^a \exp(j^{\varepsilon/2})} .$$ Since $c_{14}/j^{1/(\kappa-1- \varepsilon)} \le \E(Z_{j, \theta}) \le c_{35}/j^{1/2}$ (see (\[ell\]) and (\[53\]), respectively), we can pick $\varepsilon$ sufficiently small, so that for $j\in [j_0, n]$, $$\P \left\{ Y_{j, \theta} > {1-2\varepsilon\over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge {c_{23} \over 2} \, (\E Z_{j, \theta})^{(1-\delta) \kappa}.$$ Recall that by definition, $Y_{j, \theta} = {Z_{j, \theta} \over \E(Z_{j, \theta})}$. Therefore, for $j\in [j_0, n]$, $$\E[(Z_{j, \theta})^2] \ge [\E Z_{j, \theta}]^2 \, {(1-2\varepsilon)^2\over (\E Z_{j, \theta})^{2(1- \delta)}} \P \left\{ Y_{j, \theta} > {1-2\varepsilon \over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge c_{45} \, (\E Z_{j, \theta})^{\kappa+ (2- \kappa)\delta}.$$ Of course, the inequality holds trivially for $0\le j < j_0$ (with possibly a different value of the constant $c_{45}$). 
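The second-moment bound just obtained feeds into a scalar recursion of the form $a_{j+1} \le \theta + a_j - c\, a_j^{\kappa'}$ (this is the shape handled by Lemma \[l:abc\]). Its effect can be illustrated numerically on the extreme case of the inequality; the constants and the exponent below are arbitrary choices, made only to display the polynomial decay of order $n^{-1/(\kappa'-1)}$ when $\theta=0$.

```python
def iterate_recursion(theta, c, kappa, n_steps, a0=1.0):
    """Iterate a_{j+1} = theta + a_j - c * a_j**kappa, the extreme case of
    the inequality; all constants here are illustrative, not those of the
    paper."""
    a = a0
    trajectory = [a]
    for _ in range(n_steps):
        a = theta + a - c * a ** kappa
        trajectory.append(a)
    return trajectory

traj = iterate_recursion(theta=0.0, c=0.5, kappa=1.5, n_steps=10000)
# With theta = 0, the ODE heuristic da/dn = -c * a**kappa suggests that a_n
# is of order (c*(kappa-1)*n)**(-1/(kappa-1)), i.e. roughly 16/n**2 here.
```

For $\theta>0$ the same iteration levels off at a positive value of order $\theta^{1/\kappa'}$, mirroring the two terms in the bound for $\E(Z_{n,\theta})$ derived next.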
Plugging this into (\[E(Z2)\]), we see that for $1\le j\le n-1$, $$\E(Z_{j+1, \theta}) \le \theta + \E(Z_{j, \theta}) - c_{46}\, (\E Z_{j, \theta})^{\kappa+ (2- \kappa)\delta} .$$ By Lemma \[l:abc\], this yields $\E(Z_{n, \theta}) \le c_{47} \, \{ \theta^{1/[\kappa+ (2- \kappa)\delta]} + n^{- 1/ [\kappa -1 + (2- \kappa)\delta]}\}$. An application of (\[Z=beta\]) implies the desired upper bound in (\[E(beta):kappa<2\]).$\Box$ [**Remark.**]{} A close inspection of our argument shows that under the assumptions $p= {1\over \deg}$ and $\psi'(1)<0$, we have, for any $1\le i \le \deg$ and uniformly in $\lambda \in [0, \, {1\over n}]$, $$\left( {\alpha_{n, \lambda}(e_i) \over \E[\alpha_{n, \lambda}(e_i)]} ,\; {\beta_{n, \lambda}(e_i) \over \E[\beta_{n, \lambda}(e_i)]} , \; {\gamma_n(e_i) \over \E[\gamma_n (e_i)]} \right) \; {\buildrel law \over \longrightarrow} \; (M_\infty, \, M_\infty, \, M_\infty),$$ where “${\buildrel law \over \longrightarrow}$” stands for convergence in distribution, and $M_\infty$ is the random variable defined in $(\ref{cvg-M})$.$\Box$ [**Acknowledgements**]{} We are grateful to Philippe Carmona and Marc Yor for helpful discussions. [99]{} Chernoff, H. (1952). A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. [*Ann. Math. Statist.*]{} [**23**]{}, 493–507. Duquesne, T. and Le Gall, J.-F. (2002). [*Random Trees, Lévy Processes and Spatial Branching Processes.*]{} Astérisque [**281**]{}. Société Mathématique de France, Paris. Griffeath, D. and Liggett, T.M. (1982). Critical phenomena for Spitzer’s reversible nearest particle systems. [*Ann. Probab.*]{} [**10**]{}, 881–895. Harris, T.E. (1963). [*The Theory of Branching Processes.*]{} Springer, Berlin. Hoel, P., Port, S. and Stone, C. (1972). [*Introduction to Stochastic Processes.*]{} Houghton Mifflin, Boston. Kesten, H., Kozlov, M.V. and Spitzer, F. (1975). A limit law for random walk in a random environment. [*Compositio Math.*]{} [**30**]{}, 145–168. 
Le Gall, J.-F. (2005). Random trees and applications. [*Probab. Surveys*]{} [**2**]{}, 245–311. Liggett, T.M. (1985). [*Interacting Particle Systems.*]{} Springer, New York. Liu, Q.S. (2000). On generalized multiplicative cascades. [*Stoch. Proc. Appl.*]{} [**86**]{}, 263–286. Liu, Q.S. (2001). Asymptotic properties and absolute continuity of laws stable by random weighted mean. [*Stoch. Proc. Appl.*]{} [**95**]{}, 83–107. Lyons, R. and Pemantle, R. (1992). Random walk in a random environment and first-passage percolation on trees. [*Ann. Probab.*]{} [**20**]{}, 125–136. Lyons, R. and Peres, Y. (2005+). [*Probability on Trees and Networks.*]{} (Forthcoming book) [http://mypage.iu.edu/\~rdlyons/prbtree/prbtree.html]{} Mandelbrot, B. (1974). Multiplications aléatoires itérées et distributions invariantes par moyenne pondérée aléatoire. [*C. R. Acad. Sci. Paris*]{} [**278**]{}, 289–292. Menshikov, M.V. and Petritis, D. (2002). On random walks in random environment on trees and their relationship with multiplicative chaos. In: [*Mathematics and Computer Science II (Versailles, 2002)*]{}, pp. 415–422. Birkhäuser, Basel. Pemantle, R. (1995). Tree-indexed processes. [*Statist. Sci.*]{} [**10**]{}, 200–213. Pemantle, R. and Peres, Y. (1995). Critical random walk in random environment on trees. [*Ann. Probab.*]{} [**23**]{}, 105–140. Pemantle, R. and Peres, Y. (2005+). The critical Ising model on trees, concave recursions and nonlinear capacity. [ArXiv:math.PR/0503137.]{} Peres, Y. (1999). Probability on trees: an introductory climb. In: [*École d’Été St-Flour 1997*]{}, Lecture Notes in Mathematics [**1717**]{}, pp. 193–280. Springer, Berlin. Petrov, V.V. (1995). [*Limit Theorems of Probability Theory.*]{} Clarendon Press, Oxford. Rozikov, U.A. (2001). Random walks in random environments on the Cayley tree. [*Ukrainian Math. J.*]{} [**53**]{}, 1688–1702. Sinai, Ya.G. (1982). The limit behavior of a one-dimensional random walk in a random environment. [*Theory Probab. 
Appl.*]{} [**27**]{}, 247–258. Sznitman, A.-S. (2005+). Random motions in random media. (Lecture notes of minicourse at Les Houches summer school.) [http://www.math.ethz.ch/u/sznitman/]{} Zeitouni, O. (2004). Random walks in random environment. In: [*École d’Été St-Flour 2001*]{}, Lecture Notes in Mathematics [**1837**]{}, pp. 189–312. Springer, Berlin.

Yueyun Hu, Département de Mathématiques, Université Paris XIII, 99 avenue J-B Clément, F-93430 Villetaneuse, France

Zhan Shi, Laboratoire de Probabilités et Modèles Aléatoires, Université Paris VI, 4 place Jussieu, F-75252 Paris Cedex 05, France
{ "pile_set_name": "arxiv" }
"---\nabstract: 'This issue of *Statistical Science* draws its inspiration from the work of James M.(...TRUNCATED)
{ "pile_set_name": "arxiv" }
"1911 North East Cork by-election\n\nThe North East Cork by-election of 1911 was held on 15 July 191(...TRUNCATED)
{ "pile_set_name": "wikipedia_en" }
"---\nabstract: 'We report the growth of Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ superconducting single crystal(...TRUNCATED)
{ "pile_set_name": "arxiv" }
"Teresa Carlson\n\nTeresa Carlson is the current vice president for Amazon Web Services' worldwide p(...TRUNCATED)
{ "pile_set_name": "wikipedia_en" }
End of preview. Expand in Data Studio
README.md exists but content is empty.
Downloads last month
1