
The Annals of Probability 2004, Vol. 32, No. 4, 3024–3084. DOI: 10.1214/009117904000000748. © Institute of Mathematical Statistics, 2004.

arXiv:math/0302004v2 [math.PR] 6 Apr 2005

RANDOM WALKS ON SUPERCRITICAL PERCOLATION CLUSTERS

By Martin T. Barlow

University of British Columbia

We obtain Gaussian upper and lower bounds on the transition density $q_t(x,y)$ of the continuous time simple random walk on a supercritical percolation cluster $\mathcal{C}_\infty$ in the Euclidean lattice. The bounds, analogous to Aronson's bounds for uniformly elliptic divergence form diffusions, hold with constants $c_i$ depending only on $p$ (the percolation probability) and $d$. The irregular nature of the medium means that the bound for $q_t(x,\cdot)$ holds only for $t \ge S_x(\omega)$, where the constant $S_x(\omega)$ depends on the percolation configuration $\omega$.

0. Introduction. In this paper we study the simple random walk on the infinite component of supercritical bond percolation in the lattice $\mathbb{Z}^d$. We recall the definition of percolation [see Grimmett (1999)]: for edges $e = \{x,y\} \in \mathbb{E}_d = \{\{x,y\} : |x-y| = 1\}$ we have i.i.d. Bernoulli r.v. $\eta_e$, with $\mathbb{P}_p(\eta_e = 1) = p \in [0,1]$, defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P}_p)$. Edges $e$ with $\eta_e = 1$ are called open, and the open cluster $\mathcal{C}(x)$ that contains $x$ is the set of $y$ such that $x$ and $y$ are connected by an open path. It is well known that there exists $p_c \in (0,1)$ such that when $p > p_c$ there is a unique infinite open cluster, which we denote $\mathcal{C}_\infty = \mathcal{C}_\infty(\omega)$.

For each $\omega$ let $Y = (Y_t, t \ge 0, P_\omega^x, x \in \mathcal{C}_\infty)$ be the continuous time simple random walk (CTSRW) on $\mathcal{C}_\infty$; $Y$ is the process that waits an exponential mean 1 time at each vertex $x$ and then jumps along one of the open edges $e$ that contain $x$, with each edge chosen with equal probability. If we write $\nu_{xy}(\omega) = 1$ if $\{x,y\}$ is an open edge and 0 otherwise, and set $\mu(x) = \sum_y \nu_{xy}$, then $Y$ is the Markov process with generator

(0.1)   $\mathcal{L}_\omega f(x) = \mu(x)^{-1} \sum_y \nu_{xy}\,(f(y) - f(x)), \qquad x \in \mathcal{C}_\infty.$
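None of the following code appears in the paper; it is a minimal illustrative sketch of the objects just defined: i.i.d. Bernoulli bond percolation on a finite box of $\mathbb{Z}^2$, the open cluster $\mathcal{C}(x)$, and the CTSRW with generator (0.1). The box size, the value $p = 0.6$ and the time horizon are arbitrary choices for the demonstration.

```python
import random
from collections import defaultdict

random.seed(0)
N, p = 20, 0.6          # box {-N,...,N}^2 and edge-retention probability (illustrative)

# i.i.d. Bernoulli(p) bond percolation: keep each nearest-neighbor edge with prob. p
open_nbrs = defaultdict(list)
for x in range(-N, N + 1):
    for y in range(-N, N + 1):
        for dx, dy in ((1, 0), (0, 1)):          # each edge considered exactly once
            u, v = (x, y), (x + dx, y + dy)
            if abs(v[0]) <= N and abs(v[1]) <= N and random.random() < p:
                open_nbrs[u].append(v)
                open_nbrs[v].append(u)

def cluster(x0):
    """Open cluster C(x0): sites reachable from x0 along open edges (depth-first search)."""
    seen, stack = {x0}, [x0]
    while stack:
        u = stack.pop()
        for v in open_nbrs[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

C = cluster((0, 0))

def ctsrw(x0, horizon):
    """CTSRW of (0.1): wait an Exp(1) time, then jump along a uniformly chosen open edge."""
    t, x, path = 0.0, x0, [x0]
    while t < horizon and open_nbrs[x]:
        t += random.expovariate(1.0)             # exponential mean-1 holding time
        x = random.choice(open_nbrs[x])          # uniform over the mu(x) open edges at x
        path.append(x)
    return path

path = ctsrw((0, 0), horizon=50.0)
```

Since the walk only moves along open edges, every visited site lies in the open cluster of the starting point.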

Received January 2003; revised January 2004. Supported in part by an NSERC (Canada) grant, by CNRS (France) and by the Centre Borel (Paris). AMS 2000 subject classifications. Primary 60K37; secondary 58J35. Key words and phrases. Percolation, random walk, heat kernel.


This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in The Annals of Probability, 2004, Vol. 32, No. 4, 3024–3084. This reprint differs from the original in pagination and typographic detail.


A number of papers have studied this process or the closely related discrete time random walk $X = (X_n, n \ge 0, P_\omega^x, x \in \mathcal{C}_\infty)$. De Gennes (1976) discussed the link between the behavior of $X$ and resistance properties of $\mathcal{C}_\infty$, and coined the term "the ant in the labyrinth" to describe its motion. It is believed that, in all dimensions, there is no infinite cluster if $p = p_c$ (this is known to be true if $d = 2$ or $d \ge 19$). Kesten (1986a) defined the incipient infinite cluster when $d = 2$, and proved [Kesten (1986b)] that the random walk on this set has subdiffusive behavior. For the supercritical case $p > p_c$, $Y$ is expected to have long time behavior similar to the random walk on $\mathbb{Z}^d$, and this has been confirmed in several ways. De Masi, Ferrari, Goldstein and Wick (1989) proved an invariance principle ($d = 2$), while Grimmett, Kesten and Zhang (1993) proved that $Y$ is recurrent if $d = 2$ and transient when $d \ge 3$. More recent papers have studied the transition density of $Y$ with respect to $\mu$:

(0.2)   $q_t^\omega(x,y) = q_t(x,y) = P_\omega^x(Y_t = y)\,\mu(y)^{-1}.$

Theorem 1.2 of Mathieu and Remy (2004) gives the (quenched) bound

$q_t^\omega(0,y) \le c_1(p,d)\, t^{-d/2}, \qquad t \ge T_0(\omega),\ y \in \mathcal{C}_\infty,$

$\mathbb{P}_p$-a.s. on the set $\{\omega : 0 \in \mathcal{C}_\infty\}$, while a similar (annealed) estimate was given by Heicklen and Hoffman (2000), but with an extra logarithmic factor. The main result of this paper is the following two-sided bound on $q_t$.

Theorem 1. Let $p > p_c$. There exist $\Omega_1 \subset \Omega$ with $\mathbb{P}_p(\Omega_1) = 1$ and r.v. $S_x$, $x \in \mathbb{Z}^d$, such that $S_x(\omega) < \infty$ for each $\omega \in \Omega_1$, $x \in \mathcal{C}_\infty(\omega)$. There exist constants $c_i = c_i(d,p)$ such that for $x, y \in \mathcal{C}_\infty(\omega)$, $t \ge 1$ with

(0.3)   $S_x(\omega) \vee |x-y|_1 \le t,$

the transition density $q_t^\omega(x,y)$ of $Y$ satisfies

(0.4)   $c_1 t^{-d/2} \exp(-c_2|x-y|_1^2/t) \le q_t^\omega(x,y) \le c_3 t^{-d/2} \exp(-c_4|x-y|_1^2/t).$

Remarks. 1. The usual graph distance on $\mathbb{Z}^d$ is denoted $|x-y|_1 = \sum_{i=1}^d |x_i - y_i|$.

2. The CTSRW on $\mathbb{Z}^d$ (i.e., $p = 1$) satisfies these bounds with $S_x \equiv 1$. For $|x-y|_1 > t$, we have bounds which, up to constants, depend only on the tail of the Poisson distribution and not on the geometry of $\mathcal{C}_\infty$; see Lemma 1.1.

3. If $G$ is any finite graph which can be embedded in $\mathbb{Z}^d$, then $\mathcal{C}_\infty$ contains infinitely many copies of $G$ (attached at one point to the rest of $\mathcal{C}_\infty$). Since (0.4) does not hold uniformly for all such graphs, it is clear that we cannot expect (0.4) for all $x, y, t$ with $|x-y|_1 \le t$. This irregularity of $\mathcal{C}_\infty$ is taken care of by the random variable $S_x$; after an initial period of possible bad behavior, $q_t(x,\cdot)$ settles down to a distribution with Gaussian tails.


Fig. 1. Bond percolation with p = 0.53. The vertices in the largest open cluster are marked with black circles.

4. In (0.4) we can replace $|x-y|_1$ by the graph distance in $\mathcal{C}_\infty$ and, in fact, this is the result that we prove first (Proposition 6.1).

5. The constants $c_i$ are nonrandom, and depend only on $p$ and $d$. For $p$ sufficiently close to 1 it would, in principle, be possible to estimate them by careful tracking of the various constants in this paper. However, for general $p \in (p_c, 1)$ the constants arise in a noneffective fashion: if $p > p_c$, then we know that certain kinds of good behavior occur in cubes of side $k \ge k_0(p,d)$, but have no control on $k_0$. The constants $c_i$ then depend on $k_0$.

6. The tail of the random variable $S_x$ satisfies

(0.5)   $\mathbb{P}_p(x \in \mathcal{C}_\infty,\ S_x \ge n) \le c \exp(-c' n^{\varepsilon_d});$

see Lemma 2.24 and Section 6.


7. Similar bounds hold for the discrete time r.w. $X$. The proofs (which are not given here) run along the same lines, but with some extra (mainly minor) difficulties due to the discrete time. Since $\mathbb{Z}^d$ and $\mathcal{C}_\infty$ are bipartite, we need to replace $q_t(x,y)$ in (0.4) with $p_{n-1}(x,y) + p_n(x,y)$.

8. It seems very likely that this theorem holds for other lattices in $\mathbb{Z}^d$ and that the "random walk" part of the proofs below transfers easily to other situations, but many of the percolation estimates that we use have been proved only for the Euclidean lattice.

Similar estimates hold in the annealed case.

Theorem 2. Let $p > p_c$. There exist constants $c_i = c_i(d,p)$ such that for $x, y \in \mathbb{Z}^d$, $t \ge 1$ with $|x-y|_1 \le t$,

(0.6)   $c_1 t^{-d/2} \exp(-c_2|x-y|_1^2/t) \le \mathbb{E}_p(q_t^\omega(x,y) \mid x, y \in \mathcal{C}_\infty) \le c_3 t^{-d/2} \exp(-c_4|x-y|_1^2/t).$

An immediate consequence of Theorem 1 is that $Y$ is transient if and only if $d \ge 3$, but of course this was already known from Grimmett, Kesten and Zhang (1993). As an example of a new application, the off-diagonal bounds in Theorem 1 enable us to control harmonic functions on $\mathcal{C}_\infty$. Write $d_\omega(x,y)$ for the graph distance on $\mathcal{C}_\infty$ and let $B_\omega(x,R) = \{y : d_\omega(x,y) < R\}$ for balls. A function $h : B_\omega(x_0, R+1) \to \mathbb{R}$ is harmonic on $B_\omega(x_0, R)$ if $\mathcal{L}h(x') = 0$, $x' \in B_\omega(x_0, R)$. We have the following Harnack inequality.

Theorem 3. Let $p > p_c$. There exist $c_1 = c_1(p,d)$, $\Omega_1 \subset \Omega$ with $\mathbb{P}_p(\Omega_1) = 1$ and $R_0(x,\omega)$ such that $R_0(x,\omega) < \infty$ for each $\omega \in \Omega_1$, $x \in \mathcal{C}_\infty$. If $R \ge R_0(x,\omega)$ and $h : B_\omega(x, 2R+1) \to (0,\infty)$ is a positive harmonic function on $B_\omega(x, 2R)$, then writing $B = B_\omega(x,R)$,

(0.7)   $\sup_B h \le c_1 \inf_B h.$

This leads immediately to the Liouville property for positive harmonic functions.

Theorem 4. (a) Let $h : \mathcal{C}_\infty \to \mathbb{R}$ be positive and harmonic on $\mathcal{C}_\infty$. Then $h$ is constant.

(b) Let $\mathcal{T}$ denote the tail σ-field of $Y$. There exists $\Omega_2$ with $\mathbb{P}_p(\Omega_2) = 1$ such that for each $\omega \in \Omega_2$ and for each $F \in \mathcal{T}$, either $P_\omega^x(F) = 0$ for all $x \in \mathcal{C}_\infty(\omega)$ or else $P_\omega^x(F) = 1$ for all $x \in \mathcal{C}_\infty(\omega)$.

Remark. The Liouville property for bounded harmonic functions on $\mathcal{C}_\infty$ is already known; see Kaimanovitch (1990) and Lemma 4.6 of Benjamini, Lyons and Schramm (1999).


We can also use Theorem 1 to estimate $E_\omega^x|Y_t - x|^2$.


Theorem 5. Let $p > p_c$. There exist $\Omega_1 \subset \Omega$ with $\mathbb{P}_p(\Omega_1) = 1$ and r.v. $S'_x$, $x \in \mathbb{Z}^d$, such that $S'_x(\omega) < \infty$ for each $\omega \in \Omega_1$, $x \in \mathcal{C}_\infty(\omega)$. There exist constants $0 < c_5 \le c_6$ such that for $x \in \mathcal{C}_\infty(\omega)$, $t \ge S'_x$,

(0.8)   $c_5 t \le E_\omega^x|Y_t - x|^2 \le c_6 t.$

This result implies that the (annealed) invariance principle, proved in De Masi, Ferrari, Goldstein and Wick (1989) for $d = 2$, can be extended to $d \ge 3$. See Sidoravicius and Sznitman (2003) for a discussion of this and for a much more delicate quenched invariance principle when $d \ge 4$.

The proof of Theorem 1 breaks into two fairly distinct parts. First (Section 2) we prove suitable geometric and analytic properties of $\mathcal{C}_\infty$. Then (Sections 3–5) we use "heat kernel" techniques to obtain the estimate (0.4). These techniques originate in the work of De Giorgi, Moser and Nash on divergence form elliptic equations; more recently they have been employed to study random walks on graphs. While they have been very successful in a wide variety of algebraic and geometric contexts, this has almost always been in circumstances in which the same regularity condition holds for all balls of a given size $r$. A guide to the kind of properties we need is given by the following theorem from Delmotte, which is a translation to graphs of results from Grigor'yan (1992) and Saloff-Coste (1992) on manifolds. (The version given here has been adapted to the CTSRW on a positive density subgraph $G$ of $\mathbb{Z}^d$.)

Theorem A [see Delmotte (1999)]. Let $\mathcal{G} = (G, E)$ be a subgraph of $\mathbb{Z}^d$ with graph distance $d$. Let $c_0$, $c_1$ and $c_2$ be positive constants. Suppose that $\mathcal{G}$ satisfies the following two conditions.

(a) For all balls $B(x,R)$ in $\mathcal{G}$,

$(V_d)$   $c_0 R^d \le \mu(B(x,R)) \le c_1 R^d.$

(b) For any ball $B = B(x,R)$ and function $f : B \to \mathbb{R}$, writing $\bar f_B = \sum_{x \in B} f(x)\mu(x)/\mu(B)$,

(PI)   $\sum_{x \in B} (f(x) - \bar f_B)^2 \mu(x) \le c_2 R^2 \sum_{x \in B} \sum_{y \in B,\, y \sim x} (f(x) - f(y))^2.$

Then the transition density $q_t(x,y)$ of $Y$ satisfies, for $t \ge d(x,y) \vee 1$,

(0.9)   $c_3 t^{-d/2} \exp(-c_4 d(x,y)^2/t) \le q_t(x,y) \le c_5 t^{-d/2} \exp(-c_6 d(x,y)^2/t).$


The first condition is one of regular volume growth of balls in $\mathcal{G}$, and can be replaced by a more general "volume doubling" condition. The second is a family of Poincaré or spectral gap inequalities for $\mathcal{G}$.

This theorem suggests that to prove Theorem 1, we should obtain volume growth and Poincaré inequalities for $\mathcal{C}_\infty$. Some results of this kind are in the literature; see Pisztora (1996) and Deuschel and Pisztora (1996) for volume growth estimates, and Benjamini and Mossel (2003) for a Poincaré inequality. These results show that for fixed $x \in \mathcal{C}_\infty$ the probability that $(V_d)$ and (PI) hold for a ball $B_\omega(x,R)$ increases to 1 as $R \to \infty$. However, on its own this is not enough to give Theorem 1. There are now several proofs of Theorem A [Delmotte (1999) used Moser iteration to prove a parabolic Harnack inequality], but all involve iterative methods or differential inequalities which use $(V_d)$ and (PI) for many balls of different sizes.

The exact definition of "good" and "very good" balls is given in Definition 1.7, but roughly speaking we say a ball $B_\omega(y,r)$ is good if $(V_d)$ and (PI) hold, and a ball $B_\omega(x,R)$ is very good if all subballs $B_\omega(y,r) \subset B_\omega(x,R)$ are good for $r \ge R^{1/(d+2)}$. We need to prove that all sufficiently large balls $B_\omega(x_0,R)$ (centered at a fixed $x_0$) are very good, and to do this we have to extend some of the estimates in the literature. This is done in Section 2. The estimates in Pisztora (1996) and Deuschel and Pisztora (1996) are enough for the volume growth bounds, but more work is needed for the Poincaré inequality. As in Benjamini and Mossel (2003), we prove this from an isoperimetric inequality, which was obtained by Benjamini and Mossel (2003) and Mathieu and Remy (2004). We use the methods of those papers, but the need for better control of the probabilities means that we have to rework some of these arguments to identify more precisely the set of percolation configurations $\omega$ for which a ball is good or very good.
If $p$ is sufficiently close to 1, then a fairly direct counting argument [see Benjamini and Mossel (2003) and Mathieu and Remy (2004)] is all that is needed, but for general $p > p_c$ we have to use a renormalization argument, as in Antal and Pisztora (1996). In this paper we follow quite closely the approach of Mathieu and Remy (2004); there is a gap in the renormalization argument of Benjamini and Mossel (2003). While percolation arguments generally use cubes in $\mathbb{Z}^d$, the heat kernel estimates work most naturally if we use the "chemical" or graph distance $d_\omega(x,y)$ on $\mathcal{C}_\infty$. We can compare these two metrics using the main theorem of Antal and Pisztora (1996).

Many of the methods used to derive (0.9) from $(V_d)$ and (PI) are not suitable for the percolation context. For example, Saloff-Coste (1992) proved that (PI) and $(V_d)$ imply a Nash estimate: for $f : G \to \mathbb{R}$,

(N)   $\| |\nabla f| \|_2^2 \ge c\, \|f\|_2^{2+4/d}\, \|f\|_1^{-4/d},$

RANDOM WALKS AND PERCOLATION

7

and Carlen, Kusuoka and Stroock (1987) proved that (N) is equivalent to

(0.10)   $q_t(x,y) \le c' t^{-d/2} \qquad \text{for all } x, y \in G,\ t \ge 1.$

However, since $\mathcal{C}_\infty$ contains copies of $\{0,\dots,n\}$ for all $n$, (0.10) is clearly false for $\mathcal{C}_\infty$. More generally, we cannot use any method which relies on global Sobolev inequalities; it is necessary to use local methods. [One approach here might be that of rooted or anchored isoperimetric inequalities as in Thomassen (1992) or Benjamini, Lyons and Schramm (1999). However, the correct extension to Nash or Sobolev inequalities has not yet been made.] In Section 3 we obtain an initial global upper bound on $q_t$ using the Poincaré inequality directly, following the approach of Kusuoka and Zhou (1992). We can then obtain the off-diagonal upper bound in (0.4) using a method of Nash (1958), Bass (2002) and Barlow and Bass (1989). We use the method of Fabes and Stroock (1986), also based on ideas of Nash, to obtain a local lower bound; that is, for $q_t(x,y)$ if $d(x,y) \le t^{1/2}$. However, a difficulty arises in extending this to prove (0.4) for points $x, y$ with $d_\omega(x,y) \approx t^{1-\varepsilon}$. The standard technique is chaining: using a sequence of small balls $B(z_i, r)$ that connect $x$ and $y$, and the Chapman–Kolmogorov equations. It turns out that we need to take $r \approx t/d_\omega(x,y)$, so we may need balls so small that we cannot be sure that they are very good. This problem is resolved by an additional percolation argument: for some fixed $r_1 = r_1(p,d) \gg 1$, we can show that the collection of good balls of size $r \ge r_1$ is large enough that a suitable chain $[B(z_i,r), 0 \le i \le m]$ of very good balls exists with $z_0$ close to $x$ and $z_m$ close to $y$ (see Theorems 2.23 and 5.4). This argument needs renormalization techniques, even if $p$ is close to 1.

Section 1 contains a brief account of various known facts on random walks on graphs which are used in the rest of the paper. The percolation arguments are given in Section 2. Sections 3–5 were written for a general graph $\mathcal{G}$ that satisfies appropriate volume growth and Poincaré inequalities, and can be read independently of Section 2.
Upper bounds on $q_t$ are obtained in Section 3. Section 4 proves a weighted Poincaré inequality from the unweighted (weak) Poincaré inequalities derived in Section 2, using methods of Saloff-Coste and Stroock (1991) and Jerison (1986). This is then used in Section 5 to obtain lower bounds on $q_t$. Section 6 then ties these results together and gives the proofs of Theorems 1–4.

We use $c_i$ to denote constants whose values are fixed within each argument and use $C_\cdot$ to denote constants fixed within each section; $c_{2.3.1}$ denotes the constant $c_1$ of Lemma 2.3, and $c$ and $c'$ are constants whose values may change on each appearance. The constants $C_i$, $c_i$, $c$ and $c'$ are always strictly positive. The notation $k = k(p,d)$ means that the constant $k$ depends only on $p$ and $d$.


1. Graphs and random walks. In this section we review some well-known facts concerning graphs, random walks, and isoperimetric (Cheeger) and Poincaré inequalities. Let $\mathcal{G} = (G, E)$ be an infinite, locally finite, connected graph. We define weights $\nu_{xy}$ by

$\nu_{xy} = \begin{cases} 1, & \text{if } \{x,y\} \in E, \\ 0, & \text{otherwise}, \end{cases}$

and set $\mu(x) = \sum_y \nu_{xy}$. We extend $\mu$ to a measure on $G$ and extend $\nu$ to a measure on $E$. Given a function $f : G \to \mathbb{R}$, we define $|\nabla f| : E \to \mathbb{R}$ by $|\nabla f|(e) = |f(x) - f(y)|$ if $e = \{x,y\}$. We write

$\int f = \int f\, d\mu = \sum_{x \in G} f(x)\mu(x), \qquad \int |\nabla f|^p = \int |\nabla f|^p\, d\nu = \sum_{e \in E} (|\nabla f|(e))^p \nu_e.$

Let $d(x,y)$ be the graph distance on $\mathcal{G}$ and define $B(x,r) = \{y : d(x,y) < r\}$. We assume we have a global upper bound on the size of balls: for any $x \in G$, $r \ge 1$,

(1.1)   $\mu(B(x,r)) \le C_0 r^d.$

Note that this implies that for each $x \in G$,

(1.2)   $1 \le \mu(x) \le C_0.$

Let $Y = (Y_t, t \ge 0, P^x, x \in G)$ be the continuous time random walk on $\mathcal{G}$; this is the Markov process with generator

(1.3)   $\mathcal{L}f(x) = \mu(x)^{-1} \sum_y \nu_{xy}\,(f(y) - f(x)).$

Thus $Y$ waits at $x$ for an exponential mean 1 random time and then moves to a neighbor of $x$ at random. We define the transition density of $Y$ with respect to $\mu$ (or heat kernel density) by $q_t(x,y) = \mu(y)^{-1} P^x(Y_t = y)$. Note that by (1.2) we have $q_t(x,y) \le 1$ for all $x, y, t$. Given any points $x_0, \dots, x_k \in G$ and times $t_1, \dots, t_k$, then by the Markov property, if $t = \sum_{i=1}^k t_i$,

(1.4)   $q_t(x_0, x_k) \ge \mu(x_k)^{-1} \prod_{i=1}^k P^{x_{i-1}}(Y_{t_i} = x_i) = \mu(x_k)^{-1} \prod_{i=1}^k q_{t_i}(x_{i-1}, x_i)\,\mu(x_i) \ge \prod_{i=1}^k q_{t_i}(x_{i-1}, x_i).$
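As a numerical sanity check of the chaining bound (1.4), one can compute $q_t$ exactly on a tiny graph and compare both sides. This sketch is not part of the paper; the path graph, the time values and the uniformization method (computing $e^{t(P-I)}$ as $e^{-t}\sum_k t^k P^k/k!$ for the jump matrix $P$) are choices made here for illustration.

```python
import numpy as np

# Path graph 0-1-2-3 with unit edge weights; mu(x) = degree.
E = [(0, 1), (1, 2), (2, 3)]
n = 4
nu = np.zeros((n, n))
for u, v in E:
    nu[u, v] = nu[v, u] = 1.0
mu = nu.sum(axis=1)

P = nu / mu[:, None]                 # jump matrix of the embedded discrete walk

def heat(t, terms=200):
    """P^x(Y_t = y) via uniformization: e^{t(P-I)} = e^{-t} sum_k t^k/k! P^k."""
    S, term = np.zeros((n, n)), np.eye(n)
    for k in range(terms):
        S += term
        term = term @ P * (t / (k + 1))
    return np.exp(-t) * S

def q(t):
    """Transition density w.r.t. mu: q_t(x,y) = P^x(Y_t = y) / mu(y)."""
    return heat(t) / mu[None, :]

# Chaining (1.4) with one intermediate point: q_{t1+t2}(0,2) >= q_{t1}(0,1) q_{t2}(1,2)
t1, t2 = 1.0, 2.0
lhs = q(t1 + t2)[0, 2]
rhs = q(t1)[0, 1] * q(t2)[1, 2]
```

The symmetry $q_t(x,y) = q_t(y,x)$, a consequence of reversibility with respect to $\mu$, can be checked on the same matrices.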

We begin by recalling some general bounds on qt . These are not given in full generality, but just as they apply to the situation here.


Lemma 1.1. Let $\mathcal{G}$ satisfy (1.1).

(a) There exist constants $c_i = c_i(d, C_0)$ such that, writing $D = d(x,y)$,

(1.5)   $q_t(x,y) \le c_1 \exp(-c_2 D^2/t), \qquad D \le t,$

(1.6)   $c \exp(-c_3 D(1 + \log(D/t))) \le q_t(x,y) \le c' \exp(-c_4 D(1 + \log(D/t))), \qquad D \ge t \ge 1,$

and

(1.7)   $\frac{c_3}{(t \log t)^{d/2}} \le q_t(x,x) \le \frac{c_4}{t^{1/2}}, \qquad t \ge 1.$

(b) If $d(x,y) \ge R \ge 2$ and $t \le c_5 R^2/\log R$, then

(1.8)   $q_t(x,y) \le c_6 t^{-d}.$

Proof. (a) See Corollaries 11 and 12 of Davies (1993) for (1.5) and (1.6). For the discrete time random walk $X$ on $\mathcal{G}$, the lower bound in (1.7) is immediate from Theorem 2.2 of Coulhon and Grigor'yan (2003) and (1.1), while the upper bound follows from Theorem 2.3 of Coulhon and Grigor'yan (2003) and the fact that $\mu(B(x,r)) \ge r$ for all $r \ge 1$. These bounds then transfer to $q_t$ by integration.

(b) If $t < c_5 R^2/\log R$, then $t \log t \le 2c_5 R^2$ provided $R \ge c_7$. Hence $t^{-d} \ge \exp(-2dc_5 R^2/t)$ and, taking $c_5$ sufficiently small, the bound (1.8) is an easy consequence of (1.5). If $R < c_8$, then $t \le c_9$ and (1.8) still holds on adjusting the constant $c_6$.

We now review some geometric and analytic inequalities on $\mathcal{G}$. Let $H \subset G$ be finite, write $E(H) = \{e = \{x,y\} : x, y \in H\}$ and call $\mathcal{H} = (H, E(H))$ the induced subgraph on $H$. We define the measures $\mu_0$ on $H$ and $\nu_0$ on $E(H)$ in the same way as $\mu$ and $\nu$ are defined for $\mathcal{G}$. Note that while $\nu_0$ and $\nu$ agree on $E(H)$, we have only

(1.9)   $\mu_0(A) \le \mu(A) \le C_0\, \mu_0(A), \qquad A \subset H.$

We now assume that $H$ is connected. For $A_1, A_2 \subset H$ let $\partial_E(A_1, A_2) = \{e = \{x,y\} : x \in A_1, y \in A_2\}$. Let

(1.10)   $i(A) = \frac{\nu(\partial_E(A, H-A))}{\mu_0(A)}, \qquad A \subset H,$

and define the isoperimetric constant

$I_H = \min_{0 < \mu_0(A) \le \mu_0(H)/2} i(A).$

Closely related is the Cheeger constant: let

$\chi(A) = \frac{\mu_0(H)\,\nu(\partial_E(A, H-A))}{\mu_0(A)\,\mu_0(H-A)}$

and define

(1.11)   $J_H = \min_{A \ne \varnothing, H} \chi(A).$
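The constants $i(A)$, $I_H$ and $J_H$ are finite minimizations and can be evaluated by brute force on a small graph. The following sketch is not from the paper; the 2×3 grid is an arbitrary test case, and the restricted constant computed over connected sets with connected complement anticipates the comparison in Lemma 1.3 below.

```python
import itertools

# Induced subgraph H: a 2x3 grid with unit edge weights, mu0(x) = degree within H.
nodes = [(i, j) for i in range(2) for j in range(3)]
edges = [(u, v) for u in nodes for v in nodes
         if u < v and abs(u[0] - v[0]) + abs(u[1] - v[1]) == 1]
deg = {v: sum(v in e for e in edges) for v in nodes}
mu0_H = sum(deg.values())

def connected(S):
    """True if the vertex set S induces a connected subgraph of H."""
    S = set(S)
    if not S:
        return False
    seen, stack = {min(S)}, [min(S)]
    while stack:
        u = stack.pop()
        for a, b in edges:
            if u in (a, b):
                w = b if u == a else a
                if w in S and w not in seen:
                    seen.add(w)
                    stack.append(w)
    return seen == S

def cut(S):
    """nu(boundary_E(A, H-A)): number of edges between A and its complement."""
    S = set(S)
    return sum((a in S) != (b in S) for a, b in edges)

I_H = J_H = I_tilde = float("inf")
for r in range(1, len(nodes)):
    for S in itertools.combinations(nodes, r):
        m = sum(deg[v] for v in S)
        J_H = min(J_H, mu0_H * cut(S) / (m * (mu0_H - m)))   # Cheeger ratio chi(A)
        if 2 * m <= mu0_H:
            I_H = min(I_H, cut(S) / m)                        # isoperimetric ratio i(A)
            if connected(S) and connected(set(nodes) - set(S)):
                I_tilde = min(I_tilde, cut(S) / m)            # restricted constant
```

For this example the comparisons $I_H \le \tilde I_H \le 2I_H$ and $I_H \le J_H \le 2I_H$ of Lemma 1.3 can be checked directly.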

Lemma 1.2. The minimum in (1.11) is attained by a set $A$ such that $A$ and $H-A$ are connected.

This is quite well known. For a proof, see, for example, Section 3.1 of Mathieu and Remy (2004).

Lemma 1.3 [see Mathieu and Remy (2004), Section 3.1, and Benjamini and Mossel (2003)]. Let $H$ be finite and connected.

(a) We have $I_H \ge 2/\mu_0(H)$.

(b) If $\tilde I_H = \min\{i(A) : 0 < \mu_0(A) \le \frac{1}{2}\mu_0(H),\ A$ and $H-A$ are connected$\}$, then

(1.12)   $I_H \le \tilde I_H \le 2 I_H.$

Proof. (a) Let $0 < \mu_0(A) \le \frac{1}{2}\mu_0(H)$. Then since $H$ is connected, $\partial_E(A, H-A)$ is nonempty, so $i(A) \ge 1/\mu_0(A) \ge 2/\mu_0(H)$.

(b) The left-hand bound in (1.12) is obvious. If $0 < \mu_0(A) \le \frac{1}{2}\mu_0(H)$, then since $1 \le \mu_0(H)/\mu_0(H-A) \le 2$ we have $i(A) \le \chi(A) \le 2i(A)$. This immediately implies that $I_H \le J_H \le 2I_H$. Let $A$ be a minimal set for $J_H$. By Lemma 1.2 we can assume that $A$ and $H-A$ are connected. We can also take $\mu_0(A) \le \frac{1}{2}\mu_0(H)$. Then $\tilde I_H \le i(A) \le \chi(A) = J_H \le 2I_H$, proving the right-hand bound in (1.12).

Proposition 1.4. Let $H \subset G$ be finite. Suppose that $I_H \ge \alpha^{-1}$.

(a) If $f : H \to \mathbb{R}$, then

$\min_a \int_H |f - a|^2\, d\mu_0 \le c_1 \alpha^2 \int_{E(H)} |\nabla f|^2\, d\nu.$

(b) If $f : H \to \mathbb{R}$, then

(1.13)   $\min_a \int_H |f - a|^2\, d\mu \le c_1 C_0 \alpha^2 \int_{E(H)} |\nabla f|^2\, d\nu.$


Proof. (a) This result is well known. For a recent proof, see Lemma 3.3.7 of Saloff-Coste (1997).

(b) Since (1.9) can be used to replace $\mu_0$ with $\mu$, this follows immediately from (a).

Inequality (1.13) is a Poincaré or spectral gap inequality for $H$. The minimum in (1.13) is of course attained by the value $a = \bar f_H = \int_H f\, d\mu/\mu(H)$. The following result is immediate from Lemma 1.3 and Proposition 1.4.

Corollary 1.5. Let $H \subset G$ be finite and connected, and let $P_H$ be the best constant in the Poincaré inequality (1.13).

(a) We have $P_H \le c\,\mu(H)^2$.

(b) If $i(A) \ge \alpha^{-1}$ for all $A \subset H$ such that $A$ and $H-A$ are connected, then $P_H \le c\alpha^2$.

We note the following discrete version of the Gauss–Green lemma.

Lemma 1.6. Let $f, g \in L^2(G, \mu)$. Then

(1.14)   $\sum_{x \in G} g(x)\,\mathcal{L}f(x)\,\mu(x) = -\frac{1}{2} \sum_x \sum_y (f(x) - f(y))(g(x) - g(y))\,\nu_{xy}.$
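The discrete Gauss–Green identity (1.14) is a finite computation, so it admits a direct numerical check. The following sketch is not part of the paper; the random 6-vertex graph (with a forced path so that every $\mu(x) \ge 1$) is an arbitrary test case.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random 0/1 symmetric edge weights nu_xy on 6 vertices
n = 6
nu = np.triu((rng.random((n, n)) < 0.6).astype(float), 1)
nu = nu + nu.T
nu[np.arange(n - 1), np.arange(1, n)] = 1.0   # force a path: every vertex has mu(x) >= 1
nu[np.arange(1, n), np.arange(n - 1)] = 1.0
mu = nu.sum(axis=1)

def L(f):
    """Generator (1.3): Lf(x) = mu(x)^-1 sum_y nu_xy (f(y) - f(x))."""
    return (nu @ f) / mu - f

f, g = rng.standard_normal(n), rng.standard_normal(n)

lhs = np.sum(g * L(f) * mu)                    # sum_x g(x) Lf(x) mu(x)
diff_f = f[:, None] - f[None, :]
diff_g = g[:, None] - g[None, :]
rhs = -0.5 * np.sum(diff_f * diff_g * nu)      # -(1/2) sum_{x,y} (f(x)-f(y))(g(x)-g(y)) nu_xy
```

Expanding the right-hand side and using the symmetry of $\nu$ reproduces the left-hand side exactly, which is what the check confirms up to floating-point error.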

In the sequel we need the following definitions.

Definition 1.7. Let $C_V$, $C_P$ and $C_W \ge 1$ be fixed constants. We say $B(x,r)$ is $(C_V, C_P, C_W)$-good if

(1.15)   $C_V r^d \le \mu(B(x,r))$

and the weak Poincaré inequality

(1.16)   $\int_{B(x,r)} (f - \bar f_{B(x,r)})^2\, d\mu \le C_P r^2 \int_{E(B(x, C_W r))} |\nabla f|^2\, d\nu$

holds for every $f : B(x, C_W r) \to \mathbb{R}$. We say $B(x,R)$ is $(C_V, C_P, C_W)$-very good if there exists $N_B = N_{B(x,R)} \le R^{1/(d+2)}$ such that $B(y,r)$ is good whenever $B(y,r) \subset B(x,R)$ and $N_B \le r \le R$. We can always assume that $N_B \ge 1$. Usually the values of $C_V$, $C_P$ and $C_W$ are clear from the context and we just use the terms "good" and "very good."
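For a concrete instance of Definition 1.7 (again an illustrative sketch rather than anything from the paper), one can compute the best constant in (1.16) with $C_W = 1$ for a ball in the full lattice $\mathbb{Z}^2$: it equals $1/(\lambda_1 r^2)$, where $\lambda_1$ is the second-smallest eigenvalue of the generalized problem $Kf = \lambda\,\mathrm{diag}(\mu)f$, with $K$ the Laplacian of the induced subgraph. The radius $r = 4$ and the sample function are arbitrary choices.

```python
import numpy as np

r = 4
pts = [(x, y) for x in range(-r, r + 1) for y in range(-r, r + 1)
       if abs(x) + abs(y) < r]                  # B(0,r) = {y : d(0,y) < r} in Z^2
idx = {v: k for k, v in enumerate(pts)}
n = len(pts)
mu = np.full(n, 4.0)                            # mu(x) = 4 for every site of Z^2

# Laplacian K of the induced subgraph: f^T K f = sum over E(B) of |grad f|^2
K = np.zeros((n, n))
for (x, y) in pts:
    for v in ((x + 1, y), (x, y + 1)):
        if v in idx:
            i, j = idx[(x, y)], idx[v]
            K[i, i] += 1; K[j, j] += 1
            K[i, j] -= 1; K[j, i] -= 1

# Symmetric similarity transform turns K f = lam diag(mu) f into an ordinary eigenproblem
Ms = np.diag(1.0 / np.sqrt(mu))
lam = np.linalg.eigvalsh(Ms @ K @ Ms)           # ascending; lam[0] ~ 0 (constants)
C_P = 1.0 / (lam[1] * r ** 2)                   # best constant in (1.16) with C_W = 1

# Check (1.16) on a sample function
f = np.array([np.cos(x) + 0.3 * y for (x, y) in pts])
fbar = (f * mu).sum() / mu.sum()
variance = ((f - fbar) ** 2 * mu).sum()
energy = f @ K @ f
```

Since the ball is connected, the zero eigenvalue is simple and $\lambda_1 > 0$, so the computed $C_P$ is finite and (1.16) holds with it for every $f$.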


2. Percolation estimates. We work with both bond and site percolation in $\mathbb{Z}^d$. We regard $\mathbb{Z}^d$ as a graph, with edge set $\mathbb{E}_d = \{\{x,y\} : |x-y| = 1\}$, and write $x \sim y$ to mean $\{x,y\} \in \mathbb{E}_d$. Given $Q \subset \mathbb{Z}^d$, define the internal and external boundaries of $A \subset Q$ by

$\partial_i(A|Q) = \{y \in A : y \sim x \text{ for some } x \in A^c \cap Q\},$

$\partial_e(A|Q) = \{y \in A^c \cap Q : y \sim x \text{ for some } x \in A\} = \partial_i(Q-A|Q).$

We begin with the notation for site percolation. Let $q \in (0,1)$ and $\Omega_s = \{0,1\}^{\mathbb{Z}^d}$, and define the coordinate maps $\zeta_x(\omega) = \omega(x)$. Let $\mathbb{Q}_q$ be the probability measure on $\Omega_s$ which makes the $\zeta_x$ i.i.d. Bernoulli r.v. with $\mathbb{Q}_q(\zeta_x = 1) = q$. We call those $x$ such that $\zeta_x = 1$ the open sites and write $\mathcal{O} = \mathcal{O}(\omega) = \{x : \zeta_x = 1\}$.

For $A \subset \mathbb{Z}^d$ we define the graph distance $d_A(x,y)$ to be the smallest $k$ such that there exists a path $\gamma = \{x_0, x_1, \dots, x_k\} \subset A$ with $x = x_0$, $x_k = y$ and $x_{i-1} \sim x_i$, $1 \le i \le k$. If there is no such $k$, then $d_A(x,y) = \infty$. [We have $d_A(x,x) = \infty$ if $x \notin A$.] We write $d_\omega(x,y) = d_{\mathcal{O}(\omega)}(x,y)$ and refer to a path $x = x_0, x_1, \dots, x_k = y$ such that each $x_i$ is open as an open path. We say $A$ is connected if $d_A(x,y) < \infty$ for all $x, y \in A$. Now write $\mathcal{C}(x) = \{y : d_\omega(x,y) < \infty\}$ for the connected open cluster that contains $x$. Write also, given $Q \subset \mathbb{Z}^d$, $\mathcal{C}_Q(x) = \{y : d_{Q \cap \mathcal{O}(\omega)}(x,y) < \infty\}$. This is the set of points connected to $x$ by an open path within $Q$. We call sets of the form $\mathcal{C}(x)$ open clusters and call the sets $\mathcal{C}_Q(x)$ open $Q$ clusters. It is well known that there exists $q_c = q_c(d) \in (0,1)$ such that if $q > q_c$, then $\mathbb{Q}_q$-a.s. there is a unique infinite open cluster $\mathcal{C}_\infty$. However, for site percolation we are interested only in the case when $q$ is either close to 1 or close to 0.

Given a cube $Q \subset \mathbb{Z}^d$, the set $Q \cap \mathcal{C}_\infty$ in general is not connected. We write $\mathcal{C}^\vee(Q)$ for the largest open $Q$ cluster. (We adopt some procedure for breaking ties.) Write $|x-y|_\infty = \max_i |x_i - y_i|$, let $E_d^* = \{\{x,y\} : |x-y|_\infty = 1\}$ and write $x \stackrel{*}{\sim} y$ if $\{x,y\} \in E_d^*$. We also need to consider site percolation in the graph $(\mathbb{Z}^d, E_d^*)$. We say that $A \subset \mathbb{Z}^d$ is $*$-connected if $A$ is connected in the graph $(\mathbb{Z}^d, E_d^*)$, and we define the clusters $\mathcal{C}^*(x)$ analogously.

Definition 2.1. 1. Let $Q$ be a cube of side $n$ in $\mathbb{Z}^d$. We write $s(Q) = n$ for the side length of $Q$. Let $Q^+ = A_1 \cap \mathbb{Z}^d$ and $Q^\oplus = A_2 \cap \mathbb{Z}^d$, where $A_1$ and $A_2$ are the cubes in $\mathbb{R}^d$ with the same center as $Q$ and with sides $\frac{3}{2}n$ and $\frac{6}{5}n$, respectively.


2. A cluster $C$ in a cube $Q$ is crossing for a cube $Q' \subset Q$ if, for all $d$ directions, there exists an open path in $C \cap Q'$ that connects the two opposing faces of $Q'$.

3. The diameter of a set $A$ is defined by $\mathrm{diam}(A) = \max\{|x-y|_\infty : x, y \in A\}$.

4. Given a set $A$, we write $|A|$ for the number of elements in $A$.

Remark. In the arguments in this section, we frequently need to assume that a cube $Q$ is sufficiently large. More precisely, we need that $s(Q) \ge n_1$, where $n_1 = n_1(d,p)$ is a constant that depends only on $d$ and $p$. We make this assumption in our proofs whenever necessary without stating it explicitly each time. Unless otherwise indicated, the statements of the results are true for all $n$; this can be ensured by adjusting the constants so that the result is automatic for small cubes.

Let $Q$ be a cube in $\mathbb{Z}^d$. Define the event

(2.1)   $K(Q,\lambda) = \left\{\omega : \mathcal{C}^\vee(Q) \text{ is crossing for } Q \text{ and } \frac{|\mathcal{C}^\vee(Q)|}{|Q|} > \lambda\right\}.$
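The objects entering the event $K(Q,\lambda)$ are easy to compute for a sample configuration. The following sketch is not from the paper (the side $n$, probability $q$ and seed are arbitrary): it extracts the largest open $Q$ cluster $\mathcal{C}^\vee(Q)$ of a site configuration in a square and tests whether it joins both pairs of opposite faces; since the cluster is connected, touching both faces of a pair is equivalent to containing a crossing path for $Q$ itself.

```python
import random

random.seed(3)
n, q = 30, 0.8                    # cube Q of side n in Z^2, site-open probability q

Q = [(x, y) for x in range(n) for y in range(n)]
is_open = {v: random.random() < q for v in Q}

def clusters():
    """Open Q clusters: components of the open sites under nearest-neighbor adjacency."""
    seen, comps = set(), []
    for v in Q:
        if is_open[v] and v not in seen:
            comp, stack = {v}, [v]
            seen.add(v)
            while stack:
                x, y = stack.pop()
                for w in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if w in is_open and is_open[w] and w not in seen:
                        seen.add(w)
                        comp.add(w)
                        stack.append(w)
            comps.append(comp)
    return comps

comps = clusters()
C_max = max(comps, key=len)       # the largest open Q cluster

# Does C_max join both pairs of opposite faces of Q?
xs = {x for (x, y) in C_max}
ys = {y for (x, y) in C_max}
crosses = (0 in xs and n - 1 in xs) and (0 in ys and n - 1 in ys)
```

Whether `crosses` holds and whether $|\mathcal{C}^\vee(Q)|/|Q| > \lambda$ depends on the random configuration; Lemma 2.2 below quantifies how likely $K(Q,\lambda)$ is for $q$ close to 1.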

The following estimate was proved in Theorem 1.1 of Deuschel and Pisztora (1996).

Lemma 2.2. Let $Q$ be a cube of side $n$ and $\lambda < 1$. Then there exist $q_0 = q_0(d,\lambda) < 1$ and $c_i = c_i(\lambda, d)$ such that if $q \in [q_0, 1)$, then

$\mathbb{Q}_q(K(Q,\lambda)^c) \le c_1 \exp(-c_2 n^{d-1}).$

Let $\Sigma = \{x \in \mathbb{Z}^d : x \sim 0\} \cup \{0\}$. For $\sigma \in \Sigma$ define the shifted set of open sites $\mathcal{O}_\sigma = \{x - \sigma : x \in \mathcal{O}\}$. Let

(2.2)   $\beta = 1 - 2(1+d)^{-1} < (d-1)/d.$

Set, for $r \in \mathbb{N}$ and $\varepsilon > 0$,

$F(Q, r, \sigma, \varepsilon) = \{\text{any } *\text{-connected set } A \subset Q \text{ with } |A| = r \text{ satisfies } |A \cap \mathcal{O}_\sigma| \ge (1-\varepsilon)|A|\},$

$F(Q, \varepsilon) = \bigcap_{r \ge s(Q)^\beta}\ \bigcap_{\sigma \in \Sigma} F(Q, r, \sigma, \varepsilon).$

Note that the event $F(Q, r, \sigma, \varepsilon)$ is increasing and that although $K(Q,\lambda)$ is not in general increasing, it is if $\lambda > \frac{1}{2}$. Recall the following bounds on the tail of the binomial.


Lemma 2.3. Let $X \sim \mathrm{Binomial}(n,p)$ and $\lambda \in (0,1)$. Then

$\mathbb{P}(X < \lambda n) \le e^{-n b(\lambda, p)},$

where $b(\lambda, p) \to \infty$ as $p \to 1$ for each fixed $\lambda \in (0,1)$.

Lemma 2.4 [see Grimmett (1999), Section 4.2]. The number of $*$-connected sets $A$ with $|A| = r$ containing a fixed point $x_0 \in \mathbb{Z}^d$ is bounded by $\exp(c_1 r)$.

Lemma 2.5. Let $\varepsilon \in (0,1)$ and let $Q$ be a cube of side $n$. There exists $q_1 = q_1(\varepsilon, d) > q_c$ such that if $q \ge q_1$, then

$\mathbb{Q}_q(F(Q,\varepsilon)^c) \le c\exp(-n^\beta).$

Proof. If $A$ is a fixed $*$-connected set in $Q$ with $|A| = r$, then by Lemma 2.3,

$\mathbb{Q}_q(|A \cap \mathcal{O}_\sigma| < (1-\varepsilon)|A|) \le e^{-r b(1-\varepsilon, q)}.$

Given $\varepsilon$, choose $q_1$ large enough so that $b(\lambda, q) > 2 + c_{2.4.1}$ for $\lambda \ge 1-\varepsilon$, $q \ge q_1$. Then since there are at most $(n+1)^d \exp(c_{2.4.1} r)$ $*$-connected sets in $Q$ of size $r$,

$\mathbb{Q}_q(F(Q, r, \sigma, \varepsilon)^c) \le (n+1)^d \exp(c_{2.4.1} r - r b(1-\varepsilon, q)) \le n^d e^{-2r}$

and, as $|\Sigma| = 2d+1$,

$\mathbb{Q}_q(F(Q,\varepsilon)^c) \le \sum_{r \ge n^\beta} (2d+1)(n+1)^d e^{-2r} \le c n^d \exp(-2n^\beta) \le c' \exp(-n^\beta).$

We collect from Deuschel and Pisztora (1996) the following results on the boundaries of discrete sets contained in cubes.

Lemma 2.6. Let $Q$ be a cube in $\mathbb{Z}^d$.

(a) Let $A \subsetneq Q$ be $*$-connected and let $\Lambda_j$, $1 \le j \le k$, be the connected components of $Q-A$. Then $\partial_i(\Lambda_j|Q)$ and $\partial_e(\Lambda_j|Q)$, $1 \le j \le k$, are $*$-connected.

(b) Let $A \subset Q$ with $|A| \le (15/16)|Q|$. Then

(2.3)   $|A| \le c_1 |\partial_i(A|Q)|^{d/(d-1)} \quad \text{and} \quad |A| \le c_1 |\partial_e(A|Q)|^{d/(d-1)}.$

Proof. Part (a) is Lemma 2.1(ii) of Deuschel and Pisztora (1996), while the discrete isoperimetric inequality (2.3) is assertion (A.3) on page 480 of Deuschel and Pisztora (1996).

The following result is based on ideas in Mathieu and Remy (2004). Recall from (1.10) the definition of $\partial_E(A_1, A_2)$.


Proposition 2.7. Let $\varepsilon < 1/(4d+2)$ and $\lambda \ge \frac{7}{8}$. Suppose that both $F(Q,\varepsilon)$ and $K(Q,\lambda)$ occur for a cube $Q$ with side $n$, and let $A \subset Q$ be connected.

(a) If $A \cap \mathcal{C}^\vee(Q) = \varnothing$, then $|A| < c n^{\beta d/(d-1)} < n$.

(b) If $|A| \le \frac{3}{4}|Q|$ and $A \cap \mathcal{C}^\vee(Q) \ne \varnothing$, then there exists $c_1$ such that

(2.4)   $|\partial_E(A \cap \mathcal{C}^\vee(Q), (Q-A) \cap \mathcal{C}^\vee(Q))| \ge c_1 n^{-1}|A|.$

Proof. We write $\mathcal{C}^\vee = \mathcal{C}^\vee(Q)$.

(a) Let $A_0$ be the connected component of $Q - \mathcal{C}^\vee$ that contains $A$. By Lemma 2.6, $\partial_i(A_0|Q)$ is $*$-connected. The definition of $A_0$ implies that $\partial_i(A_0|Q) \cap \mathcal{O} = \varnothing$. Since $F(Q,\varepsilon)$ occurs, this implies that $|\partial_i(A_0|Q)| < n^\beta$ [since otherwise $|\partial_i(A_0|Q) \cap \mathcal{O}| > 0$]. Hence by the discrete isoperimetric inequality (2.3),

$|A| \le |A_0| \le c|\partial_i(A_0|Q)|^{d/(d-1)} \le c n^{\beta d/(d-1)} < n.$

(b) Now let $A \cap \mathcal{C}^\vee \ne \varnothing$ and $|A| \le n$, so there exists $x \in A \cap \mathcal{C}^\vee$. Since $|\mathcal{C}^\vee| \ge \frac{7}{8} n^d$, there exists $y \in (Q-A) \cap \mathcal{C}^\vee$ and, since $\mathcal{C}^\vee$ is connected, there exists $\{x', y'\} \in \partial_E(A \cap \mathcal{C}^\vee(Q), (Q-A) \cap \mathcal{C}^\vee(Q))$. So

$|\partial_E(A \cap \mathcal{C}^\vee(Q), (Q-A) \cap \mathcal{C}^\vee(Q))| \ge 1 \ge n^{-1}|A|.$

It remains to consider the case $A \cap \mathcal{C}^\vee \ne \varnothing$, $|A| > n$. Let

$\Lambda_1 = \bigcup\{\mathcal{C}_Q(y) : y \in \partial_e(A|Q) \cap \mathcal{O} - \mathcal{C}^\vee\}, \qquad A_1 = A \cup \Lambda_1.$

Let $C_i$, $1 \le i \le k$, be the connected components of $Q - A_1$, let $\Lambda_2 = \bigcup\{C_i : C_i \cap \mathcal{C}^\vee = \varnothing\}$ and let $A_2 = A_1 \cup \Lambda_2$. It is clear from the construction of $A_1$ and $A_2$ as a union of connected sets, each connected to $A$ or $A_1$, that $A_2$ is connected. Note that since $\Lambda_1 \cup \Lambda_2 \subset Q - \mathcal{C}^\vee$ and $K(Q,\lambda)$ holds, we have $|A_2| \le |A| + |Q - \mathcal{C}^\vee| \le \frac{7}{8}|Q|$.

We now show that

(2.5)   $\partial_e(A|Q) \cap \mathcal{C}^\vee = \partial_e(A_2|Q) \cap \mathcal{O}.$

First let $y \in \partial_e(A|Q) \cap \mathcal{C}^\vee$, so that $y \sim x$ with $x \in A$. Since $y \in \mathcal{C}^\vee$, we cannot have $y \in \Lambda_1 \cup \Lambda_2$, so $y \notin A_2$ and thus $y \in \partial_e(A_2|Q) \cap \mathcal{O}$.

Now let $y \in \partial_e(A_2|Q) \cap \mathcal{O}$, so that $y \sim z$ for some $z \in A_2$. If $z \in \Lambda_2$, then $z \in C_i$ for some $i$. However, because $y \notin A_1$ and $y \sim z$, we would have $y \in C_i$, a contradiction; so $z \in A_1$. If $z \in \Lambda_1$, then $z \in \mathcal{C}_Q(x)$ for some $x \in \partial_e(A|Q) \cap \mathcal{O} - \mathcal{C}^\vee$. Again we would have $y \in \mathcal{C}_Q(x)$, a contradiction, so we must have $z \in A$ and, therefore, $y \in \partial_e(A|Q)$. If $y \notin \mathcal{C}^\vee$, then because $y \in \mathcal{O}$, $\mathcal{C}_Q(y)$ is included in $\Lambda_1$ and $y$ is in $A_1$. Hence we have $y \in \mathcal{C}^\vee$. This completes the proof of (2.5).

Let $\Gamma_1, \dots, \Gamma_l$ be the connected components of $Q - A_2$, arranged in decreasing order of size.


Case 1. Suppose that $|\Gamma_1| > \frac{1}{2}|Q|$. Let $A_3 = A_2 \cup (\bigcup_{j=2}^{l} \Gamma_j)$. By Lemma 2.6, $\partial_e(A_3|Q) = \partial_i(\Gamma_1|Q)$ is $*$-connected. We also have $\partial_e(A_3|Q) \subset \partial_e(A_2|Q)$. Note that since $n < |A| \le |A_3| \le \frac{1}{2}|Q|$, by the isoperimetric inequality,

$|\partial_e(A_3|Q)| \ge c_2 |A_3|^{1-1/d} \ge c_2 n^{1-1/d} > n^\beta.$

Since $F(Q,\varepsilon)$ holds, writing $\mathcal{O}' = \bigcap_{\sigma \in \Sigma} \mathcal{O}_\sigma$,

$|\partial_e(A_3|Q) \cap (\mathcal{O}')^c| \le \sum_{\sigma \in \Sigma} |\partial_e(A_3|Q) \cap (\mathcal{O}_\sigma)^c| \le (2d+1)\varepsilon\, |\partial_e(A_3|Q)|.$

We deduce

$|\partial_e(A_3|Q) \cap \mathcal{O}'| \ge (1 - (2d+1)\varepsilon)|\partial_e(A_3|Q)| \ge \tfrac{1}{2}|\partial_e(A_3|Q)|.$

If $y \in \partial_e(A_3|Q) \cap \mathcal{O}'$, then $y \in \partial_e(A_2|Q) \cap \mathcal{O}$ and therefore, by (2.5), $y \in \partial_e(A|Q) \cap \mathcal{C}^\vee$. So there exists $x \in A$ with $y \sim x$. Since $y \in \mathcal{O}'$, we have $x \in \mathcal{O}$ and thus $x \in \mathcal{C}^\vee$. Hence $\{x,y\} \in \partial_E(A \cap \mathcal{C}^\vee, (Q-A) \cap \mathcal{C}^\vee)$. Thus

$|\partial_E(A \cap \mathcal{C}^\vee, (Q-A) \cap \mathcal{C}^\vee)| \ge |\partial_e(A_3|Q) \cap \mathcal{O}'| \ge \tfrac{1}{2}|\partial_e(A_3|Q)| \ge \tfrac{1}{2}c_2|A_3|^{1-1/d} \ge \tfrac{1}{2}c_2 n^{-1}|A_3| \ge \tfrac{1}{2}c_2 n^{-1}|A|,$

proving (2.4) in this case.

Case 2. Suppose $|\Gamma_1| \le \frac{1}{2}|Q|$. Note that

$|\partial_E(A \cap \mathcal{C}^\vee, (Q-A) \cap \mathcal{C}^\vee)| = \sum_j |\partial_E(\Gamma_j \cap \mathcal{C}^\vee, (Q - \Gamma_j) \cap \mathcal{C}^\vee)|.$

We have

(2.6)   $|\partial_i(\Gamma_j|Q) \cap \mathcal{O}'| \le |\partial_E(\Gamma_j \cap \mathcal{C}^\vee, (Q - \Gamma_j) \cap \mathcal{C}^\vee)|.$

For if $y \in \partial_i(\Gamma_j|Q) \cap \mathcal{O}'$, then $y \in \Gamma_j$ and there exists $x \in A_2$ with $x \sim y$. So $y \in \partial_e(A_2|Q) \cap \mathcal{O}$ and hence $y \in \partial_e(A|Q) \cap \mathcal{C}^\vee$ by (2.5). Since $y \in \mathcal{O}'$, we have $x \in \mathcal{O}$, so that $x \in \mathcal{C}^\vee$ and hence $x \in A$, since $x$ cannot be in $\Lambda_1 \cup \Lambda_2$. Therefore $\{y,x\} \in \partial_E(\Gamma_j \cap \mathcal{C}^\vee, (Q - \Gamma_j) \cap \mathcal{C}^\vee)$.

Next, each $\Gamma_j$ intersects $\mathcal{C}^\vee$ by the construction of $A_2$. If $|\partial_i(\Gamma_j|Q)| \ge n^\beta$, then

$|\partial_i(\Gamma_j|Q) \cap \mathcal{O}'| \ge \tfrac{1}{2}|\partial_i(\Gamma_j|Q)| \ge c|\Gamma_j|^{1-1/d} \ge c' n^{-1}|\Gamma_j|.$

If $|\partial_i(\Gamma_j|Q)| < n^\beta$, then $|\Gamma_j| \le c n^{\beta d/(d-1)} < n$ and

$|\partial_E(\Gamma_j \cap \mathcal{C}^\vee, (Q - \Gamma_j) \cap \mathcal{C}^\vee)| \ge 1 \ge n^{-1}|\Gamma_j|.$


Combining these estimates we obtain

$|\partial_E(A \cap \mathcal{C}^\vee, (Q-A) \cap \mathcal{C}^\vee)| = \sum_j |\partial_E(\Gamma_j \cap \mathcal{C}^\vee, (Q - \Gamma_j) \cap \mathcal{C}^\vee)| \ge c n^{-1} \sum_j |\Gamma_j| = c n^{-1}|Q - A_2|.$

Since $|Q - A_2| \ge \tfrac{1}{8} n^d \ge c|A|$, this proves (2.4).

We now follow the renormalization argument of Mathieu and Remy (2004), which uses techniques introduced by Antal and Pisztora (1996). We introduce a second percolation process, which is bond percolation on (Zd , Ed ). Let p ∈ (0, 1): We set ?b = {0, 1}Ed , let ηe , e ∈ Ed , be the coordinate maps and let Pp on ?b be the probability measure which makes ηe i.i.d. Bernoulli r.v. with Pp (ηe = 1) = p. We call the edges e such that ηe = 1 open edges and we write OE = {e : ηe = 1}. An open path is any path γ = {x0 , . . . , xk } such that each {xi?1 , xi } ∈ OE . As for site percolation, we write dω (x, y) for the length of the shortest open path connecting x and y, and set dω (x, y) = ∞ if there is no such open path. For a set A we write dA (x, y) for the length of the shortest open path contained in A connecting x and y. Set C(x) = {y : dω (x, y) < ∞}. Let θ(p) = Pp (|C(0)| = ∞) and pc = inf{p : θ(p) > 0}. Then if p > pc , there exists Pp -a.s. a unique in?nite cluster. We always assume that p > pc . We de?ne C∞ , CQ (x) and C ∨ (Q) in the same way as for site percolation. Let n ≥ 16 be ?xed and let Q be a cube of side n. Recall the de?nition of + and crossing clusters from De?nition 2.1. We de?ne the event R (Q) in Q 0 (n) a similar fashion to the event Ri in Antal and Pisztora (1996) and we set R0 (Q) = {there exists a unique crossing cluster C in Q+ for Q+ , all open paths contained in Q+ of diameter greater than 1 n are connected to C 8 in Q+ and C is crossing for each cube Q′ ? Q with s(Q′ ) ≥ n/8}, R(Q) = R0 (Q) ∩ {C ∨ (Q) is crossing for Q} ∩ {C ∨ (Q+ ) is crossing for Q+ }. Note that if ω ∈ R(Q), then this forces C ∨ (Q) ? C ∨ (Q+ ) and that C ∨ (Q+ ) is the unique crossing cluster given by the event R0 (Q). Now let k ≥ 17 and consider a tiling of Zd by disjoint cubes (2.7) T (x) = {y ∈ Zd : xi ≤ yi < xi + k, 1 ≤ i ≤ d}

with side k − 1. Let

(2.8) ξ(x̃) = 1_{R(T(x̃))}.
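The objects introduced above — i.i.d. open edges, open clusters and the chemical distance dω — are easy to experiment with numerically. The following sketch is our illustration only, not part of the paper: it samples bond percolation on a finite box of Z² (where p_c = 1/2 is known) and computes dω(·, ·) from the origin by breadth-first search; all function names are ours.

```python
import random
from collections import deque

def sample_bond_config(n, p, rng):
    """i.i.d. Bernoulli(p) open/closed states for the edges of an n x n box in Z^2."""
    conf = {}
    for x in range(n):
        for y in range(n):
            if x + 1 < n:
                conf[((x, y), (x + 1, y))] = rng.random() < p
            if y + 1 < n:
                conf[((x, y), (x, y + 1))] = rng.random() < p
    return conf

def open_neighbours(v, conf):
    x, y = v
    for w in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        e = (v, w) if (v, w) in conf else (w, v)
        if conf.get(e):           # edge exists in the box and is open
            yield w

def chemical_distance(conf, src):
    """d_omega(src, .): graph distance along open edges only, via BFS."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in open_neighbours(v, conf):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist   # vertices absent from dist have d_omega = infinity

rng = random.Random(0)
conf = sample_bond_config(20, 0.7, rng)    # p = 0.7 > p_c(2) = 1/2, supercritical
dist = chemical_distance(conf, (0, 0))
```

Since every open path is in particular a lattice path, dω always dominates the ℓ¹ distance, which the output respects by construction.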


M. T. BARLOW

Lemma 2.8. (a) Let Q be a cube of side k − 1 and let p > p_c. There exists c1 = c1(p, d) such that

(2.9) P_p(R(Q)^c) ≤ c exp(−c1 k).

(b) The process (ξ_x̃, x̃ ∈ Z^d) dominates Bernoulli site percolation with parameter q*(k), where q*(k) → 1 as k → ∞.

Proof. (a) The bound P_p(R0(Q)^c) ≤ c exp(−c′n) follows from Theorem 3.1 of Pisztora (1996) for d ≥ 3 and from Theorem 5 of Penrose and Pisztora (1996) for d = 2. The estimate P_p(C∨(Q) is not crossing for Q) ≤ c exp(−c′n^{d−1}) follows from Theorem 1.2 of Pisztora (1996) (d ≥ 3) and Theorem 1 of Couronné and Messikh (2003) (d = 2).

(b) By (a) we have P_p(ξ(x̃) = 1) → 1 as k → ∞. The r.v. ξ(x̃) and ξ(ỹ) are independent if |x̃ − ỹ|∞ ≥ 3, so the result is immediate from Theorem 0.0 of Liggett, Schonmann and Stacey (1997). [We remark that using this theorem means that the events S_i^{(N)} defined in (2.5) of Antal and Pisztora (1996) are no longer needed.]

Recall the definition of q1(ε, d) from Lemma 2.5 and choose k0 = k0(p, d) large enough so that q*(k) ≥ q1(ε, d) for all k ≥ k0. We fix k = k0 and refer to the process ξ as the macroscopic percolation process. We write O and C(x̃) for the open sites and open clusters of the macroscopic process, and F for associated events.

Let Q̃ be a macroscopic cube of side m and associate with Q̃ the microscopic cube Q = ∪{T(x̃), x̃ ∈ Q̃}. Any (microscopic) cube which can be obtained in this way is called a special cube. Define T′(x̃) to be T(x̃) if x̃ is in the interior of Q̃ [so that T+(x̃) is in the interior of Q]. Otherwise, if x̃ ∈ ∂_i(Q̃|Z^d), let T′(x̃) be T(x̃) together with all points in T+(x̃) ∩ Q which are closer to T(x̃) than to any T(ỹ), ỹ ≠ x̃, ỹ ∈ Q̃. Thus T′(x̃) is a d-dimensional rectangle and Q is the disjoint union of the T′(x̃), x̃ ∈ Q̃. Note that each side of T′(x̃) (x̃ ∈ Q̃) is less than (3/2)(k0 − 1). Fix ε0 = (4d + 2)^{-1}.

Lemma 2.9. Let Q be a special cube with s(Q) = n and let Q̃ be the associated macroscopic cube. Suppose that m = s(Q̃) ≥ m0(k0) and that the event K(Q̃, 7/8) ∩ F(Q̃, ε0) holds for (ξ_x̃, x̃ ∈ Z^d). Then the cluster C∨(Q) satisfies

|C∨(Q)| ≥ c1 n^d, diam(C∨(Q)) = n.

RANDOM WALKS AND PERCOLATION


Proof. Write C_x̃ = C∨(T+(x̃)). If x̃, ỹ ∈ O ∩ Q̃ and x̃ ∼ ỹ, then the events R(T(x̃)) and R(T(ỹ)) force the clusters C_x̃ and C_ỹ to be connected. Thus there exists a Q-cluster C′ with ∪{C_x̃ : x̃ ∈ C̃∨(Q̃)} ⊂ C′. It follows immediately that |C′| ≥ |C̃∨(Q̃)| ≥ (7/8)m^d ≥ c1 n^d. Also, since C̃∨ is crossing for Q̃, we deduce that C′ is crossing for Q and diam(C′) = n.

It remains to prove that C′ = C∨(Q). Suppose not. Choose m0 so that ((3/2)k0)^d < (7/8)m^d; then the cluster C∨(Q) is not contained in any one cube T+(x̃). Let x̃ ∈ C̃∨(Q̃). Then C∨(Q) ∩ C_x̃ = ∅, and so C∨(Q) ∩ T+(x̃) consists of clusters of diameter less than k0/8. Since C∨(Q) contains points outside T+(x̃), it follows that C∨(Q) ∩ T(x̃) = ∅. So, if Γ̃1, …, Γ̃k are the connected components of Q̃ \ C̃∨(Q̃), we deduce that for some j, C∨(Q) ⊂ ∪{T′(x̃) : x̃ ∈ Γ̃j}. By Proposition 2.7(a), |Γ̃j| ≤ c2 m^{dβ/(d−1)}, and so if m0 is large enough,

|C∨| ≤ c2 ((3/2)k0)^d m^{dβ/(d−1)} < (7/8)m^d ≤ |C′|,

giving a contradiction. Thus C′ = C∨(Q).

Let Q be a special cube with s(Q) = n and let Q̃ be the associated macroscopic cube. For sets A1, A2 ⊂ Q, let

∂E(A1, A2|O_E) = {{x, y} ∈ O_E : x ∈ A1, y ∈ A2}.

Let A ⊂ C∨ = C∨(Q) and Γ = C∨ \ A both be connected. Set

Ã = {x̃ ∈ Q̃ : A ∩ T′(x̃) ≠ ∅}, Γ̃ = {x̃ ∈ Q̃ : Γ ∩ T′(x̃) ≠ ∅}.

Note that

(2.10) |A| ≥ |Ã| ≥ ((3/2)k)^{-d} |A|.

Since A and Γ are connected, it is clear that Ã and Γ̃ are connected. Let C̃∨ = C̃∨(Q̃) be the largest open (macroscopic) cluster in Q̃.

Lemma 2.10. Let Q be a special cube with s(Q) = n and let Q̃ be the associated macroscopic cube. Suppose that m = s(Q̃) ≥ m0(k0) and that the event K(Q̃, 7/8) ∩ F(Q̃, ε0) holds for (ξ_x̃, x̃ ∈ Z^d). Let A ⊂ C∨ and Γ = C∨ \ A be connected.

(a) Let x̃ ∼ ỹ with x̃ ∈ Ã, ỹ ∈ Q̃ \ Ã and x̃, ỹ ∈ O. Then the set T+(x̃) ∩ Q contains at least one edge in ∂E(A, Γ|O_E).


(b) Suppose x̃ ∈ Ã ∩ Γ̃ ∩ O. Then the set T+(x̃) ∩ Q contains at least one edge in ∂E(A, Γ|O_E).

Proof. As in Lemma 2.9, C∨ is not contained in any one cube T+(x̃). Let C_x̃ = C∨(T+(x̃)) be as in Lemma 2.9.

(a) Since x̃ ∈ Ã there exists x′ ∈ T(x̃) ∩ A, and since C∨ is connected there exists an open path γ ⊂ Q from x′ to Q ∩ T+(x̃)^c. This path has diameter greater than k0/3, so it is connected within T+(x̃) to C_x̃, and hence x′ is connected within T+(x̃) to C_ỹ. Choose y′ ∈ C_ỹ ∩ T(ỹ); then y′ ∈ C∨ but y′ ∉ A. There is thus an open path γ′ from x′ to y′, and it must contain at least one edge in ∂E(A, Γ|O_E).

(b) Let x′ ∈ A ∩ T′(x̃) and y′ ∈ Γ ∩ T′(x̃). Since x̃ ∈ O, both x′ and y′ are in C_x̃, and there therefore exists an open path in T+(x̃) between x′ and y′. As in (a), this path must contain at least one edge in ∂E(A, Γ|O_E).

Proposition 2.11. Let Q and Q̃ be as above with m = s(Q̃) ≥ m0. Suppose that the event K(Q̃, 7/8) ∩ F(Q̃, ε0) holds for (ξ_x̃, x̃ ∈ Z^d). Let A be a connected open subset of C∨(Q) with |A| ≤ (1/2)|C∨(Q)| and such that Γ = C∨(Q) \ A is also connected. Then

(2.11) |∂E(A, Γ|O_E)| ≥ c1 n^{-1} |A|.

Proof. We write C∨ = C∨(Q) and C̃∨ = C̃∨(Q̃). Since A ≠ C∨(Q) and C∨ is connected, we have |∂E(A, Γ|O_E)| ≥ 1, so (2.11) is immediate if |A| ≤ c1^{-1} n. Also, using Lemma 2.10(a), we have

(2.12) |∂E(A, Γ|O_E)| ≥ c2 |∂E(Ã ∩ O, (Q̃ \ Ã) ∩ O)|.

We consider four cases.

Case 1. Ã ∩ C̃∨ = ∅. Let Λ̃ be the connected component of Q̃ \ C̃∨ which contains Ã. By Proposition 2.7(a), |Λ̃| ≤ c m^{dβ/(d−1)}, so that

|A| ≤ ((3/2)k0)^d |Λ̃| ≤ c k0^d m^{dβ/(d−1)} ≤ n,

provided m0 is large enough. Hence (2.11) holds for A.

Case 2. Ã ∩ C̃∨ ≠ ∅ and |Ã| ≤ (3/4)|Q̃|. We apply Proposition 2.7(b) to Ã to see that

|∂E(Ã ∩ C̃∨, (Q̃ \ Ã) ∩ C̃∨)| ≥ c m^{-1} |Ã| ≥ c n^{-1} |A|,

and combining this with (2.12) proves (2.11).


Case 3. Ã ∩ C̃∨ ≠ ∅, |Ã| ≥ (3/4)|Q̃| and |Γ̃| ≤ (3/4)|Q̃|. Using Lemma 2.9 we have |Γ̃| ≥ ((3/2)k0)^{-d} |Γ| ≥ (1/2)((3/2)k0)^{-d} |C∨| ≥ c n^d. So by Proposition 2.7(a), Γ̃ ∩ C̃∨ ≠ ∅, and we can therefore apply Proposition 2.7(b) to Γ̃ to deduce

|∂E(Γ̃ ∩ C̃∨, (Q̃ \ Γ̃) ∩ C̃∨)| ≥ c m^{-1} |Γ̃| ≥ c m^{-1} n^d ≥ c n^{-1} |A|.

Using (2.12) (with A and Γ interchanged) we deduce that

|∂E(Γ, A|O_E)| ≥ c2 |∂E(Γ̃ ∩ O, (Q̃ \ Γ̃) ∩ O)| ≥ c2 |∂E(Γ̃ ∩ C̃∨, (Q̃ \ Γ̃) ∩ C̃∨)| ≥ c n^{-1} |A|.

Case 4. Ã ∩ C̃∨ ≠ ∅, |Ã| ≥ (3/4)|Q̃| and |Γ̃| ≥ (3/4)|Q̃|. Since A and Γ are both connected and F(Q̃, ε0) holds, we have

|Ã ∩ O| ≥ (1 − ε0)|Ã| > (2/3)|Q̃|, |Γ̃ ∩ O| ≥ (1 − ε0)|Γ̃| > (2/3)|Q̃|.

Hence |Ã ∩ Γ̃ ∩ O| ≥ (1/3)|Q̃|. So by Lemma 2.10(b),

|∂E(A, Γ|O_E)| ≥ c|Ã ∩ Γ̃ ∩ O| ≥ c′ m^d ≥ c′′ |A|,

which implies (2.11).

As in Section 1 we define

νxy = νxy(ω) = 1 if {x, y} is an open edge, and νxy = 0 otherwise,

μ(x) = μ(x)(ω) = Σ_y νxy, x ∈ Z^d,

and we extend ν and μ to measures. If f : A → R, we write

f̄_A = μ(A)^{-1} ∫_A f dμ.
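The measures ν and μ and the average f̄_A are finite weighted sums, so they can be checked on a toy configuration. The sketch below is ours, not the paper's; the edge set is an arbitrary hand-picked example. It computes μ, f̄_A, the weighted variance ∫_A (f − f̄_A)² dμ and the Dirichlet form Σ (f(x) − f(y))² over open edges — the two sides of Poincaré inequalities such as Proposition 2.12(b).

```python
# a small, hand-picked set of open edges in Z^2 (purely illustrative)
open_edges = {((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)),
              ((0, 1), (0, 0)), ((1, 1), (2, 1))}

def mu(x):
    """mu(x) = number of open edges containing x."""
    return sum(1 for e in open_edges if x in e)

def mean(f, A):
    """weighted average f_bar_A = mu(A)^{-1} * integral of f over A dmu."""
    tot = sum(mu(x) for x in A)
    return sum(f[x] * mu(x) for x in A) / tot

def variance(f, A):
    """integral over A of (f - f_bar_A)^2 dmu."""
    m = mean(f, A)
    return sum((f[x] - m) ** 2 * mu(x) for x in A)

def dirichlet(f):
    """sum over open edges of (f(x) - f(y))^2."""
    return sum((f[x] - f[y]) ** 2 for (x, y) in open_edges)

A = [(0, 0), (1, 0), (1, 1), (0, 1), (2, 1)]
f = {v: v[0] + v[1] for v in A}     # test function f(x) = x_1 + x_2
```

On this configuration mean(f, A) = 1.3, variance(f, A) = 8.1 and dirichlet(f) = 5, so the variance is bounded by a modest multiple of the Dirichlet form, as a Poincaré inequality with r of order the diameter would require.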

Let Q be a special cube and let Q̃ be the associated macroscopic cube. Define

H0(Q) = K(Q̃, 7/8) ∩ F(Q̃, ε0),

and extend the definition of H0(Q) to all cubes Q by taking H0(Q) = H0(Q′), where Q′ is the largest special cube contained in Q. [We set H0(Q) = ∅ if there is no such special cube.] Recall the definition of β from (2.2).

Proposition 2.12. Consider bond percolation ηe on (Z^d, E_d) with p > p_c. Let Q be a special cube of side n.


(a) There exist constants c1, c2 such that

(2.13) P_p(H0(Q)^c) ≤ c1 exp(−c2 n^β).

(b) If ω ∈ H0(Q) and f : C∨(Q)(ω) → R, then

∫_{C∨(Q)} (f(y) − f̄_{C∨(Q)})² dμ ≤ c3 n² ∫_{E(C∨(Q))} |∇f|² dν.

Proof. (a) To prove (2.13) note that

(2.14) P_p(H0(Q)^c) ≤ P_p(K(Q̃, 7/8)^c) + P_p(F(Q̃, ε0)^c).

As the events K(·, 7/8) and F(·, ·) are increasing, the two probabilities on the right-hand side of (2.14) are bounded by the probabilities of these events with respect to a Bernoulli site percolation process with parameter q* = q*(k0). Using Lemmas 2.2 and 2.5,

P_p(K(Q̃, 7/8)^c) + P_p(F(Q̃, ε0)^c) ≤ Q_{q*}(K(Q̃, 7/8)^c) + Q_{q*}(F(Q̃, ε0)^c) ≤ c exp(−c m^{d−1}) + c′ exp(−c′ m^β) ≤ c exp(−c′ n^β).

(b) This is immediate from Propositions 1.4 and 2.11.

Remark. See Section 3.2 of Mathieu and Remy (2004) for similar bounds in the case when p is close to 1.

Recall from Definition 2.1 that Q ⊂ Q⊕ ⊂ Q+. Let α ∈ (0, 1/2), let Q be a cube with s(Q) = n and set

(2.15) H(Q, α) = R(Q) ∩ {R(Q′) ∩ H0(Q′) occurs for every cube Q′ with (Q′)+ ⊂ Q+, Q′ ∩ Q⊕ ≠ ∅ and n^α ≤ s(Q′) ≤ n}.

Lemma 2.13. (a) For p > p_c, P_p(H(Q, α)^c) ≤ c exp(−c1 n^{αβ}).

(b) If ω ∈ H(Q, α) and Q0 ⊂ Q satisfies the condition in (2.15), then C∨(Q0) ⊂ C∨(Q+).

Proof. (a) By (2.9) and (2.13),

(2.16) P_p(H(Q, α)^c) ≤ Σ_{r=n^α}^{n} c n^d exp(−c1 r^β) ≤ c′ exp(−c2 n^{αβ}),


proving (a).

(b) Define a sequence of cubes Qi, 0 ≤ i ≤ k, by Qi+1 = Qi⊕, where we stop at the last cube Qk with Qk+ ⊂ Q+. The events R(Qi) then force C∨(Qi) ⊂ C∨(Qi+1), so that C∨(Q0) ⊂ C∨(Qk+). Since diam C∨(Qk+) = diam(Qk+) > n/8, the event R(Q) implies that C∨(Qk+) ⊂ C∨(Q+).

Proposition 2.12 and Lemma 2.13 complete our results on the Poincaré inequality in sets of the form C∨(Q). However, to obtain bounds (particularly lower bounds) on transition densities on C∞, we need to relate |x − y|1 to the shortest path (or chemical) metric dω on C∞. This was done in Theorem 1.1 of Antal and Pisztora (1996), but we need some minor extensions of their results and, as we wish to keep this paper as self-contained as possible, we repeat some of their constructions.

Let k ≥ 17 and recall the site process ξ(x̃) = 1_{R(T(x̃))} introduced in (2.8). Set ξ′(x̃) = 1 − ξ(x̃); since, by Lemma 2.8(b), ξ dominates Bernoulli site percolation with parameter q*(k), ξ′ is dominated by Bernoulli site percolation with parameter q′(k) = 1 − q*(k). We consider the clusters of the process ξ′ on the graph (Z^d, E*_d). Given a function ξ′ : Z^d → {0, 1}, we write O(ξ′) = {x̃ : ξ′(x̃) = 1}. If ξ′(x̃) = 0, we set C*(ξ′, x̃) = ∅, and if ξ′(x̃) = 1, we let C*(ξ′, x̃) be the *-connected component of O(ξ′) that contains x̃. Let

(2.17) D(ξ′, x̃) = C*(ξ′, x̃) ∪ ∂e(C*(ξ′, x̃)|Z^d) if ξ′(x̃) = 1, and D(ξ′, x̃) = {x̃} if ξ′(x̃) = 0.

For x, y ∈ Z^d let γ(x, y) be a shortest path [in (Z^d, E_d)] that connects x and y. Note that if x and y are contained in a cube Q, then γ(x, y) ⊂ Q. Set

(2.18) W(ξ′, x, y) = ∪_{z∈γ(x,y)} D(ξ′, z).

The importance of W comes from the following result, which is Proposition 3.1 of Antal and Pisztora (1996).

Proposition 2.14. Let k ≥ 17, and let x, y ∈ Z^d and x̃, ỹ ∈ Z^d be such that x ∈ T(x̃) and y ∈ T(ỹ). Suppose that [for the bond percolation process ηe on (Ω_b, P_p)] dω(x, y) < ∞. Then there exists an open path γ′(x, y) connecting x and y contained in

W′(x, y) = ∪_{z̃ ∈ W(ξ′, x̃, ỹ)} T+(z̃).

In particular, dω(x, y) ≤ |W′(x, y)| ≤ (3k)^d |W(ξ′, x̃, ỹ)|.

Proposition 2.15. Let p > p_c. There exist k1 = k1(p, d) and C_AP such that if k ≥ k1, then the following hold.


(a) If x, y ∈ Z^d with x ∈ T(x̃) and y ∈ T(ỹ), then

P_p(|W(ξ′, x̃, ỹ)| ≥ C_AP |x̃ − ỹ|1) ≤ c2 exp(−c3 |x̃ − ỹ|1).

(b) If x̃, ỹ ∈ Z^d and λ ≥ 0, then

P_p( max_{z̃∈γ(x̃,ỹ)} diam(D(ξ′, z̃)) > λ ) ≤ c4 |x̃ − ỹ|1 exp(−c5 λ).

Proof. (a) This is proved on page 1047 of Antal and Pisztora (1996).

(b) We choose k1 large enough so that q′(k) < q_c for all k ≥ k1. Since ξ′ is dominated by Bernoulli site percolation with parameter q′, we have

P_p(diam(D(ξ′, z̃)) > λ) ≤ Q_{q′}(2 + diam(C*(0)) > λ) ≤ c exp(−c′λ).

[For the second estimate above, see Theorem 5.4 of Grimmett (1999).] The bound in (b) is now immediate.

Now fix k1 as in Proposition 2.15 and set

(2.19) C_H = d C_AP (3k1)^d.

Let Q be a cube with side n. For x, y ∈ Q let

E(Q, x, y) = {x, y ∈ C∨(Q+) : d_{C∨(Q+)}(x, y) > C_H |x − y|∞}.

Lemma 2.16. Let p > p_c, let Q be a cube with side n and let x, y ∈ Q. Then

P_p(E(Q, x, y)) ≤ c exp(−c5 |x − y|∞).

Proof. Let x ∈ T(x̃) and y ∈ T(ỹ), and suppose x, y ∈ C∨(Q+). By Proposition 2.14 there exists an open path γ′ connecting x and y contained in W′(x, y). So if E(Q, x, y) holds, then either W′(x, y) is not contained in Q+ or

C_H |x − y|∞ < d_{C∨(Q+)}(x, y) ≤ (3k1)^d |W(ξ′, x̃, ỹ)|.

Thus

P_p(E(Q, x, y)) ≤ P_p(|W(ξ′, x̃, ỹ)| ≥ C_AP |x̃ − ỹ|1) + P_p( max_{z̃∈γ(x̃,ỹ)} diam(D(ξ′, z̃)) > n/(8k1) ) ≤ c exp(−c|x̃ − ỹ|1) + c|x̃ − ỹ|1 exp(−c n/k1) ≤ c exp(−c|x − y|∞).
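Lemma 2.16 says that on the large cluster the chemical distance is, with high probability, at most a constant multiple of the sup-norm distance. A quick numerical illustration (ours, not the paper's; deeply supercritical p = 0.75 on a small 2-d box, so only suggestive): find the largest open cluster, then compare dω to |·|∞ from one of its points.

```python
import random
from collections import deque

def percolation_box(n, p, rng):
    """open-edge indicators for bond percolation on an n x n box of Z^2."""
    E = {}
    for x in range(n):
        for y in range(n):
            if x + 1 < n:
                E[((x, y), (x + 1, y))] = rng.random() < p
            if y + 1 < n:
                E[((x, y), (x, y + 1))] = rng.random() < p
    return E

def bfs(E, src):
    """chemical distances from src along open edges."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        x, y = v
        for w in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            e = (v, w) if (v, w) in E else (w, v)
            if E.get(e) and w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

rng = random.Random(2)
n = 30
E = percolation_box(n, 0.75, rng)
# largest open cluster of the box, found by sweeping over all vertices
seen, clusters = set(), []
for v in ((x, y) for x in range(n) for y in range(n)):
    if v not in seen:
        comp = bfs(E, v)
        seen.update(comp)
        clusters.append(list(comp))
big = max(clusters, key=len)
# ratio of chemical to sup-norm distance from one point of the big cluster
v0 = big[0]
d = bfs(E, v0)
ratios = [d[w] / max(abs(w[0] - v0[0]), abs(w[1] - v0[1]))
          for w in big if w != v0]
```

Each lattice step changes the sup-norm position by at most 1, so every ratio is at least 1; the lemma asserts that, with overwhelming probability, the ratios are also bounded above by a constant C_H.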


Let Q be a cube with side n. Set

(2.20) D0(Q) = R(Q) ∩ {d_{C∨(Q+)}(x, y) ≤ C_H |x − y|∞ for all x, y ∈ C∨(Q+) ∩ Q with |x − y|∞ ≥ n/12}

and

(2.21) D(Q, α) = {D0(Q′) occurs for every cube Q′ with (Q′)+ ⊂ Q+, Q′ ∩ Q⊕ ≠ ∅ and n^α ≤ s(Q′) ≤ n}.

Let also Bω(y, r) = {x : dω(x, y) < r}. Since C∞ is embedded in Z^d, we have μ(Bω(y, r)) ≤ C0 r^d for some C0 = C0(d).

Proposition 2.17. Let p > p_c.

(a) There exist constants such that P_p(D0(Q)^c) ≤ c1 exp(−c2 n).
(b) There exist constants such that P_p(D(Q, α)^c) ≤ c3 exp(−c4 n^α).
(c) Let ω ∈ D0(Q) and x, y ∈ Q ∩ C∨(Q+). Then dω(x, y) ≤ C_H n.
(d) Let ω ∈ D(Q, α) and x, y ∈ Q⊕ ∩ C∨(Q+). Then |x − y|∞ ≤ dω(x, y) ≤ C_H((1 + n^α) ∨ |x − y|∞).
(e) Let ω ∈ D(Q, α), let x ∈ Q⊕ and let Q′ containing x satisfy the conditions in (2.21) with s(Q′) = r. Then Q′ ∩ C∨(Q+) ⊂ Bω(x, C_H r).

Proof. (a) By Lemma 2.16,

P_p(D0(Q)^c) ≤ P_p(R(Q)^c) + Σ_{x′,y′} c exp(−c5 |x′ − y′|∞),

where the sum is over x′, y′ ∈ Q with |x′ − y′|∞ ≥ n/12. Hence, using (2.9),

(2.22) P_p(D0(Q)^c) ≤ c(n + 1)^{2d} exp(−c5 n/12) ≤ c1 exp(−c2 n).

(b) This is immediate from (a), since

P_p(D(Q, α)^c) ≤ Σ_{r=n^α}^{n} ((3/2)n + 1)^d c1 exp(−c2 r) ≤ c exp(−c′ n^α).

(c) This is immediate if |x − y|∞ ≥ n/12. If |x − y|∞ < n/12, then choose z ∈ C∨(Q+) ∩ Q with n/12 ≤ |z − x|∞ ≤ n/4 and n/12 ≤ |z − y|∞ ≤ n/4; since C∨(Q+) is crossing for Q, such a choice of z is possible. Then

dω(x, y) ≤ dω(x, z) + dω(z, y) ≤ 2 C_H (n/4) ≤ C_H n.


(d) Since D(Q, α) ⊂ D0(Q), this is immediate if |x − y|∞ ≥ n/12. Otherwise choose the smallest cube Q′ containing x and y with s(Q′) ≥ n^α ∨ |x − y|∞; we have (Q′)+ ⊂ Q+. As in Lemma 2.13(b), we have C∨(Q′) ⊂ C∨(Q+) and, by (c),

d_{C∨(Q+)}(x, y) ≤ C_H s(Q′) ≤ C_H(|x − y|∞ ∨ (1 + n^α)).

(e) Since D0(Q′) occurs, this is immediate from (c).

Recall from Definition 1.7 the definition of good and very good balls.

Theorem 2.18. Let α ∈ (0, 1/2), let Q be a cube of side n, let ω ∈ H(Q, α) ∩ D(Q, α) and let C_H n^α ≤ r ≤ n. Write Q(y, s) = {z ∈ Z^d : |z − y|∞ ≤ s} and let y ∈ C∨(Q+) ∩ Q⊕ with Q(y, r + k0)+ ⊂ Q+.

(a) There exists C_V = C_V(p, d) such that

(2.23) C_V r^d ≤ |Bω(y, r)| ≤ C0 r^d.

(b) There exist constants C_P(p, d) and C_W(p, d) such that if f : Bω(y, C_W r) → R then, writing f̄_B = f̄_{Bω(y,r)},

(2.24) ∫_{Bω(y,r)} (f − f̄_B)² dμ ≤ C_P r² ∫_{E(Bω(y,C_W r))} |∇f|² dν.

(c) If (C_H n^α)^{d+2} ≤ R ≤ n and Bω(y, (3/2)R) ⊂ Q⊕, then Bω(y, R) is (C_V, C_P, C_W)-very good with N_{Bω(y,R)} ≤ C_H n^α.

Proof. Recall from Lemma 2.13(b) that ω ∈ H(Q, α) implies that C∨(Q′) ⊂ C∨(Q+) for every Q′ satisfying (2.15).

(a) Since Bω(y, r) ⊂ Q(y, r), the upper bound in (2.23) is clear. Let s = r/(2C_H), so that 2s ≥ n^α. By Proposition 2.17(e), C∨(Q(y, s)) ⊂ Bω(y, C_H s), so that by Lemma 2.9,

|Bω(y, r)| ≥ |C∨(Q(y, s))| ≥ c1 s^d ≥ c2 r^d.

(b) Let Q1 be the smallest special cube that contains Q(y, r); we have Q1+ ⊂ Q+. Let r1 = s(Q1). By Proposition 2.17(e), C∨(Q1) ⊂ Bω(y, C_H r1), and so, taking C_W = 2C_H, C∨(Q1) ⊂ Bω(y, C_W r). By Proposition 2.12(b),

∫_{Bω(y,r)} (f − f̄_B)² dμ ≤ ∫_{Bω(y,r)} (f − f̄_{C∨})² dμ ≤ ∫_{C∨(Q1)} (f − f̄_{C∨})² dμ ≤ c3 r1² ∫_{E(C∨(Q1))} |∇f|² dν ≤ c4 r² ∫_{E(Bω(y,C_W r))} |∇f|² dν.


(c) This is immediate from (a), (b) and the definition of very good balls.

Using the estimates in Lemma 2.13(a), Proposition 2.17(b) and the Borel–Cantelli lemma, we obtain the following lemma.

Lemma 2.19. Let p > p_c. For each x ∈ Z^d there exists M_x(ω) with P_p(M_x ≥ n) ≤ c1 exp(−c2 n^{αβ}) such that, whenever n ≥ M_x, the event H(Q, α) ∩ D(Q, α) holds for all cubes Q of side n with x ∈ Q.

Remarks. 1. An inequality of the form (2.24) is called a weak Poincaré inequality. In many situations (including this one) it is possible to derive a strong Poincaré inequality (i.e., with C_W = 1) from a family of weak ones; see Lemma 4.9.

2. Note that if x ∈ Q and s(Q) ≥ M_x, then C∨(Q) ⊂ C∞.

Theorem 2.18 and Lemma 2.19 are suitable for most of our needs, but they have the defect that the minimum size of ball inside a cube Q of side n for which the Poincaré inequality is certain to hold increases with n. Since (for a fixed C_P) the cluster C∞ contains arbitrarily large balls in which the Poincaré inequality fails, we cannot do better than this as long as we require the inequality for all balls of some size. However, we can improve Theorem 2.18 if we relax this condition; in Section 5 we want to connect points by a chain of very good balls of some fixed size. To do this we need an additional percolation argument.

We consider again Bernoulli site percolation ζx on (Z^d, E_d) with parameter q, where q > q_c is close to 1. Let Q be a cube of side n. For x, y ∈ Q and λ > 1, let

(2.25) S(ζ, Q, λ, x, y) = K(Q, 7/8) ∩ {there exist x′, y′ ∈ C∨(Q+) with |x − x′|∞ ≤ n^{1/9}, |y − y′|∞ ≤ n^{1/9}, such that d_{C∨(Q+)}(x′, y′) ≤ λ|x − y|∞}.

Note that this event is increasing.

Lemma 2.20. Let Q be a cube of side n and let x, y ∈ Q. There exist q2 = q2(d) ∈ (q_c, 1) and λ0 ≥ 1 such that if q > q2, then

(2.26) Q_q(S(ζ, Q, λ0, x, y)^c) ≤ c1 exp(−c2 n^{1/9}).

Proof. We follow the proof of Theorem 1.1 of Antal and Pisztora (1996) and consider the dual process to ζ given by ζ′_x = 1 − ζx. We view ζ′ as site percolation (with parameter q′ = 1 − q) on the lattice (Z^d, E*_d) and write


C*(ζ′, z) for the *-connected cluster of the process ζ′ that contains z. Then by Theorem 5.4 of Grimmett (1999) we can choose q2 large enough so that if q ≥ q2, then

Q_q(|C*(ζ′, z)| ≥ k) ≤ exp(−c3 k), k ≥ 1.

So, if G = {|C*(ζ′, x)| ≤ n^{1/9} for all x ∈ Q}, then (using Lemma 2.2)

(2.27) Q_q(K(Q, 7/8)^c ∪ G^c) ≤ exp(−c4 n^{1/9}).

If |x − y|∞ ≤ n^{1/9} and ω ∈ G, there exists x′ ∈ C∨(Q+) with |x − x′|∞ ≤ n^{1/9}, and in this case we can take y′ = x′. So suppose ω ∈ G and |x − y|∞ > n^{1/9}. Let l = |x − y|1 and let γ = {x = x0, x1, …, xl = y} be a path in (Z^d, E_d) of length l that connects x and y; note that γ ⊂ Q. Since each cluster C*(ζ′, z), z ∈ Q, has diameter less than n^{1/9}, the path γ must intersect C∨(Q+). Let Vx and Vy be the first and last (resp.) points of γ in C∨(Q+); we have |x − Vx|∞ ≤ n^{1/9} and |y − Vy|∞ ≤ n^{1/9}. We take x′ = Vx, y′ = Vy and construct a path Γ from x′ to y′ in C∨(Q+). This path follows γ whenever possible, and when it encounters a site z with ζz = 0 it "walks around" C*(ζ′, z); this requires at most 3^d |C*(ζ′, z)| steps. Since ω ∈ G, this path does not leave Q+. Hence, recalling from (2.18) the definition of W(·),

(2.28) |Γ| ≤ l + 3^d Σ_{i=0}^{l} |C*(ζ′, xi)| ≤ 3^d |W(ζ′, x′, y′)| ≤ 3^d |W(ζ′, x, y)|.

By Proposition 2.15(a) we have

(2.29) Q_q(|W(ζ′, x, y)| ≥ c5 |x − y|1) ≤ c6 exp(−c7 |x − y|1) ≤ c6 exp(−c8 n^{1/9}).

Taking λ0 = d c5 and combining the bounds (2.27)–(2.29) completes the proof.

Let m ≥ k0 ∨ k1 and let {T^m(x̃), x̃ ∈ Z^d} be the tiling of Z^d by disjoint cubes of side m − 1 given by (2.7). Let α1 = 1/(4 + d) and define [on the space (Ω, P_p) carrying the bond percolation process ηe]

(2.30) ψ^{(m)}_x̃ = 1_{H(T^m(x̃), α1) ∩ D(T^m(x̃), α1)}, x̃ ∈ Z^d.

Lemma 2.21. There exists C_E = C_E(d, p) ≥ 1 such that, for any m ≥ C_E, the process (ψ^{(m)}_x̃, x̃ ∈ Z^d) under P_p dominates Bernoulli site percolation on Z^d with parameter q2.


Proof. Note that ψ^{(m)}_x̃ and ψ^{(m)}_ỹ are independent if |x̃ − ỹ|∞ ≥ 3. Using this and the fact that P_p(H(Q, α1) ∩ D(Q, α1)) → 1 as s(Q) → ∞, this is an immediate consequence of Theorem 0.0 of Liggett, Schonmann and Stacey (1997).

Let λ0 be as in Lemma 2.20 and let Q be a cube of side n. For x0, x1 ∈ Q and C_E ≤ m ≤ n^{1/9} set

(2.31) L(Q, m, x0, x1) = {there exist x′0, x′1 ∈ C∨(Q+) with |xj − x′j| ≤ n^{2/9}, j = 0, 1, some k with mk < 2λ0|x0 − x1|∞, and a path {ỹ0, …, ỹk} in (Z^d, E_d) such that T^m(ỹi) ⊂ Q+ for 0 ≤ i ≤ k, x′0 ∈ T^m(ỹ0), x′1 ∈ T^m(ỹk), and H(T^m(ỹi), α1) ∩ D(T^m(ỹi), α1) holds for each i}.

Lemma 2.22. Let Q be a cube of side n and let C_E ≤ m ≤ n^{1/9}. Then, if |x0 − x1|∞ ≥ n^{2/9},

(2.32) P_p(L(Q, m, x0, x1)^c) ≤ c exp(−c1 n^{1/11}).

Proof. Since m is fixed in this argument, we write T(x̃) for T^m(x̃). Let n′ be such that mn′ ≥ n ≥ m(n′ − 1), and let Q̃ be a (macroscopic) cube of side n′ such that Q ⊂ ∪{T(x̃), x̃ ∈ Q̃}. Let x̃i be such that xi ∈ T(x̃i), and let s = |x̃0 − x̃1|1, so that m(s − 1) ≤ |x0 − x1|∞ ≤ m(s + 1) and s ≥ n^{1/9}. Let S = S(ψ^{(m)}, Q̃, λ0, x̃0, x̃1) be the event defined from the process ψ^{(m)} in the same way as S(·) in (2.25) is defined from ζ. Then, as S is an increasing event, by Lemmas 2.20 and 2.21,

(2.33) P_p(S^c) ≤ c exp(−c′(n′)^{1/9}) ≤ c exp(−c2 n^{1/11}).

Let ω ∈ S and let x̃′0 = ỹ0, …, ỹk = x̃′1 be the open path (with respect to ψ^{(m)}) given by the event S. Since S occurs we have |x̃i − x̃′i|∞ ≤ (n′)^{1/9} for i = 0, 1 and k ≤ λ0 s. Choose x′i ∈ T(x̃′i) ∩ C∨(Q+); then

|xi − x′i|∞ ≤ m(1 + |x̃i − x̃′i|∞) ≤ m(1 + (n′)^{1/9}) < n^{2/9}.

Also, since k ≤ λ0 s,

mk ≤ m s λ0 ≤ λ0(m + |x0 − x1|∞) ≤ 2λ0|x0 − x1|∞.

Thus ω ∈ L(Q, m, x0, x1) and, using (2.33), this proves the lemma.

Now let α2 = (11(d + 2))^{-1} and let

L(Q) = H(Q, α2) ∩ D(Q, α2) ∩ {L(Q, m, x, y) holds for every x, y ∈ Q with |x − y|∞ ≥ n^{2/9} and every C_E ≤ m ≤ n^{1/9}}.

Theorem 2.23. Let Q be a cube of side n and let p > p_c.


(a) There exist constants such that P_p(L(Q)^c) ≤ c1 exp(−c2 n^{α2 β}).

(b) Let ω ∈ L(Q) and C_E ≤ m ≤ n^{1/9}. If x0, x1 ∈ Q ∩ C∨(Q+) with dω(x0, x1) ≥ (1/3)n^{1/4}, then there exist x′i ∈ C∨(Q+) with dω(xi, x′i) ≤ (1/3)n^{1/4} and a path γ = (z0, …, zj) in C∨(Q+) connecting x′0 and x′1 such that:

(i) for each 0 ≤ l ≤ j, the ball Bl = Bω(zl, m/16) is very good, with N_{Bl} ≤ C_H m^{α1};
(ii) j ≤ c3 |x0 − x1|∞ ≤ C_F |x0 − x1|1.

Proof. (a) This is immediate from the bounds in Lemma 2.13, Proposition 2.17 and Lemma 2.22.

(b) Since D(Q, α2) occurs, |x0 − x1|∞ ≥ c dω(x0, x1) ≥ n^{2/9}, and so ω ∈ L(Q, m, x0, x1). Let x′i, ỹ0, …, ỹk be as in (2.31). Note that we can choose x′i within a distance m/8 of the centers of the cubes T(ỹ0) and T(ỹk). Then, by Proposition 2.17(d), dω(xi, x′i) ≤ C_H((1 + n^{α2}) ∨ |xi − x′i|∞) ≤ c n^{2/9} ≤ (1/3)n^{1/4}.

We now show that the clusters C∨(T(ỹi)+) are all in C∨(Q+). Consider first two adjacent cubes T(ỹi) and T(ỹi+1). Since the event R(T(ỹi)) ∩ R(T(ỹi+1)) occurs, the clusters C∨(T(ỹi)+) and C∨(T(ỹi+1)+) are connected. Thus there exists a Q+-cluster C which contains each C∨(T(ỹi)+), and C has diameter D with D ≥ |x′0 − x′1|∞ ≥ |x0 − x1|∞ − 2n^{2/9} ≥ (1/2)n^{1/4}. It now follows, as in Lemma 2.13(b), that C ⊂ C∨(Q+). Since each event D(T(ỹi), α1) holds, we can find a path γ = {z0, …, zj} ⊂ ∪i T(ỹi)+ connecting x′0 and x′1 with length j ≤ 2 C_H m k ≤ c|x0 − x1|∞. Each point zi lies in a cube Qi for which H(Qi, α1) ∩ D(Qi, α1) occurs, so using Theorem 2.18(c), Bi = Bω(zi, m/16) is very good with N_{Bi} ≤ C_H m^{α1}.

Lemma 2.24. Let p > p_c and, for each x ∈ Z^d, let Nx be the largest n such that L(Q) fails for some cube Q with s(Q) = n and x ∈ Q. Then

(2.34) P_p(Nx ≥ n) ≤ c1 exp(−c2 n^{α2 β}).

Proof. This is immediate from Theorem 2.23(a).

3. Upper bounds. We now consider a connected graph G = (G, E) satisfying conditions (1.1) and (1.2). We use the notation of Section 1 and study the transition density qt(x, y) of the continuous time random walk Yt on G. Fix constants C_V, C_P and C_W, and recall from Definition 1.7 the definition of good and very good balls, and of N_B. In this section the constants ci depend on the constants d, C0, C_V, C_P and C_W in (1.1), (1.15) and (1.16). As in Section 2, we assume, without always stating it explicitly, that the radius R of a ball B(x, R) is sufficiently large; that is, R ≥ c0 =


c0(d, C0, C_P, C_W). All the bounds in this section hold for balls B(x, R) with R ≤ c0, with a suitable choice of the constants ci in the bounds, by elementary arguments.

We begin by investigating the on-diagonal decay of qt(x, x). We remark that a similar result was proved in Mathieu and Remy (2004), using an isoperimetric inequality directly. We give another proof here because it is quite short and also allows us to estimate the "initial time" T_B directly in terms of N_B.

Proposition 3.1 [see Mathieu and Remy (2004), Theorem 1.2]. Let x0 ∈ G and let B = B(x0, R) be very good, with N_B^{d+2} ≤ R. Then

(3.1) qt(x1, x1) ≤ c2 t^{-d/2} for c1 N_B^{2d} ≤ t ≤ R²/log R and x1 ∈ B(x0, (8/9)R).

Proof. Let c3 < c_{1.1.5}, let t2 = c3 R²/log R and suppose that t ≤ t2. Then, provided R ≥ c, we have t log t ≤ c3 R² and hence t ≤ exp(c3 R²/t). Fix x1 ∈ B(x0, (8/9)R), write ft(x) = qt(x1, x) and let ψ(t) = ∫ qt(x1, y)² dμ(y) = q2t(x1, x1). Note that by (1.7),

(3.2) c4 (t log t)^{-d/2} ≤ ψ(t) = q2t(x1, x1) ≤ 1/(c5 t^{1/2}), t ≥ 1.

Using the discrete Gauss–Green formula,

ψ′(t) = 2 Σ_x ft(x) (∂ft/∂t)(x) μ(x) = 2 Σ_x ft(x) L ft(x) μ(x) = −Σ_{x,y} axy (ft(x) − ft(y))²,

and, in particular, ψ(t) is decreasing (and continuous). Define t1 by t1^{1/2} = c6 N_B^d, where c6 is chosen later, and choose r(t) so that

(3.3) 2/(c5 c6) ≥ r(t)^d ψ(t) ≥ 1/(c5 c6).

Then, if t ≥ t1, by (3.2), ψ(t)^{-1} ≥ c5 t^{1/2} ≥ c5 c6 N_B^d, and so r(t) ≥ N_B. If t ≤ t2, then by (3.2) and (3.3), r(t)^d ≤ c(t log t)^{d/2} ≤ c c3^{d/2} R^d, so if the constant c3 is chosen small enough we have r(t) ≤ R/18.

Write B′ = B(x0, 17R/18). Let t ∈ [t1, t2], so that r = r(t) ∈ [N_B, R/18], and let B(yi, r/2), i = 1, …, m, be a maximal collection of disjoint balls with centers in B′. Set Bi = B(yi, r) and B*i = B(yi, C_W r). Note that B′ ⊂ ∪i Bi. If x ∈ B ∩ B*i, we have B(yi, r/2) ⊂ B(x, r(1 + C_W)), and so

C0 (1 + C_W)^d r^d ≥ μ(B(x, r(1 + C_W))) ≥ Σ_i 1_{{x ∈ B*i}} μ(B(yi, r/2)) ≥ |{i : x ∈ B*i}| C_V 2^{-d} r^d.


Thus any x ∈ B is in at most c7 of the B*i. The bounds on r above imply that each B(yi, r) is good. So, applying the Poincaré inequality (1.16) to each Bi and writing f̄t,i = μ(Bi)^{-1} ∫_{Bi} ft dμ, we have

(3.4) −c7 ψ′(t) ≥ Σ_i ∫_{E(B*i)} |∇ft|² dν ≥ C_P^{-1} r^{-2} Σ_i ∫_{Bi} |ft − f̄t,i|² dμ = C_P^{-1} r^{-2} Σ_i [ ∫_{Bi} ft² dμ − μ(Bi)^{-1} ( ∫_{Bi} ft dμ )² ].

By Lemma 1.1(b),

∫_{G\B′} ft² dμ ≤ ( sup_{G\B′} ft ) ∫_{G\B′} ft dμ ≤ c t^{-d},

while, by (3.2), ψ(t) ≥ c4 (t log t)^{-d/2}. So, as t ≥ c1,

Σ_i ∫_{Bi} ft² dμ ≥ ∫_{B′} ft² dμ = ψ(t) − ∫_{G\B′} ft² dμ ≥ (1/2)ψ(t).

Also, since ft has total mass 1,

Σ_i μ(Bi)^{-1} ( ∫_{Bi} ft dμ )² ≤ (C_V r^d)^{-1} Σ_i ∫_{Bi} ft dμ ≤ c7 (C_V r^d)^{-1} = c8 r(t)^{-d}.

Combining these estimates, we obtain

(3.5) −c7 ψ′(t) ≥ C_P^{-1} r(t)^{-2} ( (1/2)ψ(t) − c8 r(t)^{-d} ).

Now let c6 = (4 c5 c8)^{-1}, so that by the choice of r = r(t) in (3.3),

−ψ′(t) ≥ c9 r(t)^{-2} ψ(t) ≥ c10 ψ(t)^{1+2/d}.

Setting φ(t) = ψ(t)^{-2/d}, we have φ′(t) ≥ 2c10/d, from which it follows that

φ(t) ≥ φ(t1) + (2c10/d)(t − t1) ≥ c11 t, 2t1 ≤ t ≤ t2.

Rearranging, this gives ψ(t) ≤ c t^{-d/2} for 2t1 ≤ t ≤ t2. Since ψ is decreasing, it follows, by adjusting the constant c, that qt(x1, x1) = ψ(t/2) ≤ c11 t^{-d/2} for 4t1 ≤ t ≤ R²/log R.

We also need a bound for y outside B(x0, R):

Corollary 3.2. Let x0 ∈ G and let B = B(x0, R) be very good. Then

(3.6) qt(x1, y) ≤ c1 t^{-d/2} for c2 N_B^{2d} ≤ t ≤ c3 R²/log R, x1 ∈ B(x0, (7/9)R) and y ∈ G.


Proof. If y ∈ B(x0, (8/9)R), then qt(x1, y) ≤ qt(x1, x1)^{1/2} qt(y, y)^{1/2} ≤ c t^{-d/2} by Proposition 3.1. If y ∉ B(x0, (8/9)R), then d(x1, y) ≥ R/9 and we use Lemma 1.1(a).

For a very good ball B, let T_B = c_{3.2.2} N_B^{2d} and T′_B = c_{3.2.3} R²/log R.
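The on-diagonal decay qt(x, x) ≍ t^{-d/2} in (3.1) can be sanity-checked in the simplest non-random case: the CTSRW on Z itself, approximated below by a large cycle with no percolation. The Poisson-sum representation e^{t(P−I)} = Σ_k e^{-t} t^k/k! P^k used here is standard; the code is our illustration, not part of the paper.

```python
import math

def heat_kernel_diag(N, t, terms=200):
    """q_t(0,0) for the rate-1 CTSRW on the cycle Z_N, via the Poisson sum
    exp(t(P - I)) = sum_k exp(-t) t^k / k! * P^k applied to the delta at 0."""
    v = [0.0] * N
    v[0] = 1.0
    total = math.exp(-t) * v[0]          # k = 0 term
    coef = math.exp(-t)
    for k in range(1, terms):
        # one step of the jump chain P: move to either neighbour w.p. 1/2
        v = [(v[(i - 1) % N] + v[(i + 1) % N]) / 2 for i in range(N)]
        coef *= t / k
        total += coef * v[0]
    return total

# return probabilities at a few times; N = 400 makes wrap-around negligible
vals = {t: heat_kernel_diag(400, t) for t in (5.0, 10.0, 20.0)}
```

For d = 1 the prediction is q_t(0,0) ≈ (2πt)^{-1/2}, so q_t(0,0)·t^{1/2} should be nearly constant across the sampled times, which the output confirms to within a few percent.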

Remark. It is natural to ask whether the bounds in (3.1) and (3.6) hold for t ≤ cR² rather than just t ≤ cR²/log R. However, in this paper this restriction on t does not matter, since we ultimately apply (3.6) in the situation where B(x0, R) is very good for all sufficiently large R.

We now use the method of Bass (2002) to obtain off-diagonal upper bounds. Following Nash (1958) and Bass (2002), we introduce, for x1 ∈ G and t > 0, the functions

M(t) = M(x1, t) = Σ_y d(x1, y) qt(x1, y) μ(y),

Q(t) = Q(x1, t) = −Σ_y qt(x1, y) log qt(x1, y) μ(y).
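For the same toy setting as above (the rate-1 CTSRW on Z via a large cycle, with no percolation), the Nash functions M and Q can be computed directly from the distribution of Yt; in d = 1 one expects M(t) ≍ t^{1/2} and Q(t) ≈ c + (1/2) log t, matching the shape of Lemma 3.3 and Proposition 3.4. The code is our illustration; the function names are ours.

```python
import math

def dist_t(N, t, terms=200):
    """the distribution p_t(0, .) of the rate-1 CTSRW on the cycle Z_N,
    via the Poisson sum over the number of jumps."""
    v = [0.0] * N
    v[0] = 1.0
    out = [math.exp(-t) * x for x in v]
    coef = math.exp(-t)
    for k in range(1, terms):
        v = [(v[(i - 1) % N] + v[(i + 1) % N]) / 2 for i in range(N)]
        coef *= t / k
        out = [o + coef * x for o, x in zip(out, v)]
    return out

def nash_M_Q(N, t):
    """Nash moment M(t) and entropy Q(t) for the walk started at 0."""
    p = dist_t(N, t)
    d = [min(i, N - i) for i in range(N)]        # graph distance on the cycle
    M = sum(di * pi for di, pi in zip(d, p))
    Q = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return M, Q

M1, Q1 = nash_M_Q(400, 5.0)
M2, Q2 = nash_M_Q(400, 20.0)
```

Quadrupling t should double M (since M ≍ t^{1/2}) and increase Q by about (1/2) log 4 ≈ 0.693, and the computed values agree with both predictions to within a few percent.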

We can extend M and Q to t = 0 by continuity: M(0) = 0, while since qt(x1, x1) → μ(x1)^{-1} as t ↓ 0, Q(0) = log μ(x1) ≥ 0.

Lemma 3.3. Let B(x0, R) be very good and let x1 ∈ B(x0, (7/9)R).

(a) We have Q(x1, t) ≥ c1 + (d/2) log t, T_B ≤ t ≤ T′_B.
(b) We have M(x1, t) ≥ c2 exp(Q(x1, t)/d), t ≥ c3.

Proof. Fix x1 ∈ B(x0, (7/9)R). Part (a) follows directly from the upper bound (3.6). The proof of (b) is similar to that in Nash (1958) or Bass (2002). Let 0 < a ≤ 2, set D0 = B(x1, 1) and Dn = B(x1, 2^n) \ B(x1, 2^{n−1}) for n ≥ 1. Then, using (1.1) to bound μ(Dn), we have, for a ≤ 2,

Σ_{y∈G} exp(−a d(x1, y)) μ(y) ≤ Σ_{n=0}^∞ Σ_{y∈Dn} exp(−a 2^{n−1}) μ(y) ≤ c Σ_{n=0}^∞ C0 2^{nd} exp(−a 2^{n−1}) ≤ c4 a^{-d}.

Now note that u(log u + λ) ≥ −e^{−1−λ} for u > 0. So, setting λ = a d(x1, y) + b, where a ≤ 2,

−Q(x1, t) + a M(x1, t) + b = Σ_y qt(x1, y)( log qt(x1, y) + a d(x1, y) + b ) μ(y) ≥ −Σ_y exp(−1 − a d(x1, y) − b) μ(y) ≥ −e^{−1−b} Σ_y exp(−a d(x1, y)) μ(y) ≥ −c5 e^{−b} a^{-d}.

Since M(x1, t) ≥ P^{x1}(Yt ≠ x1), using (1.7) we have M(x1, t) ≥ 1/2 when t ≥ c5. Setting a = 1/M(x1, t) and e^b = M(x1, t)^d = a^{-d}, we obtain

−Q(x1, t) + 1 + d log M(x1, t) ≥ −c5,

and rearranging gives (b).

Proposition 3.4. Let x0 ∈ G and let B(x0, R) be very good. Then

(3.7) c1 t^{1/2} ≤ M(x1, t) ≤ c2 t^{1/2} for x1 ∈ B(x0, (7/9)R) and T_B log T_B ≤ t ≤ T′_B.

Proof. For the moment we just write Q(t) and M(t). Set ft(x) = qt(x1, x) and let bt(x, y) = ft(x) + ft(y). We have

M′(t) = Σ_y d(x1, y) (∂ft/∂t)(y) μ(y) = Σ_y d(x1, y) L ft(y) μ(y)

(3.8) = −(1/2) Σ_x Σ_y axy (d(x1, y) − d(x1, x))(ft(y) − ft(x)) ≤ (1/2) Σ_x Σ_y axy |ft(y) − ft(x)|

(3.9) ≤ (1/2) ( Σ_x Σ_y axy bt(x, y) )^{1/2} ( Σ_x Σ_y axy (ft(y) − ft(x))² / bt(x, y) )^{1/2} ≤ c ( Σ_x Σ_y axy (ft(y) − ft(x))² / (ft(x) + ft(y)) )^{1/2}.

In the calculation above, the use of the discrete Gauss–Green formula to obtain (3.8) is valid since, by (1.5), qt(x1, ·) decays exponentially. Since, for u, v > 0,

(u − v)²/(u + v) ≤ (u − v)(log u − log v),


we deduce

M′(t)² ≤ Σ_x Σ_y axy (ft(y) − ft(x))(log ft(y) − log ft(x)).

On the other hand [again using (1.5) and the discrete Gauss–Green formula],

(3.10) Q′(t) = −Σ_y (1 + log ft(y)) L ft(y) μ(y) = (1/2) Σ_x Σ_y axy (log ft(y) − log ft(x))(ft(y) − ft(x)) ≥ (1/2) M′(t)².

The remainder of this proof is similar to that in Nash (1958) or Bass (2002), except that we have to control the growth of M for small t. Set R(t) = d^{-1}( Q(t) − c_{3.3.1} − (d/2) log t ), so that R(t) ≥ 0 if T_B ≤ t ≤ T′_B. Define

T0 = 1, if R(t) ≥ 0 on [1, T′_B]; T0 = sup{t ≤ T′_B : R(t) < 0}, otherwise.

If T0 > 1, then T0 ≤ T_B and, by (3.10),

M(T0) = ∫_0^{T0} M′(s) ds ≤ 2^{1/2} ∫_0^{T0} Q′(s)^{1/2} ds ≤ 2^{1/2} T0^{1/2} ( ∫_0^{T0} Q′(s) ds )^{1/2} = c3 T0^{1/2} (Q(T0) − Q(0))^{1/2} ≤ c3 T0^{1/2} ( c_{3.3.1} + (d/2) log T0 )^{1/2} ≤ c4 (T_B log T_B)^{1/2}.

If T0 = 1, then M(T0) = E^{x1} d(x1, Y1) ≤ c5 by elementary arguments. By Lemma 3.3(b) and (3.10), if T0 < t < T′_B, then

c t^{1/2} e^{R(t)} ≤ c′ e^{Q(t)/d} ≤ M(t) ≤ M(T0) + 2^{1/2} ∫_{T0}^t Q′(s)^{1/2} ds ≤ M(T0) + (2d)^{1/2} ∫_{T0}^t ( R′(s) + 1/(2s) )^{1/2} ds.

Using the inequality (a + b)^{1/2} ≤ b^{1/2} + a/(2b^{1/2}) gives

(3.11) c t^{1/2} e^{R(t)} ≤ M(t) ≤ M(T0) + c t^{1/2} + c ∫_{T0}^t s^{1/2} R′(s) ds.

Integrating by parts, and using the fact that R ≥ 0 on [T0, T′_B], the final term in (3.11) is bounded by c(1 + R(t) t^{1/2}). Combining these estimates, for T_B ≤ t ≤ T′_B,

ct1/2 eR(t) ≤ M (t) ≤ c(1 + R(t))t1/2 + c4 (TB log TB )1/2 .

36

′ So for TB log TB ≤ t ≤ TB ,

M. T. BARLOW

ct1/2 eR(t) ≤ M (t) ≤ c(1 + R(t))t1/2 . Thus R(t) is bounded and this implies (3.7). As in Bass (2002) we can use the moment bounds in (3.7) to obtain o?diagonal upper bounds on qt by the method of Barlow and Bass (1989, 1992). We de?ne τ (x, r) = inf{t : Yt ∈ B(x, r)}, / x ∈ G, r > 0,

and begin by controlling the probability that τ (x, r) is small.

d Lemma 3.5. Let x0 ∈ G and let B(x0 , R) be very good. Let c1 NB × 1/2 ≤ r ≤ R. Then (log NB )

(3.12) P x (τ (x, r) < t) ≤

1 c2 t + 2 2 r

′ for x ∈ B(x0 , 6 R) and 0 ≤ t ≤ 1 TB . 9 2

6 Proof. Suppose ?rst that r < R/9. Let x ∈ B(x0 , 9 R), A = B(x, r) ∪ 1 ′ ?B(x, r) and τ = τ (x, r). Then if TB log TB ≤ t ≤ 2 TB , since A ? B(x0 , 7 R), 9

c3 t1/2 ≥ E x d(x, Y2t ) ≥ E x (d(x, Yt∧τ ) ? d(Yt∧τ , Y2t )) ≥ E x ?(τ <t) d(x, Yτ ) ? E x (E Yt∧τ d(Yt∧τ , Y2t?t∧τ )) ≥ P x (τ < t)r ? sup E z d(z, Y2t?s )

z∈A,s≤t

≥ P x (τ < t)r ? c3 t1/2 . Thus (3.13) P x (τ < t) ≤ 2c3 t1/2 /r.

1 Since λ ≤ 2 (1 + λ2 ), (3.12) is immediate. If t ≤ TB log TB , then

P x (τ (x, r) < t) ≤ P x (τ (x, r) < TB log TB ) ≤ 2c3 (TB log TB )1/2 r ?1 ≤ 1 , 2

d provided r ≥ c(TB log TB )1/2 = c′ NB (log NB )1/2 . Finally if R/9 ≤ r ≤ R, we have τ (x, r) ≥ τ (x, R/9), so (adjusting the constant c2 ) we deduce (3.12).

Remark. In the end we gain nothing useful by using the stronger bound (3.13). We need the following estimate.

RANDOM WALKS AND PERCOLATION

37

Lemma 3.6 [Barlow and Bass (1989), Lemma 1.1]. Let ξ1 , ξ2 , . . . , ξn , V be nonnegative r.v. such that V ≥ n ξi . Suppose that for some p ∈ (0, 1), 1 a > 0, P (ξi ≤ t|σ(ξ1 , . . . , ξi?1 )) ≤ p + at, Then log P (V ≤ t) ≤ 2 ant p

1/2

t > 0.

1 ? n log . p

Proposition 3.7. Let x0 ∈ G and let B(x0 , R) be very good. If x ∈ B(x0 , 5 R) and t > 0, ρ > 0 satisfy 9 (3.14) then (3.15) P x (τ (x, ρ) < t) ≤ c2 exp(?c3 ρ2 /t). ρ ≤ R,

d c1 NB (log NB )1/2 ρ ≤ t

and

′ t ≤ TB ,

d Proof. Let r1 = c3.5.1 NB (log NB )1/2 . Suppose ?rst that, in addition, ρ < R/9. Let m ≥ 1 be chosen later, and let s = t/m and r = ?ρ/m?. De?ne stopping times

S0 = 0,

Si = inf{t ≥ Si?1 : d(YSi?1 , Yt ) = r},

i ≥ 1.

Set ξi = Si ? Si?1 and write Ft = σ(Ys , s ≤ t) for the ?ltration of Y . By Lemma 3.5, 1 c4 u P x (ξi > u|FSi?1 ) ≤ + 2 , (3.16) u > 0, 2 r ′ provided r1 ≤ r ≤ R, u ≤ TB and YSi?1 ∈ B(x0 , 6 R). Whereas d(Y0 , YSm ) ≤ 9 mr ≤ ρ < R/9, we have Sm ≤ τ (x, ρ) and YSj ∈ B(x0 , 6 R) for 0 ≤ j ≤ m. 9 Using Lemma 3.6 and writing p = 1 , a = c4 /r 2 , we deduce that 2 log P x (τ (x, ρ) < t) ≤ log P x (Sm < t) ≤ 2(amt/p)1/2 ? m log p?1 . Simplifying this expression, we obtain (3.17) log P x (τ (x, ρ) < t) ≤ ?c5 m 1 ? c6 tm ρ2

1/2

.

Let λ = ρ2 /(2c6 t). If we can choose m ∈ N with 1 λ ≤ m < λ and so that the 2 estimate (3.16) is valid, then (3.17) implies (3.15). If λ ≤ 1, then, adjusting the constant c2 appropriately, (3.15) is immedi′ ate. If λ > 1, then let m = ? 1 λ? + 1. Since then m ≥ 1, we have s ≤ t ≤ TB 2 and r ≤ ρ ≤ R, while the condition ρ ≤ c6 t/r1 ensures that r ≥ r1 . Finally let ρ satisfy (3.14) but with ρ ≥ R/9. Then (adjusting c1 if necessary) we can apply the argument above to ρ0 = R/9 and adjust the constant c3 to obtain (3.15).

38

M. T. BARLOW

Theorem 3.8. Let x0 ∈ G and let B(x0 , R) be very good. Let x ∈ B(x0 , 1 R), 2 let y ∈ G and assume that (3.18) Then (3.19) qt (x, y) ≤ c2 t?d/2 exp(?c3 d(x, y)2 /t).

2d+1 NB ∨ d(x, y) ≤ t ≤

c1 R2 . log R

Proof. Let D = d(x, y). Using (1.5) we have, since D ≤ t, qt (x, y) ≤ c5 exp(?2c4 D2 /t). If t log t ≤ 2c4 d?1 D2 , then exp(?c4 D2 /t) ≤ t?d/2 and we deduce that qt (x, y) ≤ c5 t?d/2 exp(?c4 D 2 /t). Suppose therefore that t log t ≥ 2c4 d?1 D2 . Note that this implies that y ∈ B(x, 5 R), provided R ≥ c and c1 in (3.18) is chosen small enough. Let 9 Ax = {z : d(x, z) ≤ d(y, z)}, Ay = G ? Ax , s = t/2 and ρ = D/2. Note that B(x, ρ) ? Ax . Then (3.20) ?(x)P x (Yt = y) = ?(x)P x (Yt = y, Ys ∈ Ay ) + ?(x)P x (Yt = y, Ys ∈ Ax ).

To bound the ?rst term in (3.20) we write P x (Yt = y, Ys ∈ Ay ) = P x (τ (x, ρ) < s, Ys ∈ Ay , Yt = y) ≤ E x (?{τ (x,ρ)<s} P Yτ (Yt?τ = y)) (3.21) ≤ P x (τ (x, ρ) < s) sup q2t?s (z, y)?(y).

z∈?B(x,ρ),s≤t 1 2d+1 ′ Since 2TB < 2 NB ≤ s < TB , by Corollary 3.2 the second term in (3.21) is bounded by ct?d/2 . To control the ?rst term we use Proposition 3.7. We 2d+1 ′ have ρ < D < R and s < TB , while, since t ≥ NB , d d 1 c3.7.1 NB (log NB )1/2 ρ ≤ cNB (log NB )1/2 (t log t)1/2 ≤ 2 t = s,

the three conditions in (3.14) are satis?ed and, by (3.15), P x (Yt = y, Ys ∈ Ay ) ≤ ct?d/2 exp(?c′ D2 /s). By symmetry the second term in (3.20) equals (3.22) ?(y)P y (Yt = x, Ys ∈ Ax )

and so can be bounded in the same way as the ?rst term. Combining these estimates completes the proof.

RANDOM WALKS AND PERCOLATION

39

4. A weighted Poincar? inequality. While the weak Poincar? inequality e e of Section 2 is enough for upper bounds on the transition density, to obtain lower bounds we need a weighted Poincar? inequality, which we derive using e the methods of Jerison (1986) and Salo?-Coste and Stroock (1991). We continue with the notation and assumptions of the previous section. Fix x0 ∈ G, ?x R ∈ N and let B = B(x0 , R) be a very good ball with R0 = NB ≤ R1/(1+d) . For each x, y ∈ G we write γ(x, y) for a shortest path x = z0 , . . . , zd(x,y) = y between x and y. We begin with a Whitney decomposition of B, which we need to adapt to our situation. We have two di?erences from Jerison (1986), which both arise on small length scales. The ?rst—minor—di?culty is that in our discrete setting we cannot use balls of size smaller than 1. The second di?culty is that we do not have any volume doubling estimate for balls smaller than R0 . Let (X, d) be the metric space obtained as the “cable system” of G. This is the metric space obtained by replacing each edge e by a copy of (0, 1), linked in the obvious way at the vertices x ∈ G. We de?ne a measure ? on ? X by taking ? to be Lebesgue measure on each cable. See, for example, ? Barlow and Bass (2004) for further details of this construction. We write B(x, r) for balls in X. Since R ∈ N, the boundary of B is contained in G. For x ∈ B = B(x0 , R) we write ρ(x) = d(x, B c ). Note that if x ∈ G, then ρ(x) = d(x, G ? B). We frequently use the inequality (4.1) |ρ(x) ? ρ(y)| ≤ d(x, y).

Let λ ≥ 103 ∨ (21CW ) and let 10 ≤ K ≤ λ/10 be ?xed constants. We can assume that R0 > λ. Lemma 4.1. There exists a sequence of disjoint balls Bi = B(xi , ri ), i ≥ 1, such that r1 ≥ r2 ≥ · · · and: (a) There exists B = ∞ B(xi , 2ri ). i=1 (b) For each i, ρ(xi ) = λri . (c) If y ∈ B(xi , Kri ), then (4.2) (λ ? K)ri ≤ ρ(y) ≤ (λ + K)ri .

Proof. This is standard. We start by choosing a ball B1 of maximal size that satis?es (b) and continue, so that Bn is chosen to be the largest n?1 ball contained in B ? 1 Bi that satis?es (b). To prove (a), suppose y ∈ / B(xi , 2ri ). Since ?(B) < ∞ and ?(Bi ) ≥ ri , we must have ri → 0. Let ? ? ti = d(y, xi ) ≥ 2ri , t = inf ti . Then by (4.1),

1 ρ(y) ≤ ρ(xi ) + ti = λri + ti ≤ (1 + 2 λ)ti ,

40

M. T. BARLOW

so that t ≥ cρ(y) = t′ > 0, contradicting the de?nition of Bi if ri < t. Part (c) follows immediately from (4.1) and (b). We now adapt this construction to our discrete setup. Let N be de?ned by rN ≥ R0 + 1 > rN +1 . For each i ≤ N the center xi of Bi = B(xi , ri ) lies ′ ′ on a cable [yi , yi ], where yi , yi ∈ G. We label these so that yi is the point in G closest to xi and we set si = ri ? d(xi , yi ). Then we have B(yi , si ) ? B(xi , ri ) ∩ G and ri ≥ si ≥ ri ? 1 > R0 . 2 We set λ1 = λ ? 2K and λ2 = λ + 2K. Lemma 4.2. The sequence of disjoint balls Bi = B(yi , si ), 1 ≤ i ≤ N , de?ned above satis?es the following statements. (a) For each i ≤ N , (4.3) (b) If x ∈ B ? λsi ?

N 1 2

≤ ρ(yi ) ≤ 1 (1 + λ) + λsi . 2 then ρ(x) < λ2 R0 . Furthermore,

N

N i=1 B(yi , 3si ),

(4.4) B(x0 , R ? λ2 R0 ) ?

i=1

B(yi , 3si ? 1) ?

i=1

B(yi , λ1 si ) ? B(x0 , R).

(c) If x ∈ B(yi , Ksi ), then (4.5) (4.6) λ1 si ≤ (λ ? K)si ?

1 2

≤ ρ(x) ≤ (λ + K)si + 1 (1 + λ) ≤ λ2 si . 2

(d) There exists a constant c1 such that |{i ≤ N : x ∈ B(yi , Ksi )}| ≤ c1 .

1 Proof. Since ρ(xi ) = λri and |ρ(yi ) ? ρ(xi )| ≤ 2 , (a) is immediate.

(b) Let x ∈ B. Then x ∈ B(xi , 2ri ) for some i and so ρ(x) ≤ (2 + λ)ri . If i > N , then ri < 1 + R0 and so ρ(x) ≤ (2 + λ)(1 + R0 ) < λ2 R0 , which implies that d(x0 , x) > R ? λ2 R0 . So, if ρ(x) ≥ λ2 R0 , then x ∈ B(xi , 2ri ) for some i ≤ N . We then have d(x, yi ) ≤ d(x, xi )+ d(xi , yi ) < 2si + 3 . Since each si > 3, 2 this implies that x ∈ B(yi , 3si ? 1). The ?nal inclusion in (4.4) is immediate from (a) and the ?rst follows from the inequality d(x0 , x) + ρ(x) ≥ R. (c) This is immediate from (a) and (4.1). (d) If x ∈ B(yi , Ksi ), then by (c), ρ(x) ≥ λ1 si and Bi ? B(x, (1+K)c1 ρ(x)), where c2 = (1 + K)/λ1 . Also, ?(Bi ) ≥ CV sd ≥ CV λ?1 ρ(x)d . So writing I = i 2 {i : x ∈ B(yi , Ksi )}, we have (4.7) C0 cd ρ(x)d ≥ ?(B(y, c2 ρ(x))) ≥ 2

i∈I

?(Bi ) ≥ |I|CV λ?d ρ(x)d , 2

which proves (4.6).

RANDOM WALKS AND PERCOLATION

41

Let

′ Bi = B(yi , 3si ),

1 ≤ i ≤ N.

Let η = 2λ2 and set

′′ Bi = B(yi , 10si )

if si ≥ ηR0 .

′′ If si < ηR0 , we call Bi a boundary ball and de?ne Bi to be the connected component of B(yi , 2λsi ) ∩ B which contains yi . (While balls are connected, the intersection of two balls need not be.) We relabel the balls Bi so that x0 ∈ B1 , and Bi is a boundary ball if and only if M + 1 ≤ i ≤ N . ′′ ′ Lemma 4.3. (a) There exists B = ( M Bi ) ∪ ( N +1 Bi ). i=M i=1 (b) There exists a constant c1 such that for any x ∈ B,

(4.8)

′′ |{i ≤ N : x ∈ Bi }| ≤ c1 .

′ Proof. (a) Suppose x ∈ B but x ∈ i Bi . Then ρ(x) < λ2 R0 = t. Now / ′ ∈ γ(x, x ) with 1 + t ≥ ρ(x′ ) > t and choose x′′ ∈ γ(x , x′ ) with choose x 0 0 d(x′ , x′′ ) = 1. Then x′ ∈ B(yi , 3si ? 1) for some i, so λsi < ρ(x′′ ) < t < λ2 R0 . Hence

R0 < s i ≤

λ2 R0 < ηR0 , λ1

so that Bi is a boundary ball. Now d(x, yi ) ≤ d(x, x′ ) + 3si ? 1 ≤ t + 3si < 2λsi , which proves that x ∈ B(yi , 2λsi ). The same argument proves that each y ∈ γ(x, x′ ) is also in B(yi , 2λsi ). Hence x is connected to x′ (and so yi ) by ′′ a path in B(yi , 2λsi ) ∩ B, and so x ∈ Bi . (b) Since K ≥ 10, we have a bound on |{i : x ∈ B(yi , 10si )}|. So it is enough to control |I ′ |, where I ′ = {i : x ∈ B(yi , 2λsi ), si < ηR0 }. The argument is almost exactly the same as in Lemma 4.2(d): If i ∈ I ′ , then si < ηR0 and d(x, yi ) ≤ 2λ2 si < 2λ2 ηR0 . So Bi ? B(x, cR0 ) and we use volume bounds as in (4.7). For each i de?ne F(i) = {j : γ(x0 , yi ) ∩ B(yj , Ksj ) = ?}, F(i, r) = {j ∈ F(i) : r ≤ sj ≤ 2r}. Lemma 4.4.

1 (a) If j ∈ F(i), then sj ≥ 1 λ1 λ?1 si ≥ 4 si . 2 2

(b) If j ∈ F(i), then d(yi , yj ) ≤ (K + λ2 )sj . (c) There exists |F(i, r)| ≤ c1 C0 /CV .

42

M. T. BARLOW

Proof. Let j ∈ F(i), so that there exists x ∈ B(yj , Ksj ) ∩ γ(x0 , yi ). Since x ∈ γ(x0 , yi ), d(x, yi ) = d(x0 , yi ) ? d(x0 , x) < R ? d(x0 , x) ≤ ρ(x). Thus using (4.5), λ1 si ≤ ρ(yi ) ≤ d(x, yi ) + ρ(x) ≤ 2ρ(x) ≤ 2λ2 sj , which proves (a). For (b) note that d(yi , yj ) ≤ d(yi , x) + Ksj ≤ ρ(x) + Ksj . From the estimates above, if j ∈ F(i, r), then yj ∈ B(yi , 2(K + λ2 )r), so that B(yj , sj ) ? B(yi , 3λr). Hence c2 C0 r d ≥

j∈F (i,r)

?(Bj ) ≥ |F(i, r)|CV r d ,

proving (c). Corollary 4.5. There exists |F(i)| ≤ c1 log(R/si ).

Proof. The proof follows easily from Lemma 4.4(c). Now let F ? (j) = {i : j ∈ F(i)}, F ? (j, r) = {i : j ∈ F(i), r ≤ si ≤ 2r}. Lemma 4.6. There exists α = α(d, C0 , CV ) > 0 such that for each 1 ≤ j ≤ N we have, for r > R0 ,

′ ′ ?(Bi ) ≤ c?(Bj ) i∈F ? (j,r)

r sj

α

.

Proof. This argument runs along the same lines as the proof of Lemma 5.9 in Jerison (1986). Note ?rst that we can assume that r ≤ 4sj , since if i ∈ F ? (j, r), then by Lemma 4.4(a) we have si ≤ 4sj . Write ?B = {y : d(x0 , y) = R}. Fix j. Using (4.5) we have ρ(yj ) ≤ λ2 sj , so we can choose z ∈ ?B with d(yj , z) ≤ λ2 sj . Set t = (4 + K + 4λ2 )sj and for u > 0, let Λ(u) = B(z, t + 2u) ∩ {x ∈ B : ρ(x) ≤ u}. Suppose that i ∈ F ? (j, r). By Lemma 4.4(b), d(yi , yj ) ≤ (K + λ2 )sj , so that d(yi , z) ≤ (K + 2λ2 )sj and

′ Bi ? B(z, (K + 2λ2 + 4)sj ).

RANDOM WALKS AND PERCOLATION

43

′ ′ ′ By (4.5), ρ(x) ≤ 2λ2 r on Bi , so Bi ? Λ(2λ2 r). Whereas the balls Bi are disjoint,

(4.9)

i∈F ? (j,r)

′ ?(Bi ) ≤ ?(Λ(2λ2 r)).

Now ?x u > 0 and choose a maximal set of points {z1 , . . . , zm } ? ?B such that B(zk , u) are disjoint and d(z, zk ) ≤ t + u for each k. We next show that

m

(4.10)

Λ(u/4) ?

k=1

B(zk , 3u).

Let x ∈ Λ(u/4), so that d(x, z) ≤ t + u/2 and there exists z ′ ∈ ?B with d(x, z ′ ) ≤ u/4. Hence d(z ′ , z) ≤ t + 3 u < t + u. Whereas {z1 , . . . , zm } is max4 imal, we must have d(z ′ , zk ) < 2u for some k. Thus d(x, zk ) < 2u + 1 u < 3u, 4 proving (4.10). For each zj we have d(x0 , zj ) = R by construction. Choose wj on γ(x0 , zj ) such that

2 1 2 u < d(zj , wj ) ≤ 3 u;

this is possible provided 6 < u < R. We have d(wk , wl ) > d(zk , zl ) ? 4 u ≥ 2 u, 3 3 1 so the balls B(wk , 4 u) are disjoint. The choice of wk implies that ρ(wk ) = d(wk , zk ) > 1 u and therefore 2 (4.11) We also have (4.12)

1 B(wk , 4 u) ? Λ(u).

B(wk , 1 u) ∩ Λ(u/4) = ?. 4

3 To check this, if x ∈ B(wk , 1 u), then ρ(x) ≤ d(x, zk ) ≤ 4 u, while d(x, z) ≤ 4 3 d(x, zk ) + d(zk , z) ≤ 4 u + t + u < t + 2u. By (4.10), we deduce that m

(4.13)

?(Λ(u/4)) ≤

k=1

?(B(zk , 3u)) ≤ mC0 (3u)d ,

while by (4.11) and (4.12),

m

(4.14) So, provided (4.15)

?(Λ(u)) ≥ ?(Λ(u/4)) +

k=1 1 4 u ≥ R0 ,

?(B(wk , 1 u)). 4

?(Λ(u)) ≥ ?(Λ(u/4)) + mCV ( 1 u)d ≥ (1 + c1 )?(Λ(u/4)). 4

Note that the constant c1 here depends only on C0 , CV and d.

44

M. T. BARLOW

Now let R0 ≤ r ≤ 4sj . Choose n ∈ Z+ as large as possible so that 4n r ≤ 4sj . Then (4.16) ?(Λ(2λ2 r)) ≤ (1 + c1 )?n ?(Λ(2λ2 4n r)) ≤ (1 + c1 )?n ?(Λ(2t)). Set α = log(1 + c1 )/ log 4. Then (1 + c1 )n ≥ (sj /r)α . We have ?(Λ(2t)) ≤ ′ ?(B(z, 5t)) ≤ C0 (5t)d and also ?(Bj ) ≥ CV sd ≥ ctd . Combining this with j (4.16) and (4.9) completes the proof of the lemma. Proposition 4.7. For each 1 ≤ j ≤ N we have |F(i)|

i∈F ? (j) ′ R ?(Bi ) . ′ ) ≤ c1 log s ?(Bj j

Proof. We can write

∞

F ? (j) =

n=?1

F ? (j, 2?n sj ).

Hence using Corollary 4.5 and Lemma 4.6,

∞ ′ |F(i)|?(Bi ) ≤ i∈F ? (j) n=?1 i∈F ? (j,2?n sj ) ∞ ′ |F(i)|?(Bi )

≤c

n=?1

log(2n R/sj )2?nα ?(Bj ) ≤ c?(Bj ) log(R/sj ).

Set (4.17) ?(y) = R ∧ ρ(y) R

2

,

y ∈ G.

? ? For any set A, let ?(A) = A ? d? and fA = ?(A)?1 A f ? d?. For an edge ? e = {x, y} de?ne ?(e) = ?(x) ∧ ?(y). Note that if e ∈ E(B), then ?(e) ≥ R?2 . ? ? Theorem 4.8. Then (4.18)

B

Let B = B(x0 , R) be very good and let R0 = NB ≤ R1/(d+2) . |?f |2 ? dν. ?

E(B)

? (f (x) ? fB )2 ?(x) d? ≤ c1 R2

Proof. We follow the proof in Salo?-Coste and Stroock (1991), but need some extra care close to the boundary of B. For 1 ≤ i ≤ N set

? Bi =

B(yi , 10CW si ), ′′ Bi ,

1 ≤ i ≤ M, M + 1 ≤ i ≤ N.

RANDOM WALKS AND PERCOLATION

45

? ? Whereas 10CW si ≤ 1 λsi ? 1, we have Bi ? B for each i ≤ M , while Bi ? B 2 by de?nition if M + 1 ≤ i ≤ N . Note that for any ball B(yi , csi ) with c ≤ 1 λ, 2 we have, from (4.17),

(4.19)

?(x) ≤

3λ2 ?(y) 2λ1

for all x, y ∈ B(yi , csi ).

Let Pj be the best constant in the weighted Poincar? inequality e (4.20) ? (f (x) ? fBj )2 ?(x) d? ≤ Pj |?f |2 ? dν. ?

′′ Bj

? E(Bj )

′′ Then for j ≤ M , as each Bj is good, we have, using (4.19), Pj ≤ cCP s2 . If j M < j ≤ N , then, using Corollary 1.5(a), ′′ Pj ≤ 2?(Bj )2 sup

(4.21)

′′ x,y∈Bj

?(x) 2d+2 d ≤ c(C0 λR0 )2 (λR0 )2 ≤ c′ R0 . ?(y)

Fix for the moment a ball Bn . We de?ne a sequence of balls Di = B(wi , ti ), 1 ≤ i ≤ Ln , with D1 = B1 and DLn = Bn , as follows. Suppose D1 , . . . , Dk?1 k?1 ′ have been de?ned. Let zk be the point in γ(x0 , yn )∩ i=1 Di which is furthest ′ ′ ′′ ′′ from x0 . (If Di = Bj , then Di = Bi and Di = Bj .) If zk = yn , then we let Dk = Bn , Ln = k and stop. Suppose that zk = yn . If zk ∈ B(yi , 3si ? 1) for some i, then we take Dk = Bi and continue. (We choose the largest such i if this i is not unique.) Note that Dk must be distinct from D1 , . . . , Dk?1 . Finally, suppose zk = yn and zk ∈ B(yi , 3si ? 1). Then ρ(zk ) < λ2 R0 and / ρ(yn ) < d(yn , zk ) + ρ(zk ) < 2λ2 R0 . Hence sn < 2λ2 λ?1 R0 < ηR0 , so that Bn 1 is a boundary ball. We also have ρ(wk?1 ) ≤ 3tk?1 + ρ(zk ), so that Dk?1 is also a boundary ball. In this case we take Dk = Bn , Ln = k and stop. Note that each Dk is a ball in {Bj : j ∈ F(n)}, so that Ln ≤ |F(n)|. We now show that (4.22)

′′ ′′ ′ ′ Dk?1 ∪ Dk ? Dk?1 ∩ Dk ,

2 ≤ k ≤ Ln .

First, if zk = yn and zk ∈ B(yi , 3si ? 1), then both Dk?1 and Dk are / boundary balls, and d(wk?1 , wk ) ≤ 3tk?1 + λ2 R0 , from which (4.22) follows easily. Now let zk ∈ B(wk , 3tk ? 1). Then d(wk?1 , zk ) < 3tk?1 and so d(wk?1 , wk ) < 3tk?1 + 3tk . Since zk ∈ B(wj , 4tj ) for j = k ? 1, k, by (4.5), λ2 (tk ∨ tk?1 ) ≥ ρ(zk ) ≥ λ1 (tk ∧ tk?1 ). Since λ2 /λ1 < 10/9 this implies (4.22).

46

M. T. BARLOW

? ? ′ Let f1 = f (B1 ). Then

′′ Bn

? (f (x) ? f1 )2 ?(x) d? = ? f (x) ? f1 +

Ln ?1 k=1 2

(4.23)

′′ Bn

? ′′ ? ′′ (f (Dk ) ? f (Dk+1 ))

Ln ?1 i=1

?(x) d?

≤ Ln

′′ Bn

? (f (x) ? f1 )2 ?(x) d? + Ln

? ′′ ? ′′ (f (Dk ) ? f (Dk+1 ))2 ?(Bn ). ? ′′

To control the terms in (4.23) note that ? ? ?(B ′′ )(f (D′′ ) ? f (D ′′ ))2 ?

n k k+1

=

?(Bn ) ? ′′ ′′ ?(Dk ∩ Dk+1 ) ? ′′ ?(Bn ) ? ′′ ′′ ?(Dk ∩ Dk+1 ) ? ′′ ?(Bn ) ? ′′ ?(Dk ) ? ′

′′ ′′ Dk ∩Dk+1

? ′′ ? ′′ (f (Dk ) ? f (Dk+1 ))2 ? d? ? ′′ (f ? f (Dk+1 ))2 ? d? ? ′′ (f ? f (Dk+1 ))2 ? d? .

≤2 ≤2

′′ Dk

? ′′ (f ? f (Dk ))2 ? d? +

′′ Dk+1

?(Bn ) ? ′′ ? ′′ (f ? f (Dk ))2 ? d? + 2 ′′ ?(Dk+1 ) ? ′ Dk

′′ Dk+1

Using (4.19) and Lemma 4.4(a),

′′ ′ ′′ ?(Bn ) ? ′′ ?(yn )2 ?(Bn ) ′ ?(Bn ) ′′ ?(Bn ) ′ ) ≤ c ?(w )2 ?(D ′ ) ≤ c ?(D ′ ) ≤ c ?(D ′ ) . ?(Dk ? k k k k

Combining these estimates we obtain, from (4.23),

′′ Bi

? (f (x) ? f1 )2 ?(x) d? ≤ c|F(i)| ≤ c|F(i)|

(4.24)

′ ?(Bi ) ′ ) Pj ?(Bj j∈F (i)

′ ?(Bi ) ′) ?(Bj j∈F (i)

′′ Bj

? ′′ (f ? f (Bj ))2 ? d? |?f |2 ? d?.

? E(Bj )

Summing inequalities (4.24) gives

B

? (f (x) ? f1 )2 ?(x) d?

N

≤c

i=1 N

′ |F(i)|?(Bi ) j∈F (i)

′ ?(Bj )?1 Pj

? E(Bj )

|?f |2 ? dν ?

=c

j=1 i∈F ? (j) N

|F(i)| R Pj sj

′ ?(Bi ) ′ ) Pj ?(Bj

? E(Bj )

|?f |2 ? dν ?

≤c

j=1

log

? E(Bj )

|?f |2 ? dν. ?

RANDOM WALKS AND PERCOLATION

47

If j ≤ M , then Pj ≤ cs2 CP and so since tp log(R/t) ≤ cRp for t > 0 we have j Pj log(R/sj ) ≤ cCP R2 . If M + 1 ≤ j ≤ N , then R0 ≤ sj ≤ ηR0 and Pj ≤ 2d+2 d+2 cR0 , and so since R0 ≤ R,

2d+2 log(R/sj )Pj ≤ cR0 log(R/R0 ) ≤ cR2 . ? So since any edge in B is contained in at most c of the Bi ,

B

? (f (x) ? f1 )2 ?(x) d? ≤ cCP R2 ≤ c′ CP R2

N j=1

? E(Bj )

|?f |2 ? dν ?

|?f |2 ? dν, ?

E(B)

which completes the proof of the theorem. We can also take ? = 1 in (4.17) and use the same argument to obtain a (strong) Poincar? inequality from the family of weak ones. The condition e 2d on NB is slightly weaker than in Theorem 4.8, since we have Pj ≤ cR0 in (4.21). Lemma 4.9. Then Let B = B(x0 , R) be very good and let R0 = NB ≤ R1/(d+2) . min

a B

(f (x) ? a)2 d? ≤ c1 R2

|?f |2 dν.

E(B)

Remark 4.10. The weight function ? in (4.17) is similar to that in Salo?-Coste and Stroock (1991). Fabes and Stroock (1986) and Stroock and Zheng (1997) used weight functions which are supported on the whole space. In particular, Stroock and Zheng (1997) used ψR (x) = exp(?d(x0 , x)/R), x ∈ Zd ,

and proved a weighted Poincar? inequality of the form e (4.25) min

a Zd

(f ? a)2 ψR d? ≤ c1 (d)R2

Ed

|?f |2 ψR dν.

It is interesting to note that this fails for percolation clusters when d ≥ 3. To see this, ?x a point x0 ∈ C∞ and R ≥ 1 large enough so that B(x0 , R) is good. If we look far enough from x0 we can, Pp -a.s., ?nd a cube Q of side R with Q ? C∞ and such that Q is only connected to the rest of C∞ by one edge {x1 , x2 }. We take x1 ∈ Qc , x2 ∈ Q and write s = d(x0 , x2 ); we can assume s ? R. We have e?(s+dR)/R ≤ ψ ≤ e?s/R on Q.

48

M. T. BARLOW

Let f = ?Q . Then as

ψR ? Rd , ? fR =

Q ψR

ψR

≤ ce?s/R ≤

1 4 ψR d? ≥ 1 e?d Rd e?s/R . 2

and min

a Zd

(f ? a)2 ψR d? =

? (f ? fR )2 ψR d? ≥

1 2

Q

On the other hand, R2 |?f |2 ψ dν = R2 ψ(x2 ) = R2 e?s/R .

Thus (4.25) cannot hold with a constant c1 independent of R. 5. Lower bounds. In this section we use the weighted Poincar? inequality e and the technique of Fabes and Stroock (1986) to prove lower bounds for qt (x, y). We continue with the notation and assumptions of Section 3. Proposition 5.1. Let x0 ∈ G and let B1 = B(x0 , R log R) be a very good ball with NB ≤ R1/(d+2) . Then if x1 ∈ B(x0 , 1 R log R), 2 (5.1) qt (x1 , x2 ) ≥ c1 t?d/2 for x2 ∈ B(x1 , R) and 1 R2 ≤ t ≤ R2 . 8

Proof. Let x3 be such that x1 , x2 ∈ B(x3 , R/2). Write R1 = R log R, let ρ = R/6 and let T = c2 R2 . Let x4 ∈ B(x3 , R/2). We apply Proposition 3.7 ′ 2 to B1 with t = T . Since TB1 = cR1 log R1 ≥ c′ R2 log R, the third condition in (3.14) holds, while the other two are evident. So if t ≤ T , then qt (x4 , x)?(x) = P x4 (Yt ∈ B(x1 , 2R/3)) /

x∈B(x3 ,2R/3)c

(5.2)

≤ P x4 (τ (x4 , R/6) < t) ≤ P x4 (τ (x4 , R/6) < T ) 1 ≤ c exp(c′ R2 /T ) ≤ 2 ,

provided c2 is chosen small enough. We can assume that c2 ≤ 1 . 8 Let B = B(x3 , R), set ρ(x) = d(x, B c ) for x ∈ B, and set ?(y) = Then we have (5.3) c3 Rd ≤ V0 ≤ ?(B) ≤ c4 Rd . Write u(s, x) = us (x) = qs (x4 , x), s ≥ 0 and x ∈ G. Set w(s, y) = wt (x) = log V0 u(s, y), H(t) = H(x4 , t) = V0?1

B

R ∧ ρ(y) R

2

,

y ∈ G and V0 =

x∈B

?(x)?(x).

?wt d?.

RANDOM WALKS AND PERCOLATION

49

Then V0 H ′ (t) = ?

B

?wt d? = ?t

?

B

1 ?ut ?(x) d? = Lut (x)?(x). ut ?t u (x) x∈B t

Hence, writing f = ?B ?/ut , we have

1 V0 H ′ (t) = ? 2 x∈G y∈G

axy (f (x) ? f (y))(ut (x) ? ut (u)).

Now we use the elementary inequality [see Stroock and Zheng (1997)] ? (d ? c)2 1 d c ? , (b ? a) ≥ (c ∧ d)(log b ? log a)2 ? b a 2 2(c ∧ d)

which holds for any strictly positive a, b, c, d. If f (x) > 0 and f (y) > 0, then x, y ∈ B and ?(f (x) ? f (y))(ut (x) ? ut (u)) =? ?(x) ?(y) ? (ut (x) ? ut (u)) ut (x) ut (y)

(?(x) ? ?(y))2 1 . ≥ (?(x) ∧ ?(y))(log ut (x) ? log ut (y))2 ? 2 2(?(x) ∧ ?(y)) If both x ∈ B c and y ∈ B c , then f (x) = f (y) = 0, while if x ∈ B and y ∈ B c , then ?(f (x) ? f (y))(ut (x) ? ut (u)) = ??(x) 1 ? We therefore have (5.4) V0 H ′ (t) ≥ 1 axy (?(x) ∧ ?(y))(wt (x) ? wt (y))2 4 x∈B y∈B ? ? 1 (?(x) ? ?(y))2 axy 4 x∈B y∈B ?(x) ∧ ?(y) 1 ut (y) axy ?(x) 1 ? . 2 x∈B y∈B c ut (x) ut (y) . ut (x)

(5.5) (5.6)

The sums in (5.4), (5.5) and (5.6) are called S1 , S2 and S3 , respectively. To bound S2 note that if x ? y with x, y ∈ B and k = ρ(x) ∧ ρ(y), then k ≥ 1 and (2k + 1)2 9 (?(x) ? ?(y))2 = R?2 ≤ 2. 2 ?(x) ∧ ?(y) k R

50

M. T. BARLOW

So S2 ≥ ? 9 R?2 4

x∈B 9 ?(x) ≥ ? 4 R?2 ?(B) ≥ ?cR?2 V0 .

Also, if x ∈ ?i (B), then ?(x) = R?2 , so that S3 ≥ ?

x∈B y∈B c

axy R?2 ≥ ?R?2

x∈B

?(x) ≥ ?cR?2 V0 .

So the terms S2 and S3 are controlled by bounds of the same size, and we deduce, using Theorem 4.8, H ′ (t) ≥ 1 V0?1 4 (5.7) axy (?(x) ∧ ?(y))(wt (x) ? wt (y))2 ? c5 R?2 (wt (x) ? H(t))2 ?(x)?(x).

x∈B x∈B y∈B ?2 ≥ ?c5 R + c6 R?2 V0?1

Let I = [ 1 T, T ]. We use only 2 we have V0 ut (x) ≤ c7 for t ∈ I. [e2+h , ∞), we have

x∈B

(5.7) for t ∈ I. Note that by Theorem 3.8 Since v → (log v ? h)2 /v is decreasing on

(wt (x) ? H(t))2 ?(x)?(x) (5.8) = (log V0 ut (x) ? H(t))2 ?(x)ut (x)?(x) ut (x) x∈B (log c7 ? H(t))2 ≥ ?(x)V0 ut (x)?(x). c7 2+H(t)

x∈B : V0 ut (x)>e 1 9

Then since ?(x) ≥

x∈B : V0 ut

on B(x1 , 2R/3), ?(x)ut (x)?(x)

(x)>e2+H(t)

(5.9)

?(x)ut (x)?(x) ? ?(x)ut (x)?(x) 2+H(t) x∈B x∈B : V0 ut (x)≤e ≥1 ?(x)V0?1 e2+H(t) ?(x) ut (x)?(x) ? 9 x∈B x∈B(x1 ,2R/3) ≥

1 9

=

1?

x∈B(x1 ,2R/3)c

ut (x)?(x) ? e2+H(t) ≥

1 18

? e2+H(t) ,

where we used (5.2) in the ?nal line. Combining the estimates (5.7), (5.8) and (5.9) we deduce that

1 T H ′ (t) ≥ ?c5 + c8 (log c7 ? H(t))2 ( 18 ? e2+H(t) ). 1 Since (a ? h)2 ≥ 2 h2 if h < ?a, this implies that there exists c9 such that

(5.10)

T H ′ (t) ≥ c10 H(t)2 provided H(t) < ?c9 ,

t ∈ I.

RANDOM WALKS AND PERCOLATION

51

If supt∈I H(t) < ?c9 , then (5.10) implies that H(T ) ≥ ?c11 , while since H(t) + c5 T ?1 t is increasing, if supt∈I H(t) ≥ ?c9 , then H(T ) ≥ ?c9 ? c2 c5 . We therefore deduce that H(T ) ≥ ?c12 and it follows that (5.11) H(t) = H(x4 , t) ≥ ?c13 , V0 q2t (x1 , x2 ) = V0?1

y

T ≤ t ≤ R2 .

To conclude the argument, we have, for x1 , x2 ∈ B(x3 , R/2), t ∈ [T, R2 ], V0 qt (x1 , y)V0 qt (x2 , y)?(dy) V0 qt (x1 , y)V0 qt (x2 , y)?(y)?(dy).

y∈B

≥ V0?1 So, using Jensen’s inequality, log(V0 q2t (x1 , x2 )) ≥ V0?1

y∈B

(log(V0 qt (x1 , y)) + log(V0 qt (x2 , y)))?(y)?(dy)

= H(x1 , t) + H(x2 , t) ≥ ?2c13 . Using (5.3) completes the proof of (5.1). Lemma 5.2. Let x, y ∈ G. Suppose there exist r ≥ 1 and a path x = z0 , . . . , zm = y such that for each i = 0, . . . , m, B(zi , r log r) is very good with d+2 NB(zi ,r log r) ≤ r. Then (5.12) qmr (x, y) ≥ c(mr)?d/2 exp(?c1 m/r).

Proof. This uses a chaining argument similar to that in Proposition 3.7. Choose points x = w0 , w1 , . . . , wk = y along the path {z0 , . . . , zm }, such that d(wi?1 , wi ) < r/3 and 3m/r ≤ k ≤ 4m/r. Let s = mr/k, so that 1 r 2 ≤ 4 s ≤ 1 r 2 . Let Bj = B(wj , r/3). We have d(x′ , y ′ ) < r whenever x′ ∈ Bj?1 and 3 y ′ ∈ Bj . So by Proposition 5.1, qs (x′ , y ′ ) ≥ c2 s?d/2 ≥ c3 ?(Bj )?1 , So P x (Ys ∈ Bj ) ≥ c3 and therefore

k?1

′

x′ ∈ Bj?1 , y ′ ∈ Bj .

P x (Yks = y) ≥ P x (Yjs ∈ Bj , 1 ≤ j ≤ k ? 1, Yks = y) ≥

j=1

c3 c3 s?d/2 .

Theorem 5.3. Let B = B(x0 , R log R) be a very good ball with NB ≤ R1/(d+2) . Let d(x0 , x1 ) ≤ 1 R log R and x, y ∈ B(x1 , R). Then 2 (5.13) provided (5.14) NB

2(2+d)

qt (x, y) ≥ c1 t?d/2 exp(?c2 d(x, y)2 /t), ≤ t ≤ R2

52

M. T. BARLOW

and (5.15)

2+d NB d(x, y) ≤ t.

Proof. Let D = d(x, y). If D 2 /t ≤ 1, then set r = t1/2 and apply Proposition 5.1 to B1 = B(x, r log r). Since D ≤ r ≤ R, B1 ? B and NB1 ≤ NB . So d+2 d+2 NB1 ≤ NB ≤ t1/2 = r, the hypotheses of Proposition 5.1 hold and we deduce that qt (x, y) ≥ ct?d/2 . For the case D2 /t > 1 we use Lemma 5.2. Let x = z0 , . . . , zD = y be a d+2 shortest path between x and y, and let r = t/D. By (5.15), r ≥ NB , while 1/2 ≤ R, so that B = B(z , r log r) ? B and hence N d+2 ≤ N d+2 ≤ r. So r≤t i i Bi B the hypotheses of Lemma 5.2 are satis?ed and we obtain (5.13). Remark. The restriction in (5.15) is weaker than the hypotheses in the upper bound of Theorem 3.8, where we were able to use global upper bounds on qt to restrict to the case when t was close to D 1/2 . The lower bound 2+d argument for cD ≤ t ≤ cDNB requires the existence of a chain of small balls (of size roughly r = t/D) on which the lower bounds of Proposition 5.1 are valid. If r = O(1) so that t ? D, then we can just use a path in the graph and (1.4) to deduce that qt (x, y) ≥ e?ct ? exp(?c′ D2 /t). However, if D = t1?ε , then we need r ? tε ? 1 and elementary bounds are not enough. For a very good ball B, the volume condition or the Poincar? inequality e fails for some subballs of size r < NB . However, these may hold for enough small subballs so that we can still use Lemma 5.2, and in Section 2 it is proved, in the percolation context, that it is possible to ?nd such a chain. Fix CE ≥ 1 and CF > 1. Definition 5.4. A ball B = B(x0 , R1 ) is exceedingly good if:

10(d+2)

1. We have B is very good with NB ≤ R1 . 1/4 2+d 2. For each x1 , x2 ∈ B(x0 , R1 ) with d(x1 , x2 ) ≥ R1 and CE ≤ r ≤ NB there exists a path y1 = z0 , . . . , zk = y2 with the following properties:

2+d (a) Bi = B(zi , r log r) is very good with NBi ≤ r; (b) k ≤ CF d(x, y); 1/4 (c) d(xj , yj ) ≤ R1 , j = 1, 2. 2+d Remark 5.5. If B is very good and NB ≤ r ≤ R/ log R, then taking m = d(x, y) and z0 , . . . , zm to be a shortest path between x1 and x2 , we get a path satisfying 2(a)–(c) above, with yj = xj .

RANDOM WALKS AND PERCOLATION

53

Theorem 5.6. Let B = B(x0 , R log R) be exceedingly good and let x1 , x2 ∈ B(x0 , R). Then there exist constants ci (depending on CE and CF ) such that (5.16) qt (x1 , x2 ) ≥ c1 t?d/2 exp(?c2 d(x1 , x2 )2 /t), R1/2 ∨ d(x, y) ≤ t ≤ R2 .

Proof. Let R1 = R log R and D = d(x1 , x2 ). By Theorem 5.3 it is enough to consider the case when t satis?es the condition in (5.16) but fails (5.14) or 1/3 3(d+2) , (5.14) must hold. So we just need (5.15). Since t ≥ R1/2 ≥ R1 ≥ NB 2+d to consider the case when D ≤ t ≤ NB D. Note that this implies that (5.17) D≥ t R1/2 D2 1/4 ≥ 2(d+2) ≥ 1/5 ≥ R1 . t NB R1

In particular, D2 /t ? log t so that, for this range of t and D, terms of the order t?d/2 can be absorbed into the constant c2 in (5.16). 2+d Let r = t/(2CF D), so that (2CF )?1 ≤ r ≤ (2CF )?1 NB . We have to consider two cases. If 1 ≤ r ≤ CE , then we use (1.4) directly. Set s = t/D, so that c3 ≤ s ≤ c4 , and we obtain qDs ≥ e?cD ≥ exp(?cD2 /t). If CE ≤ r, then we use the fact that B is exceedingly good so that there exists a path z0 , . . . , zk that satis?es conditions 2(a)–(c) of De?nition 5.4. 1 2 We have D ≤ k ≤ CF D, so that (2CF )?1 t ≤ kr ≤ 2 t and k/r ≤ 2CF D 2 /t. Applying Lemma 5.2, (5.18) qkr (z0 , zk ) ≥ c(kr)?d/2 exp(?c5 k/r) ≥ ct?d/2 exp(?cD2 /t).

1/4

By (5.17), D2 /t ≥ R1 ≥ d(x1 , y1 ) ∨ d(x2 , y2 ). Let Di = d(xi , zi ). By (5.17), 1/4 Di ≤ R1 ≤ t/8. Using (1.4), (5.19) qmj (xj , zj ) ≥ exp(?c6 mj ) ≥ exp(?c6 D 2 /t). qt (x0 , x1 ) ≥ qu (x0 , x1 )qt?u (x1 , x1 ) ≥ qD0 (x0 , z0 )qkr (z0 , zk )qD1 (zk , x1 )qt?u (x1 , x1 ). Using the bounds (5.18) and (5.19), and Proposition 5.1 to control the ?nal term, we obtain (5.16). Theorem 5.7. Let d ≥ 2, and let C0 , CV , CP and CW be constants. Let G = (G, E) be an in?nite graph that satis?es (1.1) and let x ∈ G. (a) Suppose that there exists R0 = R0 (x) such that B(x, R) is very good 3(d+2) with NB(x,R) ≤ R for each R ≥ R0 . There exist constants ci = ci (d, C0 , CV , CP , CW ) such that if t satis?es (5.20) t ≥ R0 ,

2/3 3 Let u = D1 + D2 + kr. Then 1 t ≤ u ≤ 4 t. By (1.4), 2

54

M. T. BARLOW

then (5.21) and (5.22) qt (x, y) ≥ c3 t?d/2 exp(?c4 d(x, y)2 /t), d(x, y)3/2 ≤ t. (b) If in addition there exists RE = RE (x) such that B(x, R) is exceedingly good for each R ≥ RE , then the lower bound (5.22) holds (with constants depending in addition on CE and CF ) whenever t ≥ RE and t ≥ d(x, y). Proof. Fix y ∈ G and let D = d(x, y). We need to check that we can ?nd R so that we can apply Theorems 3.8, 5.3 and 5.6. We begin with (a). Let R = t2/3 , so that R ≥ D; by (5.20) we have R ≥ R0 . Let B = B(x, R); 2(2+d) then NB ≤ R. To apply Theorem 3.8 and obtain (5.21) we need (3.18) to hold, but this is clear since t ≥ D 3/2 ≥ D and

2d+1 ≤ NB NB 2(d+2)

qt (x, y) ≤ c1 t?d/2 exp(?c2 d(x, y)2 /t),

d(x, y) ≤ t,

≤ R = t2/3 ≤ t = R3/2 ≤

cR2 . log R

To obtain (5.22), we use Theorem 5.3. Since R ≥ D we have y ∈ B(x, R). Let 3(2+d) R1 = R log R. Since R1 ≥ R0 , B1 = B(x, R1 ) is very good with NB1 ≤ R1 . So NB1

2(2+d)

≤ R log R ≤ R3/2 = t ≤ 1 R2 4

1/3

and (5.14) is veri?ed. Also,

2+d cDNB1 ≤ c′ t2/3 R1

≤ t2/3 R1/2 ≤ t,

so (5.15) holds and we obtain (5.22). (b) We now take R = t, so that R ≥ RE . Then D ≤ t = R and we apply Theorem 5.6 to B(x, R1 ) with R1 = R log R. Condition (5.16) is easily veri?ed and the bound in (5.22) is immediate. We now prove an elliptic Harnack inequality for graphs satisfying the conditions of Theorem 5.3. The approach is the same as in Section 5 of Fabes and Stroock (1986), but we need an additional argument in Theorem 5.11. Lemma 5.8. Let B = B(x0 , R log R) be a very good ball with NB ≤ 1 0 (x, y) be the density R. Let d(x0 , x1 ) ≤ 2 R log R, let B0 = B(x1 , R) and let qt of the process Y killed at the exit time τ of Y from B0 . Then (5.23)

0 qt (x, y) ≥ c1 t?d/2 , 2(d+2)

x, y ∈ B(x1 , 3R/4),

c2 R2 ≤ t ≤ R2 .

RANDOM WALKS AND PERCOLATION

55

Proof. Since this argument is quite standard [see, e.g., Lemma 5.1 of Fabes and Stroock (1986)], we only give a sketch. For x, y ∈ B_0 we have

(5.24) q_t^0(x,y) ≥ q_t(x,y) - E^x 1_{(τ<t)} q_{t-τ}(Y_τ, y) ≥ q_t(x,y) - sup_{0≤s≤t} sup_{z∈∂B_0} q_s(z,y).

Let δ ∈ (0, 1/8), d(x,y) ≤ δR and t = δ^2 R^2. Using the estimates in Theorems 5.3 and 3.8 in (5.24) we obtain, provided t and s = θt satisfy conditions (3.18), (5.14) and (5.15),

    q_t^0(x,y) ≥ t^{-d/2} ( c_1 exp(-c_2) - sup_{0≤θ≤1} c_3 θ^{-d/2} exp(-c_4/(θδ^2)) ).

[If s is too small to satisfy (3.18), we use Lemma 1.1.] Hence if δ is chosen small enough, we obtain

(5.25) q_t^0(x,y) ≥ c_5 t^{-d/2}.

A chaining argument now gives (5.23).

Definition. Write B̄(x,R) = B(x,R) ∪ ∂_e(B(x,R)). A function h : B̄(x,R) → R is harmonic on B(x,R) if Lh(x) = 0 for x ∈ B(x,R). We write Osc(h,A) = sup_A h - inf_A h.

The following oscillation bound follows from (5.23) just as in Lemma 5.2 of Fabes and Stroock (1986).

Lemma 5.9. Let B = B(x_0, R log R) be a very good ball with N_B^{2(d+2)} ≤ R. Let d(x_0,x_1) ≤ (1/2)R log R, B_0 = B(x_1,R), B_1 = B(x_1,R/2) and let h be harmonic in B_0. There exists c_1 ∈ (0,1) such that

    Osc(h, B_1) ≤ (1 - c_1) Osc(h, B_0).

Proof. By a linear transformation we can assume min_{B_0} h = 0, max_{B_0} h = 1 and ∫_{B_1} h dμ ≥ (1/2)μ(B_1). Then if x ∈ B_1, by Lemma 5.8,

    h(x) ≥ ∫_{B_1} q_{R^2}^0(x,y) h(y) μ(dy) ≥ (1/2) c_2 R^{-d} μ(B_1) ≥ c_3.

So Osc(h, B_1) ≤ (1 - c_3).

We also need an intermediate range Harnack inequality.
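In the simplest degenerate setting, the path graph (where a harmonic function is linear), the oscillation inequality of Lemma 5.9 holds exactly with c_1 = 1/2. The following minimal sketch is our own illustration, not part of the paper's argument:

```python
# On the path {0,...,n}, h(i) = i/n is harmonic: h(i) = (h(i-1) + h(i+1)) / 2.
# The oscillation over the concentric half-length "ball" B_1 is exactly half
# the oscillation over the whole interval B_0, i.e. Osc(h, B_1) = (1 - 1/2) Osc(h, B_0).
n = 16
h = [i / n for i in range(n + 1)]


def osc(vals):
    return max(vals) - min(vals)


B0 = h                            # the full ball, centred at n/2 with radius n/2
B1 = h[n // 4: 3 * n // 4 + 1]    # the concentric half ball, radius n/4
assert osc(B0) == 1.0
assert osc(B1) == 0.5 * osc(B0)
```

On an irregular cluster the contraction constant is no longer 1/2, which is exactly why Lemma 5.9 derives it from the heat kernel lower bound (5.23) rather than from any explicit formula.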

56

M. T. BARLOW

Lemma 5.10. Let B = B(x_0, R log R) be a very good ball with N_B^{3(d+2)} ≤ R. Let d(x_0,x_1) ≤ (1/2)R log R, B_0 = B(x_1,R), and let h be nonnegative and harmonic in B_0. Then if d(x_1,y) ≤ R/2 and r = R^{1/2},

    sup_{B(y,r)} h ≤ c_1 inf_{B(y,r)} h.

Proof. Let B_1 = B(x_1, 3R/4). We can assume inf_{B_1} h = 1. The local Harnack inequality for graphs implies that if x ∼ y and x, y ∈ B_1, then h(x) ≤ C_0 h(y). Hence h(x) ≤ exp(c_2 R) for x ∈ B̄_0. We extend h to G by taking h ≡ 1 outside B̄_0. Let τ be the first exit time of Y from B_1. Since h(Y_t) is a martingale on [0,τ], for x ∈ B(x_1, R/2),

    h(x) = E^x h(Y_{t∧τ})
         = E^x 1_{B_1}(Y_t) h(Y_{t∧τ}) + E^x 1_{B_1^c}(Y_t) h(Y_τ)
         = E^x 1_{B_1}(Y_t) h(Y_t) + E^x 1_{B_1}(Y_t) 1_{(τ<t)} (h(Y_τ) - h(Y_t)) + E^x 1_{B_1^c}(Y_t) h(Y_τ)
         ≤ ∫_{B_1} q_t(x,y) h(y) dμ(y) + exp(c_2 R) P^x(τ < t).

Using Proposition 3.7, the final term above is bounded by c_3 exp(c_2 R - c_4 R^2/t) ≤ c_3 if t ≤ R c_4/c_2. Now let d(z,x) ≤ R^{1/2} and s = λt with λ > 1. Then if λt ≤ R c_4/c_2,

    h(z) ≥ E^z 1_{B_1}(Y_s) h(Y_s) - E^z 1_{(τ<s)} h(Y_s)
         ≥ ∫_{B_1} q_s(z,y) h(y) dμ(y) - exp(c_2 R) P^z(τ < s)
         ≥ ∫_{B_1} q_s(z,y) h(y) dμ(y) - c_3.

Also, if u = R^2, by Lemma 5.8,

    h(z) ≥ ∫_{B_1} q_u^0(z,y) h(y) dμ(y) ≥ ∫_{B_1} c_5 u^{-d/2} h(y) dμ(y).

Hence, adding the two lower bounds for h(z),

    2h(z) ≥ ∫_{B_1} (c_5 R^{-d} + q_s(z,y)) h(y) dμ(y) - c_3.

Using Lemma 1.1, and Theorems 3.8 and 5.3, we can choose λ so that

    c_5 R^{-d} + q_s(z,y) ≥ c_6 q_t(x,y),  y ∈ B_1.

It follows that 2h(z) ≥ c_6(h(x) - c_3) - c_3, so that [as h(z) ≥ 1] we have c_6 h(x) ≤ (2 + c_3(1 + c_6)) h(z).
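The a priori bound h(x) ≤ exp(c_2 R) used above is just the local Harnack inequality iterated along a path. Restating that step (with C_0 and c as in the proof, and a path of length at most cR in B̄_0 joining x to a point z with h(z) = inf_{B_1} h = 1):

```latex
h(x) \;\le\; C_0^{\,cR}\, h(z) \;=\; \exp\bigl((c \log C_0)\,R\bigr) \;=:\; \exp(c_2 R).
```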

Theorem 5.11. Let B = B(x_0, R log R) be very good, with N_B^{4(d+2)} ≤ R. Then if d(x_0,x_1) ≤ (1/3)R log R, B_0 = B(x_1,R), and h : B̄_0 → R is nonnegative and harmonic in B_0,

    sup_{B(x_1,R/2)} h ≤ c_1 inf_{B(x_1,R/2)} h.

Proof. We begin in the same way as in Fabes and Stroock (1986). Let α_3 = 1/4. From Lemma 5.9 it follows that there exists A such that

(5.26) Osc(h, B(y,r)) ≤ (2e^d)^{-1} Osc(h, B(y,Ar))  if R^{α_3} ≤ r, Ar ≤ R/8.

We normalize h so that min_{B(x_1,R/4)} h = 1. Let x_2 ∈ B(x_1,R/4) satisfy h(x_2) = 1. By Lemma 5.8, if y ∈ B(x_1, 3R/4) and B(y,s) is good,

    1 = h(x_2) ≥ c_2 ∫_{B(y,s)} R^{-d} h dμ ≥ c_3 (s/R)^d inf_{B(y,s)} h.

Thus

(5.27) inf_{B(y,s)} h ≤ c_3^{-1} R^d/s^d.

Now let M_r = 2c_3^{-1} e^{rd} and a_r = R e^{-r}. Suppose that there exists y_r ∈ B(x_1,R/2) with h(y_r) ≥ M_r. Then, by (5.27), Osc(h, B(y_r,a_r)) ≥ c_3^{-1} e^{rd}. So, provided

(5.28) a_r ≥ R^{α_3} and A a_r ≤ R/8,

it follows that Osc(h, B(y_r, A a_r)) ≥ 2e^d c_3^{-1} e^{rd} = M_{r+1}. Hence there exists y_{r+1} with d(y_r, y_{r+1}) ≤ A a_r and h(y_{r+1}) ≥ M_{r+1}.

Choose r_0 so that Σ_{r=r_0}^∞ A a_r ≤ R/8. Then if sup_{B(x_1,R/4)} h ≥ M_{r_0}, the argument above implies that we can construct a sequence y_j, r_0 ≤ j ≤ k, with y_j ∈ B(x_1,R/2) and h(y_j) ≥ M_j. Here k is the largest r such that (5.28) holds. We have M_k = 2c_3^{-1} e^{kd} = c R^d a_k^{-d} ≥ c_4 R^{d(1-α_3)}. The local Harnack inequality in Lemma 5.10 implies that

(5.29) inf_{B(y_k,R^{1/2})} h ≥ c_5 c_4 R^{d(1-α_3)},

while (5.27) implies that

(5.30) inf_{B(y_k,R^{1/2})} h ≤ c_3^{-1} R^{d/2}.

Since α_3 < 1/2 this gives a contradiction if R ≥ c_6. So we deduce that

    sup_{B(x_1,R/4)} h ≤ M_{r_0} inf_{B(x_1,R/4)} h.

The Harnack inequality for B(x_1,R/2) ⊂ B(x_1,R) now follows by an easy chaining argument.
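As an illustration only (not part of the argument), the shape of the conclusion of Theorem 5.11 can be checked numerically in the simplest degenerate case, the fully open lattice. The sketch below, our own construction with hypothetical names, relaxes a function to be harmonic inside a box (each interior value equal to the average of its four neighbours) with nonnegative boundary data, then compares sup and inf over a concentric inner box:

```python
# Toy numerical check of a Harnack-type bound on an n x n grid (all bonds open).
# h is made harmonic in the interior (h = average of its 4 neighbours) by
# Gauss-Seidel sweeps; the boundary data is nonnegative and peaked on one side.
def harmonic_extension(n, boundary, sweeps=5000):
    h = [[0.0] * n for _ in range(n)]
    for (x, y), v in boundary.items():
        h[x][y] = v
    for _ in range(sweeps):
        for x in range(1, n - 1):
            for y in range(1, n - 1):
                h[x][y] = 0.25 * (h[x - 1][y] + h[x + 1][y]
                                  + h[x][y - 1] + h[x][y + 1])
    return h


n = 11
boundary = {(0, k): 1.0 for k in range(n)}            # one "hot" side
boundary.update({(n - 1, k): 0.0 for k in range(n)})  # other sides cold
boundary.update({(k, 0): 0.0 for k in range(1, n - 1)})
boundary.update({(k, n - 1): 0.0 for k in range(1, n - 1)})
h = harmonic_extension(n, boundary)

inner = [h[x][y] for x in range(4, 7) for y in range(4, 7)]  # inner box
assert min(inner) > 0                 # positivity propagates inside
assert max(inner) <= 10 * min(inner)  # sup <= c * inf on the inner box
```

The constant 10 here is ad hoc for this one box; the point of Theorem 5.11 is that on very good balls such a constant can be taken independent of R, which no single finite experiment can verify.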


6. Random walks and percolation. In this section we tie together the results of Section 2 and Sections 3–5 and prove Theorems 1–5. We recall the notation for bond percolation: We take Ω = {0,1}^{E_d}, write η_e, e ∈ E_d, for the coordinate maps and take P_p to be the probability on Ω which makes the η_e i.i.d. Bernoulli r.v. with mean p. We assume that p > p_c, so that there exists Ω_0 with P_p(Ω_0) = 1 such that for ω ∈ Ω_0 there is a unique infinite cluster C_∞(ω). As in the Introduction we take Y to be the Markov process with generator L_ω given by (0.1), and write q_t^ω(x,y) for the transition density of Y as given by (0.2). Write (P_ω^x, x ∈ C_∞) for the probability law of Y.

Proposition 6.1. Let p > p_c. There exist Ω_1 ⊂ Ω with P_p(Ω_1) = 1 and S_x, x ∈ Z^d, such that S_x(ω) < ∞ for each ω ∈ Ω_1, x ∈ C_∞(ω). There exist constants c_i = c_i(d,p) such that for x, y ∈ C_∞(ω) and t ≥ 1 with

(6.1) S_x(ω) ∨ d_ω(x,y) ≤ t,

the transition density q_t^ω(x,y) of Y satisfies

(6.2) c_1 t^{-d/2} exp(-c_2 d_ω(x,y)^2/t) ≤ q_t^ω(x,y) ≤ c_3 t^{-d/2} exp(-c_4 d_ω(x,y)^2/t).

Proof. Let the constants C_0, C_V, C_P, C_W, C_E and C_F (depending on d and p) be as in Section 2 and, as in Section 2, let α_2^{-1} = 11(d+2). Let (N_x, x ∈ Z^d) be as in Lemma 2.24 and let Ω_1 = {ω : N_x(ω) < ∞ for all x}. Let ω ∈ Ω_1 and x ∈ C_∞(ω). We set S_x = R_E(x) = N_x and check the hypotheses of Theorem 5.7. Let R ≥ R_E, B = B_ω(x,R) ⊂ Q = Q(x,R) and n = 2R = s(Q). Since ω ∈ L(Q) ⊂ D(Q,α_2) ∩ H(Q,α_2), applying Theorem 2.18(c) we deduce that B is very good with N_B ≤ C_F n^{α_2} ≤ R^{1/(10(d+2))}.

Now let C_E ≤ r ≤ N_B^{d+2} and let x_1, x_2 ∈ B with d_ω(x_1,x_2) ≥ R^{1/4}. Since D(Q,α_2) holds, d_ω(x_1,x_2) ≥ (1/3)n^{1/4}. Choose m so that m/16 = ⌈r log r⌉ and apply Theorem 2.18(b) to deduce that there exists a path x'_1 = z_0, ..., z_j = x'_2 that satisfies condition (b) of Theorem 2.23. We need to verify (a)–(c) of Definition 5.4. Part (a) holds since B_i = B_ω(z_i, r log r) = B_ω(z_i, m/16) is very good and N_{B_i} ≤ m^{1/(d+4)} < r^{1/(d+2)}. For (b) we have j ≤ c_{2.23.3}|x_1 - x_2|_∞ ≤ d c_{2.23.3} d_ω(x_1,x_2). Part (c) is easily verified, as d_ω(x_i, x'_i) ≤ (1/3)n^{1/4} < R^{1/4}. Thus B is exceedingly good. So we can apply Theorem 5.7 to deduce that the bound (6.2) holds for t ≥ R_E and y such that d(x,y) ≤ t.

Proof of Theorem 1. Given Proposition 6.1, all that remains is to replace the chemical distance d_ω(x,y) by |x-y|_1. Since |x-y|_1 ≤ d_ω(x,y), the upper bound in (0.4) is immediate. For the lower bound, we take S_x as in Proposition 6.1, and let x, y ∈ C_∞ and t ≥ S_x with |x-y|_1 ≤ t. Choose the smallest cube Q of side n that contains x and y and with n ≥ S_x. Since Q is very good, by Proposition 2.17(d) we have d_ω(x,y) ≤ C_F((n^{α_2} + 1) ∨ |x-y|_∞).


If |x-y|_∞ ≥ 1 + n^{α_2}, then it follows that d_ω(x,y) ≤ c|x-y|_∞, so the lower bound in (0.4) follows. If |x-y|_∞ < 1 + n^{α_2}, then d_ω(x,y) ≤ c n^{α_2}. So, because t ≥ S_x, both d_ω(x,y)^2/t and |x-y|_1^2/t are less than 1, and again the lower bound in (0.4) holds.

Proof of Theorem 2. Fix x, y ∈ Z^d, let D = |x-y|_1 and fix t ≥ D. Write A = {x, y ∈ C_∞}. Since c_5 ≤ P_p(x, y ∈ C_∞) ≤ 1, it is enough to prove (0.6) for E_p(q_t^ω(x,y)1_A). Let S_x be as in the proof of Theorem 1 and let n = t/2. Then

(6.3) E_p(q_t^ω(x,y)1_A) = E_p(q_t^ω(x,y)1_A 1_{(S_x<n)}) + E_p(q_t^ω(x,y)1_A 1_{(S_x≥n)}).

By the proof of Theorem 1, if ω ∈ A and S_x(ω) ≤ n, then q_t^ω(x,y) satisfies the bounds in (0.4). So the lower bound in (0.6) is immediate, since

(6.4) E_p(q_t^ω(x,y)1_A) ≥ E_p(q_t^ω(x,y)1_A 1_{(S_x<n)})
      ≥ c_6 exp(-c_7 D^2/t)(P_p(A) - P_p(A ∩ {S_x ≥ n}))
      ≥ c_6 exp(-c_7 D^2/t)(c_5 - c_8 exp(-t^{α_2 β})).

So if t ≥ c_9, then the final factor in (6.4) is greater than c_5/2 and we obtain (0.6). If t < c_9, then with probability at least c_5 p^D there is a path of length D in C_∞ that joins x and y, and using (1.4) we deduce that on this event q_t^ω(x,y) ≥ c^D ≥ c'. So (0.6) holds in this case also.

To prove the upper bound, recall that by Lemma 1.1 we always have (when t ≥ D)

    q_t^ω(x,y) ≤ c exp(-2c_{10} d_ω(x,y)^2/t) ≤ c exp(-2c_{10} D^2/t).

If exp(-c_{10} D^2/t) ≤ t^{-d/2}, then the upper bound in (0.6) is immediate. If not, then by Theorem 1 we have

    E_p(q_t^ω(x,y)1_A 1_{(S_x<n)}) ≤ c_{11} exp(-c_{12} D^2/t),

so it remains to control the second term in (6.3). We have, provided t ≥ c_{13},

    E_p(q_t^ω(x,y)1_A 1_{(S_x≥n)}) ≤ P_p(S_x ≥ n) ≤ c exp(-c t^{α_2 β}) ≤ t^{-d} ≤ t^{-d/2} exp(-c_{10} D^2/t).

If t ≤ c_{13}, then (as D ≤ t) we obtain the upper bound in (0.6) by taking the constant c_2 large enough.

Proof of Theorem 3. The proof follows easily from Theorem 5.11 and Proposition 6.1.

Proof of Theorem 4. (a) This is a well-known consequence of the Harnack inequality. Let h : C_∞ → (0,∞) be a global harmonic function. Replacing h by ch if necessary, we can assume h ≥ 1 everywhere and that there


exists x_0 with h(x_0) < 2. Applying Theorem 3 to B_ω(x_0,R) ⊂ B_ω(x_0,2R) ⊂ C_∞, when R is large enough we deduce that sup_{B_ω(x_0,R)} h ≤ 2c_1, so that h is bounded.

Let B_n = B(x_0, 2^n R). Then Theorem 3, applied to the nonnegative harmonic function h - inf_{B_{n+1}} h, implies the oscillation inequality Osc(h,B_n) ≤ (1 - c_1^{-1}) Osc(h,B_{n+1}). By iterating we deduce that Osc(h,B_n) ≥ (1 - c_1^{-1})^{-n} Osc(h,B_0), and since h is bounded, this implies that Osc(h,B_0) = 0. Thus h is constant on any large ball, and so is constant.

(b) This is also standard [Lemma 5.2 of Fabes and Stroock (1986)]. Lemma 5.8 gives lower bounds on the transition probability q_t^{0,ω}(x,y) for Y killed outside a ball B(x_0,R), for all sufficiently large R. Let F be an event in the tail field and set f(s,x) = P_ω(F | Y_s = x). Then 0 ≤ f ≤ 1 and f satisfies

(6.5) f(s,x) = ∫_{C_∞} q_{t-s}^ω(x,y) f(t,y) μ(dy),  s < t.

Fix x_0 ∈ C_∞, t_0 ≥ 0 and set

    A(R) = B(x_0, (1/2)R) × [t_0, t_0 + R^2].

Let g(s,x) satisfy (6.5) with min_{A(2R)} g = 0, max_{A(2R)} g = 1 and

    ∫_{B(x_0,R/2)} g(t_0 + 4R^2, y) μ(dy) ≥ (1/2) μ(B(x_0, R/2)).

Then if (s,x) ∈ A(R),

    g(s,x) ≥ ∫_{B(x_0,R/2)} q_{t_0+4R^2-s}^{0,ω}(x,y) g(t_0 + 4R^2, y) μ(dy)
           ≥ ∫_{B(x_0,R/2)} c_2 R^{-d} g(t_0 + 4R^2, y) μ(dy)
           ≥ (1/2) c_2 R^{-d} μ(B(x_0, (1/2)R)) ≥ c_3.

Hence there exists δ > 0 such that Osc(f, A(R)) ≤ (1 - δ) Osc(f, A(2R)), and by iterating it follows that f is constant.

Proof of Theorem 5. The proof is immediate on integrating the bounds (0.4), and using (1.6) to control q_t(x,y) for |x-y|_1 ≥ t.

REFERENCES

Antal, P. and Pisztora, A. (1996). On the chemical distance for supercritical Bernoulli percolation. Ann. Probab. 24 1036–1048. MR1404543
Barlow, M. T. and Bass, R. F. (1989). The construction of Brownian motion on the Sierpinski carpet. Ann. Inst. H. Poincaré Probab. Statist. 25 225–257. MR1023950
Barlow, M. T. and Bass, R. F. (1992). Transition densities for Brownian motion on the Sierpinski carpet. Probab. Theory Related Fields 91 307–330. MR1151799
Barlow, M. T. and Bass, R. F. (2004). Stability of parabolic Harnack inequalities. Trans. Amer. Math. Soc. 356 1501–1533. MR2034316
Bass, R. F. (2002). On Aronsen's upper bounds for heat kernels. Bull. London Math. Soc. 34 415–419. MR1897420
Benjamini, I. and Mossel, E. (2003). On the mixing time of simple random walk on the supercritical percolation cluster. Probab. Theory Related Fields 125 408–420. MR1967022
Benjamini, I., Lyons, R. and Schramm, O. (1999). Percolation perturbations in potential theory and random walks. In Random Walks and Discrete Potential Theory 56–84 (M. Picardello and W. Woess, eds.). Cambridge Univ. Press. MR1802426
Carlen, E. A., Kusuoka, S. and Stroock, D. W. (1987). Upper bounds for symmetric Markov transition functions. Ann. Inst. H. Poincaré Probab. Statist. 23 245–287. MR898496
Coulhon, T. and Grigor'yan, A. (2003). Pointwise estimates for transition probabilities of random walks on infinite graphs. In Fractals (P. Grabner and W. Woess, eds.) 119–134. Birkhäuser, Boston. MR2091701
Couronné, O. and Messikh, R. J. (2003). Surface order large deviations for 2D FK-percolation and Potts models. Preprint. MR2078538
Davies, E. B. (1993). Large deviations for heat kernels on graphs. J. London Math. Soc. (2) 47 65–72. MR1200978
De Gennes, P. G. (1976). La percolation: un concept unificateur. La Recherche 7 919–927.
Delmotte, T. (1999). Parabolic Harnack inequality and estimates of Markov chains on graphs. Rev. Mat. Iberoamericana 15 181–232. MR1681641
De Masi, A., Ferrari, P. A., Goldstein, S. and Wick, W. D. (1989). An invariance principle for reversible Markov processes. Applications to random motions in random environments. J. Statist. Phys. 55 787–855. MR1003538
Deuschel, J.-D. and Pisztora, A. (1996). Surface order large deviations for high-density percolation. Probab. Theory Related Fields 104 467–482. MR1384041
Fabes, E. B. and Stroock, D. W. (1986). A new proof of Moser's parabolic Harnack inequality via the old ideas of Nash. Arch. Rational Mech. Anal. 96 327–338. MR855753
Grigor'yan, A. A. (1992). Heat equation on a noncompact Riemannian manifold. Math. USSR-Sb. 72 47–77. MR1098839
Grimmett, G. R. (1999). Percolation, 2nd ed. Springer, Berlin. MR1707339
Grimmett, G. R., Kesten, H. and Zhang, Y. (1993). Random walk on the infinite cluster of the percolation model. Probab. Theory Related Fields 96 33–44. MR1222363
Heicklen, D. and Hoffman, C. (2000). Return probabilities of a simple random walk on percolation clusters. Preprint.
Jerison, D. (1986). The weighted Poincaré inequality for vector fields satisfying Hörmander's condition. Duke Math. J. 53 503–523. MR850547
Kaimanovitch, V. A. (1990). Boundary theory and entropy of random walks in random environment. In Probability Theory and Mathematical Statistics 573–579. Mokslas, Vilnius.
Kesten, H. (1986a). The incipient infinite cluster in two-dimensional percolation. Probab. Theory Related Fields 73 369–394. MR859839
Kesten, H. (1986b). Subdiffusive behavior of random walks on a random cluster. Ann. Inst. H. Poincaré Probab. Statist. 22 425–487. MR871905
Kusuoka, S. and Zhou, X. Y. (1992). Dirichlet form on fractals: Poincaré constant and resistance. Probab. Theory Related Fields 93 169–196. MR1176724
Liggett, T. M., Schonmann, R. H. and Stacey, A. M. (1997). Domination by product measures. Ann. Probab. 25 71–95. MR1428500
Mathieu, P. and Remy, E. (2004). Isoperimetry and heat kernel decay on percolation clusters. Ann. Probab. 32 100–128. MR2040777
Nash, J. (1958). Continuity of solutions of parabolic and elliptic equations. Amer. J. Math. 80 931–954. MR100158
Penrose, M. D. and Pisztora, A. (1996). Large deviations for discrete and continuous percolation. Adv. in Appl. Probab. 28 29–52. MR1372330
Pisztora, A. (1996). Surface order large deviations for Ising, Potts and percolation models. Probab. Theory Related Fields 104 427–466. MR1384040
Saloff-Coste, L. (1992). A note on Poincaré, Sobolev, and Harnack inequalities. Internat. Math. Res. Notices 2 27–38. MR1150597
Saloff-Coste, L. (1997). Lectures on finite Markov chains. Lectures on Probability Theory and Statistics. École d'Été de Probabilités de Saint-Flour XXVI. Lecture Notes in Math. 1665 301–408. Springer, Berlin. MR1490046
Saloff-Coste, L. and Stroock, D. W. (1991). Opérateurs uniformément sous-elliptiques sur les groupes de Lie. J. Funct. Anal. 98 97–121. MR1111195
Sidoravicius, V. and Sznitman, A.-S. (2003). Quenched invariance principles for walks on clusters of percolation or among random conductances. Preprint. MR2063376
Stroock, D. W. and Zheng, W. (1997). Markov chain approximations to symmetric diffusions. Ann. Inst. H. Poincaré Probab. Statist. 33 619–649. MR1473568
Thomassen, C. (1992). Isoperimetric inequalities and transient random walks on graphs. Ann. Probab. 20 1592–1600. MR1175279

Department of Mathematics
University of British Columbia
Vancouver, British Columbia
Canada V6T 1Z2
e-mail: barlow@math.ubc.ca
url: www.math.ubc.ca/~barlow
