
Strong Law of Large Numbers of the Offspring Empirical Measure for Markov Chains Indexed by Homogeneous Tree
ISRN Applied Mathematics
Volume 2012 (2012), Article ID 536530, 9 pages
http://dx.doi.org/10.5402/2012/536530
Research Article

Huilin Huang

College of Mathematics and Information Science, Wenzhou University, Zhejiang 325035, China

Received 24 December 2011; Accepted 26 January 2012

Academic Editors: K. Karamanos and E. Yee

Copyright © 2012 Huilin Huang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We study the limit law of the offspring empirical measure for Markov chains indexed by a homogeneous tree and prove a strong law of large numbers with almost everywhere convergence. We then prove a Shannon-McMillan theorem, also with almost everywhere convergence.

1. Introduction

A tree is a graph $G=\{T,E\}$ that is connected and contains no circuits, where $T$ and $E$ denote the vertex set and the edge set, respectively. Given any two vertices $\alpha\neq\beta\in T$, let $\overline{\alpha\beta}$ be the unique path connecting $\alpha$ and $\beta$. Define the graph distance $d(\alpha,\beta)$ to be the number of edges contained in the path $\overline{\alpha\beta}$.

Let $G$ be an infinite tree with root $0$. The set of all vertices at distance $n$ from the root is called the $n$th generation of $T$ and is denoted by $L_n$. We denote by $T^{(n)}$ the union of the first $n$ generations of $T$. For each vertex $t$, there is a unique path from $0$ to $t$, and we write $|t|$ for the number of edges on this path. We denote the first predecessor of $t$ by $1_t$, the second predecessor of $t$ by $2_t$, and the $n$th predecessor of $t$ by $n_t$. The degree of a vertex is the number of its neighbors. If the degree sequence of a tree is uniformly bounded, we call it a uniformly bounded tree. Let $d$ be a positive integer. If every vertex of the tree has $d$ neighbors in the next generation, we call it a Cayley tree and denote it by $T_{C,d}$. Thus on a Cayley tree every vertex has degree $d+1$, except the root, which has degree $d$. For any two vertices $s$ and $t$ of the tree $T$, write $s\le t$ if $s$ is on the unique path from the root $0$ to $t$. We denote by $s\wedge t$ the vertex farthest from $0$ satisfying $s\wedge t\le s$ and $s\wedge t\le t$. For a subset $A\subseteq T$, let $X^A=\{X_t,\ t\in A\}$ and denote by $|A|$ the number of vertices of $A$.
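For orientation (this count is not written out in the paper, but it follows directly from the definition of $T_{C,d}$), the generation sizes of the Cayley tree are
$$|L_n|=d^{\,n},\qquad |T^{(n)}|=\sum_{k=0}^{n}d^{\,k}=\frac{d^{\,n+1}-1}{d-1}\quad(d\ge 2),$$
so the normalizations by $|T^{(n)}|$ used below are averages over roughly $d^{\,n}$ vertices.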

Definition 1.1 (see [1]). Let $G$ be an infinite Cayley tree $T_{C,d}$, $S$ a finite state space, and $\{X_t,\ t\in T\}$ a collection of $S$-valued random variables defined on the probability space $(\Omega,\mathcal{F},\mathbf{P})$. Let
$$p=\{p(x),\ x\in S\} \tag{1.1}$$
be a distribution on $S$ and
$$P=(P(y\mid x)),\quad x,y\in S, \tag{1.2}$$
be a stochastic matrix on $S^2$. If for any vertex $t$,
$$\mathbf{P}\bigl(X_t=y\mid X_{1_t}=x\ \text{and}\ X_s\ \text{for}\ t\wedge s\le 1_t\bigr)=\mathbf{P}\bigl(X_t=y\mid X_{1_t}=x\bigr)=P(y\mid x)\quad\forall x,y\in S,$$
$$\mathbf{P}\bigl(X_0=x\bigr)=p(x)\quad\forall x\in S, \tag{1.3}$$
then $\{X_t,\ t\in T\}$ will be called an $S$-valued Markov chain indexed by the infinite tree $G$ with initial distribution (1.1) and transition matrix (1.2), or a tree-indexed Markov chain with state space $S$. Furthermore, if the transition matrix $P$ is ergodic, then we call $\{X_t,\ t\in T\}$ an ergodic Markov chain indexed by the infinite tree $T$.

The above definition extends the definition of Markov chain fields on trees (see [1, page 456] and [2]). In this paper, we always suppose that the tree-indexed Markov chain is ergodic.

The subject of tree-indexed processes is rather young. Benjamini and Peres [3] introduced the notion of tree-indexed Markov chains and studied recurrence and ray-recurrence for them. Berger and Ye [4] studied the existence of the entropy rate for some stationary random fields on a homogeneous tree. Ye and Berger (see [5, 6]), by using Pemantle's result [7] and a combinatorial approach, studied the Shannon-McMillan theorem with convergence in probability for a PPG-invariant and ergodic random field on a homogeneous tree. Yang and Liu [8] studied a strong law of large numbers for the frequency of occurrence of states for Markov chain fields on a homogeneous tree (a particular case of tree-indexed Markov chains and PPG-invariant random fields). Takacs (see [9]) studied the strong law of large numbers for univariate functions of finite Markov chains indexed by an infinite tree with uniformly bounded degree. Subsequently, Huang and Yang (see [10]) studied the Shannon-McMillan theorem for finite homogeneous Markov chains indexed by a uniformly bounded infinite tree. Dembo et al. (see [11]) showed that a large deviation principle holds for the empirical offspring measure of Markov chains on random trees and gave the explicit rate function, which is defined in terms of a specific relative entropy (see [12]) and Cramér's rate function.

In this paper, we study the strong law of large numbers for the offspring empirical measure and the Shannon-McMillan theorem, both with a.e. convergence, for Markov chain fields on the tree $T_{C,d}$, by using a method similar to that of [10].

2. Statements of the Results

For every vertex $t\in T$, the random vector of offspring states is defined as
$$\mathbf{C}_t=\bigl(X_{1(t)},X_{2(t)},\ldots,X_{d(t)}\bigr)\in S^d, \tag{2.1}$$
where $1(t),2(t),\ldots,d(t)$ denote the $d$ offspring of $t$, that is, its neighbors in the next generation.

Let $\mathbf{c}=(c_1,c_2,\ldots,c_d)$ be a $d$-dimensional vector in $S^d$.

Now we also let the distribution (1.1) serve as the initial distribution. Define the offspring transition kernel $Q$ from $S$ to $S^d$. We define the law $\mathbf{P}$ of a tree-indexed process $X$ by the following rules.
(i) The state of the root random variable $X_0$ is determined by the distribution (1.1).
(ii) For every vertex $t\in T$ with state $x$, the offspring states are given, independently of everything else, by the offspring law $Q(\cdot\mid x)$ on $S^d$, where
$$Q(\mathbf{c}\mid x):=Q\bigl(\mathbf{C}_t=(c_1,c_2,\ldots,c_d)\mid X_t=x\bigr)=\prod_{i=1}^{d}P(c_i\mid x). \tag{2.2}$$
Here the last equation holds because of the property of conditional independence.
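As a concrete illustration (this example is not in the original text), take $S=\{0,1\}$ and $d=2$. For a parent in state $x$, the offspring law of (2.2) gives
$$Q\bigl((0,1)\mid x\bigr)=P(0\mid x)\,P(1\mid x),\qquad \sum_{\mathbf{c}\in S^2}Q(\mathbf{c}\mid x)=\Bigl(\sum_{y\in S}P(y\mid x)\Bigr)^{2}=1,$$
so $Q(\cdot\mid x)$ is indeed a probability distribution on $S^d$.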

For every finite $n\in\mathbf{N}$, let $\{X_t,\ t\in T\}$ be an $S$-valued Markov chain indexed by the infinite tree $T$. We define the offspring empirical measure
$$L_n(x,\mathbf{c})=\frac{\sum_{t\in T^{(n)}}\mathbf{I}\{(X_t,\mathbf{C}_t)=(x,\mathbf{c})\}}{|T^{(n)}|}\qquad\forall (x,\mathbf{c})\in S\times S^d. \tag{2.3}$$
For any state $x\in S$, the empirical measure $S_n(x)$ is defined as
$$S_n(x)=\frac{\sum_{t\in T^{(n)}}\mathbf{I}\{X_t=x\}}{|T^{(n)}|}\qquad\forall x\in S, \tag{2.4}$$
where $\mathbf{I}\{\cdot\}$ denotes the indicator function as usual and $\mathbf{c}=(c_1,c_2,\ldots,c_d)$.
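A remark not spelled out in the original, but immediate from (2.3) and (2.4): summing the offspring empirical measure over all offspring vectors recovers the empirical measure of states,
$$\sum_{\mathbf{c}\in S^d}L_n(x,\mathbf{c})=\frac{\sum_{t\in T^{(n)}}\mathbf{I}\{X_t=x\}}{|T^{(n)}|}=S_n(x)\qquad\forall x\in S;$$
the two quantities are linked again in Corollary 3.3 below.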

In the rest of this paper, we consider the limit law of the random sequence $\{L_n(x,\mathbf{c}),\ n\ge 1\}$ defined above.

Theorem 2.1. Let $G$ be a Cayley tree $T_{C,d}$, $S$ a finite state space, and $\{X_t,\ t\in T\}$ a tree-indexed Markov chain with initial distribution (1.1) and ergodic transition matrix $P$. Let $L_n(x,\mathbf{c})$ be defined as in (2.3). Then one has
$$\lim_{n\to\infty}L_n(x,\mathbf{c})=\pi(x)\,Q(\mathbf{c}\mid x)\quad\text{a.e.}, \tag{2.5}$$
where $\pi$ is the stationary distribution of the ergodic matrix $P$, that is, $\pi=\pi P$ and $\sum_{x\in S}\pi(x)=1$.
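The following is a minimal simulation sketch, not part of the original paper, illustrating Theorem 2.1: it samples the tree-indexed chain on the first $n+1$ generations of $T_{C,d}$ from the initial distribution $p$ and transition matrix $P$ of Definition 1.1, computes $L_n(x,\mathbf{c})$ of (2.3) and $S_n(x)$ of (2.4), and compares $L_n(x,\mathbf{c})$ with the claimed limit $\pi(x)Q(\mathbf{c}\mid x)$. All names (simulate_L_n, the particular $p$, $P$, $d$, $n$) are illustrative choices, not the author's.

import itertools
import numpy as np

def simulate_L_n(p, P, d, n, rng):
    """Sample X on generations 0..n+1 of T_{C,d}; return L_n(x,c) and S_n(x)."""
    S = len(p)
    gens = [np.array([rng.choice(S, p=p)])]          # generation 0: the root, law p
    for _ in range(n + 1):                           # generation n+1 supplies the offspring of L_n
        parents = gens[-1]
        gens.append(np.concatenate([rng.choice(S, size=d, p=P[x]) for x in parents]))
    L, S_count, total = {}, np.zeros(S), 0
    for level in range(n + 1):                       # vertices of T^(n)
        for i, x in enumerate(gens[level]):
            c = tuple(gens[level + 1][i * d:(i + 1) * d])   # offspring vector C_t
            L[(x, c)] = L.get((x, c), 0) + 1
            S_count[x] += 1
            total += 1
    return {k: v / total for k, v in L.items()}, S_count / total

rng = np.random.default_rng(0)
p = np.array([0.5, 0.5])                             # initial distribution (1.1)
P = np.array([[0.7, 0.3],                            # ergodic transition matrix (1.2)
              [0.4, 0.6]])
d, n = 2, 12
L_n, S_n = simulate_L_n(p, P, d, n, rng)

# stationary distribution pi of P: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# compare L_n(x, c) with the limit pi(x) * Q(c | x) = pi(x) * prod_i P(c_i | x)
for x in range(len(p)):
    for c in itertools.product(range(len(p)), repeat=d):
        limit = pi[x] * np.prod([P[x, ci] for ci in c])
        print(x, c, round(L_n.get((x, c), 0.0), 4), round(limit, 4))

For these parameters ($|T^{(n)}|$ is roughly $8{,}000$ vertices) the empirical frequencies should already track the limits reasonably well; Theorem 2.1 guarantees convergence almost everywhere as $n\to\infty$.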

Corollary 2.2. Under the conditions of Theorem 2.1, suppose that $f(x,\mathbf{c})$ is any function defined on $S\times S^d$. Denote
$$H_n(\omega)=\sum_{t\in T^{(n)}}f\bigl(X_t,\mathbf{C}_t\bigr). \tag{2.6}$$
Then
$$\lim_{n\to\infty}\frac{H_n(\omega)}{|T^{(n)}|}=\sum_{(x,\mathbf{c})\in S\times S^d}\pi(x)\,Q(\mathbf{c}\mid x)\,f(x,\mathbf{c})\quad\text{a.e.} \tag{2.7}$$

Proof. Noting that
$$H_n(\omega)=\sum_{t\in T^{(n)}}f\bigl(X_t,\mathbf{C}_t\bigr)=\sum_{(x,\mathbf{c})\in S\times S^d}\sum_{t\in T^{(n)}}\mathbf{I}\{(X_t,\mathbf{C}_t)=(x,\mathbf{c})\}\,f(x,\mathbf{c}), \tag{2.8}$$
by Theorem 2.1 we get
$$\lim_{n\to\infty}\frac{H_n(\omega)}{|T^{(n)}|}=\sum_{(x,\mathbf{c})\in S\times S^d}f(x,\mathbf{c})\lim_{n\to\infty}L_n(x,\mathbf{c})=\sum_{(x,\mathbf{c})\in S\times S^d}\pi(x)\,Q(\mathbf{c}\mid x)\,f(x,\mathbf{c})\quad\text{a.e.} \tag{2.9}$$
Let $G=\{T,E\}$ be a tree graph and $(X_t)_{t\in T}$ a stochastic process indexed by the tree $G$ with state space $S$. Denote by $Y_t=(X_t,\mathbf{C}_t)$ the offspring process derived from $(X_t)_{t\in T}$. It is easy to see that
$$\mathbf{P}\bigl(y^{T^{(n)}}\bigr)=\mathbf{P}\bigl(Y^{T^{(n)}}=y^{T^{(n)}}\bigr)=p(x_0)\prod_{t\in T^{(n+1)}\setminus\{0\}}P\bigl(x_t\mid x_{1_t}\bigr)=p(x_0)\prod_{t\in T^{(n)}}Q\bigl(\mathbf{c}_t\mid x_t\bigr), \tag{2.10}$$
where $\mathbf{c}_t\in S^d$. Let
$$f_n(\omega)=-\frac{1}{|T^{(n)}|}\ln\mathbf{P}\bigl(Y^{T^{(n)}}\bigr). \tag{2.11}$$
$f_n(\omega)$ will be called the entropy density of $Y^{T^{(n)}}$. If $(X_t)_{t\in T}$ is a tree-indexed Markov chain with state space $S$ defined by Definition 1.1, then by (2.10) we have
$$f_n(\omega)=-\frac{1}{|T^{(n)}|}\Bigl[\ln p\bigl(X_0\bigr)+\sum_{t\in T^{(n)}}\ln Q\bigl(\mathbf{C}_t\mid X_t\bigr)\Bigr]. \tag{2.12}$$
The convergence of $f_n(\omega)$ to a constant in some sense ($L^1$ convergence, convergence in probability, or a.e. convergence) is called the Shannon-McMillan theorem, the entropy theorem, or the AEP in information theory. From Corollary 2.2, if we let
$$f(x,\mathbf{c})=-\ln Q(\mathbf{c}\mid x), \tag{2.13}$$
we can easily obtain the Shannon-McMillan theorem with a.e. convergence for Markov chain fields on the tree $T_{C,d}$.

Corollary 2.3. Under the conditions of Corollary 2.2, let $f_n(\omega)$ be defined as in (2.12). Then
$$\lim_{n\to\infty}f_n(\omega)=-\sum_{(x,\mathbf{c})\in S\times S^d}\pi(x)\,Q(\mathbf{c}\mid x)\ln Q(\mathbf{c}\mid x)\quad\text{a.e.} \tag{2.14}$$
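A short remark, not in the original: because of the product form (2.2), the entropy of the offspring law factorizes, so the a.e. limit in (2.14) equals $d$ times the usual entropy rate of the transition matrix $P$ under its stationary distribution,
$$-\sum_{(x,\mathbf{c})\in S\times S^d}\pi(x)\,Q(\mathbf{c}\mid x)\ln Q(\mathbf{c}\mid x)=d\,\Bigl(-\sum_{x\in S}\sum_{y\in S}\pi(x)\,P(y\mid x)\ln P(y\mid x)\Bigr).$$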

3. Proof of Theorem 2.1

Let $T_{C,d}$ be a Cayley tree, $S$ a finite state space, and $\{X_t,\ t\in T\}$ a tree-indexed Markov chain with any initial distribution (1.1) and ergodic transition matrix $P$. Let $g_t(X_t,\mathbf{C}_t)$ be functions defined on $S\times S^d$. Let $\lambda$ be a real number, $L_0=\{0\}$, and $\mathcal{F}_n=\sigma(X^{T^{(n)}})$. We can now define a nonnegative martingale as follows:
$$t_n(\lambda,\omega)=\frac{e^{\lambda\sum_{t\in T^{(n-1)}}g_t(X_t,\mathbf{C}_t)}}{\prod_{t\in T^{(n-1)}}E\bigl[e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\bigr]}. \tag{3.1}$$
We first prove this fact.

Theorem 3.1. $\{t_n(\lambda,\omega),\ \mathcal{F}_n,\ n\ge 1\}$ is a nonnegative martingale.

Proof of Theorem 3.1. By the Markov property and the property of conditional independence, we have
$$\begin{aligned}
E\Bigl[e^{\lambda\sum_{t\in L_n}g_t(X_t,\mathbf{C}_t)}\mid\mathcal{F}_n\Bigr]
&=\sum_{x^{L_{n+1}}}e^{\lambda\sum_{t\in L_n}g_t(X_t,\mathbf{c}_t)}\,\mathbf{P}\bigl(X^{L_{n+1}}=x^{L_{n+1}}\mid X^{T^{(n)}}\bigr)\\
&=\sum_{x^{L_{n+1}}}\prod_{t\in L_n}e^{\lambda g_t(X_t,\mathbf{c}_t)}Q\bigl(\mathbf{c}_t\mid X_t\bigr)
=\prod_{t\in L_n}\sum_{\mathbf{c}_t\in S^d}e^{\lambda g_t(X_t,\mathbf{c}_t)}Q\bigl(\mathbf{c}_t\mid X_t\bigr)\\
&=\prod_{t\in L_n}E\bigl[e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\bigr]\quad\text{a.e.}
\end{aligned} \tag{3.2}$$
On the other hand, we also have
$$t_{n+1}(\lambda,\omega)=t_n(\lambda,\omega)\,\frac{e^{\lambda\sum_{t\in L_n}g_t(X_t,\mathbf{C}_t)}}{\prod_{t\in L_n}E\bigl[e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\bigr]}. \tag{3.3}$$
Combining (3.2) and (3.3), we get
$$E\bigl[t_{n+1}(\lambda,\omega)\mid\mathcal{F}_n\bigr]=t_n(\lambda,\omega)\quad\text{a.e.} \tag{3.4}$$
This completes the proof of the theorem.

Theorem 3.2. Let $(X_t)_{t\in T}$ and $\{g_t(x,\mathbf{c}),\ t\in T\}$ be defined as above, and denote
$$G_n(\omega)=\sum_{t\in T^{(n)}}E\bigl[g_t\bigl(X_t,\mathbf{C}_t\bigr)\mid X_t\bigr]. \tag{3.5}$$
Let $\alpha>0$ and denote
$$D(\alpha)=\Bigl\{\limsup_{n\to\infty}\frac{1}{|T^{(n)}|}\sum_{t\in T^{(n)}}E\bigl[g_t^2\bigl(X_t,\mathbf{C}_t\bigr)e^{\alpha|g_t(X_t,\mathbf{C}_t)|}\mid X_t\bigr]=M(\omega)<\infty\Bigr\}, \tag{3.6}$$
$$H_n(\omega)=\sum_{t\in T^{(n)}}g_t\bigl(X_t,\mathbf{C}_t\bigr). \tag{3.7}$$
Then
$$\lim_{n\to\infty}\frac{H_n(\omega)-G_n(\omega)}{|T^{(n)}|}=0\quad\text{a.e. on }D(\alpha). \tag{3.8}$$

Proof. By Theorem 3.1, $\{t_n(\lambda,\omega),\ \mathcal{F}_n,\ n\ge 1\}$ is a nonnegative martingale. By the Doob martingale convergence theorem, we have
$$\lim_{n}t_n(\lambda,\omega)=t(\lambda,\omega)<\infty\quad\text{a.e.}, \tag{3.9}$$
so that
$$\limsup_{n\to\infty}\frac{\ln t_{n+1}(\lambda,\omega)}{|T^{(n)}|}\le 0\quad\text{a.e.} \tag{3.10}$$
Combining (3.1), (3.7), and (3.10), we arrive at
$$\limsup_{n\to\infty}\frac{1}{|T^{(n)}|}\Bigl\{\lambda H_n(\omega)-\sum_{t\in T^{(n)}}\ln E\bigl[e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\bigr]\Bigr\}\le 0\quad\text{a.e.} \tag{3.11}$$
Let $\lambda>0$. Dividing both sides of the above inequality by $\lambda$, we get
$$\limsup_{n\to\infty}\frac{1}{|T^{(n)}|}\Bigl\{H_n(\omega)-\sum_{t\in T^{(n)}}\frac{\ln E\bigl[e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\bigr]}{\lambda}\Bigr\}\le 0\quad\text{a.e.} \tag{3.12}$$
By (3.12) and the inequalities $\ln x\le x-1\ (x>0)$ and $0\le e^{x}-1-x\le 2^{-1}x^{2}e^{|x|}$, for $0<\lambda\le\alpha$ it follows that
$$\begin{aligned}
&\limsup_{n\to\infty}\frac{1}{|T^{(n)}|}\Bigl[H_n(\omega)-\sum_{t\in T^{(n)}}E\bigl[g_t\bigl(X_t,\mathbf{C}_t\bigr)\mid X_t\bigr]\Bigr]\\
&\quad\le\limsup_{n\to\infty}\frac{1}{|T^{(n)}|}\sum_{t\in T^{(n)}}\Bigl\{\frac{\ln E\bigl[e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\bigr]}{\lambda}-E\bigl[g_t\bigl(X_t,\mathbf{C}_t\bigr)\mid X_t\bigr]\Bigr\}\\
&\quad\le\limsup_{n\to\infty}\frac{1}{|T^{(n)}|}\sum_{t\in T^{(n)}}\Bigl\{\frac{E\bigl[e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\bigr]-1}{\lambda}-E\bigl[g_t\bigl(X_t,\mathbf{C}_t\bigr)\mid X_t\bigr]\Bigr\}\\
&\quad\le\frac{\lambda}{2}\limsup_{n\to\infty}\frac{1}{|T^{(n)}|}\sum_{t\in T^{(n)}}E\bigl[g_t^2\bigl(X_t,\mathbf{C}_t\bigr)e^{\lambda|g_t(X_t,\mathbf{C}_t)|}\mid X_t\bigr]\\
&\quad\le\frac{\lambda}{2}\limsup_{n\to\infty}\frac{1}{|T^{(n)}|}\sum_{t\in T^{(n)}}E\bigl[g_t^2\bigl(X_t,\mathbf{C}_t\bigr)e^{\alpha|g_t(X_t,\mathbf{C}_t)|}\mid X_t\bigr]
\le\frac{\lambda}{2}M(\omega)\quad\text{a.e. on }D(\alpha). \tag{3.13}
\end{aligned}$$
Letting $\lambda\to 0^{+}$ in (3.13), by (3.5) we have
$$\limsup_{n\to\infty}\frac{H_n(\omega)-G_n(\omega)}{|T^{(n)}|}\le 0\quad\text{a.e. on }D(\alpha). \tag{3.14}$$
Let $-\alpha\le\lambda<0$. Similarly to the analysis of the case $0<\lambda\le\alpha$, it follows from (3.12) that
$$\liminf_{n\to\infty}\frac{H_n(\omega)-G_n(\omega)}{|T^{(n)}|}\ge\frac{\lambda}{2}M(\omega)\quad\text{a.e. on }D(\alpha). \tag{3.15}$$
Letting $\lambda\to 0^{-}$, we arrive at
$$\liminf_{n\to\infty}\frac{H_n(\omega)-G_n(\omega)}{|T^{(n)}|}\ge 0\quad\text{a.e. on }D(\alpha). \tag{3.16}$$
Combining (3.14) and (3.16), we obtain (3.8) directly.

Corollary 3.3. Under the conditions of Theorem 3.2, one has
$$\lim_{n\to\infty}\bigl[L_n(x,\mathbf{c})-S_n(x)\,Q(\mathbf{c}\mid x)\bigr]=0\quad\text{a.e.} \tag{3.17}$$

Proof. For any $t\in T$, let
$$g_t\bigl(X_t,\mathbf{C}_t\bigr)=\mathbf{I}\{(X_t,\mathbf{C}_t)=(x,\mathbf{c})\}=\mathbf{I}\{X_t=x\}\cdot\mathbf{I}\{\mathbf{C}_t=\mathbf{c}\}. \tag{3.18}$$
Then we have
$$G_n(\omega)=\sum_{t\in T^{(n)}}E\bigl[g_t\bigl(X_t,\mathbf{C}_t\bigr)\mid X_t\bigr]=\sum_{t\in T^{(n)}}\sum_{\mathbf{c}_t\in S^d}\mathbf{I}\{X_t=x\}\cdot\mathbf{I}\{\mathbf{c}_t=\mathbf{c}\}\,Q\bigl(\mathbf{c}_t\mid X_t\bigr)=\sum_{t\in T^{(n)}}\mathbf{I}\{X_t=x\}\,Q(\mathbf{c}\mid x)=|T^{(n)}|\cdot S_n(x)\,Q(\mathbf{c}\mid x), \tag{3.19}$$
$$H_n(\omega)=\sum_{t\in T^{(n)}}g_t\bigl(X_t,\mathbf{C}_t\bigr)=\sum_{t\in T^{(n)}}\mathbf{I}\{(X_t,\mathbf{C}_t)=(x,\mathbf{c})\}=|T^{(n)}|\cdot L_n(x,\mathbf{c}). \tag{3.20}$$
Since each $g_t$ is an indicator function, $g_t^2 e^{\alpha|g_t|}\le e^{\alpha}$, so $M(\omega)\le e^{\alpha}<\infty$ and $D(\alpha)=\Omega$. Combining (3.19) and (3.20), we derive our conclusion from Theorem 3.2.
In the proof of Theorem 2.1, we will use Lemma 3.4.

Lemma 3.4 (see [10]). Let $T_{C,d}$ be a Cayley tree, $S$ a finite state space, and $\{X_t,\ t\in T\}$ a tree-indexed Markov chain with any initial distribution (1.1) and ergodic transition matrix $P$. Let $S_n(x)$ be defined as in (2.4). Then one has

$$\lim_{n\to\infty}S_n(x)=\pi(x)\quad\text{a.e.} \tag{3.21}$$

Proof of Theorem 2.1. Combining Corollary 3.3 and Lemma 3.4, we arrive at our conclusion directly.

Acknowledgment

This work was supported by the National Natural Science Foundation of China (Grant no. 11071104).

References

1. J. G. Kemeny, J. L. Snell, and A. W. Knapp, Denumerable Markov Chains, Springer, New York, NY, USA, 2nd edition, 1976.
2. F. Spitzer, "Markov random fields on an infinite tree," Annals of Probability, vol. 3, no. 3, pp. 387–398, 1975.
3. I. Benjamini and Y. Peres, "Markov chains indexed by trees," Annals of Probability, vol. 22, no. 1, pp. 219–243, 1994.
4. T. Berger and Z. X. Ye, "Entropic aspects of random fields on trees," IEEE Transactions on Information Theory, vol. 36, no. 5, pp. 1006–1018, 1990.
5. Z. Ye and T. Berger, "Ergodicity, regularity and asymptotic equipartition property of random fields on trees," Journal of Combinatorics, Information & System Sciences, vol. 21, no. 2, pp. 157–184, 1996.
6. Z. Ye and T. Berger, Information Measures for Discrete Random Fields, Science Press, Beijing, China, 1998.
7. R. Pemantle, "Automorphism invariant measures on trees," Annals of Probability, vol. 20, no. 3, pp. 1549–1566, 1992.
8. W. Yang and W. Liu, "Strong law of large numbers for Markov chains field on a Bethe tree," Statistics & Probability Letters, vol. 49, no. 3, pp. 245–250, 2000.
9. C. Takacs, "Strong law of large numbers for branching Markov chains," Markov Processes and Related Fields, vol. 8, no. 1, pp. 107–116, 2001.
10. H. Huang and W. Yang, "Strong law of large numbers for Markov chains indexed by an infinite tree with uniformly bounded degree," Science in China A, vol. 51, no. 2, pp. 195–202, 2008.
11. A. Dembo, P. Mörters, and S. Sheffield, "Large deviations of Markov chains indexed by random trees," Annales de l'Institut Henri Poincaré B, vol. 41, no. 6, pp. 971–996, 2005.
12. A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications, vol. 38 of Applications of Mathematics, Springer, New York, NY, USA, 2nd edition, 1998.