- A short introduction to Markov chains for dynamic programming
- Definition, Markov Property, some Potential Theory.
Markov Chains
Introductory example: snakes and ladders
We highlight some of the key properties of Markov chains: how to calculate transition probabilities, how the past affects the current movement of the process, how to construct a chain, and what the long-run behaviour of the process may (or may not) look like. We give an initial example to better position our intuition.
Below in Figure [Snakes_and_ladders], we are given a game of snakes and ladders (or chutes and ladders in the US). Here a counter (coloured red) is placed on the board at the start. You roll a die and move along the numbered squares by the amount shown on the die. The objective is to reach the finish. If the counter lands on a square with a snake's head, you must go back to the square at the snake's tail; if you land on a square at the bottom of a ladder, you move up to the square at the top of the ladder.
[Figure Snakes_and_ladders: the snakes and ladders board]
We let $X_n$ be the position of the counter on the board after the die has been thrown $n$ times. The process $(X_n)_{n \geq 0}$ is a discrete time Markov chain. The following exercises should be fairly straightforward.
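Before the exercises, here is a minimal simulation sketch of such a chain in Python. The board layout and the overshoot rule below are hypothetical (the snakes and ladders from the figure are not reproduced here); only the mechanism matters: the next square depends on the current square and the throw of the die alone.

```python
import random

# Hypothetical layout: square -> square you slide or climb to.
# (The actual board from Figure [Snakes_and_ladders] is not reproduced.)
JUMPS = {3: 17, 8: 27, 28: 10, 34: 12, 50: 5, 62: 81, 88: 67}
FINISH = 100

def step(x, u):
    """One turn: from square x with die throw u, return the next square."""
    y = x + u
    if y > FINISH:           # assumed overshoot rule: stay put
        y = x
    return JUMPS.get(y, y)   # apply a snake or ladder if there is one

def play(n_throws, seed=None):
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(n_throws):
        x = step(x, rng.randint(1, 6))
        path.append(x)
    return path

print(play(20, seed=1))
```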
Ex 1 [Transition Probabilities] Given that the counter is on square $x$, we let $P_{xy}$ be the probability that you next go to square $y$, i.e.

$$P_{xy} = \mathbb{P}( X_{n+1} = y \mid X_n = x ).$$

Calculate:
a) b) c)
Ans 1 a) b) c)
Ex 2 [Continued] Calculate
a) b) c) d)
Express your answer both numerically and as a sum of multiples of $\frac{1}{6}$.
(Note that many probabilities can be calculated via matrix multiplication on $P$.)
Ans 2
a)
b) (N.b. the time at which the process starts does not affect the probability; the chain is time-homogeneous.)
c) (N.b. for times and states in series we just multiply the corresponding entries of $P$.)
d) (N.b. we have to sum over the states at the intermediate times that we miss out. Also, note that $\mathbb{P}( X_n = y \mid X_0 = x )$ is the $(x,y)$-th entry of the matrix power $P^n$.)
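As a quick illustration of the note above, here is a sketch in Python with a small hypothetical transition matrix (not the snakes and ladders board): the $n$-step probabilities are exactly the entries of the matrix power $P^n$.

```python
import numpy as np

# A small hypothetical transition matrix on states {0, 1, 2, 3};
# each row sums to one.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],
])

# P(X_3 = 3 | X_0 = 0) is the (0, 3) entry of P^3.
P3 = np.linalg.matrix_power(P, 3)
print(P3[0, 3])

# "Summing over the states at the times we miss out" is exactly what
# the matrix multiplication does:
manual = sum(P[0, i] * P[i, j] * P[j, 3] for i in range(4) for j in range(4))
print(np.isclose(manual, P3[0, 3]))  # True
```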
Ex 3 [Markov property] Show that

$$\mathbb{P}( X_{n+1} = z \mid X_n = y, X_{n-1} = x_{n-1}, \dots, X_0 = x_0 ) = \mathbb{P}( X_{n+1} = z \mid X_n = y ).$$

We see that, given we are on square $y$, the probability of reaching square $z$ is not affected by the path by which we reached square $y$.
Ans 3 The calculation for $\mathbb{P}( X_{n+1} = z \mid X_n = y )$ is very similar and is Ex 2 part b).
In general, given that the counter is on a square, we will see that the square reached by the counter on the next turn is not affected by the path that was used to reach the current square. This is called the Markov property.
Ex 4 [Jump chain construction] Convince yourself that if you were given the current square $X_n$ of the Markov chain and the throw of the die $U_{n+1}$, then this completely determines the next square $X_{n+1}$.
This gives a method for constructing and simulating Markov chains: we just need to keep track of the current state and generate an independent random variable to determine the next state.
Ans 4 Note that this essentially says that

$$X_{n+1} = f( X_n, U_{n+1} )$$

for some function $f$.
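The recursion above translates directly into code. Here is a minimal sketch of this jump chain construction in Python; the function jump_chain and the random-walk example are illustrative, not from the original notes.

```python
import random

def jump_chain(f, x0, n, seed=None):
    """Simulate X_{k+1} = f(X_k, U_{k+1}) with U_1, U_2, ... i.i.d.
    uniform on [0, 1), starting from x0 and taking n steps."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n):
        x = f(x, rng.random())   # rng.random() is uniform on [0, 1)
        path.append(x)
    return path

# Example: simple random walk on the integers (up or down with prob. 1/2).
walk = jump_chain(lambda x, u: x + 1 if u < 0.5 else x - 1, 0, 10, seed=42)
print(walk)
```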
Certainly there is more that one could say here. However, we will cover more in future exercises once a formal definition is in place.
Markov Chain Definition
Let $\mathcal{X}$ be a countable set.
Def [Initial Distribution/Transition Matrix] An initial distribution $\lambda = ( \lambda_x : x \in \mathcal{X} )$ is a positive vector whose components sum to one. A transition matrix $P = ( P_{xy} : x, y \in \mathcal{X} )$ is a positive matrix whose rows sum to one, that is, for $x \in \mathcal{X}$,

$$\sum_{y \in \mathcal{X}} P_{xy} = 1.$$
Def [Discrete Time Markov Chain] We say that a sequence of random variables $(X_n)_{n \geq 0}$ is a discrete time Markov chain, with initial distribution $\lambda$ and transition matrix $P$, if for $n \geq 0$ and $x_0, \dots, x_{n+1} \in \mathcal{X}$,

$$\mathbb{P}( X_{n+1} = x_{n+1} \mid X_n = x_n, \dots, X_0 = x_0 ) = \mathbb{P}( X_{n+1} = x_{n+1} \mid X_n = x_n ) = P_{x_n x_{n+1}}$$

and

$$\mathbb{P}( X_0 = x_0 ) = \lambda_{x_0}.$$
The first equality above is often called the Markov property and is the key defining feature of a Markov chain or, more generally, a Markov process. It states that the past $(X_0, \dots, X_{n-1})$ and the future $X_{n+1}$ are conditionally independent given the present $X_n$. Otherwise stated, it says that, when we know the past and present states $X_0, \dots, X_n$, the distribution of the future state $X_{n+1}$ is determined by the present state $X_n$ alone. Think of a board game like snakes and ladders: where you go in the future is determined only by where you are now, and not by how you got there; this is the Markov property.
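One immediate consequence of the definition, worth spelling out, is that the probability of any finite path factorizes: $\mathbb{P}( X_0 = x_0, \dots, X_n = x_n ) = \lambda_{x_0} P_{x_0 x_1} \cdots P_{x_{n-1} x_n}$. A small Python sketch with a hypothetical chain:

```python
import numpy as np

lam = np.array([1.0, 0.0, 0.0])   # hypothetical initial distribution
P = np.array([                    # hypothetical transition matrix
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
    [0.0, 0.5, 0.5],
])

def path_probability(path, lam, P):
    """P(X_0 = path[0], ..., X_n = path[n]) via the factorization above."""
    prob = lam[path[0]]
    for x, y in zip(path, path[1:]):
        prob *= P[x, y]
    return prob

print(path_probability([0, 1, 2, 1], lam, P))   # 1.0 * 0.6 * 0.2 * 0.5 = 0.06
```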
Ex 5 [Constructing Markov Chains] Take a function $f : \mathcal{X} \times [0,1] \rightarrow \mathcal{X}$, a random variable $X_0$ on $\mathcal{X}$, and $(U_n)_{n \geq 1}$, independent uniform $[0,1]$ random variables. Construct the sequence $(X_n)_{n \geq 0}$ with the recursion

$$X_{n+1} = f( X_n, U_{n+1} ).$$

Show that $(X_n)_{n \geq 0}$ is a discrete time Markov chain.
Ex 6 [Continued] Show that all discrete time Markov chains can be constructed in this way.
Ex 7 [Markov Chains and Martingale Problems] Show that a sequence of random variables $(X_n)_{n \geq 0}$ is a Markov chain with transition matrix $P$ if and only if, for all bounded functions $f : \mathcal{X} \rightarrow \mathbb{R}$, the process

$$M^f_n = f(X_n) - f(X_0) - \sum_{k=0}^{n-1} (P - I) f(X_k)$$

is a martingale with respect to the natural filtration of $(X_n)_{n \geq 0}$. Here, for any matrix, say $P$, we define

$$P f(x) = \sum_{y \in \mathcal{X}} P_{xy} f(y).$$
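In code, $Pf$ is just a matrix-vector product, and (assuming the form of $M^f_n$ reconstructed above) the martingale increment $M^f_{n+1} - M^f_n = f(X_{n+1}) - Pf(X_n)$ has zero conditional mean given $X_n$. A quick numerical check with a hypothetical chain:

```python
import numpy as np

P = np.array([
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
    [0.0, 0.5, 0.5],
])
f = np.array([1.0, -2.0, 5.0])   # a bounded function on {0, 1, 2}

Pf = P @ f   # Pf(x) = sum_y P_xy f(y), i.e. E[ f(X_{n+1}) | X_n = x ]

# The increment f(X_{n+1}) - Pf(X_n) averages to zero given X_n = x:
for x in range(3):
    mean_increment = sum(P[x, y] * (f[y] - Pf[x]) for y in range(3))
    print(x, np.isclose(mean_increment, 0.0))   # True for every state
```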
Markov Chains & Potential Theory
This is really just more resolvents.
Ex 8 [Markov Chains and Potential Functions] Let $r : \mathcal{X} \rightarrow \mathbb{R}$ be a bounded function. Argue that, for $\beta \in (0, 1)$,

$$R(x) = \mathbb{E}_x \left[ \sum_{n=0}^{\infty} \beta^n r(X_n) \right]$$

solves the equation

$$R = r + \beta P R.$$
Ans 8 Splitting off the $n = 0$ term and applying the Markov property at time $1$,

$$R(x) = r(x) + \beta \sum_{y \in \mathcal{X}} P_{xy} \mathbb{E}_y \left[ \sum_{n=0}^{\infty} \beta^n r(X_n) \right] = r(x) + \beta P R(x).$$
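Numerically, the equation $R = r + \beta P R$ is just a linear system, $(I - \beta P) R = r$. A minimal sketch with a hypothetical chain and reward, checking the linear solve against the truncated series $\sum_n \beta^n P^n r$:

```python
import numpy as np

beta = 0.9
P = np.array([
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
    [0.0, 0.5, 0.5],
])
r = np.array([1.0, 0.0, 2.0])

# R = r + beta * P R  rearranges to the linear system (I - beta P) R = r.
R = np.linalg.solve(np.eye(3) - beta * P, r)

# Sanity check against the truncated series sum_n beta^n P^n r.
R_series = sum(beta**n * np.linalg.matrix_power(P, n) @ r for n in range(500))
print(np.allclose(R, R_series))   # True
```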
Ex 9 [Continued] Show that $R$ is the unique bounded solution to this equation.
Ans 9 Take any bounded solution $\tilde{R}$; then

$$R - \tilde{R} = \beta P ( R - \tilde{R} ).$$

So

$$\| R - \tilde{R} \|_\infty \leq \beta \| R - \tilde{R} \|_\infty,$$

which, since $\beta < 1$, only holds if $R = \tilde{R}$.
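The sup-norm bound above says the map $R \mapsto r + \beta P R$ is a $\beta$-contraction, so iterating it from any starting point converges to the unique fixed point; this is exactly value iteration from dynamic programming. A short sketch (same hypothetical $P$ and $r$ as before):

```python
import numpy as np

beta = 0.9
P = np.array([
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
    [0.0, 0.5, 0.5],
])
r = np.array([1.0, 0.0, 2.0])

# Fixed-point iteration: each pass shrinks the sup-norm error by beta.
R = np.zeros(3)
for _ in range(300):
    R = r + beta * P @ R

print(R)   # agrees with np.linalg.solve(np.eye(3) - beta * P, r)
```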
Ex 10 [Continued] Show that if the bounded positive function $\tilde{R}$ satisfies

$$\tilde{R} \geq r + \beta P \tilde{R},$$

then $\tilde{R} \geq R$.
Ans 10 Suppose that $\tilde{R}$ is a positive function such that $\tilde{R} \geq r + \beta P \tilde{R}$. Repeated substitution gives

$$\tilde{R} \geq \sum_{n=0}^{N} \beta^n P^n r + \beta^{N+1} P^{N+1} \tilde{R} \geq \sum_{n=0}^{N} \beta^n P^n r \xrightarrow{N \to \infty} R.$$
Ex 11 Let $A$ be a subset of $\mathcal{X}$ and let $T_A$ be the hitting time on $A$, i.e.

$$T_A = \inf \{ n \geq 0 : X_n \in A \},$$

and take

$$R(x) = \mathbb{E}_x \left[ \sum_{n=0}^{T_A - 1} \beta^n r(X_n) \right];$$

argue that $R$ solves the equation

$$R(x) = r(x) + \beta P R(x) \text{ for } x \notin A, \qquad R(x) = 0 \text{ for } x \in A.$$
Ex 12 [Continued] Argue that, for a bounded function $f$ on $A$,

$$R(x) = \mathbb{E}_x \left[ \sum_{n=0}^{T_A - 1} \beta^n r(X_n) + \beta^{T_A} f(X_{T_A}) \right]$$

solves the equation

$$R(x) = r(x) + \beta P R(x) \text{ for } x \notin A, \qquad R(x) = f(x) \text{ for } x \in A.$$

(Compare the above with Bellman's equation.)
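These boundary-value equations are again linear and easy to solve numerically. A minimal sketch, assuming the reconstructed equations above, with a hypothetical chain, reward, and boundary set:

```python
import numpy as np

beta = 0.9
P = np.array([
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
    [0.0, 0.5, 0.5],
])
r = np.array([1.0, 0.0, 2.0])
A = [2]            # hypothetical boundary set
f = {2: 10.0}      # hypothetical boundary values on A

# Build the system: R(x) = r(x) + beta * (P R)(x) off A, R(x) = f(x) on A.
M = np.eye(len(P)) - beta * P
b = r.copy()
for x in A:
    M[x, :] = 0.0
    M[x, x] = 1.0
    b[x] = f[x]

R = np.linalg.solve(M, b)
print(R)
```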