# Markov Chains

This is a brief introduction to working with Markov Chains from the prob140 library.

## Getting Started

As always, this should be the first cell if you are using a notebook.

from datascience import *
from prob140 import *
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('fivethirtyeight')


## Constructing Markov Chains

### Explicitly assigning probabilities

To assign the possible states of a Markov chain, use Table().states().

In : Table().states(make_array("A", "B"))
Out:
State
A
B


A Markov chain needs a transition probability for each transition from state i to state j. Note that the transition probabilities coming out of each state must sum to 1.

In : mc_table = Table().states(make_array("A", "B")).transition_probability(make_array(0.5, 0.5, 0.3, 0.7))

In : mc_table
Out:
Source | Target | Probability
A      | A      | 0.5
A      | B      | 0.5
B      | A      | 0.3
B      | B      | 0.7

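As a sanity check, the row-sum constraint can be verified with plain NumPy. The sketch below (not part of prob140) arranges the transition probabilities above as a matrix with one row per source state:

```python
import numpy as np

# The transition probabilities from the table above, one row per
# source state: row i holds P(i -> j) for each target state j.
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# Each row must sum to 1, since from any state the chain must
# move to some state.
row_sums = P.sum(axis=1)
print(row_sums)  # [1. 1.]
```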

To convert the Table into a MarkovChain object, call .to_markov_chain().

In : mc = mc_table.to_markov_chain()

In : mc
Out:
A    B
A  0.5  0.5
B  0.3  0.7


### Using a transition probability function

Just as with single-variable distributions and joint distributions, we can assign transition probabilities using a function.

In : def identity_transition(x, y):
...:     if x == y:
...:         return 1
...:     return 0
...:

In : transMatrix = Table().states(np.arange(1,4)).transition_function(identity_transition)

In : transMatrix
Out:
Source | Target | P(Target | Source)
1      | 1      | 1
1      | 2      | 0
1      | 3      | 0
2      | 1      | 0
2      | 2      | 1
2      | 3      | 0
3      | 1      | 0
3      | 2      | 0
3      | 3      | 1

In : mc2 = transMatrix.to_markov_chain()

In : mc2
Out:
1    2    3
1  1.0  0.0  0.0
2  0.0  1.0  0.0
3  0.0  0.0  1.0
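The effect of transition_function can be sketched in plain NumPy (this is an illustration, not the prob140 internals): the function is evaluated at every (source, target) pair of states to fill in the matrix.

```python
import numpy as np

def identity_transition(x, y):
    # Stay in the current state with probability 1.
    return 1 if x == y else 0

# Evaluate the transition function at every (source, target) pair.
states = np.arange(1, 4)
P = np.array([[identity_transition(x, y) for y in states] for x in states])
print(P)
# [[1 0 0]
#  [0 1 0]
#  [0 0 1]]
```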


## Distribution

To find the distribution of the Markov chain after a given number of steps, call the .distribution method, which takes a starting condition and a number of steps. For example, to see the distribution of mc after 2 steps starting at “A”, we can call

In : mc.distribution("A", 2)
Out:
State | Probability
A     | 0.4
B     | 0.6
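This is the standard n-step computation: the starting distribution multiplied by the n-th power of the transition matrix. A NumPy sketch of what .distribution("A", 2) computes (assuming the same chain as above):

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# Starting at state "A" corresponds to the distribution [1, 0].
start_at_A = np.array([1.0, 0.0])

# Two steps: multiply by P twice.
after_two_steps = start_at_A @ np.linalg.matrix_power(P, 2)
print(after_two_steps)  # [0.4 0.6]
```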


Sometimes it might be useful for the starting condition to be a probability distribution rather than a fixed state. We can set the starting condition to be a single-variable distribution.

In : start = Table().states(make_array("A", "B")).probability(make_array(0.8, 0.2))

In : start
Out:
State | Probability
A     | 0.8
B     | 0.2

In : mc.distribution(start, 2)
Out:
State | Probability
A     | 0.392
B     | 0.608

Calling .distribution with 0 steps simply returns the starting distribution.

In : mc.distribution(start, 0)
Out:
State | Probability
A     | 0.8
B     | 0.2
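
The same matrix arithmetic explains both results above. A NumPy sketch (not prob140 itself), using the starting distribution in place of a fixed state:

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.3, 0.7]])
start = np.array([0.8, 0.2])

# After 2 steps: start times P squared.
print(start @ np.linalg.matrix_power(P, 2))  # [0.392 0.608]

# After 0 steps, P^0 is the identity, so we get the start back.
print(start @ np.linalg.matrix_power(P, 0))  # [0.8 0.2]
```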


To find the steady-state (stationary) distribution of the chain, call .steady_state().

In : mc.steady_state()
Out:
Value | Probability
A     | 0.375
B     | 0.625
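
The steady state is the distribution pi satisfying pi P = pi, i.e. a left eigenvector of P with eigenvalue 1, normalized to sum to 1. A NumPy sketch of that computation (prob140's steady_state does this for you):

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# Left eigenvectors of P are eigenvectors of P transpose.
eigvals, eigvecs = np.linalg.eig(P.T)

# Pick the eigenvector for the eigenvalue closest to 1 and
# normalize it into a probability distribution.
i = np.argmin(np.abs(eigvals - 1))
pi = np.real(eigvecs[:, i])
pi = pi / pi.sum()
print(pi)  # [0.375 0.625]
```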