# Markov Chains

This is a brief introduction to working with Markov chains using the prob140 library.

## Getting Started

As always, this should be the first cell if you are using a notebook.


from datascience import *
from prob140 import *
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('fivethirtyeight')


## Constructing Markov Chains

### Explicitly assigning probabilities

To specify the possible states of a Markov chain, use Table().states().

In [1]: Table().states(make_array("A", "B"))
Out[1]:
State
A
B


A Markov chain needs a transition probability for each transition from a state i to a state j. Note that the transition probabilities out of each state must sum to 1.

In [2]: mc_table = Table().states(make_array("A", "B")).transition_probability(make_array(0.5, 0.5, 0.3, 0.7))

In [3]: mc_table
Out[3]:
Source | Target | Probability
A      | A      | 0.5
A      | B      | 0.5
B      | A      | 0.3
B      | B      | 0.7


To convert the Table into a MarkovChain object, call .to_markov_chain().

In [4]: mc = mc_table.to_markov_chain()

In [5]: mc
Out[5]:
A    B
A  0.5  0.5
B  0.3  0.7
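
Under the hood, this MarkovChain corresponds to a 2×2 transition matrix whose rows are the probabilities out of each state. As a plain-NumPy sketch (independent of prob140), we can write out the same matrix and confirm the row-sum property noted above:

```python
import numpy as np

# Transition matrix of the chain above, states ordered (A, B).
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# Each row holds the probabilities out of one state, so each row must sum to 1.
print(P.sum(axis=1))  # → [1. 1.]
```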


### Using a transition probability function

Just like single-variable and joint distributions, we can define the transition probabilities with a function.

In [6]: def identity_transition(x, y):
...:     if x == y:
...:         return 1
...:     return 0
...:

In [7]: transMatrix = Table().states(np.arange(1,4)).transition_function(identity_transition)

In [8]: transMatrix
Out[8]:
Source | Target | P(Target | Source)
1      | 1      | 1
1      | 2      | 0
1      | 3      | 0
2      | 1      | 0
2      | 2      | 1
2      | 3      | 0
3      | 1      | 0
3      | 2      | 0
3      | 3      | 1

In [9]: mc2 = transMatrix.to_markov_chain()

In [10]: mc2
Out[10]:
1    2    3
1  1.0  0.0  0.0
2  0.0  1.0  0.0
3  0.0  0.0  1.0
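
The same construction can be sketched in plain NumPy (independent of prob140): evaluate the transition function on every (source, target) pair to build the matrix. For this identity function, the result is the 3×3 identity matrix:

```python
import numpy as np

def identity_transition(x, y):
    # Stay in the current state with probability 1.
    return 1 if x == y else 0

states = np.arange(1, 4)

# Build the transition matrix by evaluating the function on every pair.
P = np.array([[identity_transition(i, j) for j in states] for i in states],
             dtype=float)
print(P)  # → the 3x3 identity matrix
```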


## Distribution

To find the distribution of the Markov chain after a certain number of steps, call the .distribution method, which takes a starting condition and a number of steps. For example, to see the distribution of mc after 2 steps, starting at “A”, we can call

In [11]: mc.distribution("A", 2)
Out[11]:
State | Probability
A     | 0.4
B     | 0.6
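
This matches direct matrix arithmetic: the n-step distribution is the starting distribution times the n-th power of the transition matrix. A plain-NumPy check (not using prob140):

```python
import numpy as np

# Transition matrix of mc, states ordered (A, B).
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# Starting in state A corresponds to the distribution [1, 0].
start = np.array([1.0, 0.0])

# Distribution after n steps is start @ P^n; here n = 2.
after_two = start @ np.linalg.matrix_power(P, 2)
print(after_two)  # ≈ [0.4 0.6]
```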


It is sometimes useful for the starting condition to be a probability distribution over the states rather than a single state. We can construct the starting condition as a single-variable distribution.

In [12]: start = Table().states(make_array("A", "B")).probability(make_array(0.8, 0.2))

In [13]: start
Out[13]:
State | Probability
A     | 0.8
B     | 0.2

In [14]: mc.distribution(start, 2)
Out[14]:
State | Probability
A     | 0.392
B     | 0.608

With 0 steps, .distribution returns the starting distribution unchanged.

In [15]: mc.distribution(start, 0)
Out[15]:
State | Probability
A     | 0.8
B     | 0.2
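
These results can also be checked with matrix arithmetic in plain NumPy (independent of prob140), by propagating the starting distribution through powers of the transition matrix:

```python
import numpy as np

# Transition matrix of mc, states ordered (A, B).
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# Starting distribution: P(A) = 0.8, P(B) = 0.2.
start = np.array([0.8, 0.2])

# After 2 steps: start @ P^2.
two_step = start @ np.linalg.matrix_power(P, 2)
print(two_step)   # ≈ [0.392 0.608]

# After 0 steps, P^0 is the identity, so the distribution is unchanged.
zero_step = start @ np.linalg.matrix_power(P, 0)
print(zero_step)  # ≈ [0.8 0.2]
```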


The .steady_state method returns the stationary distribution of the chain.

In [16]: mc.steady_state()
Out[16]:
Value | Probability
A     | 0.375
B     | 0.625
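
The steady state can be verified by hand: it is the probability vector π satisfying π P = π, i.e. a left eigenvector of the transition matrix with eigenvalue 1, normalized to sum to 1. A plain-NumPy sketch (not using prob140):

```python
import numpy as np

# Transition matrix of mc, states ordered (A, B).
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# Left eigenvectors of P are eigenvectors of P transposed.
vals, vecs = np.linalg.eig(P.T)

# Pick the eigenvector for eigenvalue 1 and normalize it to a distribution.
pi = np.real(vecs[:, np.isclose(vals, 1)].flatten())
pi = pi / pi.sum()
print(pi)  # ≈ [0.375 0.625]
```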