# Markov Chains (prob140.MarkovChain)¶

## Construction¶

### Using a Table¶

You can use a three-column table (source state, target state, transition probability) to construct a Markov Chain. The methods Table.transition_probability() and Table.transition_function() are helpful for constructing such a Table. From there, call MarkovChain.from_table() to construct a Markov Chain.

In [1]: mc_table = Table().states(make_array("A", "B")).transition_probability(make_array(0.5, 0.5, 0.3, 0.7))

In [2]: mc_table
Out[2]:
Source | Target | Probability
A      | A      | 0.5
A      | B      | 0.5
B      | A      | 0.3
B      | B      | 0.7

In [3]: MarkovChain.from_table(mc_table)
Out[3]:
A    B
A  0.5  0.5
B  0.3  0.7


### Using a transition function¶

Often it is more convenient to define a transition function that returns the probability of moving from a source state to a target state.

In [4]: states = ['state_1', 'state_2']

In [5]: def identity_transition(source, target):
...:     if source == target:
...:         return 1
...:     return 0
...:

In [6]: MarkovChain.from_transition_function(states, identity_transition)
Out[6]:
state_1  state_2
state_1      1.0      0.0
state_2      0.0      1.0


### Using a transition matrix¶

You can also explicitly define the transition matrix.

In [7]: import numpy

In [8]: states = ['rainy', 'sunny']

In [9]: transition_matrix = numpy.array([[0.1, 0.9],
...:                                  [0.8, 0.2]])
...:

In [10]: MarkovChain.from_matrix(states, transition_matrix)
Out[10]:
rainy  sunny
rainy    0.1    0.9
sunny    0.8    0.2

| Function | Description |
| --- | --- |
| `Table.transition_probability(values)` | For a multivariate probability distribution, assigns transition probabilities, i.e. P(Y \| X). |
| `MarkovChain.from_table(table)` | Constructs a MarkovChain from a Table. |
| `MarkovChain.from_transition_function(states, …)` | Constructs a MarkovChain from a transition function. |
| `MarkovChain.from_matrix(states, …)` | Constructs a MarkovChain from a transition matrix. |

## Utilities¶

| Method | Description |
| --- | --- |
| `MarkovChain.distribution(starting_condition)` | Finds the distribution of states after n steps given a starting condition. |
| `MarkovChain.steady_state()` | Finds the stationary distribution of the Markov Chain. |
| `MarkovChain.expected_return_time()` | Finds the expected return time of the Markov Chain (1 / steady state). |
| `MarkovChain.prob_of_path(starting_condition, …)` | Finds the probability of a path given a starting condition. |
| `MarkovChain.log_prob_of_path(…)` | Finds the log-probability of a path given a starting condition. |
| `MarkovChain.get_transition_matrix([steps])` | Returns the transition matrix after n steps as a NumPy matrix. |
| `MarkovChain.transition_matrix([steps])` | Returns the transition matrix after n steps as a pandas DataFrame for display. |
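To make the semantics of these utilities concrete, here is a plain NumPy sketch of the math behind `distribution()`, `steady_state()`, and `expected_return_time()`, using the rainy/sunny chain from above. This computes the quantities by hand rather than calling prob140, so the variable names are illustrative only:

```python
import numpy as np

# Transition matrix for the rainy/sunny chain defined above.
P = np.array([[0.1, 0.9],
              [0.8, 0.2]])

# Distribution after n steps: start entirely in "rainy", then
# right-multiply by P once per step (what distribution() reports).
start = np.array([1.0, 0.0])
after_3 = start @ np.linalg.matrix_power(P, 3)

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# normalized to sum to 1 (what steady_state() reports).
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# A stationary distribution is unchanged by one more step.
assert np.allclose(pi @ P, pi)

# Expected return time is the reciprocal of the steady state.
return_times = 1 / pi
```

For this chain the stationary distribution works out to (8/17, 9/17), so the chain spends slightly more time sunny than rainy in the long run.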

## Simulations¶

| Method | Description |
| --- | --- |
| `MarkovChain.simulate_path(…[, plot_path])` | Simulates a path of n steps with a specific starting condition. |
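Conceptually, simulating a path means repeatedly sampling the next state from the current state's row of the transition matrix. The hand-rolled NumPy sketch below illustrates this idea for the rainy/sunny chain; it is not prob140's actual implementation, and `simulate_path_sketch` is a hypothetical helper name:

```python
import numpy as np

rng = np.random.default_rng(0)

states = ['rainy', 'sunny']
P = np.array([[0.1, 0.9],
              [0.8, 0.2]])

def simulate_path_sketch(start, n_steps):
    """Sample a path of n_steps transitions, starting at state start."""
    path = [start]
    current = states.index(start)
    for _ in range(n_steps):
        # Draw the next state from the current state's transition row.
        current = rng.choice(len(states), p=P[current])
        path.append(states[current])
    return path

path = simulate_path_sketch('rainy', 5)
```

The returned path has the starting state followed by one entry per step, so 5 steps yield a list of 6 states.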

## Visualizations¶

| Method | Description |
| --- | --- |
| `MarkovChain.plot_path(starting_condition, path)` | Plots a Markov Chain's path. |