Steady State

Back to Markov Chains

The long-run probability distribution pi that an ergodic Markov chain converges to regardless of the initial state. It is found by solving pi * P = pi (where P is the transition matrix) together with the normalization constraint sum(pi) = 1. The steady state represents the proportion of time the chain spends in each state over an infinite horizon.
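A minimal sketch of solving pi * P = pi numerically, assuming a small made-up 2-state transition matrix P (any ergodic P with rows summing to 1 would do). Since pi * P = pi only determines pi up to scale, one balance equation is augmented with the normalization constraint sum(pi) = 1:

import numpy as np

# Hypothetical 2-state transition matrix (rows sum to 1).
P = np.array([
    [0.9, 0.1],
    [0.5, 0.5],
])

# pi * P = pi  <=>  (P^T - I) pi = 0, i.e. pi is a left eigenvector
# of P with eigenvalue 1. Stack the normalization row sum(pi) = 1
# on top of the balance equations to pin down the scale.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0

# Least-squares solve of the overdetermined system A pi = b.
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)            # [0.8333... 0.1666...] for this example P
print(pi @ P - pi)   # ~0, confirming pi is stationary

For this P the chain spends about 5/6 of its time in state 0 and 1/6 in state 1, matching the "proportion of time" interpretation above.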

mathematics-for-cs probability markov-chains steady-state