MarkovProcessProperties

MarkovProcessProperties[mproc]

gives a summary of properties for the finite state Markov process mproc.

MarkovProcessProperties[mproc,"property"]

gives the specified "property" for the process mproc.

Details

  • MarkovProcessProperties can be used for finite state Markov processes such as DiscreteMarkovProcess and ContinuousMarkovProcess.
  • MarkovProcessProperties[mproc,"Properties"] gives a list of available properties.
  • MarkovProcessProperties[mproc,"property","Description"] gives a description of the property as a string.
  • Basic properties include:
  • "InitialProbabilities"initial state probability vector
    "TransitionMatrix"conditional transition probabilities m
    "TransitionRateMatrix"conditional transition rates q
    "TransitionRateVector"state transition rates μ
    "HoldingTimeMean"mean holding time for a state
    "HoldingTimeVariance"variance of holding time for a state
    "SummaryTable"summary of properties
  • For a continuous-time Markov process, "TransitionMatrix" gives the transition matrix of the embedded discrete-time Markov process; a short sketch following this list illustrates this together with the holding times.
  • The holding time is the time spent in a state before transitioning to a different state. It takes into account self-loops, which may cause the process to return to the same state several times before leaving it.
  • Structural properties include:
  • "CommunicatingClasses"sets of states accessible from each other
    "RecurrentClasses"communicating classes that cannot be left
    "TransientClasses"communicating classes that can be left
    "AbsorbingClasses"recurrent classes with a single element
    "PeriodicClasses"communicating classes with finite period greater than 1
    "Periods"period for each of the periodic classes
    "Irreducible"whether the process has a single recurrent class
    "Aperiodic"whether all classes are aperiodic
    "Primitive"whether the process is irreducible and aperiodic
  • The states of a finite Markov process can be grouped into communicating classes, in which every state is reachable from every other state of the same class.
  • A communicating class is transient if there is a path leaving the class to another class, and recurrent if there is not. A special type of recurrent class, called absorbing, consists of a single element.
  • A state has period d if every possible return to the state takes a number of steps that is a multiple of d; the state is periodic if this period is greater than 1. All the states in a communicating class have the same period.
  • Transient properties before the process enters a recurrent class:
  • "TransientVisitMean"mean number of visits to each transient state
    "TransientVisitVariance"variance of number of visits to each transient state
    "TransientTotalVisitMean"mean total number of transient states visited
  • A finite Markov process will eventually enter a recurrent class. The transient properties characterize how many times each transient state is visited and how many different transient states are visited before that happens.
  • Limiting properties include:
  • "ReachabilityProbability"probability of ever reaching a state
    "LimitTransitionMatrix"Cesaro limit of the transition matrix
    "Reversible"whether the process is reversible
  • If a property is not available, this is indicated by Missing["reason"].
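
As an illustration of the embedded-chain and holding-time bullets above, here is a minimal sketch for an assumed 3-state transition rate matrix q; the comments describe the values one would expect rather than verified output:

  q = {{-3, 2, 1}, {1, -2, 1}, {2, 2, -4}};
  cp = ContinuousMarkovProcess[1, q];
  MarkovProcessProperties[cp, "TransitionMatrix"]  (* embedded chain: off-diagonal entries q[[i, j]]/(-q[[i, i]]) *)
  MarkovProcessProperties[cp, "HoldingTimeMean"]   (* should agree with -1/Diagonal[q], here {1/3, 1/2, 1/4} *)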

Examples


Basic Examples  (2)

Summary table of properties:
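
A possible input along these lines (the 3×3 transition matrix is an illustrative assumption, not the one from the original example):

  proc = DiscreteMarkovProcess[1, {{0.5, 0.5, 0.}, {0.3, 0.2, 0.5}, {0., 0.4, 0.6}}];
  MarkovProcessProperties[proc, "SummaryTable"]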

Find the values of a specific property:
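
For example, the communicating classes of the chain defined above:

  MarkovProcessProperties[proc, "CommunicatingClasses"]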

Description of the property:
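
The same property name together with "Description" returns an explanatory string:

  MarkovProcessProperties[proc, "CommunicatingClasses", "Description"]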

Scope  (5)

Find the communicating classes, highlighted in the graph through different colors:
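
A sketch with an assumed 5-state chain that has a transient class {1, 2}, a recurrent class {3, 4}, and an absorbing state 5; HighlightGraph colors each class differently:

  sm = {{0.2, 0.5, 0.2, 0., 0.1}, {0.4, 0.3, 0., 0.3, 0.}, {0., 0., 0.5, 0.5, 0.}, {0., 0., 0.6, 0.4, 0.}, {0., 0., 0., 0., 1.}};
  sproc = DiscreteMarkovProcess[1, sm];
  classes = MarkovProcessProperties[sproc, "CommunicatingClasses"]
  HighlightGraph[Graph[sproc], classes]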

The process is not irreducible:
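
Continuing with the assumed chain sproc:

  MarkovProcessProperties[sproc, "Irreducible"]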

Find the recurrent classes, represented by square and circular vertices in the graph:
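
For the same chain:

  MarkovProcessProperties[sproc, "RecurrentClasses"]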

Find the transient classes, represented by diamond vertices in the graph:
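
Likewise:

  MarkovProcessProperties[sproc, "TransientClasses"]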

Find the absorbing classes, represented by square vertices in the graph:
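
And for the absorbing classes:

  MarkovProcessProperties[sproc, "AbsorbingClasses"]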

Define a Markov process with self-loops:
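
For instance (the matrix below, with a nonzero diagonal entry, is an assumed example):

  loopm = {{0.5, 0.5}, {1., 0.}};
  loopproc = DiscreteMarkovProcess[1, loopm];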

The self-loops make the class aperiodic:
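
Querying the assumed chain loopproc:

  MarkovProcessProperties[loopproc, "Aperiodic"]
  MarkovProcessProperties[loopproc, "PeriodicClasses"]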

Markov process with no self-loops:
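
An assumed 4-state example with two classes, {1, 2} and {3, 4}, and no self-loops:

  perm = {{0., 1., 0., 0.}, {1., 0., 0., 0.}, {0., 0., 0., 1.}, {0., 0., 1., 0.}};
  perproc = DiscreteMarkovProcess[1, perm];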

Here both classes are periodic:
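
For this chain both classes should have period 2:

  MarkovProcessProperties[perproc, "PeriodicClasses"]
  MarkovProcessProperties[perproc, "Periods"]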

Summary table of properties for a continuous-time Markov process:
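
A sketch with an assumed transition rate matrix:

  cproc = ContinuousMarkovProcess[1, {{-1., 1., 0.}, {1., -3., 2.}, {1., 1., -2.}}];
  MarkovProcessProperties[cproc, "SummaryTable"]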

Obtain the value for a specific property for a continuous Markov chain:
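
For example, the mean holding times of cproc:

  MarkovProcessProperties[cproc, "HoldingTimeMean"]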

Find the conditional mean number of total transitions starting in state 1 and ending in state 4:

Compare with the results from simulation:

Find the conditional mean number of transitions from state 2 to state 3:

Compare with the results from simulation:

Applications  (2)

A gambler starts with $3 and bets $1 at each step. He wins $1 with a probability of 0.4:
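
A sketch of this chain: states 1 through 8 stand for holdings of $0 through $7, with $0 and $7 absorbing, and the gambler starts in state 4 ($3):

  p = 0.4;
  m = Table[Which[
      i == 1 && j == 1, 1,              (* broke: absorbing *)
      i == 8 && j == 8, 1,              (* reached $7: absorbing *)
      1 < i < 8 && j == i + 1, p,       (* wins $1 *)
      1 < i < 8 && j == i - 1, 1 - p,   (* loses $1 *)
      True, 0], {i, 8}, {j, 8}];
  gambler = DiscreteMarkovProcess[4, m];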

Find the expected number of times the gambler has a given number of units:
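
This presumably corresponds to the "TransientVisitMean" property; for the sketch above, the entries refer to the transient states 2 through 7, that is, holdings of $1 through $6:

  MarkovProcessProperties[gambler, "TransientVisitMean"]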

Verify the answer using simulation:
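
A rough simulation sketch: generate many paths, long enough that absorption is essentially certain, and average the number of visits to each transient state:

  sim = RandomFunction[gambler, {0, 200}, 5000];
  vals = sim["Paths"][[All, All, 2]];   (* state sequence of each run *)
  N[Table[Mean[Count[#, k] & /@ vals], {k, 2, 7}]]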

Find the expected time until the gambler wins $7 or goes broke:
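
One way to compute this, assuming FirstPassageTimeDistribution accepts the set of absorbing states {1, 8} (that is, $0 and $7) as a target:

  Mean[FirstPassageTimeDistribution[gambler, {1, 8}]]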

Total states visited before the gambler wins $7 or goes broke:
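
This corresponds to the "TransientTotalVisitMean" property:

  MarkovProcessProperties[gambler, "TransientTotalVisitMean"]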

Verify the answer using simulation:
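
Reusing the simulated paths from above, count the distinct transient states visited in each run and average:

  N[Mean[Length[DeleteCases[Union[#], 1 | 8]] & /@ vals]]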

In a game of tennis between two players, suppose the probability of the server winning a point is p. There are 17 possible states:

Visualize the random walk graph for a particular value of p:

Find the probability of the server winning the game for a given value of p:

Find the mean time to absorption, that is, the number of points played:

Find the mean number of states visited:

Find the average number of times the score will be tied at deuce:

Verify the answer using simulation:

Properties & Relations  (1)

The transition matrix of this Markov process is not irreducible:
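
A sketch using an assumed chain with two separate recurrent classes, {1, 2} and {3, 4}:

  rm = {{0.5, 0.5, 0., 0.}, {0.4, 0.6, 0., 0.}, {0., 0., 0.3, 0.7}, {0., 0., 0.8, 0.2}};
  redproc = DiscreteMarkovProcess[1, rm];
  MarkovProcessProperties[redproc, "Irreducible"]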

Hence the stationary distribution depends on the initial state probabilities:
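
The rows of the Cesàro limit of the transition matrix give the long-run state distribution for each starting state, and here the rows for states 1 and 3 differ:

  lim = MarkovProcessProperties[redproc, "LimitTransitionMatrix"];
  lim[[1]]   (* long-run distribution starting in class {1, 2} *)
  lim[[3]]   (* long-run distribution starting in class {3, 4} *)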

Possible Issues  (1)

Some property values may not be available:
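
For example, asking a discrete-time process for a rate-based property presumably returns Missing (the choice of property here is an assumption):

  dproc = DiscreteMarkovProcess[1, {{0.6, 0.4}, {0.3, 0.7}}];
  MarkovProcessProperties[dproc, "TransitionRateMatrix"]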

This property is available only for continuous Markov processes:
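
The same query on a continuous-time process with an assumed rate matrix:

  cp2 = ContinuousMarkovProcess[1, {{-2., 2.}, {1., -1.}}];
  MarkovProcessProperties[cp2, "TransitionRateMatrix"]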

Cite this as:

Text

Wolfram Research (2012), MarkovProcessProperties, Wolfram Language function, https://reference.wolfram.com/language/ref/MarkovProcessProperties.html.

CMS

Wolfram Language. 2012. "MarkovProcessProperties." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/MarkovProcessProperties.html.

APA

Wolfram Language. (2012). MarkovProcessProperties. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/MarkovProcessProperties.html

BibTeX

@misc{reference.wolfram_2024_markovprocessproperties, author="Wolfram Research", title="{MarkovProcessProperties}", year="2012", howpublished="\url{https://reference.wolfram.com/language/ref/MarkovProcessProperties.html}", note="[Accessed: 21-December-2024]"}

BibLaTeX

@online{reference.wolfram_2024_markovprocessproperties, organization={Wolfram Research}, title={MarkovProcessProperties}, year={2012}, url={https://reference.wolfram.com/language/ref/MarkovProcessProperties.html}, note={[Accessed: 21-December-2024]}}