
Markov chains with memory, tensor formulation, and the dynamics of power iteration

Authors: Moody T. Chu, Sheng-Jhih Wu
Source: Applied Mathematics and Computation, 303:226-239
Publication Year: 2017
Publisher: Elsevier BV

Abstract

A Markov chain with memory is no different from a conventional Markov chain on the product state space. Such a Markovianization, however, increases the dimensionality exponentially. Instead, a Markov chain with memory can naturally be represented as a tensor, whence the transitions of the state distribution and the memory distribution can be characterized by specially defined tensor products. In this context, the progression of a Markov chain can be interpreted as variants of power-like iterations moving toward the limiting probability distributions. What is not clear is the makeup of the second dominant eigenvalue that affects the convergence rate of the iteration, if the method converges at all. Casting the power method as a fixed-point iteration, this paper examines the local behavior of the nonlinear map and identifies the cause of convergence or divergence. As an application, it is found that there exists an open set of irreducible and aperiodic transition probability tensors over which the Z-eigenvector-type power iteration fails to converge.
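The Z-eigenvector-type power iteration mentioned in the abstract can be sketched concretely. As a minimal illustration (not taken from the paper), assume a second-order chain stored as an order-3 transition probability tensor P with P[i, j, l] = Prob(next state = i | current state = j, previous state = l) and columns summing to one along the first index; the iteration is then x_{k+1}[i] = sum_{j,l} P[i,j,l] x_k[j] x_k[l]. The function name, random test tensor, and stopping rule below are illustrative assumptions.

```python
import numpy as np

def z_power_iteration(P, x0, tol=1e-10, max_iter=10000):
    """Z-eigenvector-type power iteration for an order-3 transition probability tensor.

    Assumes P[i, j, l] = Prob(next = i | current = j, previous = l) with P.sum(axis=0) == 1,
    so every iterate remains a probability vector without explicit normalization.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        # x_new[i] = sum_{j,l} P[i, j, l] * x[j] * x[l]
        x_new = np.einsum('ijl,j,l->i', P, x, x)
        if np.linalg.norm(x_new - x, 1) < tol:   # stop when successive iterates agree
            return x_new, k, True
        x = x_new
    return x, max_iter, False                    # the iteration need not converge

# Small demonstration on a random transition probability tensor (illustrative only).
rng = np.random.default_rng(0)
n = 4
P = rng.random((n, n, n))
P /= P.sum(axis=0, keepdims=True)                # make each (j, l) column sum to 1
x, iters, converged = z_power_iteration(P, np.full(n, 1.0 / n))
print(converged, iters, x)
```

Note that, as the paper establishes, convergence is not guaranteed even for irreducible and aperiodic transition probability tensors, so the returned convergence flag is meaningful.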

Details

ISSN: 0096-3003
Volume: 303
Database: OpenAIRE
Journal: Applied Mathematics and Computation
Accession number: edsair.doi...........68f9d6e9f0ebb465dbd96e801684dc50
Full Text: https://doi.org/10.1016/j.amc.2017.01.030