108 results for "Markov chains"
Search Results
2. Biased opinion dynamics: when the devil is in the details.
- Author
-
Anagnostopoulos, Aris, Becchetti, Luca, Cruciani, Emilio, Pasquale, Francesco, and Rizzo, Sara
- Subjects
- SYSTEM dynamics; MARKOV processes; SOCIAL networks; TOPOLOGY
- Abstract
We study opinion dynamics in multi-agent networks when a bias toward one of two possible opinions exists, for example reflecting a status quo versus a superior alternative. Our aim is to investigate the combined effect of bias, network structure, and opinion dynamics on the convergence of the system of agents as a whole. Models of such evolving processes can easily become analytically intractable. In this paper, we consider a simple yet mathematically rich setting in which all agents initially share the opinion representing the status quo. The system evolves in steps. In each step, one agent selected uniformly at random follows an underlying update rule to revise its opinion on the basis of those held by its neighbors, but with a probabilistic bias towards the superior alternative. We analyze convergence of the resulting process under well-known update rules. The framework we propose is simple and modular, yet complex enough to highlight a nonobvious interplay between topology and the underlying update rule. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
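The update mechanism described in this abstract is easy to prototype. Below is a minimal sketch under illustrative assumptions — a cycle topology, the voter update rule, and a bias parameter `alpha` — none of which are specific choices made by the paper:

```python
import random

def biased_dynamics(n=50, alpha=0.3, seed=0):
    """Simulate biased opinion dynamics on a cycle of n agents.

    All agents start with opinion 0 (the status quo).  At each step one
    uniformly random agent revises its opinion: with probability alpha
    (the bias) it adopts the superior alternative 1; otherwise it copies
    a uniformly random neighbor (the classic voter rule).  Returns the
    number of steps until every agent holds opinion 1.
    """
    rng = random.Random(seed)
    opinion = [0] * n
    adopted = 0
    steps = 0
    while adopted < n:
        i = rng.randrange(n)
        old = opinion[i]
        if rng.random() < alpha:
            new = 1                                   # biased jump to the alternative
        else:
            j = rng.choice([(i - 1) % n, (i + 1) % n])
            new = opinion[j]                          # voter-model update
        opinion[i] = new
        adopted += new - old
        steps += 1
    return steps
```

With a larger bias the all-ones configuration is typically reached in fewer steps; the paper's point is that how this time scales depends on both the topology and the update rule.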
3. On the Lyapunov Foster Criterion and Poincaré Inequality for Reversible Markov Chains.
- Author
-
Taghvaei, Amirhossein and Mehta, Prashant G.
- Subjects
- MARKOV processes; STABILITY criterion
- Abstract
This article presents an elementary proof of stochastic stability of a discrete-time reversible Markov chain starting from a Foster–Lyapunov drift condition. Besides its relative simplicity, there are two salient features of the proof. 1) It relies entirely on functional-analytic non-probabilistic arguments. 2) It makes explicit the connection between a Foster–Lyapunov function and Poincaré inequality. The proof is used to derive an explicit bound for the spectral gap. An extension to the nonreversible case is also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. Reciprocal Chains: Foundations.
- Author
-
Erhardsson, Torkel, Saize, Stefane, and Yang, Xiangfeng
- Subjects
- MARKOV processes
- Abstract
Reciprocal processes are stochastic processes in which the current state depends only on the nearest past and future. Research on continuous-time reciprocal processes has been extensive since the 1970s, while, surprisingly, discrete-time reciprocal processes (i.e., reciprocal chains) rarely appear in the literature. Worse, in the scant literature where reciprocal chains are mentioned, a misunderstanding about how to formally define them has persisted since 2008. This article formally defines reciprocal chains and provides foundations for their study. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
5. Asymptotical Stability of Probabilistic Boolean Networks With State Delays.
- Author
-
Zhu, Shiyong, Lu, Jianquan, and Liu, Yang
- Subjects
- BOUND states; STABILITY criterion; COMPUTATIONAL complexity; MARKOV processes
- Abstract
This paper is devoted to establishing a bridge between asymptotic stability of a probabilistic Boolean network (PBN) and a solution to its induced equations, which are derived from the PBN's transition matrix. Using the semi-tensor product technique, the dynamics of a PBN with coincident state delays can be equivalently converted into that of a higher-dimensional PBN without delays. Subsequently, several novel stability criteria are derived from the standpoint of the equations' solutions. The most significant finding is that a PBN is globally asymptotically stable at a predesignated one-point distribution if and only if a vector, obtained by appending 1 at the bottom of this distribution, is the unique nonnegative solution to the PBN's induced equations. Moreover, the influence of coincident state delays on a PBN's asymptotic stability is explicitly analyzed without consideration of the convergence rate. Consequently, such bounded state delays are shown to have no impact on a PBN's stability, even when the delays are time-varying. Based on this observation, the time complexity of the aforementioned approach can be reduced by removing the delays directly. Furthermore, this universal procedure is applied to reduce, to a certain extent, the time complexity of some previous results in the literature. Two examples demonstrate the feasibility and effectiveness of the obtained theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
6. Markov Chains With Maximum Return Time Entropy for Robotic Surveillance.
- Author
-
Duan, Xiaoming, George, Mishel, and Bullo, Francesco
- Subjects
- MARKOV processes; ENTROPY (Information theory); DISCRETE-time systems; MAXIMUM entropy method; RANDOM variables; ROBOTICS; LINEAR systems
- Abstract
Motivated by robotic surveillance applications, this paper studies the novel problem of maximizing the return time entropy of a Markov chain, subject to a graph topology with travel times and stationary distribution. The return time entropy is the weighted average, over all graph nodes, of the entropy of the first return times of the Markov chain; this objective function is a function series that does not admit, in general, a closed form. This paper features theoretical and computational contributions. First, we obtain a discrete-time delayed linear system for the return time probability distribution and establish its convergence properties. We show that the objective function is continuous over a compact set and therefore admits a global maximum. We then establish upper and lower bounds between the return time entropy and the well-known entropy rate of the Markov chain. To compute the optimal Markov chain numerically, we establish the asymptotic equality between entropy, conditional entropy, and truncated entropy, and propose an iteration to compute the gradient of the truncated entropy. Finally, we apply these results to the robotic surveillance problem. Our numerical results show that for a model of rational intruder over prototypical graph topologies and test cases, the maximum return time entropy Markov chain outperforms several pre-existing Markov chains. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
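The first-return-time distribution underlying this objective can be computed with exactly the kind of delayed linear recursion the abstract mentions. A sketch (the truncation level `kmax` is an arbitrary illustrative choice, standing in for the paper's truncated entropy):

```python
import numpy as np

def return_time_entropy(P, i, kmax=200):
    """Truncated entropy of the first return time to state i.

    P is a row-stochastic transition matrix.  f[k] is the probability
    that the chain started at i first returns to i after exactly k
    steps; the entropy is truncated at kmax, since the exact series
    has no closed form in general.
    """
    P = np.asarray(P, dtype=float)
    Q = P.copy()
    Q[:, i] = 0.0                      # forbid passing through i
    u = P[i].copy()                    # one-step distribution from i
    f = np.zeros(kmax + 1)
    f[1] = u[i]
    u[i] = 0.0
    for k in range(2, kmax + 1):
        f[k] = u @ P[:, i]             # first return to i at step k
        u = u @ Q                      # keep avoiding i
    probs = f[f > 0]
    return float(-(probs * np.log(probs)).sum())
```

For the symmetric two-state chain `[[0.5, 0.5], [0.5, 0.5]]` the return time to state 0 is geometric with parameter 1/2, whose entropy is 2 ln 2, which the truncated computation recovers.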
7. A collective filtering based content transmission scheme in edge of vehicles.
- Author
-
Wang, Xiaojie, Feng, Yufan, Ning, Zhaolong, Hu, Xiping, Kong, Xiangjie, Hu, Bin, and Guo, Yi
- Subjects
- MARKOV processes; INTERNET; SOCIAL interaction; MOBILE apps; VEHICLES
- Abstract
With the emergence of ever-increasing vehicular applications and booming Internet services, the requirements of low-latency and highly efficient transmission among vehicles have become urgent, and the corresponding solutions need to be well investigated. To address these challenges, we propose a fog computing-based content transmission scheme with collective filtering in the edge of vehicles. We first provide a system model based on fog-based road-side units (RSUs) that accounts for location-awareness, content caching, and decentralized computing. Then, a content-caching strategy in RSUs is designed to minimize downloading latency. Specifically, we model the moving vehicles with two-dimensional Markov chains and calculate the probabilities of file caching in RSUs to minimize the latency of file downloading. Each vehicle also maintains a neighbor list to record encounters with high similarities, updating it based on historic and real-time contacts. Finally, we carry out experiments based on real-world taxi trajectories in Beijing and Shanghai, China. Simulation results demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
8. Minimals Plus: An improved algorithm for the random generation of linear extensions of partially ordered sets.
- Author
-
Combarro, Elías F., Hurtado de Saracho, Julen, and Díaz, Irene
- Subjects
- PARTIALLY ordered sets; HEURISTIC algorithms; FUZZY measure theory; MARKOV processes; MATHEMATICAL models
- Abstract
• Development of an algorithm for the random generation of linear extensions of posets that can also be used to generate fuzzy measures randomly.
• It is exact on all the posets for which previously existing methods are exact, and on some on which those methods are not.
• It can be used to estimate the number of linear extensions of a poset with a low average error.
In this work, Minimals Plus, an algorithm for the random generation of linear extensions of a poset, is introduced. It improves a previously existing heuristic algorithm, Minimals, and its recent modification, Bottom-Up. Minimals Plus shares all the strengths of Bottom-Up and none of its weaknesses: it can be applied to any poset, has a fast initialization step (whereas Bottom-Up may require exponential time), and is exact at least whenever Bottom-Up is exact. In addition to mathematically proving these properties, we conduct experiments on almost two hundred thousand different posets (including all non-isomorphic posets of up to nine elements and some posets directly related to fuzzy measures) to check the behavior of Minimals Plus. The results also show evidence that Minimals Plus can help traditional Markov-based methods mix faster and that it can be used to estimate the number of linear extensions of a poset. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
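The core idea of the Minimals family — build an extension by repeatedly removing a minimal element of what remains — can be sketched as follows. The uniform choice among minimals below is the basic heuristic; the refined selection probabilities that make Minimals Plus exact on more posets are not reproduced here:

```python
import random

def random_linear_extension(elements, less_than, seed=None):
    """Draw a linear extension of a finite poset by repeatedly
    removing a uniformly random minimal element.

    `less_than` is the set of pairs (a, b) with a < b in the poset.
    """
    rng = random.Random(seed)
    remaining = set(elements)
    order = []
    while remaining:
        # an element is minimal if no remaining element lies below it
        minimals = [x for x in remaining
                    if not any((y, x) in less_than for y in remaining)]
        x = rng.choice(minimals)
        remaining.remove(x)
        order.append(x)
    return order
```

On the poset 1 < 3, 2 < 3 this always places 3 last and chooses the order of 1 and 2 at random, so both linear extensions can occur.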
9. Estimating random walk centrality in networks.
- Author
-
Johnson, Brad C. and Kirkland, Steve
- Subjects
- RANDOM walks; MARKOV processes; CENTRALITY; UNDIRECTED graphs; DIRECTED graphs; CONFIDENCE intervals
- Abstract
Random walk centrality (equivalently, the accessibility index) for the states of a time-homogeneous irreducible Markov chain on a finite state space is considered. It is known that the accessibility index for a particular state can be written in terms of the first and second moments of the first return time to that state. Based on that observation, the problem of estimating the random walk centrality of a state is approached by taking realizations of the Markov chain, and then statistically estimating the first two moments of the corresponding first return time. In addition to the estimate of the random walk centrality, this method also yields the standard error, the bias and a confidence interval for that estimate. For the case that the directed graph of the transition matrix for the Markov chain has a cut-point, an alternate strategy for computing the random walk centrality is outlined that may be of use when the centrality values are of interest for only some of the states. In order to illustrate the effectiveness of the results, estimates of the random walk centrality arising from random walks for several directed and undirected graphs are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
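The statistical core of this estimator — run one long realization of the chain and record first return times to the state of interest — is easy to sketch. The paper's exact combination of these moments into the accessibility index, and its standard-error, bias, and confidence-interval calculations, are not reproduced here:

```python
import random

def return_time_moments(P, state, n_returns=2000, seed=0):
    """Estimate the first two moments of the first return time to
    `state` from a single long realization of a chain with
    row-stochastic transition matrix P (a list of rows)."""
    rng = random.Random(seed)
    n = len(P)
    x = state
    times, t = [], 0
    while len(times) < n_returns:
        # inverse-CDF sampling of one transition from row x of P
        r, j = rng.random(), 0
        while j < n - 1 and r > P[x][j]:
            r -= P[x][j]
            j += 1
        x = j
        t += 1
        if x == state:
            times.append(t)
            t = 0
    m1 = sum(times) / n_returns
    m2 = sum(tt * tt for tt in times) / n_returns
    return m1, m2
```

For the symmetric two-state chain the return time to either state is geometric with mean 2 and second moment 6, which the simulation approximately recovers.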
10. On strong stationary times and approximation of Markov chain hitting times by geometric sums.
- Author
-
Daly, Fraser
- Subjects
- MARKOV processes; STOCHASTIC orders
- Abstract
For discrete-time, ergodic Markov chains started from stationarity, Fill and Lyzinski (2014) showed that hitting times may sometimes be represented as geometric sums. We extend this, giving explicit bounds in the approximation of hitting times by appropriately chosen geometric sums. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
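The flavor of this approximation can be probed numerically: compute the stationarity-started hitting-time law of a target state by a taboo recursion and compare it with a geometric law matched by mean. This naive matching is purely illustrative — the paper's bounds concern appropriately chosen geometric sums, not this crude fit:

```python
import numpy as np

def hitting_vs_geometric(P, target, kmax=400):
    """Truncated total-variation distance between the hitting-time law
    of `target` (chain started from stationarity) and a geometric law
    with the same mean."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    # stationary distribution as the Perron left eigenvector of P
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()
    Q = P.copy()
    Q[:, target] = 0.0                 # forbid visiting the target
    u = pi.copy()                      # start from stationarity
    h = np.zeros(kmax + 1)             # h[k] = P(first hit at step k)
    h[0] = u[target]
    u[target] = 0.0
    for k in range(1, kmax + 1):
        h[k] = u @ P[:, target]
        u = u @ Q
    mean = (np.arange(kmax + 1) * h).sum() / h.sum()
    p = 1.0 / (1.0 + mean)             # geometric law on {0, 1, 2, ...}
    geo = p * (1 - p) ** np.arange(kmax + 1)
    return 0.5 * np.abs(h - geo).sum()
```

For a two-state chain the hitting-time law is a mixture of a point mass at 0 and a geometric law, so the naive geometric fit is close but not exact.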
11. Designing group dose-response studies in the presence of transmission.
- Author
-
Price, David J., Bean, Nigel G., Ross, Joshua V., and Tuke, Jonathan
- Subjects
- DOSE-response relationship in biochemistry; BAYESIAN analysis; EPIDEMICS; MARKOV processes; CAMPYLOBACTER jejuni
- Abstract
Dose-response studies are used throughout pharmacology, toxicology and in clinical research to determine safe, effective, or hazardous doses of a substance. When involving animals, the subjects are often housed in groups; this is in fact mandatory in many countries for social animals, on ethical grounds. An issue that may consequently arise is that of unregulated between-subject dosing (transmission), where a subject may transmit the substance to another subject. Transmission will obviously impact the assessment of the dose-response relationship, and will lead to biases if not properly modelled. Here we present a method for determining the optimal design – pertaining to the size of groups, the doses, and the killing times – for such group dose-response experiments, in a Bayesian framework. Our results are of importance for minimising the number of animals required to accurately determine dose-response relationships. Furthermore, we consider scenarios in which estimating the amount of transmission is also of interest. A particular motivating example is that of Campylobacter jejuni in chickens. Code is provided so that practitioners may determine the optimal design for their own studies. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
12. MONTE CARLO METHODS IN THEORETICAL HIGH-ENERGY PHYSICS.
- Author
-
Lautrup, B.
- Subjects
- MONTE Carlo method; NUCLEAR physics; NUMERICAL analysis; MATHEMATICAL models; QUANTUM theory; PARTICLES (Nuclear physics)
- Abstract
The article examines the basic theory of the Monte Carlo method in theoretical high-energy physics. Theoretical high-energy physics is almost synonymous with relativistic quantum field theory. Models of nature consist of a multitude of interacting fields, some corresponding to observable particles and others to unobservable particles out of which other observable particles are composed. Each field has one or more components at every space-time point. A relativistic quantum field theory can be cast into the form of a statistical system by considering time as imaginary instead of real. A number of physical properties can be determined from the correlation functions of this statistical system. The Monte Carlo method has seen many applications in physics besides these more recent ones. In experimental physics, Monte Carlo methods are used to generate events according to theoretical models. In perturbative field theory, Monte Carlo methods have been used to evaluate the integrals that arise from higher-order Feynman diagrams.
- Published
- 1985
- Full Text
- View/download PDF
13. Entangled Hidden Markov Models.
- Author
-
Souissi, Abdessatar and Soueidi, El Gheteb
- Subjects
- HIDDEN Markov models; MARKOV processes
- Abstract
In this paper, we aim to expand upon our prior work on quantum hidden Markov processes by introducing the concept of entangled hidden Markov processes. These are quantum hidden processes in which the hidden processes themselves are entangled Markov processes. We provide an explicit expression for the joint expectation of these processes. Additionally, we show that our approach recovers the classical case. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Convergence of series of conditional expectations.
- Author
-
Peligrad, M. and Peligrad, C.
- Subjects
- MARKOV operators; CONDITIONAL expectations; MARKOV processes; RANDOM variables; BANACH spaces; PARTIAL sums (Series); MARTINGALES (Mathematics)
- Abstract
This paper deals with almost sure convergence for partial sums of Banach space valued random variables. The results are then applied to solve similar problems for weighted partial sums of conditional expectations. They are further used to treat partial sums of powers of a reversible Markov chain operator. The method of proof is based on martingale approximation. The conditions are expressed in terms of moments of individual summands. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Safe Markov Chains for ON/OFF Density Control With Observed Transitions.
- Author
-
Demirer, Nazli, Chamie, Mahmoud El, and Acikmese, Behcet
- Subjects
- MARKOV processes; MOBILE agent systems; STOCHASTIC analysis; DECENTRALIZED control systems; DISCRETE-time systems
- Abstract
This paper presents a convex optimization approach to control the density distribution of autonomous mobile agents (single or multiple) in a stochastic environment with two control modes: ON and OFF. The main new characteristic distinguishing this model from standard Markov decision models is the existence of the ON control mode and its observed actions. During the ON mode, the instantaneous outcome of one of the actions of the ON mode is measured, and a decision is made as to whether this action is taken, based on this new observation. If this action is not taken, the OFF mode is activated, where a transition occurs based on a predetermined set of transition probabilities, without making any additional observations. In this decision-making model, an agent acts autonomously according to an ON/OFF decision policy, and the discrete probability distribution of the agent's state evolves according to a discrete-time Markov chain that is a linear function of the stochastic environment and the ON/OFF decision policy. The relevant policy synthesis is formulated as a convex optimization problem where safety and convergence constraints are imposed on the resulting Markov matrix. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
16. On heat kernel decay for the random conductance model.
- Author
-
Boukhadra, Omar
- Subjects
- KERNEL functions; RANDOM walks; RANDOM variables; MATHEMATICAL models; PERCOLATION
- Abstract
We study discrete-time random walks in an environment of i.i.d. non-negative bounded conductances on $\mathbb{Z}^d$. We are interested in the anomaly of the heat kernel decay. We improve recent results and techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
17. Evaluación de la costo-efectividad de un modelo integral de tratamiento ambulatorio en pacientes con síndrome coronario agudo: aplicación de un modelo de Markov probabilístico [Cost-effectiveness evaluation of a comprehensive outpatient treatment model in patients with acute coronary syndrome: application of a probabilistic Markov model].
- Author
-
Salgado, Kelly, Salazar-Uribe, Juan Carlos, Gallo-Villegas, Jaime, Valencia, Ángela, Espíndola-Fernández, Diego, Mesa, Cristina, de la Calle, Juan, Montoya, Yanett, and Aristizábal, Dagnóvar
- Subjects
- TREATMENT of acute coronary syndrome; OUTPATIENT medical care; CARDIOVASCULAR diseases; COST effectiveness; CARDIAC rehabilitation; HOSPITAL care; INTEGRATED health care delivery; RESEARCH methodology; MEDICAL care costs; MULTIVARIATE analysis; DISEASE management; QUALITY-adjusted life years; STATISTICAL models
- Abstract
Objective. To evaluate the cost-effectiveness of a comprehensive model of ambulatory treatment in patients who presented an acute coronary syndrome. Methods. An economic evaluation was conducted based on a quasi-experimental intervention study, which included 442 patients aged 30 to 70 years who presented an acute coronary syndrome. The intervention group (n = 165) received a comprehensive model of ambulatory treatment based on managed care (disease management), while the control group (n = 277) received conventional cardiovascular rehabilitation. During one year of follow-up, the occurrence of cardiovascular events and hospitalizations was evaluated. A probabilistic Markov model was developed. The study adopted the perspective of the General System of Health Social Security in Colombia, including direct health costs; the time horizon was 50 years, with a discount rate of 3.42% for costs and effectiveness; and the measure of effectiveness was quality-adjusted life years (QALYs). A probabilistic and multivariate sensitivity analysis was performed using Monte Carlo simulation. Results. During the year of follow-up, the direct costs related to the value paid were, on average, USD 2 577 for the control group and USD 2 245 for the intervention group. In the probabilistic sensitivity analysis, 91.3% of the simulations were located in the quadrant corresponding to negative incremental costs and positive incremental effectiveness (the evaluated intervention was less costly and more effective). In the simulations, an average annual savings per patient of USD 1 215 per QALY was observed. Conclusions. The comprehensive model of ambulatory treatment implemented in patients who suffered an acute coronary syndrome was found to be less expensive and more effective compared to conventional care. As a dominant alternative, it is recommended as a model of care in this population. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
18. Study of new rare event simulation schemes and their application to extreme scenario generation.
- Author
-
Agarwal, Ankush, De Marco, Stefano, Gobet, Emmanuel, and Liu, Gang
- Subjects
- ERGODIC theory; PROOF theory; SIMULATION methods & models; STOCHASTIC convergence; MARGINAL distributions; MARKOV processes
- Abstract
This is a companion paper based on our previous work on rare event simulation methods. In this paper, we provide an alternative proof for the ergodicity of shaking transformation in the Gaussian case and propose two variants of the existing methods with comparisons of numerical performance. In numerical tests, we also illustrate the idea of extreme scenario generation based on the convergence of marginal distributions of the underlying Markov chains and show the impact of the discretization of continuous time models on rare event probability estimation. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
19. A sufficient condition for a unique invariant distribution of a higher-order Markov chain.
- Author
-
Geiger, Bernhard C.
- Subjects
- MARKOV processes; APPROXIMATION theory; SET theory; THEORY of distributions (Functional analysis); MATHEMATICAL symmetry
- Abstract
We derive a sufficient condition for a $k$th-order homogeneous Markov chain $Z$ with finite alphabet $\mathcal{Z}$ to have a unique invariant distribution on $\mathcal{Z}^k$. Specifically, let $X$ be a first-order, stationary Markov chain with finite alphabet $\mathcal{X}$ and a single recurrent class, let $g : \mathcal{X} \to \mathcal{Z}$ be non-injective, and define the (possibly non-Markovian) process $Y := g(X)$ (where $g$ is applied coordinate-wise). If $Z$ is the $k$th-order Markov approximation of $Y$, its invariant distribution is unique. We generalize this to non-Markovian processes $X$. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
20. A generalization of Gerber’s inequality for ruin probabilities in risk-switching models.
- Author
-
Gajek, Lesław and Rudź, Marcin
- Subjects
- MATHEMATICAL inequalities; PROBABILITY theory; SWITCHING theory; MARKOV processes; DISCRETE-time systems
- Abstract
In this paper, we investigate a risk-switching Sparre Andersen model which generalizes several discrete-time as well as continuous-time risk models. A Markov chain is used as a ‘switch’ under the assumption that jumps change the amount and/or the respective waiting-time distributions of claims, while the insurer can adapt the premiums in response. A generalized Gerber-type inequality for the vector of ruin probabilities is proven, showing that risk-switching models allow sophisticated mathematical results in spite of their complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
21. On beta distributed limits of iterated linear random functions.
- Author
-
McKinlay, Shaun
- Subjects
- RANDOM functions (Mathematics); ITERATIVE methods (Mathematics); LINEAR statistical models; FIXED point theory; MARKOV processes; ERGODIC theory
- Abstract
We consider several special cases of iterations of random i.i.d. linear functions with beta distributed fixed points. When iterated in a backward direction we obtain a nested interval scheme, whilst the forward direction generates an ergodic Markov chain. Our approach involves relating the random equation satisfied by the beta distributed fixed point to a random equation with a gamma distributed fixed point. The paper extends many results available in the existing literature. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
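Forward iteration of i.i.d. random affine maps is easy to experiment with. The choice below — multiplier $U$ uniform on $(0,1)$ and additive part $(1-U)Z$ with $Z$ a fair coin — is an illustrative example of such an iterated random linear function, not necessarily one of the paper's beta cases; taking expectations in the fixed-point equation $X \stackrel{d}{=} UX + (1-U)Z$ gives $E[X] = 0.5$:

```python
import random

def iterate_affine(n_steps=200_000, seed=0):
    """Time average of the forward chain X_{k+1} = U X_k + (1 - U) Z,
    with U uniform on (0, 1) and Z a fair coin in {0, 1}.  The chain
    is ergodic on [0, 1]; the stationary mean solves
    E[X] = E[U] E[X] + E[(1 - U) Z], i.e. E[X] = 0.5."""
    rng = random.Random(seed)
    x, total = rng.random(), 0.0
    for _ in range(n_steps):
        u, z = rng.random(), rng.randrange(2)
        x = u * x + (1 - u) * z      # one random affine map
        total += x
    return total / n_steps
```

The backward iteration of the same maps, by contrast, produces the nested-interval scheme mentioned in the abstract.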
22. Stratified Monte Carlo simulation of Markov chains.
- Author
-
Fakhereddine, Rana, Haddad, Rami El, Lécot, Christian, and Maalouf, Joseph El
- Subjects
- MONTE Carlo method; MARKOV processes; NUMERICAL integration; HYPERCUBES; COMPUTATIONAL statistics
- Abstract
We present several Monte Carlo strategies for simulating discrete-time Markov chains with continuous multi-dimensional state space; we focus on stratified techniques. We first analyze the variance of the calculation of the measure of a domain included in the unit hypercube, when stratified samples are used. We then show that each step of the simulation of a Markov chain can be reduced to the numerical integration of the indicator function of a subdomain of the unit hypercube. Our approach for Markov chains simulates N copies of the chain in parallel using stratified sampling and the copies are sorted after each step, according to their successive coordinates. We analyze variance reduction on examples of pricing of European and Asian options: enhanced efficiency of stratified strategies is shown. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
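The first building block analyzed in this abstract — the variance of a stratified estimate of the measure of a domain in the unit hypercube — can be sketched in two dimensions: draw one uniform point per cell of an m × m grid (the grid size is an arbitrary choice):

```python
import random

def stratified_measure(indicator, m=64, seed=0):
    """Stratified Monte Carlo estimate of the measure of a subdomain
    of the unit square: one uniform point is drawn in each cell of an
    m x m grid, so the sample is perfectly balanced across strata."""
    rng = random.Random(seed)
    hits = 0
    for i in range(m):
        for j in range(m):
            x = (i + rng.random()) / m   # uniform in cell (i, j)
            y = (j + rng.random()) / m
            hits += indicator(x, y)
    return hits / (m * m)

# quarter disk {x^2 + y^2 <= 1}, exact measure pi/4
est = stratified_measure(lambda x, y: x * x + y * y <= 1.0)
```

Only the cells crossed by the boundary contribute variance, so the estimate lands very close to π/4 ≈ 0.7854; plain Monte Carlo with the same budget of 4096 points would typically show a noticeably larger error.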
23. Generalized Markov chain tree theorem and Kemeny's constant for a class of non-Markovian matrices.
- Author
-
Choi, Michael C.H. and Huang, Zhipeng
- Subjects
- MARKOV processes; MATRICES (Mathematics); TREES; MARKOV chain Monte Carlo
- Abstract
Given an ergodic Markov chain with transition matrix $P$ and stationary distribution $\pi$, the classical Markov chain tree theorem expresses $\pi$ in terms of graph-theoretic parameters associated with the graph of $P$. For a class of non-stochastic matrices $M_2$ associated with $P$, recently introduced by the first author in Choi (2020) and Choi and Huang (2020), we prove a generalized version of the Markov chain tree theorem in terms of graph-theoretic quantities of $M_2$. This motivates us to define generalized versions of the mean hitting time, the fundamental matrix and Kemeny's constant associated with $M_2$, and we show that they enjoy properties similar to their counterparts for $P$ even though $M_2$ is non-stochastic. We hope to shed light on how concepts and results originating in the Markov chain literature, such as the Markov chain tree theorem, Kemeny's constant or the notion of hitting time, can be extended and generalized to a broader class of non-stochastic matrices by introducing appropriate graph-theoretic parameters. In particular, when $P$ is reversible, the results of this paper reduce to the corresponding results for $P$. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
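The classical Markov chain tree theorem can be checked numerically on a small chain. By the directed matrix-tree theorem, the total weight of spanning arborescences rooted at state $i$ equals the $(i, i)$ principal minor of $I - P$, so normalizing these minors must reproduce the stationary distribution (this checks only the classical statement for $P$, not the paper's generalization to $M_2$):

```python
import numpy as np

def stationary_via_tree_theorem(P):
    """Stationary distribution of an ergodic chain via the Markov
    chain tree theorem: pi_i is proportional to the total weight of
    spanning arborescences rooted at i, computed as the (i, i)
    principal minor of the Laplacian I - P."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    L = np.eye(n) - P
    minors = np.empty(n)
    for i in range(n):
        keep = [j for j in range(n) if j != i]
        minors[i] = np.linalg.det(L[np.ix_(keep, keep)])
    return minors / minors.sum()

pi = stationary_via_tree_theorem([[0.5, 0.5, 0.0],
                                  [0.2, 0.3, 0.5],
                                  [0.4, 0.0, 0.6]])
```

The result agrees with the left Perron eigenvector of the transition matrix, i.e. it satisfies pi P = pi.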
24. Convergence Time of Quantized Metropolis Consensus Over Time-Varying Networks.
- Author
-
Basar, Tamer, Etesami, Seyed Rasoul, and Olshevsky, Alex
- Subjects
- TIME-varying networks; CONSENSUS (Social sciences); METROPOLIS; STOCHASTIC convergence; GRAPH connectivity; COMPUTER network protocols
- Abstract
We consider the quantized consensus problem on undirected time-varying connected graphs with $n$ nodes, and devise a protocol with fast convergence time to the set of consensus points. Specifically, we show that when the edges of each network in a sequence of connected time-varying networks are activated based on Poisson processes with Metropolis rates, the expected convergence time to the set of consensus points is at most $O(n^{2}\log^{2}n)$, where each node performs a constant number of updates per unit time. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
25. Using Markov chains to identify player's performance in badminton.
- Author
-
Galeano, Javier, Gómez, Miguel-Ángel, Rivas, Fernando, and Buldú, Javier M.
- Subjects
- MARKOV processes; STOCHASTIC matrices; BADMINTON players; PROBABILITY theory
- Abstract
We introduce a new way of quantifying the performance of badminton players by analysing their hitting sequences. Using the position of players during 3 consecutive strokes, we create length-3 patterns associated with the playing style of each player. Additionally, we extract from the video matches the information about the initiative gained by a player when performing a stroke, together with the player who won the point at the end of each rally. Next, we obtain the probability that a 3-order pattern is performed by a player and compare it with the average of the top-twenty players. We calculate the transition probabilities between patterns and construct the corresponding Markov chains, including two absorbing states: winning and losing the rally. The Markov matrix allows us to obtain the probability of winning a point once a given pattern appears in the rally, which we call the Expected Pattern Value (EPV). Finally, we investigate the interplay between the EPV and the gain of initiative achieved by a player when performing each pattern. With this information, we are able to detect which patterns are better performed by a player and, furthermore, relate the values of the patterns with the actual probability of winning a rally.
• We study 3-stroke patterns to understand the evolution of a rally in a badminton match.
• Using Markov chains with absorbing states, we obtain the probability of winning a point.
• Using the Markov matrix, we define the Expected Pattern Value (EPV) in badminton.
• We study the interplay between EPVs and initiative gain to assess winning. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
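The absorbing-chain computation behind the EPV can be sketched with the standard fundamental-matrix formula. The numbers below are toy values, not real badminton data, and the two "pattern" states are hypothetical:

```python
import numpy as np

def win_probabilities(Q, R):
    """Absorption probabilities of an absorbing Markov chain via the
    fundamental matrix N = (I - Q)^{-1}:  B = N R, where Q holds the
    transient-to-transient and R the transient-to-absorbing
    transition probabilities.  With absorbing states "win rally" /
    "lose rally", a column of B plays the role of the Expected
    Pattern Value described above."""
    Q = np.asarray(Q, dtype=float)
    R = np.asarray(R, dtype=float)
    N = np.linalg.inv(np.eye(Q.shape[0]) - Q)
    return N @ R

# two toy patterns; columns of R are [win rally, lose rally]
B = win_probabilities([[0.2, 0.5], [0.4, 0.1]],
                      [[0.2, 0.1], [0.3, 0.2]])
```

Each row of `B` sums to 1: starting from any pattern, the rally is eventually won or lost with probability one.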
26. Convergence rates for empirical measures of Markov chains in dual and Wasserstein distances.
- Author
-
Riekert, Adrian
- Subjects
- INVARIANT measures; MARKOV processes; RANDOM variables; INTEREST rates
- Abstract
We consider a Markov chain on R d with invariant measure μ. We are interested in the rate of convergence of the empirical measures towards the invariant measure with respect to various dual distances, including in particular the 1-Wasserstein distance. The main result of this article is a new upper bound for the expected distance, which is proved by combining a Fourier expansion with a truncation argument. Our bound matches the known rates for i.i.d. random variables up to logarithmic factors. In addition, we show how concentration inequalities around the mean can be obtained. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
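In one dimension, the 1-Wasserstein distance between two empirical measures with the same number of atoms is just the average gap between sorted samples, which makes the convergence rate easy to probe. The AR(1) chain and the comparison against an i.i.d. sample from its invariant measure are illustrative choices, not the paper's setting:

```python
import random

def w1_empirical(xs, ys):
    """1-Wasserstein distance between two empirical measures with the
    same number of atoms: the average gap between sorted samples."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def ar1_sample(n, a=0.5, seed=0):
    """n steps of the AR(1) chain X_{k+1} = a X_k + N(0, 1), whose
    invariant measure is N(0, 1 / (1 - a^2))."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def invariant_sample(n, a=0.5, seed=1):
    """i.i.d. draws from the invariant measure of the chain above."""
    rng = random.Random(seed)
    s = (1.0 / (1.0 - a * a)) ** 0.5
    return [rng.gauss(0.0, s) for _ in range(n)]

# empirical measure of the chain vs. its invariant measure
d = w1_empirical(ar1_sample(20000), invariant_sample(20000))
```

Comparing `d` across sample sizes (say n = 200 versus n = 20000) typically exhibits the roughly $n^{-1/2}$ shrinkage, up to logarithmic factors, that the paper's bounds formalize.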
27. When Markov chains meet: A continuous-time model of network evolution.
- Author
-
Gilboa-Freedman, Gail and Hassin, Refael
- Subjects
- MARKOV processes; CONTINUOUS time models; MODULES (Algebra); NUMERICAL analysis; STOCHASTIC processes
- Abstract
We suggest a novel approach to modeling continuous-time processes of interactions among independent elements. The model assumes a finite number of independent Markov chains, each representing an element. The chains move among a common space of states. Sometimes chains intersect, being in the same state at the same time. These intersections relate the chains to each other and give rise to many interesting processes. In this paper, we examine our new approach in the context of network evolution. Our analytic study obtains a closed-form solution for the expected time until a node has any specific degree. Our numerical study demonstrates properties that are in agreement with real-world networks. Thus we show the potential of our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
28. Random migration processes between two stochastic epidemic centers.
- Author
-
Sazonov, Igor, Kelbert, Mark, and Gravenor, Michael B.
- Subjects
- *
STOCHASTIC processes , *EPIDEMICS , *MARKOV processes , *DISTRIBUTION (Probability theory) , *NUMERICAL analysis - Abstract
We consider the epidemic dynamics in stochastic interacting population centers coupled by random migration. Both the epidemic and the migration processes are modeled by Markov chains. We derive explicit formulae for the probability distribution of the migration process, and explore the dependence of outbreak patterns on initial parameters, population sizes and coupling parameters, using analytical and numerical methods. We show the importance of considering the movement of resident and visitor individuals separately. The mean field approximation for a general migration process is derived and an approximate method that allows the computation of statistical moments for networks with highly populated centers is proposed and tested numerically. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
29. Convergence Time for Unbiased Quantized Consensus Over Static and Dynamic Networks.
- Author
-
Etesami, Seyed Rasoul and Basar, Tamer
- Subjects
- *
GRAPH connectivity , *UNDIRECTED graphs , *MARKOV processes , *RANDOM walks , *COMPUTER networks - Abstract
In this paper, the question of expected time to convergence is addressed for unbiased quantized consensus on undirected connected graphs, and some strong results are obtained. The paper first provides a tight expression for the expected convergence time of the unbiased quantized consensus over general but fixed networks. It is shown that the maximum expected convergence time lies within a constant factor of the maximum hitting time of an appropriate lazy random walk, using the theory of harmonic functions for reversible Markov chains. Following this, and using the electric resistance analogy of reversible Markov chains, the paper provides an upper bound of $O(nmD\log n)$ for the expected convergence time to consensus, where $n$, $m$, and $D$ denote, respectively, the number of nodes, the number of edges, and the diameter of the network. Moreover, the paper shows that the above upper bound is tight up to a factor of $\log n$ for some simple graphs such as the line graph and the cycle. Finally, the results are extended to bound the expected convergence time of the underlying dynamics in time-varying networks. Modeling such dynamics as the evolution of a time-inhomogeneous Markov chain, the paper derives an upper bound for the expected convergence time of the dynamics using the spectral representation of the networks. This upper bound is significantly better than earlier ones given for different quantized consensus protocols over time-varying graphs. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
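A simplified simulation sketch of quantized consensus on a fixed cycle (the graph, initial values, step budget, and the one-unit "move toward each other" variant of the update are illustrative assumptions, not the paper's exact protocol): adjacent nodes that differ by more than one move one unit toward each other, and nodes that differ by exactly one swap with probability 1/2, the unbiased rule. Values stay integral and the sum is conserved.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
edges = [(i, (i + 1) % n) for i in range(n)]      # cycle graph
x = np.array([0, 7, 3, 9, 1, 4, 6, 2])            # integer opinions
total = x.sum()

for _ in range(20_000):
    i, j = edges[rng.integers(len(edges))]
    if abs(x[i] - x[j]) > 1:
        lo, hi = (i, j) if x[i] < x[j] else (j, i)
        x[lo] += 1                                # move one unit closer
        x[hi] -= 1
    elif x[i] != x[j] and rng.random() < 0.5:
        x[i], x[j] = x[j], x[i]                   # unbiased swap

print(x, x.sum())   # spread shrinks to at most 1; the sum is unchanged
```

Once the spread is at most one, swaps preserve it, so the chain is absorbed in the consensus set; the step budget here far exceeds the $O(nmD\log n)$ scale of the bound discussed in the abstract.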
30. Solving the Pareto front for multiobjective Markov chains using the minimum Euclidean distance gradient-based optimization method.
- Author
-
Clempner, Julio B. and Poznyak, Alexander S.
- Subjects
- *
EUCLIDEAN distance , *EUCLIDEAN geometry , *LAGRANGE spaces , *DIFFERENTIAL geometry , *METHODOLOGY - Abstract
A novel method based on minimizing the Euclidean distance is proposed for generating a well-distributed Pareto set in multi-objective optimization for a class of ergodic controllable Markov chains. The proposed approach is based on the concept of a strong Pareto policy. We consider the case where the search space is a non-strictly convex set. For solving the problem we introduce Tikhonov's regularization method and apply the Lagrange principle. We reformulate the original problem by introducing linear constraints over the nonlinear problem, employing the c-variable method and constraining the cost functions so that points in the Pareto front have a small distance from one another. As a result, the proposed method generates an even representation of the entire Pareto surface. We then propose an algorithm to compute the Pareto front and provide all the details needed to implement the method in an efficient and numerically stable way. In addition, we prove the main theorems describing the dependence of the saddle point on the regularizing parameter and analyze its asymptotic behavior. Moreover, we analyze the step-size parameter of the Lagrange principle and its asymptotic behavior. The suggested approach is validated theoretically and verified by a numerical example, related to security patrolling, that presents a technique for visualizing the Pareto front. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
31. On the use of the Borel–Cantelli lemma in Markov chains.
- Author
-
Stepanov, Alexei
- Subjects
- *
MARKOV processes , *GENERALIZATION , *LIMITS (Mathematics) , *MATHEMATICAL optimization , *BOREL sets , *NUMERICAL analysis - Abstract
Abstract: In the present paper, we propose technical generalizations of the Borel–Cantelli lemma. These generalizations can then be used to derive strong limit results for Markov chains, and we obtain several such results. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
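For orientation, the classical first Borel–Cantelli lemma that such generalizations start from can be stated as follows (background only, not the paper's generalized form):

```latex
% First Borel-Cantelli lemma: if the probabilities of the events A_n
% are summable, then almost surely only finitely many of the A_n occur.
\[
  \sum_{n=1}^{\infty} \mathbb{P}(A_n) < \infty
  \;\Longrightarrow\;
  \mathbb{P}\Bigl(\limsup_{n\to\infty} A_n\Bigr) = 0 .
\]
```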
32. MTD models for aggregate data from higher order Markov chains.
- Author
-
Kaur, Inderdeep
- Subjects
- *
MIXTURE distributions (Probability theory) , *AGGREGATION (Statistics) , *MARKOV processes , *LEAST squares , *MULTINOMIAL distribution - Abstract
Abstract: We consider two higher order models for aggregate data on a finite state space. In the first model, aggregate data are obtained from i.i.d. individuals who follow a Mixture Transition Distribution (MTD) Markov model of lag . In the second model, aggregate data are modeled as an MTD Markov model based on multinomial thinning. In both cases, it is shown that the Conditional Least Squares Estimators are CAN (consistent and asymptotically normal) for a fixed lag. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
33. T–S fuzzy-model-based and filtering for networked control systems with two-channel Markovian random delays.
- Author
-
Liu, Mingxi, Liu, Xiaotao, Shi, Yang, and Wang, Shuqing
- Subjects
- *
MARKOV processes , *TIME delay systems , *FUZZY systems , *SIGNAL filtering , *DISCRETE-time systems , *LINEAR matrix inequalities - Abstract
Abstract: This paper is concerned with the two-mode-dependent filtering problem in networked control systems (NCSs) where the random external input-to-filter delay and the output-to-filter delay are modeled as Markov chains. The nonlinear discrete-time system is modeled by the Takagi–Sugeno fuzzy model. The overall filtering error system is formulated as a special jump linear system. Then definitions of and norms of the filtering error system are proposed. Moreover, the two-mode-dependent and filters are designed to incorporate the external input-to-filter delay and the output-to-filter delay using linear matrix inequality (LMI) techniques. The simulations and a numerical example illustrate the feasibility and effectiveness of the proposed method. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
34. Markovian interpretations of dual retrieval processes.
- Author
-
Gomes, C.F.A., Brainerd, C.J., Nakamura, K., and Reyna, V.F.
- Subjects
- *
RECOLLECTION (Psychology) , *INFORMATION retrieval , *MARKOV processes , *LEARNING , *RECOGNITION (Psychology) , *EPISODIC memory - Abstract
Abstract: A half-century ago, at the dawn of the all-or-none learning era, Estes showed that finite Markov chains supply a tractable, comprehensive framework for discrete-change data of the sort that he envisioned for shifts in conditioning states in stimulus sampling theory. Shortly thereafter, such data rapidly accumulated in many spheres of human learning and animal conditioning, and Estes’ work stimulated vigorous development of Markov models to handle them. A key outcome was that the data of the workhorse paradigms of episodic memory, recognition and recall, proved to be one- and two-stage Markovian, respectively, to close approximations. Subsequently, Markov modeling of recognition and recall all but disappeared from the literature, but it is now reemerging in the wake of dual-process conceptions of episodic memory. In recall, in particular, Markov models are being used to measure two retrieval operations (direct access and reconstruction) and a slave familiarity operation. In the present paper, we develop this family of models and present the requisite machinery for fit evaluation and significance testing. Results are reviewed from selected experiments in which the recall models were used to understand dual memory processes. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
35. Hoeffding’s inequalities for geometrically ergodic Markov chains on general state space.
- Author
-
Miasojedow, Błażej
- Subjects
- *
HOEFFDING'S inequalities , *GEOMETRIC analysis , *ERGODIC theory , *MARKOV processes , *DISTRIBUTION (Probability theory) , *SPECTRAL geometry , *MATHEMATICAL proofs - Abstract
Abstract: Let be a Markov chain on a general state space with stationary distribution and a spectral gap in the space . In this paper, we prove that the probabilities of large deviations of sums satisfy an inequality of Hoeffding type. We generalize the results of León and Perron (2004) in two directions: the state space is general and we do not assume reversibility. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
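As background, the i.i.d. baseline that Hoeffding-type results for Markov chains extend (stated here for orientation; this is not the paper's theorem) reads:

```latex
% Classical Hoeffding inequality for independent X_i with a_i <= X_i <= b_i:
\[
  \mathbb{P}\!\left( \left| \sum_{i=1}^{n} \bigl( X_i - \mathbb{E}X_i \bigr) \right| \ge t \right)
  \;\le\;
  2 \exp\!\left( \frac{-2 t^{2}}{\sum_{i=1}^{n} (b_i - a_i)^{2}} \right).
\]
```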
36. Invariant distribution of stochastic Gompertz equation under regime switching.
- Author
-
Hu, Guixin
- Subjects
- *
INVARIANTS (Mathematics) , *DISTRIBUTION (Probability theory) , *STOCHASTIC analysis , *SWITCHING theory , *STOCHASTIC differential equations , *MARKOV processes - Abstract
Abstract: This paper is concerned with the asymptotic behavior of a stochastic Gompertz model in random environments, viewed through Itô stochastic differential equations with Markovian switching. Starting from the deterministic Gompertz model, we establish the corresponding stochastic model, described as a stochastic Gompertz model with Markovian switching. We show that this model is asymptotically stable in distribution and that it admits an invariant probability distribution under certain conditions. Moreover, we simulate the trajectories and the limiting probability distribution of the solution using Monte Carlo stochastic simulation. The simulation results confirm our conclusions and reflect the statistical properties of the stochastic model. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
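A minimal Euler–Maruyama sketch of such a model, assuming a standard Gompertz form dX = X(b − a·ln X)dt + σX dW and a two-state switching environment; all coefficients, switching rates, and the discretization are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, steps = 1e-3, 50_000
# environment regimes: (a, b, sigma) per state, with switching rates
params = {0: (1.0, 1.0, 0.2), 1: (0.5, 1.5, 0.4)}
q01, q10 = 2.0, 3.0            # regime 0 -> 1 and 1 -> 0 rates

x, r = 1.0, 0
for _ in range(steps):
    a, b, sigma = params[r]
    # regime switch, first-order approximation of the jump probability
    if rng.random() < (q01 if r == 0 else q10) * dt:
        r = 1 - r
    dw = np.sqrt(dt) * rng.standard_normal()
    x += x * (b - a * np.log(x)) * dt + sigma * x * dw
    x = max(x, 1e-12)          # keep the discretized path positive

print(f"final regime {r}, final state {x:.4f}")
```

Repeating the run many times and histogramming the terminal states is the Monte Carlo route the abstract mentions for approximating the invariant distribution.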
37. A note on the relaxation time of two Markov chains on rooted phylogenetic tree spaces.
- Author
-
Spade, David A., Herbei, Radu, and Kubatko, Laura S.
- Subjects
- *
MARKOV processes , *PHYLOGENY , *TREE graphs , *FUNCTION spaces , *BIOLOGICAL mathematical modeling , *NEAREST neighbor analysis (Statistics) - Abstract
Abstract: Phylogenetic trees are commonly used to model the evolutionary relationships among a collection of biological species. Over the past fifteen years, the convergence properties for Markov chains defined on phylogenetic trees have been studied, yielding results about the time required for such chains to converge to their stationary distributions. In this work we derive an upper bound on the relaxation time of two Markov chains on rooted binary trees: one defined by nearest neighbor interchanges (NNI) and the other defined by subtree prune and regraft (SPR) moves. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
38. When is a Markov chain regenerative?
- Author
-
Athreya, Krishna B. and Roy, Vivekananda
- Subjects
- *
MARKOV chain Monte Carlo , *RANDOM variables , *SEQUENCE analysis , *DISTRIBUTION (Probability theory) , *IRREDUCIBLE polynomials , *FUNCTION spaces - Abstract
Abstract: A sequence of random variables is called regenerative if it can be broken up into i.i.d. components. The problem addressed in this paper is that of determining under what conditions a Markov chain is regenerative. It is shown that an irreducible Markov chain with a countable state space is regenerative for any initial distribution if and only if it is recurrent (null or positive). An extension of this to the general state space case is also discussed. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
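A sketch of the regeneration idea on a concrete recurrent chain (the walk, drift, and run length are illustrative assumptions): a reflected simple random walk on the nonnegative integers with downward drift is positive recurrent, and successive visits to state 0 split the path into i.i.d. excursions.

```python
import numpy as np

rng = np.random.default_rng(3)
p_up = 0.4                      # p < 1/2 gives positive recurrence

x, path = 0, [0]
for _ in range(50_000):
    step = 1 if rng.random() < p_up else -1
    x = max(0, x + step)        # reflect at zero
    path.append(x)

# regeneration times: indices where the chain sits at state 0
regen = [t for t, v in enumerate(path) if v == 0]
cycle_lengths = np.diff(regen)  # i.i.d. by the regeneration property
print(len(regen), cycle_lengths.mean())
```

The i.i.d. cycle structure is what makes laws of large numbers and central limit theorems for such chains reduce to the classical i.i.d. theory.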
39. Stability Analysis of Stochastic Hybrid Jump Linear Systems Using a Markov Kernel Approach.
- Author
-
Tejada, Arturo, Gonzalez, Oscar R., and Gray, W. Steven
- Subjects
- *
STABILITY theory , *HYBRID computers (Computer architecture) , *FINITE state machines , *PROBABILITY theory , *MEAN square algorithms , *SIGNAL processing - Abstract
In this paper, the state dynamics of a supervisor implemented with a digital sequential system are represented with a finite state machine (FSM). The supervisor monitors a symbol sequence derived from a linear closed-loop system's performance and generates a switching signal for the closed-loop system. The effect of random events on the performance of the closed-loop system is analyzed by adding an exogenous Markov process input to the FSM, and by appropriately augmenting a switched system representation of the supervisor and the closed-loop system. For this class of hybrid jump linear systems, the switching signal is, in general, a non-Markovian process, making it hard to analyze its stability properties. This is ameliorated by introducing a sufficient mean square stability test that uses only upper bounds on the one-step transition probabilities of the switching signal. These bounds are explicitly derived from a Markov kernel associated with the hybrid system model. This stability test becomes necessary and sufficient when the switching signal is Markovian. To determine tighter stability bounds, procedures to determine the upper-bound transition probability matrices when the FSM has a Moore or a Mealy type output map are presented. Two examples illustrate the applicability of the presented results. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
40. Empirical processes of iterated maps that contract on average.
- Author
-
Durieu, Olivier
- Subjects
- *
ITERATIVE methods (Mathematics) , *MATHEMATICAL mappings , *MARKOV processes , *PROBABILITY theory , *STOCHASTIC convergence , *EMPIRICAL research - Abstract
Abstract: We consider a Markov chain obtained by random iterations of Lipschitz maps chosen with a probability depending on the current position . We assume this system has a property of “contraction on average”, that is for some . In the present note, we study the weak convergence of the empirical process associated with this Markov chain. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
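A toy instance of an iterated-function-system chain that contracts on average (maps and weights are illustrative assumptions): two affine maps on the real line with Lipschitz constants 0.9 and 0.5, chosen with equal probability, so the average contraction factor is 0.7 < 1 and the chain has a unique stationary law.

```python
import numpy as np

rng = np.random.default_rng(7)
maps = [lambda z: 0.9 * z + 1.0,    # Lipschitz constant 0.9
        lambda z: 0.5 * z - 1.0]    # Lipschitz constant 0.5

x = 0.0
trace = []
for _ in range(10_000):
    x = maps[rng.integers(2)](x)    # random iteration
    trace.append(x)

trace = np.array(trace[1_000:])     # discard a burn-in period
print(trace.mean(), trace.std())
```

The post-burn-in trace approximates draws from the stationary distribution; its empirical process is the object whose weak convergence the note studies.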
41. Degree Fluctuations and the Convergence Time of Consensus Algorithms.
- Author
-
Olshevsky, Alex and Tsitsiklis, John N.
- Subjects
- *
AUTOMATIC control systems , *CONTROL theory (Engineering) , *STOCHASTIC processes , *AUTOMATION , *COMPUTER algorithms , *PROGRAMMABLE controllers , *PROCESS control systems - Abstract
We consider a consensus algorithm in which every node in a sequence of undirected, $B$-connected graphs assigns equal weight to each of its neighbors. Under the assumption that the degree of each node is fixed (except for times when the node has no connections to other nodes), we show that consensus is achieved within a given accuracy $\epsilon$ on $n$ nodes in time $B + 4n^3 B \ln(2n/\epsilon)$. Because there is a direct relation between consensus algorithms in time-varying environments and inhomogeneous random walks, our result also translates into a general statement on such random walks. Moreover, we give a simple proof of a result of Cao, Spielman, and Morse that the worst case convergence time becomes exponentially large in the number of nodes $n$ under slight relaxation of the degree constancy assumption. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
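A static special case of the equal-neighbor rule, as a sketch (the five-node path graph, self-loops, and iteration count are illustrative assumptions; the paper's setting is a time-varying sequence of graphs): each node repeatedly replaces its value by the average of itself and its neighbors.

```python
import numpy as np

# Path graph on 5 nodes; adjacency with self-loops
A = np.eye(5)
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1
W = A / A.sum(axis=1, keepdims=True)   # row-stochastic equal weights

x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
for _ in range(1_000):
    x = W @ x                          # one round of averaging

print(x)   # all entries have (nearly) agreed on a common value
```

Since W is primitive here, $W^k x$ converges geometrically to a consensus vector; the convergence-time bounds in the abstract quantify how slow this can be in the time-varying case.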
42. Remarks on the speed of convergence of mixing coefficients and applications.
- Author
-
Longla, Martial
- Subjects
- *
STOCHASTIC convergence , *COPULA functions , *COEFFICIENTS (Statistics) , *DEPENDENCE (Statistics) , *MARKOV processes , *ALGORITHMS , *MATHEMATICAL symmetry - Abstract
Abstract: We study dependence coefficients for copula-based Markov chains. We provide new tools to check the convergence rates of mixing coefficients of copula-based Markov chains, and we apply these results to the Metropolis–Hastings algorithm. A necessary condition for symmetric copulas is given and mixtures of copulas are studied. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
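Since the Metropolis–Hastings algorithm is the application named above, here is a minimal random-walk sampler targeting the standard normal density (step size, run length, and burn-in are illustrative assumptions; the paper analyzes the mixing rates of such chains, not their implementation):

```python
import numpy as np

rng = np.random.default_rng(4)

def log_target(z):
    return -0.5 * z * z        # log of N(0,1) density, up to a constant

x, samples = 0.0, []
for _ in range(20_000):
    prop = x + rng.normal(scale=1.0)                  # random-walk proposal
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop                                      # accept
    samples.append(x)                                 # else keep current state

samples = np.array(samples[2_000:])                   # drop burn-in
print(samples.mean(), samples.var())
```

The sample mean and variance should be close to 0 and 1; how fast the chain's dependence coefficients decay is exactly the kind of question the abstract addresses.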
43. The elitist non-homogeneous genetic algorithm: Almost sure convergence.
- Author
-
Rojas Cruz, J.A. and Pereira, A.G.C.
- Subjects
- *
EVOLUTIONARY algorithms , *GENETIC algorithms , *STOCHASTIC convergence , *PARAMETER estimation , *PROBABILITY theory , *DYNAMICS - Abstract
Abstract: Evolutionary algorithms are used to search for optimal points of functions. One of these algorithms, the non-homogeneous genetic algorithm, uses in its dynamics two parameters, namely mutation and crossover probabilities, which are allowed to change throughout the algorithm’s evolution. In this paper, we consider the elitist version of the non-homogeneous genetic algorithm and we prove its almost sure convergence to a population which has an optimum point in it. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
44. Lagrange Asymptotic Stability of Weak Detectable Markov Jump Linear Systems With Bounded Long Run Average Cost.
- Author
-
Barbosa, B.G. and Costa, Eduardo F.
- Subjects
- *
STABILITY of linear systems , *LAGRANGE equations , *MARKOV processes , *THERMAL stability , *OBSERVABILITY (Control theory) , *HILBERT space - Abstract
In this note we study the stability of Markov jump linear systems with additive noise. We show in a rather direct manner that the system is mean square Lagrange asymptotically stable if and only if the long run average cost is bounded and the system is weak detectable, generalizing previous results employing observability notions. In control applications this means that, for detectable systems, closed-loop controls incurring bounded long run average cost are ensured to be stabilizing. A numerical example is included. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
45. An almost sure local limit theorem for Markov chains
- Author
-
Giuliano-Antonini, Rita and Szewczak, Zbigniew S.
- Subjects
- *
LIMIT theorems , *MARKOV processes , *MATHEMATICAL analysis , *NUMERICAL analysis , *PROBABILITY theory - Abstract
Abstract: An almost sure local limit theorem for Markov chains is investigated. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
46. A simple variance inequality for -statistics of a Markov chain with applications
- Author
-
Fort, G., Moulines, E., Priouret, P., and Vandekerkhove, P.
- Subjects
- *
ANALYSIS of variance , *MATHEMATICAL inequalities , *STATISTICS , *MARKOV processes , *RANDOM variables , *MATHEMATICAL bounds - Abstract
Abstract: We establish a simple variance inequality for -statistics whose underlying sequence of random variables is an ergodic Markov chain. The constants in this inequality are explicit and depend on computable bounds on the mixing rate of the Markov chain. We apply this result to derive the strong law of large numbers for -statistics of a Markov chain under conditions which are close to being optimal. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
47. An order-theoretic mixing condition for monotone Markov chains
- Author
-
Kamihigashi, Takashi and Stachurski, John
- Subjects
- *
MARKOV processes , *MONOTONE operators , *MIXING , *DISTRIBUTION (Probability theory) , *STATIONARY processes , *IRREDUCIBLE polynomials - Abstract
Abstract: We discuss the stability of discrete-time Markov chains satisfying monotonicity and an order-theoretic mixing condition that can be seen as an alternative to irreducibility. A chain satisfying these conditions has at most one stationary distribution. Moreover, if there is a stationary distribution, then the chain is stable in an order-theoretic sense. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
48. Approximate Abstractions of Stochastic Hybrid Systems.
- Author
-
Abate, Alessandro, D'Innocenzo, Alessandro, and Di Benedetto, Maria D.
- Subjects
- *
APPROXIMATION theory , *HYBRID systems , *STOCHASTIC systems , *MARKOV processes , *KERNEL functions , *ERGODIC theory , *ABSTRACT thought - Abstract
We present a constructive procedure for obtaining a finite approximate abstraction of a discrete-time stochastic hybrid system. The procedure consists of a partition of the state space of the system and depends on a controllable parameter. Given proper continuity assumptions on the model, the approximation errors introduced by the abstraction procedure are explicitly computed and it is shown that they can be tuned through the parameter of the partition. The abstraction is interpreted as a Markov set-chain. We show that the enforcement of certain ergodic properties on the stochastic hybrid model implies the existence of a finite abstraction with finite error in time over the concrete model, and allows introducing a finite-time algorithm that computes the abstraction. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
49. Non-reversible Monte Carlo simulations of spin models
- Author
-
Fernandes, Heitor C.M. and Weigel, Martin
- Subjects
- *
MARKOV processes , *RANDOM walks , *MAGNETIZATION , *GENERALIZATION , *MONTE Carlo method , *MATHEMATICAL models - Abstract
Abstract: Monte Carlo simulations are used to study simple systems where the underlying Markov chain satisfies the necessary condition of global balance but does not obey the more restrictive condition of detailed balance. Here, we show that non-reversible Markov chains can be set up that generate correct stationary distributions, but reduce or eliminate the diffusive motion in phase space typical of the usual Monte Carlo dynamics. Our approach is based on splitting the dynamics into a set of replicas, with each replica representing a biased movement in reaction-coordinate space. The introduction of an additional bias in a given replica is compensated for by choosing an appropriate dynamics on the other replicas so as to ensure the validity of global balance. First, we apply this method to a mean-field Ising model, splitting the system into two replicas: one trying to increase the magnetization and the other trying to decrease it. For this simple test system, our results show that the altered dynamics is able to reduce the dynamical critical exponent. Generalizations of this scheme to simulations of the Ising model in two dimensions are discussed. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
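The distinction between global and detailed balance can be checked on a textbook example (not the paper's spin model; the cycle size and bias are illustrative): a biased walk on a cycle of N states has the uniform distribution as its stationary law, so global balance holds, yet the clockwise and counter-clockwise probability flows differ, so detailed balance fails.

```python
import numpy as np

N, p = 6, 0.8
P = np.zeros((N, N))
for i in range(N):
    P[i, (i + 1) % N] = p              # clockwise move
    P[i, (i - 1) % N] = 1 - p          # counter-clockwise move

pi = np.full(N, 1.0 / N)               # uniform distribution

flow = pi[:, None] * P                 # pi_i * P_ij
print(np.allclose(pi @ P, pi))         # True: global balance holds
print(np.allclose(flow, flow.T))       # False: detailed balance fails
```

The persistent rotation around the cycle is a one-dimensional caricature of the reduced diffusive motion the abstract describes.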
50. On simulated annealing with temperature-dependent energy and temperature-dependent communication
- Author
-
Robini, Marc C. and Reissman, Pierre-Jean
- Subjects
- *
SIMULATED annealing , *MONTE Carlo method , *COMBINATORIAL optimization , *MARKOV processes , *STOCHASTIC convergence , *TEMPERATURE , *COST , *CALCULUS of variations - Abstract
Abstract: Simulated annealing (SA) is a generic optimization method that is quite popular because of its ease of implementation and its optimal convergence properties. Still, SA is widely reported to converge very slowly, and it is common practice to allow extra freedom in its design at the expense of losing global convergence guarantees. In this paper, we derive simple sufficient conditions for the global convergence of SA when the cost function and the candidate-solution generation mechanism are temperature-dependent. These conditions are surprisingly weak (they do not involve the variations of the cost function with temperature), and exponential cooling makes it possible to be arbitrarily close to the best possible convergence exponent of standard SA. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
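A bare-bones SA sketch with exponential cooling on a one-dimensional multimodal cost (the cost function, proposal scale, schedule, and run length are illustrative assumptions; the paper's point is that the cost and proposal may additionally depend on the temperature):

```python
import numpy as np

rng = np.random.default_rng(5)

def cost(z):
    # multimodal; global minimum near z ~ -0.3 with value ~ -1.9
    return z * z + 2.0 * np.sin(5.0 * z)

x, T = 4.0, 2.0
best = x
for _ in range(20_000):
    prop = x + rng.normal(scale=0.5)          # candidate solution
    delta = cost(prop) - cost(x)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        x = prop                              # Metropolis acceptance
    if cost(x) < cost(best):
        best = x                              # track the best point seen
    T *= 0.9995                               # exponential cooling

print(best, cost(best))
```

Early high-temperature steps explore freely across the barriers; as T shrinks the chain freezes into a basin, and the tracked best point is typically near the global minimum.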