DAG-Inducing Problems and Algorithms
- Publication Year :
- 2023
Abstract
- Consider the execution of a sequential algorithm that requires the program to converge to an optimal state and then terminate/stutter. To design such an algorithm, we need to ensure that the state space it traverses forms a directed acyclic graph (DAG) whose sink nodes are optimal states. However, if we run the same algorithm on multiple computing nodes in parallel and without synchronization, it may not reach an optimal state. Most parallel processing algorithms in the literature assume a synchronization primitive. Synchronization ensures that the nodes read fresh values and that the execution proceeds systematically, so that the subject algorithm traverses a DAG induced among the global states. With this observation, we investigate the conditions that guarantee that the execution of an algorithm is correct even if it is executed in parallel and without synchronization. To this end, we introduce DAG-inducing problems and DAG-inducing algorithms. We show that induction of a $\prec$-DAG (a DAG among the global states that forms as a result of a partial order induced among the local states visited by individual nodes) is a necessary and sufficient condition for allowing an algorithm to run correctly in asynchrony. In the paper, we first give a comprehensive description of DAG-inducing problems and DAG-inducing algorithms, along with some simple examples. Then we show some properties of an algorithm that is tolerant to asynchrony, which include the above-mentioned condition.
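To illustrate the intuition behind asynchrony tolerance via a partial order on local states, here is a minimal sketch, not taken from the paper: the ring topology, the max-propagation update rule, and the stopping test are illustrative assumptions. Each node's local value only increases, so the local states follow a partial order; consequently the global states visited form a DAG, and the execution converges to the optimal state (every node holding the global maximum) even though nodes read shared values without synchronization and may see stale data.

```python
# Toy sketch (illustrative, not the paper's example): unsynchronized
# max-propagation on a ring. Monotone local updates make the execution
# tolerant to stale reads.
import random
import threading

N = 8
state = [random.randint(0, 100) for _ in range(N)]  # shared, no locks
target = max(state)  # known here only to give the demo a stopping test

def node(i: int) -> None:
    left, right = (i - 1) % N, (i + 1) % N
    while state[i] < target:                 # demo-only termination check
        # Unsynchronized reads: neighbours' values may be stale.
        seen = max(state[left], state[right])
        if seen > state[i]:
            state[i] = seen                  # monotone update: value only grows

threads = [threading.Thread(target=node, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert all(v == target for v in state)
print("converged:", state)
```

Because every local transition moves a node's value strictly up a total order on integers, no global state can repeat, so the traversed global states form a DAG whose only sink is the state in which all nodes hold the maximum; this is the kind of structure the abstract refers to as a $\prec$-DAG.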
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2302.14834
- Document Type :
- Working Paper