3,542 results
Search Results
2. Comments on a paper “A Hermitian Morita theorem for algebras with anti-structure”
- Author
-
Dasgupta, Bhanumati
- Subjects
- *
ALGEBRA , *MATHEMATICS , *MATHEMATICAL analysis , *ALGORITHMS - Abstract
Abstract: In 1.9 of the paper [A. Hahn, A Hermitian Morita theorem for algebras with anti-structure, J. Algebra 93 (1985) 215–235], should be replaced by . This leads to minor changes in the rest of the paper where the ring should be replaced by its opposite and vice versa. [Copyright Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
3. Almost optimal query algorithm for hitting set using a subset query.
- Author
-
Bishnu, Arijit, Ghosh, Arijit, Kolay, Sudeshna, Mishra, Gopinath, and Saurabh, Saket
- Subjects
- *
HYPERGRAPHS , *INDEPENDENT sets , *COMBINATORIAL optimization , *ALGORITHMS - Abstract
In this paper, we focus on Hitting-Set, a fundamental problem in combinatorial optimization, through the lens of sublinear-time algorithms. Given access to the hypergraph through a subset query oracle in the query model, we give sublinear-time algorithms for Hitting-Set with almost tight parameterized query complexity. In parameterized query complexity, we estimate the number of queries to the oracle in terms of the parameter k, the size of the Hitting-Set. The subset query oracle we use in this paper is called the Generalized d-partite Independent Set query oracle (GPIS); it was introduced by Bishnu et al. (ISAAC'18). GPIS is a generalization to hypergraphs of the Bipartite Independent Set query oracle (BIS) introduced by Beame et al. (ITCS'18 and TALG'20) for estimating the number of edges in graphs. Since its introduction, the GPIS query oracle has been used for estimating the number of hyperedges independently by Dell et al. (SODA'20 and SICOMP'22) and Bhattacharya et al. (STACS'22), and for estimating the number of triangles in a graph by Bhattacharya et al. (ISAAC'19 and TOCS'21). Formally, GPIS is defined as follows: the GPIS oracle for a d-uniform hypergraph H takes as input d pairwise disjoint non-empty subsets A_1, ..., A_d of vertices in H and answers whether there is a hyperedge in H that intersects each set A_i, where i ∈ {1, 2, ..., d}. For d = 2, the GPIS oracle is nothing but the BIS oracle. We show that d-Hitting-Set, the hitting set problem for d-uniform hypergraphs, can be solved using Õ_d(k^d log n) GPIS queries. Additionally, we show that d-Decision-Hitting-Set, the decision version of d-Hitting-Set, can be solved with Õ_d(min{k^d log n, k^{2d^2}}) GPIS queries. We complement these parameterized upper bounds with an almost matching parameterized lower bound stating that any algorithm that solves d-Decision-Hitting-Set requires Ω((k+d choose d)) GPIS queries. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
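The GPIS oracle defined in the abstract is easy to emulate on an explicitly stored hypergraph. A minimal sketch (the list-of-sets representation and function names are ours, for illustration only; the point of the paper is that the algorithm never sees the hypergraph, only this oracle):

```python
def make_gpis_oracle(hyperedges):
    """GPIS oracle for a hypergraph stored explicitly as a list of hyperedges:
    gpis(A_1, ..., A_d) answers whether some hyperedge intersects every A_i."""
    edges = [frozenset(e) for e in hyperedges]

    def gpis(*sets):
        # A hyperedge qualifies if its intersection with each query set is non-empty.
        return any(all(edge & frozenset(a) for a in sets) for edge in edges)

    return gpis

# 3-uniform hypergraph with hyperedges {1,2,3} and {3,4,5}
gpis = make_gpis_oracle([{1, 2, 3}, {3, 4, 5}])
print(gpis({1}, {2}, {3}))   # hyperedge {1,2,3} meets all three query sets
print(gpis({1}, {4}, {5}))   # no single hyperedge meets {1}, {4} and {5}
```

Note that the query sets must be pairwise disjoint for the oracle to match the paper's definition; the sketch does not enforce this.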
4. Valuative dimension, constructive points of view.
- Author
-
Lombardi, Henri, Neuwirth, Stefan, and Yengui, Ihsen
- Subjects
- *
COMMUTATIVE rings , *CONSTRUCTIVE mathematics , *ABSTRACT algebra , *MATHEMATICS - Abstract
There are several classical characterisations of the valuative dimension of a commutative ring. Constructive versions of this dimension have been given and proven, within classical mathematics, to be equivalent to the classical notion, and they can be used for the usual examples of commutative rings. In contrast to the classical versions, the constructive versions have a clear computational content. This paper investigates the computational relationship between three possible constructive definitions of the valuative dimension of a commutative ring. In doing so, it proves these constructive versions to be equivalent within constructive mathematics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Computing free non-commutative Gröbner bases over [formula omitted] with Singular:Letterplace.
- Author
-
Levandovskyy, Viktor, Metzlaff, Tobias, and Abou Zeid, Karim
- Subjects
- *
GROBNER bases , *INTEGRAL domains , *ASSOCIATIVE algebras , *ASSOCIATIVE rings , *COMMUTATIVE rings - Abstract
With this paper we present an extension of our recent ISSAC paper on computations of Gröbner(–Shirshov) bases over free associative algebras Z⟨X⟩. We present all the needed proofs in detail, add a part on the direct treatment of the ring Z/mZ, and include new examples and applications, e.g. to Iwahori–Hecke algebras. Extending the concept of Gröbner bases from polynomial algebras over fields to polynomial rings over rings allows one to tackle numerous applications of both theoretical and practical importance. Gröbner and Gröbner–Shirshov bases can be defined for various non-commutative and even non-associative algebraic structures. We study the case of associative rings and aim at free algebras over principal ideal rings. We concentrate on the case of commutative coefficient rings without zero divisors (i.e. domains). Even working over Z allows one to do computations which can be treated as universal for fields of arbitrary characteristic. Using a systematic approach, we revisit the theory and present the algorithms in implementable form. We show drastic differences in the behaviour of Gröbner bases between free algebras and algebras that are close to commutative. Even the process of forming critical pairs has to be reengineered, together with implementing criteria for quickly discarding them. We present an implementation of the algorithms in the Singular subsystem called Letterplace, which internally uses Letterplace techniques (and Letterplace Gröbner bases) due to La Scala and Levandovskyy. Interesting examples and applications accompany our presentation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. An algorithmic approach to small limit cycles of nonlinear differential systems: The averaging method revisited.
- Author
-
Huang, Bo and Yap, Chee
- Subjects
- *
NONLINEAR systems , *LIMIT cycles , *MAPLE , *ALGORITHMS - Abstract
This paper introduces an algorithmic approach to the analysis of bifurcation of limit cycles from the centers of nonlinear continuous differential systems via the averaging method. We develop three algorithms to implement the averaging method. The first algorithm transforms the considered differential systems into the normal form of averaging; here, we restrict the unperturbed term of the normal form of averaging to be identically zero. The second algorithm derives the computational formulae of the averaged functions at any order. The third algorithm builds on the first two and determines the exact expressions of the averaged functions for the considered differential systems. The proposed approach is implemented in Maple and its effectiveness is shown by several examples. Moreover, we report some incorrect results in published papers on the averaging method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
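The first-order averaged function at the heart of the method is simply the time average of the perturbation over one period, f̄(z) = (1/T) ∫₀ᵀ F(t, z) dt. The paper's package is symbolic and written in Maple; the numerical stand-in below only illustrates this averaging step, with an example F chosen by us:

```python
import math

def averaged_function(F, z, T=2 * math.pi, steps=10_000):
    """First-order averaged function f1(z) = (1/T) * integral_0^T F(t, z) dt,
    approximated with the composite trapezoidal rule."""
    h = T / steps
    total = 0.5 * (F(0.0, z) + F(T, z))
    for i in range(1, steps):
        total += F(i * h, z)
    return total * h / T

# F(t, z) = z * sin(t)^2 averages to z / 2 over one period
f1 = averaged_function(lambda t, z: z * math.sin(t) ** 2, z=3.0)
print(round(f1, 6))
```

For periodic integrands the trapezoidal rule converges very fast, so the result here agrees with the exact average z/2 to machine-level accuracy.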
7. A physical study of the LLL algorithm.
- Author
-
Ding, Jintai, Kim, Seungki, Takagi, Tsuyoshi, Wang, Yuntao, and Yang, Bo-yin
- Subjects
- *
GEOMETRIC series , *ALGORITHMS , *FINITE size scaling (Statistical physics) - Abstract
This paper presents a study of the LLL algorithm from the perspective of statistical physics. Based on our experimental and theoretical results, we suggest that interpreting LLL as a sandpile model may help understand much of its mysterious behavior. In the language of physics, our work presents evidence that LLL and certain 1-d sandpile models with simpler toppling rules belong to the same universality class. This paper consists of three parts. First, we introduce sandpile models whose statistics imitate those of LLL with compelling accuracy, which leads to the idea that there must exist a meaningful connection between the two. Indeed, on those sandpile models, we are able to prove the analogues of some of the most desired statements for LLL, such as the existence of the gap between the theoretical and the experimental RHF bounds. Furthermore, we test the formulas from finite-size scaling theory (FSS) against the LLL algorithm itself, and find that they are in excellent agreement. This in particular explains and refines the geometric series assumption (GSA), and allows one to extrapolate various quantities of interest to the dimension limit. In particular, we obtain the estimate that the empirical average RHF converges to ≈1.02265 as the dimension goes to infinity. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
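A 1-d sandpile with a simple toppling rule, of the kind the paper compares LLL against, can be simulated in a few lines. This threshold-2 rule is a generic textbook choice, not necessarily the paper's exact model:

```python
def topple(heights, threshold=2):
    """Relax a 1-d sandpile: any site holding >= threshold grains topples,
    sending one grain to each neighbour; grains leaving the ends are lost."""
    h = list(heights)
    unstable = True
    while unstable:
        unstable = False
        for i, v in enumerate(h):
            if v >= threshold:
                unstable = True
                h[i] -= 2
                if i > 0:
                    h[i - 1] += 1
                if i < len(h) - 1:
                    h[i + 1] += 1
    return h

print(topple([0, 3, 0]))
```

Because grains dissipate at the boundaries, relaxation always terminates in a stable configuration with every site below the threshold.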
8. Unsupervised question-retrieval approach based on topic keywords filtering and multi-task learning.
- Author
-
Shang, Aiguo, Zhu, Xinjuan, Danner, Michael, and Rätsch, Matthias
- Subjects
- *
INFORMATION retrieval , *LEARNING , *QUESTION answering systems , *CONTRASTIVE linguistics , *ALGORITHMS - Abstract
Currently, the majority of retrieval-based question-answering systems depend on supervised training using question pairs. However, there is still a significant need to explore how unsupervised methods can improve the accuracy of retrieval-based question-answering systems. From the perspective of question topic keywords, this paper presents TFCSG, an unsupervised question-retrieval approach based on topic keyword filtering and multi-task learning. Firstly, we design the topic keyword filtering algorithm, which, unlike a topic model, can sequentially filter out the keywords of a question and provide a training corpus for subsequent unsupervised learning. Then, three tasks are designed to train the question-retrieval model: the first is a question contrastive learning task based on a topic-keyword repetition strategy; the second is a similarity-distribution task between questions and their corresponding sequential topic keywords; and the third is a sequential topic keyword generation task using questions. These three tasks are trained in parallel to obtain quality question representations and thus improve the accuracy of the question-retrieval task. Finally, our experimental results on four publicly available datasets demonstrate the effectiveness of TFCSG, with average improvements of 7.1%, 4.4%, and 3.5% in the P@1, MAP, and MRR metrics when using the BERT model compared to the baseline model. The corresponding metrics improved by 5.7%, 3.5%, and 3.0% on average when using the RoBERTa model. The accuracy of the unsupervised similar-question-retrieval task is effectively improved. In particular, the values of P@1, P@5, and P@10 are close, and the retrieved similar questions are ranked higher. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
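Topic keyword filtering in the spirit of the abstract — scoring a question's words against a corpus and keeping the top ones in their original order — can be approximated with a plain tf-idf ranking. This is our simplification, not the TFCSG algorithm itself; the corpus and smoothing are illustrative:

```python
import math
from collections import Counter

def topic_keywords(question, corpus, k=3):
    """Score each word of `question` by tf-idf against `corpus` and return the
    top-k keywords in their original order of appearance."""
    docs = [q.lower().split() for q in corpus]
    n = len(docs)

    def idf(word):
        df = sum(word in doc for doc in docs)
        return math.log(n / (1 + df)) + 1.0  # smoothed inverse document frequency

    words = question.lower().split()
    tf = Counter(words)
    # dict.fromkeys dedupes while preserving order, so ties rank deterministically
    ranked = sorted(dict.fromkeys(words), key=lambda w: tf[w] * idf(w), reverse=True)
    keep = set(ranked[:k])
    return [w for w in words if w in keep]

corpus = [
    "how to reset my password",
    "best way to reset a password",
    "best pizza restaurant in town",
]
print(topic_keywords("best pizza restaurant in town", corpus))
```

Words common across the corpus ("best") are filtered out in favour of topic-specific ones ("pizza").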
9. A general positivity-preserving algorithm for implicit high-order finite volume schemes solving the Euler and Navier-Stokes equations.
- Author
-
Huang, Qian-Min, Zhou, Hanyu, Ren, Yu-Xin, and Wang, Qian
- Subjects
- *
NAVIER-Stokes equations , *CORRECTION factors , *FINITE volume method , *EULER equations , *ALGORITHMS - Abstract
• A novel positivity-preserving algorithm for implicit NS solvers. • A residual correction to compute the correction factor. • A flux correction to enforce the positivity of the solution conservatively. • Positivity-preserving combined with implicit iterations. • Numerical experiments to verify the positivity-preserving capability. This paper presents a general positivity-preserving algorithm for implicit high-order finite volume schemes that solve the compressible Euler and Navier-Stokes equations, ensuring the positivity of density and internal energy (or pressure). Previous positivity-preserving algorithms are mainly based on the slope-limiting or flux-limiting technique, which relies on the existence of low-order positivity-preserving schemes. This dependency poses serious restrictions on extending these algorithms to temporally implicit schemes, since it is difficult to know whether a low-order implicit scheme is positivity-preserving. In the present paper, a new positivity-preserving algorithm is proposed in terms of the flux correction technique, and the factors of the flux correction are determined by a residual correction procedure. For a finite volume scheme that is capable of achieving a converged solution, we show that the correction factors are of the order of unity with additional high-order terms corresponding to the spatial and temporal rates of convergence. Therefore, the proposed positivity-preserving algorithm is accuracy-preserving and asymptotically consistent. The notable advantage of this method is that it does not rely on the existence of low-order positivity-preserving baseline schemes. Therefore, it can be applied to implicit schemes solving the Euler and especially the Navier-Stokes equations. In the present paper, the proposed technique is applied to an implicit dual time-stepping finite volume scheme with temporal second-order and spatial high-order accuracy.
The present positivity-preserving algorithm is implemented in an iterative manner to ensure that the dual time-stepping iteration will converge to the positivity-preserving solution. Another similar correction technique is also proposed to ensure that the solution remains positivity-preserving at each sub-iteration. Numerical results demonstrate that the proposed algorithm preserves positive density and internal energy in all test cases and significantly improves the robustness of the numerical schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
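The core of such a correction is a factor θ ∈ [0, 1] that blends the tentative update back toward the old (positive) state just enough to keep every cell average above a floor ε. The sketch below uses a deliberately simplified single global θ; the paper determines per-correction factors via a residual correction procedure, which this does not reproduce:

```python
def positivity_correct(rho, rho_new, eps=1e-10):
    """Find one factor theta in [0, 1] and blend the tentative update rho_new
    back toward the old state rho so that every cell average stays >= eps."""
    theta = 1.0
    for old, new in zip(rho, rho_new):
        if new < eps and old > new:
            # Largest blend that leaves this cell exactly at the floor eps.
            theta = min(theta, (old - eps) / (old - new))
    corrected = [(1.0 - theta) * old + theta * new for old, new in zip(rho, rho_new)]
    return corrected, theta

rho     = [1.0, 0.02, 1.0]
rho_new = [0.9, -0.01, 1.1]   # the tentative update drives cell 1 negative
fixed, theta = positivity_correct(rho, rho_new)
print(theta, fixed)
```

Because the corrected state is a convex combination of old and tentative values, positivity follows directly from the choice of θ, and θ = 1 (no change) whenever the update is already positive.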
10. GPU-accelerated scalable solver with bit permutated cyclic-min algorithm for quadratic unconstrained binary optimization.
- Author
-
Yasudo, Ryota, Nakano, Koji, Ito, Yasuaki, Katsuki, Ryota, Tabata, Yusuke, Yazane, Takashi, and Hamano, Kenichiro
- Subjects
- *
QUANTUM annealing , *RANDOM numbers , *ISING model , *COMBINATORIAL optimization , *ALGORITHMS , *GRAPHICS processing units , *IMAGE encryption - Abstract
• The adaptive bulk search is a QUBO solver alternative to quantum annealing. • The cyclic-min algorithm is an SA-like algorithm without random number generation. • The bit permutated cyclic-min algorithm provides near-optimal solutions for QUBO. • Our scalable implementation attains linear improvement of the search rate. A wide range of combinatorial optimization problems can be reduced to the Ising model and, equivalently, the quadratic unconstrained binary optimization (QUBO) problem. Thus, in recent years, researchers have proposed to solve QUBO on FPGAs, GPUs, and special-purpose processors. The adaptive bulk search (ABS) is a previously proposed framework for solving QUBO in parallel on multiple GPUs. In the ABS, a CPU host performs a GA-based global search while GPUs asynchronously perform many local searches in parallel. The original ABS adopts a simple local search algorithm called the cyclic-min algorithm, which does not use pseudo-random numbers. However, the lack of randomness may cause a potential drawback of restricted bit-flipping operations in a local search. To avoid this drawback, this paper proposes a cyclic-min algorithm with randomly generated multiple bit permutations, which enables a more effective local search with random number generation in CPUs (not in GPUs). Furthermore, this paper introduces a scalable implementation of the ABS with MPI and OpenMP. Our experimental results on TSUBAME3.0 show that the solution quality improves and the throughput increases linearly as the number of GPUs increases; with 256 GPUs, it evaluates 20.1 × 10^12 solutions per second. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
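The cyclic-min idea — sweep the bits in a fixed order, flip a bit whenever the flip lowers the QUBO energy, and rerun under random bit permutations — is easy to sketch on a CPU. This toy version re-evaluates the full energy per flip; the paper's GPU kernels would use incremental updates, and the instance below is ours:

```python
import random

def energy(Q, x):
    """Energy x·Q·x for an upper-triangular QUBO matrix and a 0/1 vector."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(i, n))

def cyclic_min(Q, x, order):
    """Visit bits in the given cyclic order; keep a flip only if it lowers the
    energy. Repeats until a full sweep makes no improvement."""
    improved = True
    while improved:
        improved = False
        for i in order:
            e0 = energy(Q, x)
            x[i] ^= 1
            if energy(Q, x) < e0:
                improved = True
            else:
                x[i] ^= 1   # undo the flip
    return x

def bit_permuted_cyclic_min(Q, n, restarts=8, seed=1):
    """Rerun cyclic-min under random bit orderings (randomness generated on
    the CPU, as in the paper's design) and keep the best local minimum."""
    rng = random.Random(seed)
    best = cyclic_min(Q, [0] * n, list(range(n)))
    for _ in range(restarts):
        order = list(range(n))
        rng.shuffle(order)
        cand = cyclic_min(Q, [rng.randint(0, 1) for _ in range(n)], order)
        best = min(best, cand, key=lambda v: energy(Q, v))
    return best

# Small QUBO whose unique minimum is x = [1, 1, 0] with energy -3
Q = [[-2, 1, 2],
     [0, -2, 2],
     [0, 0, 1]]
x = bit_permuted_cyclic_min(Q, 3)
print(x, energy(Q, x))
```

Each accepted flip strictly lowers the energy, so every run terminates in a local minimum; the permutations vary which local minimum is reached.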
11. LPI-IBWA: Predicting lncRNA-protein interactions based on an improved Bi-Random walk algorithm.
- Author
-
Xie, Minzhu, Xie, Ruijie, and Wang, Hao
- Subjects
- *
NON-coding RNA , *GENETIC regulation , *ALGORITHMS , *FORECASTING , *K-nearest neighbor classification - Abstract
Many studies have shown that long non-coding RNAs (lncRNAs) are involved in a variety of biological processes, such as post-transcriptional gene regulation, splicing, and translation, by combining with corresponding proteins. Predicting lncRNA-protein interactions is an effective approach to infer the functions of lncRNAs. The paper proposes a new computational model named LPI-IBWA. At first, LPI-IBWA uses similarity kernel fusion (SKF) to integrate various types of biological information to construct lncRNA and protein similarity networks. Then, a bounded matrix completion model and a weighted k-nearest known neighbors algorithm are utilized to update the initial sparse lncRNA-protein interaction matrix. Based on the updated lncRNA-protein interaction matrix, the lncRNA similarity network and the protein similarity network are integrated into a heterogeneous network. Finally, an improved Bi-Random walk algorithm is used to predict novel latent lncRNA-protein interactions. 5-fold cross-validation experiments on a benchmark dataset showed that the AUC and AUPR of LPI-IBWA reach 0.920 and 0.736, respectively, which are higher than those of other state-of-the-art methods. Furthermore, the experimental results of case studies on a novel dataset also illustrated that LPI-IBWA could efficiently predict potential lncRNA-protein interactions. • LPI-IBWA is an effective computational model to predict lncRNA-protein interactions. • Similarity kernel fusion is used to integrate heterogeneous biological similarities. • Bounded matrix completion updates the lncRNA-protein interaction matrix. • Bi-Random walk on the lncRNA-protein interaction network predicts novel interactions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
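A bi-random walk propagates a seed interaction matrix through both similarity networks at once: a "left" walk over the lncRNA network and a "right" walk over the protein network, averaged each step. A toy numpy sketch with made-up matrices; the actual LPI-IBWA pipeline first applies kernel fusion and matrix completion, which this omits:

```python
import numpy as np

def bi_random_walk(S_l, S_p, R0, alpha=0.8, iters=10):
    """Bi-random walk: propagate the seed interaction matrix R0 through an
    lncRNA similarity network (left walk) and a protein similarity network
    (right walk), averaging the two walks at every step."""
    def row_norm(S):
        return S / S.sum(axis=1, keepdims=True)

    Sl, Sp = row_norm(S_l), row_norm(S_p)
    R = R0.astype(float).copy()
    for _ in range(iters):
        left = alpha * Sl @ R + (1 - alpha) * R0    # diffuse over lncRNAs
        right = alpha * R @ Sp + (1 - alpha) * R0   # diffuse over proteins
        R = (left + right) / 2
    return R

# Toy example: lncRNAs 0 and 1 are similar; lncRNA 0 interacts with protein 0,
# so the walk should transfer score to the (lncRNA 1, protein 0) pair.
S_l = np.array([[1.0, 0.9], [0.9, 1.0]])
S_p = np.eye(2) + 0.1
R0 = np.array([[1.0, 0.0], [0.0, 0.0]])
R = bi_random_walk(S_l, S_p, R0)
print(R.round(3))
```

The restart term (1 − α)·R0 anchors the scores to the known interactions, so the iteration converges for α < 1.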
12. Computing pullback function of second order differential operators by using their semi-invariants.
- Author
-
Aldossari, Shayea
- Subjects
- *
ALGORITHMS - Abstract
In this paper, we present an algorithm that, given a differential operator of order 2, detects if this operator comes from a simpler operator under a non-trivial change of variables. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Randomized complexity of mean computation and the adaption problem.
- Author
-
Heinrich, Stefan
- Subjects
- *
SEQUENCE spaces , *ALGORITHMS - Abstract
Recently the adaption problem of Information-Based Complexity (IBC) for linear problems in the randomized setting was solved in Heinrich (2024) [8]. Several papers treating further aspects of this problem followed. However, all examples obtained so far were vector-valued. In this paper we settle the scalar-valued case. We study the complexity of mean computation in finite-dimensional sequence spaces with mixed L_p^N norms. We determine the n-th minimal errors in the randomized adaptive and non-adaptive settings. It turns out that among the problems considered there are examples where the adaptive and non-adaptive n-th minimal errors deviate by a power of n. The gap can be (up to log factors) of order n^{1/4}. We also show how to turn such results into infinite-dimensional examples with suitable deviation for all n simultaneously. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. A structure-preserving algorithm for Vlasov–Darwin system conserving charge, canonical momentum, and Hamiltonian.
- Author
-
Shiroto, Takashi
- Subjects
- *
HAMILTONIAN systems , *PLASMA physics , *CONSERVATION laws (Physics) , *SYMMETRY , *ALGORITHMS - Abstract
In this paper, a conservative numerical method for the Vlasov–Darwin system is proposed. The Darwin model was assumed to be valid only in the Coulomb gauge, but recently this model has also been extended to the Lorenz gauge [1]. The Darwin model, based on the Lorenz gauge, exhibits a good symmetry between scalar and vector potentials, making the proofs of physical constraints relatively easy. In addition, the total energy was believed to be one of the conservative quantities of the Vlasov–Darwin system. However, the improved theory suggests that the Hamiltonian is the conservative quantity rather than the total energy. The structure-preserving scheme proposed in this paper exactly maintains the Lorenz gauge and the conservation laws of charge, canonical momentum, and Hamiltonian in discrete form. • The first Vlasov-Darwin scheme which exactly conserves the charge, canonical momentum, and Hamiltonian. • The proposed scheme is based on a modified Darwin model that was derived in the previous study. • The Lorenz gauge revealed that the Hamiltonian is the conservative quantity of this system rather than the total energy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Tonal vibration control with a self-tuning absorber employing the extremum seeking algorithm.
- Author
-
Casagrande, D.E., Gardonio, P., and Konda Rodrigues, G.
- Subjects
- *
ADAPTIVE control systems , *VIBRATION absorbers , *ALGORITHMS , *DEGREES of freedom , *KINETIC energy , *ONLINE algorithms - Abstract
This paper presents a theoretical study on the implementation of a self-tuning vibration absorber for tonal vibration control. The study considers a tuneable vibration absorber, composed of a spring-mass-damper system with variable stiffness and damping components, which is attached to a single-degree-of-freedom mechanical system subjected to a harmonic excitation. The online tuning of the absorber stiffness and damping parameters is performed using an extremum seeking algorithm set to minimise the vibration kinetic energy of the hosting system. The paper proposes a two-path tuning approach which, starting from initial estimates of the stiffness and damping parameters of the absorber, searches for their optimal values along constant-damping and constant-stiffness paths respectively. The study presents a comprehensive analysis of the implementation of the algorithm at off-resonance and at resonance frequencies of the hosting system. Also, it investigates how the intrinsic parameters of the algorithm influence the convergence to the optimal stiffness and damping values. The proposed tuning methodology does not require a reference signal for the tonal excitation and thus can be suitably implemented both for fixed and for time-varying tonal disturbances. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
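Discrete-time extremum seeking injects a small sinusoidal dither into the tuned parameter, demodulates the measured cost to estimate the local gradient, and descends it — no model of the plant and no reference signal are needed. A scalar sketch; the gains, dither, and washout filter are illustrative choices of ours, not the paper's:

```python
import math

def extremum_seek(J, theta0, gain=0.05, amp=0.2, freq=0.7, steps=6000):
    """Discrete-time extremum seeking: probe J with a sinusoidal dither,
    high-pass the measurement with a washout filter, demodulate against the
    dither to estimate the gradient, and take a descent step."""
    theta, jbar = theta0, J(theta0)
    for t in range(steps):
        s = math.sin(freq * t)
        y = J(theta + amp * s)            # measured cost under the dither
        jbar += 0.1 * (y - jbar)          # low-pass estimate of the mean cost
        theta -= gain * (y - jbar) * s    # demodulated gradient-descent step
    return theta

# Cost of the hosting system, minimised at theta = 2 (unknown to the tuner)
theta = extremum_seek(lambda th: (th - 2.0) ** 2 + 0.5, theta0=0.0)
print(round(theta, 2))
```

On average, (y − jbar)·s is proportional to the cost gradient, so the parameter drifts toward the minimiser while the dither keeps probing it.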
16. Distributed and individualized computation offloading optimization in a fog computing environment.
- Author
-
Li, Keqin
- Subjects
- *
NASH equilibrium , *COMBINATORIAL optimization , *ALGORITHMS , *MATHEMATICAL optimization , *INFORMATION technology - Abstract
• Formulate a non-cooperative game with both UEs and MECs as players. • Prove the existence of and convergence to a Nash equilibrium. • Develop a set of algorithms to find the Nash equilibrium. • Demonstrate numerical examples of non-cooperative games with and without MECs' participation. In a newly emerged fog computing environment, various user equipments (UE) enhance their computing power and extend their battery lifetime by computation offloading to mobile edge cloud (MEC) servers. Such an environment is distributed and competitive in nature. In this paper, we take a game theoretical approach to computation offloading optimization in a fog computing environment. Such an approach captures and characterizes the nature of a competitive environment. The main contributions of the paper can be summarized as follows. First, we formulate a non-cooperative game with both UEs and MECs as players. Each UE attempts to minimize the execution time of its tasks with an energy constraint. Each MEC attempts to minimize the product of its power consumption for computation and execution time for allocated tasks. Second, we develop a heuristic algorithm for a UE to determine its "heuristically" best response to the current situation, an algorithm for an MEC to determine its best response to the current situation, and an iterative algorithm to find the Nash equilibrium. Third, we prove that our iterative algorithm converges to a Nash equilibrium. We demonstrate numerical examples of our non-cooperative games with and without MECs' participation. We observe that our iterative algorithm always quickly converges to a Nash equilibrium. The uniqueness of our non-cooperative games is that the strategy set of a player can be discrete and the payoff function of a player can be obtained by a heuristic algorithm for combinatorial optimization. 
To the best of the author's knowledge, there has been no such investigation of non-cooperative games based on combinatorial optimization for computation offloading optimization in a fog computing environment. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
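The iterative structure of the paper's algorithm — each player repeatedly playing a best response until nobody wants to move — can be shown on a 2×2 coordination game. This is a stand-in: the paper's players are UEs and MECs whose best responses come from heuristic combinatorial optimization, not a small payoff table:

```python
def best_response(payoff, opp):
    """Action index maximizing payoff[action][opp]."""
    return max(range(len(payoff)), key=lambda a: payoff[a][opp])

def iterate_best_responses(A, B, start=(0, 0), max_rounds=50):
    """Alternate best responses until a fixed point, i.e. a pure Nash
    equilibrium, is reached (or give up after max_rounds)."""
    a, b = start
    for _ in range(max_rounds):
        a2 = best_response(A, b)    # row player answers the current column action
        b2 = best_response(B, a2)   # column player answers the new row action
        if (a2, b2) == (a, b):
            return a, b
        a, b = a2, b2
    return None

# Coordination game: both prefer to match, and matching on action 1 pays more.
A = [[2, 0], [0, 3]]   # row player's payoff, indexed A[a][b]
B = [[2, 0], [0, 3]]   # column player's payoff, indexed B[b][a]
print(iterate_best_responses(A, B))
print(iterate_best_responses(A, B, start=(0, 1)))
```

Different starting points can converge to different equilibria, which is why the paper proves existence of and convergence to a Nash equilibrium for its specific game rather than relying on this dynamic in general.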
17. Application of the Layered Algorithm in search of an airborne contaminant source.
- Author
-
Szaban, Miroslaw and Wawrzynczak, Anna
- Subjects
- *
SEARCH algorithms , *METROPOLITAN areas , *CELLULAR automata , *ALGORITHMS , *MATHEMATICAL optimization , *MICROBIOLOGICAL aerosols - Abstract
The paper presents a new optimization method, the Layered Algorithm (LA). The proposed algorithm reduces the initial area to the sub-area containing the optimum. The proposed technique is based on the classification of the optimized function values (data sampled by sensors). The classification method uses a two-dimensional three-state Cellular Automaton (CA). The CA classifies all area points ascribed to the CA cells based on their values. Applying categorization layers to the data makes it possible to identify areas at different levels. Consequently, after analysis, a sub-area containing the optimum can be designated. In this paper, the proposed algorithm is applied to find the location of an airborne contaminant source by analyzing the concentration of released substances reported by mobile sensors distributed over the domain of interest. A Gaussian dispersion model simulation of the contaminant dispersion in an urbanized area is applied to generate the data used to verify the efficiency of the proposed Layered Algorithm. The LA successfully estimates the sub-area of the considered domain in which the contamination source is located, taking into account data from the sensors alone. • Optimization method reducing the initial area to the sub-area containing the optimum. • Verification of the Layered Algorithm (LA) with selected benchmark functions. • Analysis of the LA's efficiency in localizing the airborne contaminant source. • LA results assessed by the measures: classification and accuracy error, relevance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
18. Enhancement of image watermark retrieval based on genetic algorithms.
- Author
-
Shih, Frank Y. and Yi-Ta Wu
- Subjects
- *
WATERMARKS , *IDENTIFICATION , *MARKS of origin , *PAPER , *ALGORITHMS , *GENETIC algorithms - Abstract
A watermark hidden in an image is retrieved differently from the original watermark due to the frequently used rounding approach. Simple rounding causes numerous errors in the embedded watermark, especially when it is large. A novel technique based on genetic algorithms (GAs) is presented in this paper to correct the rounding errors. The fundamental idea is to adopt a fitness function for choosing the best chromosome, which determines the rule for converting real numbers into integers during the cosine transformation. Experimental results show a dramatic improvement in reducing the errors when GAs are used in decision making. Furthermore, we develop an initial chromosome by comparing the difference between the original image and the rounded watermarked image to speed up the process. In addition to correcting fragile-watermarking rounding errors, our algorithm can also be applied in robust watermarking to achieve higher watermarked image quality. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
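The GA idea — a binary chromosome choosing floor or ceil for each real-valued coefficient, with a fitness that counts correctly retrieved watermark bits — can be sketched with a parity-based toy fitness. This is our drastic simplification of the paper's DCT setting, purely to show the chromosome/fitness structure:

```python
import random

def ga_round(values, bits, pop_size=30, generations=60, seed=7):
    """Toy GA: chromosome bit c_i chooses floor (0) or ceil (1) for the i-th
    coefficient; fitness counts coefficients whose rounded parity equals the
    watermark bit, i.e. correctly retrieved watermark bits."""
    rng = random.Random(seed)
    n = len(values)
    rounded = lambda ch: [int(v) + c for v, c in zip(values, ch)]  # values > 0
    fitness = lambda ch: sum((r % 2) == b for r, b in zip(rounded(ch), bits))

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:          # perfect retrieval found
            break
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            p, q = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = p[:cut] + q[cut:]     # one-point crossover
            if rng.random() < 0.3:
                child[rng.randrange(n)] ^= 1   # mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return rounded(best), fitness(best)

values = [3.6, 7.2, 1.5, 9.9]   # real-valued coefficients (all positive)
bits   = [0, 1, 1, 0]           # watermark bits encoded in the parity
r, fit = ga_round(values, bits)
print(r, fit)
```

Since floor and ceil of a non-integer always differ in parity, a perfect chromosome exists, and the GA converges to it quickly on this tiny instance.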
19. A versatile stereo implementation on commodity graphics hardware
- Author
-
Yang, Ruigang and Pollefeys, Marc
- Subjects
- *
COMPUTER input-output equipment , *ALGORITHMS , *ALGEBRA , *PAPER - Abstract
Abstract: This paper presents a detailed description of a real-time correlation-based stereo algorithm running completely on the graphics processing unit (GPU). This is important since it frees up the main processor for other tasks, including high-level interpretation of the stereo results. We first introduce a two-view stereo algorithm that includes some advanced features such as adaptive windows and cross-checking. Then we extend it using a plane-sweep approach to allow multiple frames without rectification. By taking advantage of advanced features of recent GPUs, the proposed algorithm runs in real time. Our implementation running on an ATI Radeon 9800 graphics card achieves up to 289 million disparity evaluations per second, including all the overhead to download images and read back the disparity map, which is several times faster than commercially available CPU-based implementations. [Copyright Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
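Correlation-based stereo reduces, per scanline, to finding for each pixel the disparity whose window minimizes a dissimilarity score such as the sum of squared differences (SSD). A CPU sketch on a single scanline with made-up data; the paper's contribution is mapping this, plus adaptive windows and cross-checking, onto the GPU:

```python
def disparity_ssd(left, right, window=1, max_disp=3):
    """Per-pixel winner-takes-all disparity on one scanline: pick the shift d
    minimizing the sum of squared differences over a small window."""
    n = len(left)
    disp = []
    for x in range(n):
        best_cost, best_d = float("inf"), 0
        for d in range(min(max_disp, x) + 1):
            cost = 0
            for w in range(-window, window + 1):
                xl, xr = x + w, x - d + w
                if 0 <= xl < n and 0 <= xr < n:   # skip out-of-image samples
                    cost += (left[xl] - right[xr]) ** 2
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp.append(best_d)
    return disp

# The textured patch sits 2 pixels further right in the left image
left  = [0, 0, 9, 5, 9, 0, 0, 0]
right = [9, 5, 9, 0, 0, 0, 0, 0]
print(disparity_ssd(left, right))
```

The textured pixels recover the true disparity of 2; untextured regions are unreliable for any window-based matcher, which is one motivation for the cross-checking step.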
20. A high-performance VLSI array reconfiguration scheme based on network flow under row and column rerouting.
- Author
-
Ding, Hao, Qian, Junyan, Zhao, Lingzhong, and Zhai, Zhongyi
- Subjects
- *
VERY large scale circuit integration , *ARRAY processors , *TELECOMMUNICATION equipment , *ALGORITHMS , *BOTTLENECKS (Manufacturing) - Abstract
• A network flow model of the VLSI processor array is constructed under row and column rerouting. • A new strategy for selecting the bottleneck row in the logic array using the minimum-cut technique is proposed. • The proposed schemes significantly reduce the interconnect redundancy of the logical subarray. Reconfiguration algorithms have been extensively investigated to ensure the reliability and stability of processor arrays with faults. It is important to reduce the power consumption, capacitance, and communication costs in the processors by reducing the interconnection length of the VLSI array. This paper discusses the reconfiguration problem of the high-performance VLSI processor array under row and column rerouting constraints. A novel method making use of the idea of network flow is proposed. Firstly, a network flow model of the VLSI processor array is constructed, such that the high-performance VLSI target array can be obtained by utilizing a minimum-cost flow algorithm. Secondly, we propose a new strategy for bottleneck row selection in the logical array using the minimum-cut technique, which can find a more suitable bottleneck row. Finally, we conducted experiments that clearly reveal the efficiency of the new rerouting scheme and algorithm in reducing the number of long interconnects. The experimental results show that, for a host array of size 256×256, the number of long interconnects in the subarray can be reduced by up to 79.22% and 55.88% without performance penalty for random faults with densities of 1% and 25% respectively, when compared with the state-of-the-art. In addition, the proposed scheme improves on existing algorithms in terms of subarray size. On a 256×256 host array with 25% fault density, the average improvement in subarray size is up to 3.77% compared with the state-of-the-art. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
21. An explicit algorithm for normal forms in small overlap monoids.
- Author
-
Mitchell, James D. and Tsalakou, Maria
- Subjects
- *
MONOIDS , *ALGORITHMS , *PROBLEM solving - Abstract
We describe a practical algorithm for computing normal forms for semigroups and monoids with finite presentations satisfying so-called small overlap conditions. Small overlap conditions are natural conditions on the relations in a presentation, which were introduced by J. H. Remmers and subsequently studied extensively by M. Kambites. Presentations satisfying these conditions are ubiquitous; Kambites showed that a randomly chosen finite presentation satisfies the C(4) condition with probability tending to 1 as the sum of the lengths of the relation words tends to infinity. Kambites also showed that several key problems for finitely presented semigroups and monoids are tractable in C(4) monoids: the word problem is solvable in O(min{|u|, |v|}) time in the size of the input words u and v; the uniform word problem for ⟨A | R⟩ is solvable in O(N^2 min{|u|, |v|}) time, where N is the sum of the lengths of the words in R; and a normal form for any given word u can be found in O(|u|) time. Although Kambites' algorithm for solving the word problem in C(4) monoids is highly practical, it appears that the coefficients in the linear-time algorithm for computing normal forms are too large in practice. In this paper, we present an algorithm for computing normal forms in C(4) monoids that has time complexity O(|u|^2) for input word u, but where the coefficients are sufficiently small to allow for practical computation. Additionally, we show that the uniform word problem for small overlap monoids can be solved in O(N min{|u|, |v|}) time. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. Markov chains and unambiguous automata.
- Author
-
Baier, Christel, Kiefer, Stefan, Klein, Joachim, Müller, David, and Worrell, James
- Subjects
- *
MARKOV processes , *ALGORITHMS - Abstract
Unambiguous automata are nondeterministic automata in which every word has at most one accepting run. In this paper we give a polynomial-time algorithm for model checking discrete-time Markov chains against ω-regular specifications represented as unambiguous automata. We furthermore show that the complexity of this model checking problem lies in NC: the subclass of P comprising those problems solvable in poly-logarithmic parallel time. These complexity bounds match the known bounds for model checking Markov chains against specifications given as deterministic automata, notwithstanding the fact that unambiguous automata can be exponentially more succinct than deterministic automata. We report on an implementation of our procedure, including an experiment in which the implementation is used to model check LTL formulas on Markov chains. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Deletion to scattered graph classes II - improved FPT algorithms for deletion to pairs of graph classes.
- Author
-
Jacob, Ashwin, Majumdar, Diptapriyo, and Raman, Venkatesh
- Subjects
- *
ALGORITHMS , *APPROXIMATION algorithms , *BIPARTITE graphs , *LOGIC - Abstract
The problem of deletion of vertices to a hereditary graph class is a well-studied problem in parameterized complexity. Recently, a natural extension of the problem was initiated where we are given a finite set of hereditary graph classes and we determine whether k vertices can be deleted from a given graph so that the connected components of the resulting graph belong to one of the given hereditary graph classes. The problem is shown to be fixed-parameter tractable (FPT) when the deletion problem to each of the given hereditary graph classes is fixed-parameter tractable, and the property of being in any of the graph classes is expressible in counting monadic second-order (CMSO) logic. This paper focuses on pairs of specific graph classes (Π1, Π2) to which we would like the connected components of the resulting graph to belong, and designs simpler and more efficient FPT algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Grid recognition: Classical and parameterized computational perspectives.
- Author
-
Gupta, Siddharth, Sa'ar, Guy, and Zehavi, Meirav
- Subjects
- *
PARAMETERIZATION , *ALGORITHMS , *TREES - Abstract
Over the past few decades, a large body of work studied the (in)tractability of various computational problems on grid graphs, which often admit substantially faster algorithms than general graphs. Unfortunately, the recognition of a grid graph is hard: it was shown to be NP-hard already in 1987. In this paper, we provide several positive results in this regard in the framework of parameterized complexity. Specifically, our contribution is threefold. First, we show that the problem is FPT parameterized by k + mcc, where mcc is the maximum size of a connected component of G. Second, we present a new parameterization, denoted a_G, relating graph distance to geometric distance. We show that the problem is para-NP-hard parameterized by a_G, but FPT parameterized by a_G on trees, as well as FPT parameterized by k + a_G. Third, we show that the recognition of k × r grid graphs is NP-hard on graphs of pathwidth 2 where k = 3. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. A precise crop row detection algorithm in complex farmland for unmanned agricultural machines.
- Author
-
Ruan, Zhiwen, Chang, Penghao, Cui, Shangqing, Luo, Jiaqi, Gao, Rui, and Su, Zhongbin
- Subjects
- *
AGRICULTURE , *OBJECT recognition (Computer vision) , *CROPS , *LEAST squares , *ALGORITHMS - Abstract
Crop row detection is an important part of precision agriculture. This paper proposes an unmanned agricultural machine crop row detection method based on YOLO-R. The method addresses the problem that traditional image processing methods are easily affected by weeds, light and other factors when extracting crop feature points. First, the YOLO-R object detection algorithm was used to obtain the crop position information; then, the number of crop rows in the image and the crops in each row were obtained by the DBSCAN clustering algorithm. Finally, the function expression for each crop row was obtained by using the least squares method. The experimental results show that the AP values of YOLO-R are 91.69%, 95.34% and 89.13% on the seven-day, 14-day, and 21-day rice datasets, respectively. With only 12.31% of YOLOv4's parameters and an FPS 17.54 higher than YOLOv4's, the proposed algorithm's AP value was only 2.2% lower. The accuracy values of the crop row detection algorithm are 93.91%, 95.87% and 89.87% on the seven-day, 14-day, and 21-day rice datasets, respectively, which indicates that the algorithm in this paper can effectively identify crop rows. • A crop row detection method based on the YOLO-R object detection algorithm is proposed. • GhostNet and focal loss are used in YOLO-R to perform better and faster. • A channel attention module is used to coordinate the importance of each channel. • An incremental ablation study is conducted on all network designs. [ABSTRACT FROM AUTHOR]
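The detection pipeline summarized in this abstract (detect crop centres, cluster them into rows, fit a line per row by least squares) can be sketched in a few lines. The 1-D gap heuristic below stands in for DBSCAN, and the function name, threshold, and data layout are assumptions, not the paper's implementation.

```python
def fit_crop_rows(points, row_gap=50.0):
    """points: (x, y) crop centres from a detector (e.g. bounding-box centres).
    Groups them into rows by a simple x-gap heuristic (a stand-in for DBSCAN),
    then least-squares fits x = a*y + b per row (rows roughly vertical)."""
    pts = sorted(points)                       # sort by x
    rows, current = [], [pts[0]]
    for p in pts[1:]:
        if p[0] - current[-1][0] > row_gap:    # large x jump -> new row
            rows.append(current)
            current = [p]
        else:
            current.append(p)
    rows.append(current)
    lines = []
    for row in rows:
        n = len(row)
        xbar = sum(x for x, _ in row) / n
        ybar = sum(y for _, y in row) / n
        denom = sum((y - ybar) ** 2 for _, y in row) or 1.0
        a = sum((y - ybar) * (x - xbar) for x, y in row) / denom
        lines.append((a, xbar - a * ybar))     # (slope, intercept)
    return lines
```

Fitting x as a function of y is a deliberate choice here: crop rows are close to vertical in image coordinates, where a y = f(x) fit would be ill-conditioned.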
- Published
- 2023
- Full Text
- View/download PDF
26. A two-phase heuristic algorithm for power-aware offline scheduling in IaaS clouds.
- Author
-
Ignatov, A., Maslova, I., Posypkin, M., Yang, W., and Wu, J.
- Subjects
- *
HEURISTIC algorithms , *SCHEDULING , *ONLINE algorithms , *ALGORITHMS - Abstract
The paper aims at mitigating hot-spots during offline scheduling in IaaS (Infrastructure-as-a-Service) cloud systems. Unlike previous studies, the research focuses on identifying and resolving hot-spots not at servers, but at server racks. A two-phase algorithm for performing power-aware offline scheduling is proposed. The first phase aims at identifying and mitigating hot-spots at racks, while the second phase performs VM consolidation, i.e. minimization of the number of occupied servers while maintaining a feasible VM mapping and low migration costs. The proposed algorithm takes into account the dynamic nature of VMs' resource consumption: it does not only resolve detected hot-spots, but also tries to avoid hot-spots in a reasonable future time period. The algorithm was tested with data from a real IaaS cloud with different sets of algorithm parameters. Experimental evaluation showed that statistical estimates of future VM resource consumption provide the most reliable mapping, as a result of minimizing the number of new hot-spot occurrences. • A server rack is a hot-spot if it exceeds the upper bound of power consumption. • Hot-spot identification, mitigation, and avoidance is crucial for IaaS systems. • An algorithm is proposed for rack hot-spot mitigation and consolidation of VMs. • Considering future CPU utilization values is associated with migrating more memory. • Ignoring future CPU utilization values leads to more new hot-spots happening shortly. [ABSTRACT FROM AUTHOR]
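The rack-level hot-spot test stated in the first bullet can be written directly; the data layout and the cap are assumptions, and the paper's actual mitigation and consolidation logic is much richer than this check.

```python
def find_hotspot_racks(racks, power_cap):
    """racks: {rack_id: [per-server power draw in watts]} (assumed layout).
    A rack is flagged as a hot-spot when its total draw exceeds power_cap."""
    return {rid for rid, servers in racks.items() if sum(servers) > power_cap}
```

A scheduler would run this check both on current measurements and on predicted future draws, matching the paper's point about avoiding hot-spots in a future time period.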
- Published
- 2023
- Full Text
- View/download PDF
27. Predictive algorithms in dynamical sampling for burst-like forcing terms.
- Author
-
Aldroubi, Akram, Huang, Longxiu, Kornelson, Keri, and Krishtal, Ilya
- Subjects
- *
INITIAL value problems , *ALGORITHMS , *MEASUREMENT errors - Abstract
In this paper, we consider the problem of recovery of a burst-like forcing term in an initial value problem (IVP) in the framework of dynamical sampling. We introduce an idea of using two particular classes of samplers that allow one to predict the solution of the IVP over a time interval without a burst. This leads to two different algorithms that stably and accurately approximate the burst-like forcing term even in the presence of a measurement acquisition error and a large background source. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. A new algorithm for computing staggered linear bases.
- Author
-
Hashemi, Amir and Michael Möller, H.
- Subjects
- *
GROBNER bases , *VECTOR fields , *ALGORITHMS , *VECTOR spaces , *POLYNOMIALS - Abstract
Considering a multivariate polynomial ideal over a given field as a vector space, we investigate for such an ideal a particular linear basis, a so-called staggered linear basis, which contains a Gröbner basis as well. In this paper, we present a simple and efficient algorithm to compute a staggered linear basis. The new framework is equipped with some novel criteria (including both Buchberger's criteria) to detect superfluous reductions. The proposed algorithm has been implemented in Maple, and we provide an illustrative example to show how it works. Finally, the efficiency of this algorithm compared to the existing methods is discussed via a set of benchmark polynomials. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Affinity-aware resource provisioning for long-running applications in shared clusters.
- Author
-
Mommessin, Clément, Yang, Renyu, Shakhlevich, Natalia V., Sun, Xiaoyang, Kumar, Satish, Xiao, Junqing, and Xu, Jie
- Subjects
- *
ALGORITHMS - Abstract
Resource provisioning plays a pivotal role in determining the right amount of infrastructure resource to run applications and reduce the monetary cost. A significant portion of production clusters is now dedicated to long-running applications (LRAs), which are typically in the form of microservices and executed in the order of hours or even months. It is therefore practically important to plan the placement of LRAs in a shared cluster ahead of time so as to minimize the number of compute nodes they require. Existing works on LRA scheduling are often application-agnostic, without particularly addressing the constraining requirements imposed by LRAs, such as co-location affinity constraints and time-varying resource requirements. In this paper, we present an affinity-aware resource provisioning approach for deploying large-scale LRAs in a shared cluster subject to multiple constraints, with the objective of minimizing the number of compute nodes in use. We investigate a broad range of solution algorithms which fall into three main categories: Application-Centric, Node-Centric, and Multi-Node approaches, and tune them for typical large-scale real-world scenarios. Experimental studies driven by the Alibaba Tianchi dataset show that our algorithms can achieve competitive scheduling effectiveness and running time, as compared with the heuristics used by the latest work including Medea and Lra Sched. Best results are obtained by the Application-Centric algorithms if running time is of primary concern, and by the Multi-Node algorithms if solution quality is of primary concern. • Resource provisioning for long-running applications is studied in shared clusters. • Temporal resource requests and application-level affinity are considered. • Application-Centric algorithms win if the running time is of primary concern. • Node-Centric algorithms offer a trade-off between solution quality and time. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Toward the best algorithm for approximate GCD of univariate polynomials.
- Author
-
Nagasaka, Kosaku
- Subjects
- *
POLYNOMIALS , *ALGORITHMS , *SYLVESTER matrix equations - Abstract
Approximate polynomial GCD (greatest common divisor) of polynomials with a priori errors on their coefficients is one of the interesting problems in Symbolic-Numeric Computation. In fact, there are many known algorithms: QRGCD, UVGCD, STLN-based methods, Fastgcd and so on. The fundamental question of this paper is "which is the best?" from the practical point of view, and subsequently "is there any better way?" via small extensions, the effect of pivoting, and combinations of sub-routines across the algorithms. In this paper, we consider a framework that covers those algorithms and their sub-routines, and makes the sub-routines interchangeable between the algorithms (i.e. disassembling the algorithms and reassembling their parts). Using this framework, with and without small new extensions and a newly adapted refinement sub-routine, we have run many performance tests and found the current answer. In summary, 1) UVGCD is the best way to get smaller tolerance, 2) modified Fastgcd is better for a GCD that has one or more clusters of zeros with large multiplicity, and 3) modified ExQRGCD is better for a GCD that has no cluster of zeros. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
31. Industrial defect detection and location based on greedy membrane clustering algorithm.
- Author
-
Tang, Yaorui, Yang, Bo, Peng, Hong, and Luo, Xiaohui
- Subjects
- *
ALGORITHMS , *ARTIFICIAL membranes , *DISTRIBUTED computing , *GLOBAL optimization , *PARALLEL programming , *FEATURE extraction - Abstract
This paper introduces a membrane computing model for the defect detection and localization of industrial components. It has the characteristics of distributed and parallel computing, and can efficiently search for better solutions in a given feature space. Inspired by the membrane clustering algorithm, this paper proposes a greedy membrane clustering algorithm, named GMCA. GMCA is applied after the extraction of local features of normal samples. It uses a greedy strategy to construct a sub-feature set that describes the local characteristics of normal samples. During training, GMCA learns the membrane cluster center of normal image blocks and each sub-feature within the cluster. At test time, the anomaly map is obtained by calculating the distance from the test sample block to the corresponding cluster center and the maximum distance from the cluster center to the nearest neighbor in the training sample. This removes the limitation of traditional algorithms that require dataset alignment. In the unsupervised dataset MVTec AD, samples can be divided into object categories and texture categories according to the background of the images. The pixel-level anomaly localization index (AUROC) of this method on object category data reaches 98.3%. The image-level anomaly detection index (AUROC) on texture category data reaches 99.1%. • We design a computational model of membrane clustering using the evolutionary and communicative mechanisms of cells. • GMCA has the global optimization characteristics of high accuracy and fast convergence of the membrane clustering algorithm. • GMCA has the local optimization characteristics of the greedy strategy. • GMCA removes the limitation of traditional defect detection and localization methods that require dataset alignment. • Numerous experiments show the proposed GMCA performs competitively in industrial defect detection and location prediction. [ABSTRACT FROM AUTHOR]
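The test-time scoring described in this abstract can be approximated by a small distance computation: score a test feature block by its distance to the nearest learned cluster centre, normalized by that cluster's spread over the training data. This is an illustrative simplification rather than GMCA itself, and the exact normalization choice is an assumption.

```python
def euclid(u, v):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def anomaly_score(block, centers, train_by_center):
    """Distance from a test block to its nearest cluster centre, normalized
    by that cluster's radius (max centre-to-training-sample distance)."""
    c = min(range(len(centers)), key=lambda i: euclid(block, centers[i]))
    d_test = euclid(block, centers[c])
    radius = max(euclid(s, centers[c]) for s in train_by_center[c])
    return d_test / radius if radius > 0 else d_test
```

Scores around 1 or below then correspond to blocks that look like the normal training data, while clearly larger scores mark candidate defect regions in the anomaly map.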
- Published
- 2024
- Full Text
- View/download PDF
32. On the (non-) reliance on algorithms—A decision-theoretic account.
- Author
-
Sinclair-Desgagné, Bernard
- Subjects
- *
AMBIGUITY , *RECOMMENDER systems , *CONTROL (Psychology) , *DECISION making , *ALGORITHMS , *INFORMATION resources management , *AVERSION - Abstract
A wealth of empirical evidence shows that people display opposite behaviors when deciding whether to rely on an algorithm, even if it is inexpensive to do so and using the algorithm should enhance their own performance. This paper develops a formal theory to explain some of these conflicting facts and submit new testable predictions. Drawing from decision analysis, I invoke two key notions: the 'value of information' and the 'value of control'. The value of information matters to users of algorithms like recommender systems and prediction machines, which essentially provide information. I find that ambiguity aversion or a subjective cost of employing an algorithm will tend to decrease the value of algorithmic information, while repeated exposure to an algorithm might not always increase this value. The value of control matters to users who may delegate decision making to an algorithm. I model how, under partial delegation, imperfect understanding of what the algorithm actually does (so the algorithm is in fact a black box) can cause algorithm aversion. Some possible remedies are formulated and discussed. • This paper initiates a formal decision-theoretic approach to make sense of the empirical evidence concerning people's attitudes towards algorithms. • This approach exploits two fundamental notions: the value of information and the value of control. • Ambiguity aversion will tend to decrease the value of algorithmic information; repeated exposure to algorithms may not increase it. • A first model of 'black box' algorithms is developed to analyze the value of keeping versus delegating control. [ABSTRACT FROM AUTHOR]
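The 'value of information' notion invoked in this abstract can be made concrete with a textbook calculation: the expected payoff when acting on a perfect signal about the state, minus the expected payoff from acting on the prior alone. The states, actions, and payoffs in the usage example are invented for illustration.

```python
def value_of_information(prior, payoffs):
    """Expected value of perfect information for a finite decision problem.
    prior: probability of each state; payoffs[action][state]: payoff table."""
    actions = range(len(payoffs))
    # Best expected payoff when committing to one action under the prior.
    best_without = max(
        sum(p * payoffs[a][s] for s, p in enumerate(prior)) for a in actions
    )
    # Expected payoff when the best action is chosen after observing the state.
    with_signal = sum(
        p * max(payoffs[a][s] for a in actions) for s, p in enumerate(prior)
    )
    return with_signal - best_without
```

For a symmetric two-state, two-action problem with prior (0.5, 0.5) and payoff matrix [[1, 0], [0, 1]], the value of perfect information is 0.5; the paper's point is that ambiguity aversion or a subjective usage cost effectively shrinks this quantity for algorithmic information.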
- Published
- 2024
- Full Text
- View/download PDF
33. Algebraic number fields and the LLL algorithm.
- Author
-
Uray, M.J.
- Subjects
- *
ALGEBRAIC numbers , *ALGEBRAIC fields , *GAUSSIAN elimination , *ALGORITHMS , *ARITHMETIC - Abstract
In this paper we analyze the computational costs of various operations and algorithms in algebraic number fields using exact arithmetic. Let K be an algebraic number field. In the first half of the paper, we calculate the running time and the size of the output of many operations in K in terms of the size of the input and the parameters of K. We include some earlier results about these, but we go further, e.g. we also analyze some R-specific operations in K like less-than comparison. In the second half of the paper, we analyze two algorithms: the Bareiss algorithm, which is an integer-preserving version of Gaussian elimination, and the LLL algorithm, which is for lattice basis reduction. In both cases, we extend the algorithm from Zⁿ to Kⁿ, and give a polynomial upper bound on the running time when the computations in K are performed exactly (as opposed to floating-point approximations). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. An anti-steganalysis adaptive steganography for HEVC video based on PU partition modes.
- Author
-
He, Songhan, Xu, Dawen, Yang, Lin, Dai, Haojun, and Wang, Shanshan
- Subjects
- *
VIDEO coding , *VISUAL cryptography , *ALGORITHMS , *CODECS , *CRYPTOGRAPHY - Abstract
• An adaptive HEVC video steganography algorithm is designed, which can effectively resist PU shift-based steganalysis. • The introduction of multivariate STC allows for smaller modifications with the same embedding capacity. • The superiority of this algorithm over state-of-the-art video steganography algorithms is verified through extensive experiments. Calibration-based steganalysis methods are a fatal threat to current HEVC steganography algorithms. Current steganography algorithms based on prediction units (PUs) still lack resistance to this type of steganalysis. The reason is that the improvements in the HEVC codec provide more potential steganographic space as well as introducing more detectable distortion. Therefore, an HEVC video steganography method that resists PU shift-based steganalysis is proposed in this paper. First, the PU shift phenomenon that exists in PU-based steganography is introduced. Then the existing calibration-based steganalysis method is enhanced. According to the encoding rules of HEVC, a distortion cost function based on the difference in rate-distortion cost (RD cost) during recompression is designed, which aims to obtain better resistance to calibration-based steganalysis. Experimental results demonstrate that the steganography algorithm designed in this paper provides excellent resistance to calibration-based steganalysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. ParaLiNGAM: Parallel causal structure learning for linear non-Gaussian acyclic models.
- Author
-
Shahbazinia, Amirhossein, Salehkaleybar, Saber, and Hashemi, Matin
- Subjects
- *
ACYCLIC model , *MACHINE learning , *PARALLEL algorithms , *ALGORITHMS - Abstract
One of the key objectives in many fields in machine learning is to discover causal relationships among a set of variables from observational data. In linear non-Gaussian acyclic models (LiNGAM), it can be shown that the true underlying causal structure can be identified uniquely from merely observational data. The DirectLiNGAM algorithm is a well-known solution for learning the true causal structure in a high-dimensional setting. The DirectLiNGAM algorithm executes in a sequence of iterations, performing a set of comparisons between pairs of variables in each iteration. Unfortunately, the runtime of this algorithm grows significantly as the number of variables increases. In this paper, we propose a parallel algorithm, called ParaLiNGAM, to learn causal structures based on the DirectLiNGAM algorithm. We propose a threshold mechanism that can reduce the number of comparisons remarkably compared with the sequential solution. Moreover, in order to further reduce runtime, we employ a messaging mechanism between workers. We also present an implementation of ParaLiNGAM on GPU, considering hardware constraints. Experimental results on synthetic and real data show that our proposed solution outperforms DirectLiNGAM by a factor of up to 4788X, and by a median of 2344X. • A scalable algorithm for learning causal structures in linear non-Gaussian models. • Significant speedup by selecting the necessary computations judiciously. • Speedup ratio of up to 4788 on GPUs in real-world and synthetic datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. An extension of holonomic sequences: C2-finite sequences.
- Author
-
Jiménez-Pastor, Antonio, Nuspl, Philipp, and Pillwein, Veronika
- Subjects
- *
LINEAR equations , *COMPUTER scientists , *MATHEMATICIANS , *DIFFERENCE equations , *POLYNOMIALS , *SHIFT registers - Abstract
Holonomic sequences are widely studied as many objects interesting to mathematicians and computer scientists are in this class. In the univariate case, these are the sequences satisfying linear recurrences with polynomial coefficients, also referred to as D-finite sequences. A subclass are C-finite sequences, satisfying a linear recurrence with constant coefficients. We investigate the set of sequences which satisfy linear recurrence equations with coefficients that are C-finite sequences and call them C2-finite sequences. These sequences are a natural generalization of holonomic sequences. In this paper, we show that C2-finite sequences form a difference ring and provide methods to compute in this ring. Furthermore, we provide an analogous construction for D2-finite sequences, i.e., sequences satisfying a linear recurrence with holonomic coefficients. We show that these constructions can be iterated and obtain an increasing chain of difference rings. [ABSTRACT FROM AUTHOR]
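A toy instance of a C2-finite sequence can be generated directly from its defining recurrence. The specific recurrence below, whose coefficient is the Fibonacci sequence (hence C-finite), is an assumed example for illustration, not one taken from the paper.

```python
def fibonacci(n):
    """n-th Fibonacci number, F(0) = 0, F(1) = 1 (a C-finite sequence)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def c2_terms(count):
    """First `count` terms of the assumed C2-finite sequence
    a(n+2) = F(n) * a(n+1) + a(n), with a(0) = a(1) = 1: a recurrence
    whose coefficient F(n) is itself C-finite."""
    a = [1, 1]
    for n in range(count - 2):
        a.append(fibonacci(n) * a[n + 1] + a[n])
    return a[:count]
```

The point of the construction is exactly what the abstract describes: the recurrence is linear, but its coefficient sequence satisfies a constant-coefficient recurrence rather than being constant.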
- Published
- 2023
- Full Text
- View/download PDF
37. EdgeDecAp: An auction-based decentralized algorithm for optimizing application placement in edge computing.
- Author
-
Smolka, Sven, Wißenberg, Leon, and Mann, Zoltán Ádám
- Subjects
- *
ALGORITHMS , *LOCAL knowledge , *SERVER farms (Computer network management) , *MATHEMATICAL optimization , *EDGE computing , *DECISION making - Abstract
In edge computing, application components can be placed over a range of computational devices from cloud data centers to nodes at the network edge. Application placement can have a significant impact on important metrics like latency and resource utilization. Thus, application placement is an important optimization problem. In edge computing, the characteristics of both the infrastructure and the application may change over time, which may require the dynamic re-optimization of the application placement. Most algorithms suggested so far for the dynamic re-optimization of edge application placement are centralized, i.e., they rely on one entity collecting information from the whole infrastructure and making decisions centrally. However, centralized approaches suffer from limited scalability and are vulnerable to failures. In this paper, we present a decentralized approach for the dynamic re-optimization of edge application placement. We adopt an algorithm of Malek et al. for distributed systems and modify it to make it applicable to edge computing. In this approach, each node makes decisions autonomously, using auctions for coordination. Our empirical results demonstrate that the proposed algorithm is very effective in optimizing edge application placement. In an edge system with 637 edge nodes and 563 end devices, our algorithm achieves a 54% greater reduction in application latency than a previous decentralized algorithm. • Optimized placement of application components on cloud and edge devices. • Decentralized optimization algorithm using local knowledge and local modifications. • Auctions and distributed locking for synchronization. • Minimization of application latency. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. BSF: A parallel computation model for scalability estimation of iterative numerical algorithms on cluster computing systems.
- Author
-
Sokolinsky, Leonid B.
- Subjects
- *
COMPUTER workstation clusters , *COMPUTER systems , *SCALABILITY , *PARALLEL algorithms , *ALGORITHMS - Abstract
This paper examines a novel parallel computation model called bulk synchronous farm (BSF) that focuses on estimating the scalability of compute-intensive iterative algorithms aimed at cluster computing systems. The main advantage of the proposed model is that it allows one to estimate the scalability of a parallel algorithm before its implementation. Another important feature of the BSF model is the representation of problem data in the form of lists, which greatly simplifies the logic of building applications. In the BSF model, a computer is a set of processor nodes connected by a network and organized according to the master/slave paradigm. A cost metric of the BSF model is presented. This cost metric requires the algorithm to be represented in the form of operations on lists. This allows us to derive an equation that predicts the scalability boundary of a parallel program: the maximum number of processor nodes after which the speedup begins to decrease. The paper includes examples of applying the BSF model to designing and analyzing parallel numerical algorithms. The large-scale computational experiments conducted on a cluster computing system confirm the adequacy of the analytical estimations obtained using the BSF model. • Scalability is an important property of a parallel algorithm. • The main measure for evaluating scalability is speedup. • An algorithm is scalable if its speedup increases linearly with the number of processors. • The scalability boundary is the number of processors at which the speedup peak is reached. • A parallel computation model can predict the scalability boundary of a parallel algorithm. [ABSTRACT FROM AUTHOR]
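The idea of a scalability boundary (the processor count at which speedup peaks) can be illustrated by scanning a simple assumed master/slave cost model, with per-node compute time t_work/K and a communication overhead growing linearly in K. This toy model is not the BSF cost metric itself; it only reproduces the qualitative shape the abstract describes.

```python
def scalability_boundary(t_work, t_comm, max_nodes=10_000):
    """Scan K = 1..max_nodes for the speedup peak under an assumed cost model:
    T(K) = t_work/K + t_comm*K (compute shrinks with K, per-node communication
    overhead grows with K), with speedup S(K) = t_work / T(K)."""
    best_k, best_s = 1, 0.0
    for k in range(1, max_nodes + 1):
        s = t_work / (t_work / k + t_comm * k)
        if s > best_s:
            best_k, best_s = k, s
    return best_k, best_s
```

Under this model the peak sits near K = sqrt(t_work / t_comm), after which adding nodes makes the program slower: precisely the kind of boundary the BSF cost metric is designed to predict analytically.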
- Published
- 2021
- Full Text
- View/download PDF
39. A decentralized algorithm to combine topology control with network coding.
- Author
-
Khalily-Dermany, Moammad
- Subjects
- *
LINEAR network coding , *WIRELESS sensor networks , *ELECTRIC network topology , *SUBGRADIENT methods , *TOPOLOGY , *ALGORITHMS , *DECOMPOSITION method , *ROUTING algorithms - Abstract
Network coding and topology control techniques have been widely used to increase throughput and improve the lifetime of Wireless Sensor Networks (WSNs). This paper considers the simultaneous utilization of these techniques in a WSN and formulates the problem as a convex non-linear program. Since solving the problem for a large-scale and dynamic WSN is impractical, Lagrangian, sub-gradient and decomposition methods are employed to provide a decentralized algorithm. In the proposed algorithm, a node makes its computations by acquiring local knowledge and information from its neighbors. This paper provides a mathematical language to build an analytic foundation for the design of a modularized and decentralized algorithm that provides transmission ranges and routes for a WSN. The simulation results show that increasing the number of sources, sinks, sensors, and the traffic load improves the lifetime achieved by the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
40. Circular pattern matching with k mismatches.
- Author
-
Charalampopoulos, Panagiotis, Kociumaka, Tomasz, Pissis, Solon P., Radoszewski, Jakub, Rytter, Wojciech, Straszyński, Juliusz, Waleń, Tomasz, and Zuba, Wiktor
- Subjects
- *
PATTERN matching , *ALGORITHMS , *HAMMING distance - Abstract
We consider the circular pattern matching with k mismatches (k-CPM) problem, in which one is to compute the minimal Hamming distance between every length-m substring of T and any cyclic rotation of P, if this distance is no more than k. It is a variation of the well-studied k-mismatch problem. A multitude of papers has been devoted to solving the k-CPM problem, but only average-case upper bounds are known. In this paper, we present the first non-trivial worst-case upper bounds for this problem. Specifically, we show an O(nk)-time algorithm and an O(n + (n/m)k⁴)-time algorithm. The latter algorithm applies in an extended way a technique that was very recently developed for the k-mismatch problem Bringmann et al. (2019) [10]. A preliminary version of this work appeared at FCT 2019 [35]. In this version we improve the time complexity of the second algorithm from O(n + (n/m)k⁵) to O(n + (n/m)k⁴). [ABSTRACT FROM AUTHOR]
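A naive O(n·m²) baseline for k-CPM (a brute-force check, not either of the paper's algorithms) makes the problem statement concrete: for every length-m window of T, take the minimum Hamming distance over all rotations of P and report it when it is at most k.

```python
def k_cpm(text, pattern, k):
    """Brute-force k-CPM: map each window start i to the minimal Hamming
    distance between text[i:i+m] and any cyclic rotation of `pattern`,
    keeping only the positions where that distance is at most k."""
    m = len(pattern)
    rotations = [pattern[i:] + pattern[:i] for i in range(m)]
    out = {}
    for i in range(len(text) - m + 1):
        window = text[i:i + m]
        d = min(sum(a != b for a, b in zip(window, r)) for r in rotations)
        if d <= k:
            out[i] = d
    return out
```

This is what the worst-case bounds in the abstract improve on: the brute force pays m² per window, whereas the paper's algorithms run in O(nk) and O(n + (n/m)k⁴) time.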
- Published
- 2021
- Full Text
- View/download PDF
41. IoT architecture for adaptation to transient devices.
- Author
-
Ariza, Jairo, Garcés, Kelly, Cardozo, Nicolás, Sánchez, Juan Pablo Rodríguez, and Vargas, Fernando Jiménez
- Subjects
- *
ALGORITHMS , *SELF-adaptive software - Abstract
IoT environments are continuously changing. Changes may come from the service, connectivity, or physical layers of the IoT architecture. Therefore, to function appropriately, the system needs to dynamically adapt to its environment. In previous work, we posited eight challenges to foster adaptation through all architecture layers of IoT systems. In this paper, we address the challenges of managing the inclusion of new devices and devices' transient connections, by means of dynamic adaptations incorporated into our proposed software architecture for adaptive IoT systems. To manage dynamic adaptations, we extend the reference IoT architecture with our specialized components. In particular, we use (1) ontologies and instances to represent the domain knowledge; (2) a matching algorithm to pair services and IoT devices, taking into account their functional requirements, quality attributes and sensor properties; and (3) a match update algorithm used whenever sensors become (un)available. We evaluate the effectiveness of our solution with respect to the accuracy of matching services and IoT devices, and the response to environment changes. • This paper presents a new adaptive architecture for IoT systems. • An ontology-matching-based architecture to manage transient connections. • Functional, non-functional, and QoS matching algorithm between IoT components. • The matching algorithm achieves better precision and recall compared with the gold standard. • The architecture is deployed in a real case study for a SUDS system in the city of Bogotá. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
42. Constructive arithmetics in Ore localizations enjoying enough commutativity.
- Author
-
Hoffmann, Johannes and Levandovskyy, Viktor
- Subjects
- *
COMMUTATIVE algebra , *ARITHMETIC , *NONCOMMUTATIVE rings , *ALGORITHMS , *ORES , *LOCALIZATION (Mathematics) - Abstract
This paper continues a research program on constructive investigations of non-commutative Ore localizations, initiated in our previous papers, and in particular addresses the constructiveness of arithmetic within such localizations. Earlier we introduced monoidal, geometric, and rational types of localizations of domains as objects of our studies. Here we extend this classification to rings with zero divisors and consider Ore sets of the mentioned types that are commutative enough: such a set either belongs to a commutative algebra, is central, or has pairwise commuting elements. Using the systematic approach we developed previously, we prove that arithmetic within the localization of a commutative polynomial algebra is constructive and give the necessary algorithms. We also address the important question of computing the local closure of ideals, also known as desingularization, and present an algorithm for the computation of the symbolic power of a given ideal in a commutative ring. We also provide algorithms to compute local closures for certain non-commutative rings with respect to Ore sets with enough commutativity. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
43. The (theta, wheel)-free graphs Part IV: Induced paths and cycles.
- Author
-
Radovanović, Marko, Trotignon, Nicolas, and Vušković, Kristina
- Subjects
- *
WHEELS , *ALGORITHMS - Abstract
A hole in a graph is a chordless cycle of length at least 4. A theta is a graph formed by three internally vertex-disjoint paths of length at least 2 between the same pair of distinct vertices. A wheel is a graph formed by a hole and a node that has at least 3 neighbors in the hole. In this series of papers we study the class of graphs that do not contain a theta or a wheel as an induced subgraph. In Part II of the series we prove a decomposition theorem for this class that uses clique cutsets and 2-joins. In this paper we use this decomposition theorem to solve several problems related to finding induced paths and cycles in our class. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
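The hole and wheel definitions above admit a direct check; a minimal sketch, with the graph representation (adjacency sets) and function names as assumptions of this illustration:

```python
# Given a hole H (chordless cycle of length >= 4), a vertex forms a wheel
# with H if it has at least 3 neighbors on H.

def is_hole(adj, cycle):
    """Check that `cycle` is a chordless cycle of length >= 4 in `adj`."""
    n = len(cycle)
    if n < 4:
        return False
    for i, u in enumerate(cycle):
        for j in range(i + 1, n):
            v = cycle[j]
            consecutive = (j - i == 1) or (i == 0 and j == n - 1)
            if consecutive != (v in adj[u]):
                return False   # missing cycle edge, or a chord
    return True

def wheel_centers(adj, hole):
    """Vertices outside `hole` with >= 3 neighbors on it (wheel centers)."""
    hole_set = set(hole)
    return [v for v in adj if v not in hole_set
            and len(adj[v] & hole_set) >= 3]

# Example: a 4-hole 0-1-2-3 plus a vertex 4 adjacent to 0, 1, 2.
adj = {0: {1, 3, 4}, 1: {0, 2, 4}, 2: {1, 3, 4}, 3: {0, 2}, 4: {0, 1, 2}}
assert is_hole(adj, [0, 1, 2, 3])
assert wheel_centers(adj, [0, 1, 2, 3]) == [4]
```

Recognizing theta- and wheel-free graphs efficiently is of course the hard part; this only encodes the definitions the abstract states.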
44. Group sparse recovery in impulsive noise via alternating direction method of multipliers.
- Author
-
Wang, Jianjun, Huang, Jianwen, Zhang, Feng, and Wang, Wendong
- Subjects
- *
LAGRANGIAN functions , *NOISE , *ALGORITHMS , *MULTIPLIERS (Mathematical analysis) , *ORTHOGONAL matching pursuit , *DATA modeling - Abstract
In this paper, we consider the recovery of group sparse signals corrupted by impulsive noise. In some recent literature, researchers have utilized stable data-fitting models, like the l_1-norm, the Huber penalty function and the Lorentzian norm, to substitute for the l_2-norm data fidelity model and obtain more robust performance. In this paper, a stable model is developed, which exploits the generalized l_p-norm as the measure of the error for sparse reconstruction. To address this model, we propose an efficient alternating direction method of multipliers, which incorporates the proximity operator of l_p-norm functions into the framework of Lagrangian methods. Besides, to guarantee the convergence of the algorithm in the case 0 ≤ p < 1 (nonconvex case), we take advantage of a smoothing strategy. For both 0 ≤ p < 1 (nonconvex case) and 1 ≤ p ≤ 2 (convex case), we derive conditions for the convergence of the proposed algorithm. Moreover, under the block restricted isometry property with constant δ_{τk_0} < τ/(4 − τ) for 0 < τ < 4/3 and δ_{τk_0} < (τ − 1)/τ for τ ≥ 4/3, a sharp sufficient condition for group sparse recovery in the presence of impulsive noise and an associated upper bound on the recovery error are established. Numerical results on synthetic block-sparse signals and real-world FECG signals demonstrate the effectiveness and robustness of the new algorithm under highly impulsive noise. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
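A hedged sketch of the group (block) soft-thresholding operator, the proximity operator of the l_{2,1} group-sparsity penalty that typically appears as one ADMM subproblem in group sparse recovery. This is a generic building block, not the paper's exact algorithm; its l_p data fidelity subproblem is not reproduced here:

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """prox_{lam * sum_g ||x_g||_2}(x): shrink each block toward zero."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:                      # g: index array of one block
        norm = np.linalg.norm(x[g])
        if norm > lam:                    # keep the block, shrunk by lam
            out[g] = (1.0 - lam / norm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
y = group_soft_threshold(x, groups, lam=1.0)
# First block has norm 5 > 1 and is scaled by (1 - 1/5); the second block
# has norm ~0.141 <= 1 and is set to zero.
assert np.allclose(y, [2.4, 3.2, 0.0, 0.0])
```

Within an ADMM iteration this prox step alternates with the data-fidelity update and a dual (multiplier) update.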
45. Integration of multi-objective PSO based feature selection and node centrality for medical datasets.
- Author
-
Rostami, Mehrdad, Forouzandeh, Saman, Berahmand, Kamal, and Soltani, Mina
- Subjects
- *
FEATURE selection , *REPRESENTATIONS of graphs , *ALGORITHMS , *CENTRALITY , *DATABASES , *COMPUTER engineering - Abstract
In the past decades, the rapid growth of computer and database technologies has led to the rapid growth of large-scale medical datasets. On the other hand, medical applications with high-dimensional datasets that require high speed and accuracy are rapidly increasing. One dimensionality-reduction approach is feature selection, which can increase the accuracy of disease diagnosis and reduce its computational complexity. In this paper, a novel PSO-based multi-objective feature selection method is proposed. The proposed method consists of three main phases. In the first phase, the original features are represented as a graph model. In the next phase, centralities for all nodes in the graph are calculated, and finally, in the third phase, an improved PSO-based search process is utilized for the final feature selection. The results on five medical datasets indicate that the proposed method improves on previous related methods in terms of efficiency and effectiveness. • In this paper, a novel feature selection method is developed by integrating node centrality and the PSO algorithm. • The proposed method selects a subset of features in the medical dataset with the lowest redundancy and the highest relevance. • In the proposed method the optimal number of final features is determined automatically. • Experimental results showed that the proposed method has the best performance among different disease diagnosis methods. • The results on five medical datasets indicate that the proposed method improves disease diagnosis prediction accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
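The first two phases described above can be sketched as follows: represent features as a graph whose edge weights are absolute pairwise correlations, then score each feature by a simple degree-style centrality. The PSO search of phase three is omitted, and the threshold and scoring are assumptions of this illustration, not the paper's exact formulation:

```python
import numpy as np

def feature_centrality(X, threshold=0.3):
    """X: (samples, features). Return a per-feature centrality score."""
    corr = np.abs(np.corrcoef(X, rowvar=False))   # feature-feature graph
    np.fill_diagonal(corr, 0.0)
    adjacency = np.where(corr >= threshold, corr, 0.0)
    return adjacency.sum(axis=1)                  # weighted degree

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[:, 4] = X[:, 0] + 0.1 * rng.normal(size=100)   # feature 4 ~ feature 0
scores = feature_centrality(X)
# Features 0 and 4 are strongly correlated, so both get high centrality;
# a redundancy-aware selector would keep only one of them.
```

In the full method, these centrality scores bias the PSO particles toward relevant, non-redundant feature subsets.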
46. Vulnerability assessment of fault-tolerant optical network-on-chips.
- Author
-
Abdollahi, Meisam and Mohammadi, Siamak
- Subjects
- *
DATA transmission systems , *SYSTEMS on a chip , *OPTICAL resonators , *OPTICAL devices , *ALGORITHMS , *MOTIVATION (Psychology) - Abstract
Multi/many-core systems based on the traditional electrical network-on-chip are confronted with limited bandwidth, high latency, and reliability challenges. The emerging optical solution of silicon photonics promises to improve the design parameters of electrical interconnections for future multiprocessor systems-on-chip. Although the optical network-on-chip has advantages in terms of reliability compared to electrical ones, important phenomena such as crosstalk noise, process variation, and temperature fluctuations must be carefully considered. These physical challenges have substantial adverse effects on the correct functionality of on-chip optical devices such as the microring resonator and the optical waveguide. Malfunction of these elements may cause the injection of faults that should be tolerated by a reliable system. In this paper, we assess the effect of the fault-tolerant design of optical routers on the reliability parameters through application mapping. We propose a fault-tolerant algorithm that modifies optical routers to improve the reliability parameter. The vulnerability analysis of the proposed algorithm shows that, besides obtaining fault tolerance in optical routers, SNR decreases of only about 9.63% and 19.29% are observed for real-world applications and seven traditional optical routers in the cases of single-fault and two-fault injection, respectively. • In the first step, we explain the primitives of optical on-chip data transmission and discuss its reliability challenges and the paper's motivation. • A general framework to construct a fault-tolerant optical router. • Evaluation of the reliability parameters of various fault-tolerant optical routers. • Optimized application mapping for fair comparison of different optical routers. • Vulnerability analysis through fault injection for the proposed fault-tolerant scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
47. Kohnert's rule for flagged Schur modules.
- Author
-
Armon, Sam, Assaf, Sami, Bowling, Grant, and Ehrhard, Henry
- Subjects
- *
POLYNOMIALS , *GENERALIZATION , *ALGORITHMS , *REPRESENTATIONS of groups (Algebra) - Abstract
Flagged Schur modules generalize the irreducible representations of the general linear group under the action of the Borel subalgebra. Their characters include many important generalizations of Schur polynomials, such as Demazure characters, flagged skew Schur polynomials, and Schubert polynomials. In this paper, we prove that the characters of flagged Schur modules can be computed using a simple combinatorial algorithm due to Kohnert if and only if the indexing diagram is northwest. This gives a new proof that characters of flagged Schur modules are nonnegative sums of Demazure characters and gives a representation-theoretic interpretation of Kohnert polynomials. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
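Kohnert's rule itself is simple enough to sketch. This assumes one common convention (rows numbered 1, 2, ... from the top; a Kohnert move lifts the rightmost cell of a row to the closest empty position above it in its column) and is an illustration of the rule, not the paper's code:

```python
from collections import Counter

def kohnert_move(diagram, row):
    """Apply a Kohnert move to `row`, or return None if impossible."""
    cells = [c for (r, c) in diagram if r == row]
    if not cells:
        return None
    col = max(cells)                        # rightmost cell of the row
    for r in range(row - 1, 0, -1):         # scan upward for a vacancy
        if (r, col) not in diagram:
            return frozenset(diagram - {(row, col)} | {(r, col)})
    return None

def kohnert_diagrams(diagram):
    """All diagrams reachable from `diagram` by Kohnert moves."""
    seen, todo = {frozenset(diagram)}, [frozenset(diagram)]
    while todo:
        d = todo.pop()
        for row in {r for (r, _) in d}:
            nd = kohnert_move(d, row)
            if nd is not None and nd not in seen:
                seen.add(nd)
                todo.append(nd)
    return seen

def kohnert_polynomial(diagram):
    """Monomials as Counters of row-exponents, one per reachable diagram."""
    return [Counter(r for (r, _) in d) for d in kohnert_diagrams(diagram)]

# Single cell in row 2: reachable diagrams {(2,1)} and {(1,1)}, giving the
# Demazure character x1 + x2.
mons = kohnert_polynomial({(2, 1)})
assert sorted(tuple(sorted(m.items())) for m in mons) == [((1, 1),), ((2, 1),)]
```

The paper's result is that this sum of monomials equals the character of the flagged Schur module exactly when the diagram is northwest.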
48. A parallel branch-and-bound algorithm with history-based domination and its application to the sequential ordering problem.
- Author
-
Gonggiatgul, Taspon, Shobaki, Ghassan, and Muyan-Özçelik, Pınar
- Subjects
- *
MULTICORE processors , *PARALLEL algorithms , *NP-hard problems , *NP-complete problems , *ALGORITHMS , *COMBINATORIAL optimization - Abstract
In this paper, we describe the first parallel Branch-and-Bound (B&B) algorithm with a history-based domination technique. Although history-based domination substantially speeds up a B&B search, it makes parallelization much more challenging. Our algorithm is the first parallel exact algorithm for the Sequential Ordering Problem using a pure B&B approach. To effectively explore the solution space, we have developed three novel parallelization techniques: thread restart, parallel history domination, and history-table memory management. The proposed algorithm was experimentally evaluated using the SOPLIB and TSPLIB benchmarks on multi-core processors. Using 32 threads with a time limit of one hour, the algorithm gives geometric-mean speedups of 72x and 20x on the medium-difficulty SOPLIB and TSPLIB instances, respectively. On the hard instances, it solves 12 instances that the sequential algorithm does not solve, with geometric-mean speedups of 16x on SOPLIB and 32x on TSPLIB. Super-linear speedups up to 366x are seen on 16 instances. • We propose the first parallel B&B algorithm that involves a history-based domination technique. • The proposed algorithm includes three novel parallelization techniques: thread restart, parallel history-based domination, and history-table memory management. • The proposed algorithm is the first parallel B&B algorithm for the Sequential Ordering Problem, which is an NP-hard sequencing problem with precedence constraints. • We present a thorough experimental evaluation of the proposed parallel algorithm on SOPLIB and TSPLIB. The results show that super-linear speedup is not an anomaly; it can be achieved on many instances. We report a speedup ratio as high as 285 on a 32-core processor. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
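The history-based domination idea described above can be sketched in a minimal sequential form (illustrated on a tiny shortest-Hamiltonian-path instance; no precedence constraints and no parallelism, unlike the paper's SOP solver). The history table maps a search state (visited set, last city) to the best cost seen, and any branch reaching the same state no more cheaply is pruned:

```python
def solve(dist):
    n = len(dist)
    best = [float("inf")]
    history = {}                                  # (visited, last) -> cost

    def branch(visited, last, cost):
        if cost >= best[0]:                       # bound
            return
        key = (visited, last)
        if history.get(key, float("inf")) <= cost:
            return                                # dominated by history
        history[key] = cost
        if visited == (1 << n) - 1:               # all cities sequenced
            best[0] = cost
            return
        for nxt in range(n):
            if not visited & (1 << nxt):
                branch(visited | (1 << nxt), nxt, cost + dist[last][nxt])

    branch(1, 0, 0)                               # start at city 0
    return best[0]

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
assert solve(dist) == 16                          # path 0-1-2-3: 2 + 6 + 8
```

In the parallel setting of the paper, concurrent threads share this table, which is exactly what makes the parallelization challenging (hence the thread-restart and memory-management techniques).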
49. Simulating structural plasticity of the brain more scalable than expected.
- Author
-
Czappa, Fabian, Geiß, Alexander, and Wolf, Felix
- Subjects
- *
SYNAPSES , *SCALABILITY , *ALGORITHMS , *NEURONS - Abstract
Structural plasticity of the brain describes the creation of new and the deletion of old synapses over time. Rinke et al. (JPDC 2018) introduced a scalable algorithm that simulates structural plasticity for up to one billion neurons on current hardware using a variant of the Barnes–Hut algorithm. They demonstrate good scalability and prove a runtime complexity of O(n log^2 n). In this comment paper, we show that with careful consideration of the algorithm and a rigorous proof, the theoretical runtime can even be classified as O(n log n). • Improved serial runtime bound for the adapted Barnes–Hut algorithm to Θ(n log n). • Improved the parallel runtime bound to Θ((n/p) log n + p). • Mathematical justification that the given runtime bound is sharp. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
50. Faster Graph bipartization.
- Author
-
Kolay, Sudeshna, Misra, Pranabendu, Ramanujan, M.S., and Saurabh, Saket
- Subjects
- *
OPERATIONS research , *GEOMETRIC vertices , *TRANSVERSAL lines , *ALGORITHMS , *LINEAR dependence (Mathematics) - Abstract
In the Graph Bipartization (or Odd Cycle Transversal) problem, the objective is to decide whether a given graph G can be made bipartite by the deletion of k vertices for some given k. The parameterized complexity of Odd Cycle Transversal was resolved in the breakthrough paper of Reed, Smith and Vetta [Operations Research Letters, 2004], who developed an algorithm running in time O(3^k kmn). The question of improving the dependence on the input size to linear, another long-standing open problem in the area, was resolved by Iwata et al. [SICOMP 2016] and Ramanujan and Saurabh [TALG 2017], who presented O(4^k (m + n)) and 4^k k^{O(1)} (m + n) time algorithms, respectively. In this paper, we obtain a faster algorithm that runs in time 3^k k^{O(1)} (m + n) and hence preserves the linear dependence on the input size while nearly matching the dependence on k incurred by the algorithm of Reed, Smith and Vetta. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
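For intuition only, the problem itself can be sketched by brute force: check bipartiteness via 2-coloring, then try every vertex subset of size at most k. This runs in roughly O(n^k (m + n)) time, nothing like the parameterized algorithms described above, which rely on iterative compression:

```python
from itertools import combinations

def is_bipartite(adj, removed=frozenset()):
    """2-color the graph, ignoring `removed` vertices."""
    color = {}
    for s in adj:
        if s in removed or s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in removed:
                    continue
                if v not in color:
                    color[v] = 1 - color[u]
                    stack.append(v)
                elif color[v] == color[u]:
                    return False                  # odd cycle found
    return True

def has_oct_of_size(adj, k):
    """Is there an odd cycle transversal with at most k vertices?"""
    verts = list(adj)
    return any(is_bipartite(adj, frozenset(S))
               for r in range(k + 1)
               for S in combinations(verts, r))

# A triangle needs exactly one deletion to become bipartite.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
assert not has_oct_of_size(triangle, 0)
assert has_oct_of_size(triangle, 1)
```

The 3^k-type algorithms cited above avoid this subset enumeration entirely; the 2-coloring check is the only piece they share with this sketch.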