Search Results (18 results)
2. Space subdivision to speed-up convex hull construction in E3.
- Author: Skala, Vaclav, Majdisova, Zuzana, and Smolik, Michal
- Subjects: TOPOLOGY; ALGORITHMS; MATHEMATICAL proofs; COMPUTATIONAL complexity; APPROXIMATION theory
- Abstract
Convex hulls are fundamental geometric tools used in a number of algorithms. This paper presents a fast, simple-to-implement, and robust Smart Convex Hull (S-CH) algorithm for computing the convex hull of a set of points in E3. This algorithm is based on "spherical" space subdivision. The main idea of the S-CH algorithm is to eliminate as many input points as possible before the convex hull construction. The experimental results show that only a very small number of points are used for the final convex hull calculation. The experiments also showed that the proposed S-CH algorithm achieves better time complexity than other algorithms in E3. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
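The pre-filtering idea in the S-CH abstract above can be illustrated with a short sketch. This is a generic reconstruction, not the authors' implementation: bin point directions around the centroid, build a small "inner" polytope from the per-sector extremes, discard everything strictly inside it, and run an exact hull on the survivors. Function names are illustrative; numpy and scipy are assumed.

```python
import numpy as np
from scipy.spatial import ConvexHull

def filtered_hull(points, n_bins=16):
    pts = np.asarray(points, dtype=float)
    d = pts - pts.mean(axis=0)
    r = np.linalg.norm(d, axis=1)
    # Bin directions around the centroid by (azimuth, polar angle).
    az = np.floor((np.arctan2(d[:, 1], d[:, 0]) + np.pi) / (2 * np.pi) * n_bins)
    po = np.floor(np.arccos(np.clip(d[:, 2] / np.maximum(r, 1e-12), -1, 1)) / np.pi * n_bins)
    sector = (np.minimum(az, n_bins - 1) * n_bins + np.minimum(po, n_bins - 1)).astype(int)
    far = {}
    for i, s in enumerate(sector):      # farthest point in each occupied sector
        if s not in far or r[i] > r[far[s]]:
            far[s] = i
    seeds = list(far.values())
    inner = ConvexHull(pts[seeds])      # small "inner" polytope of input points
    # A point strictly inside the inner polytope cannot be a hull vertex.
    eq = inner.equations                # outward facets: inside iff A @ x + b <= 0
    keep = (pts @ eq[:, :-1].T + eq[:, -1] > -1e-9).any(axis=1)
    keep[seeds] = True
    return ConvexHull(pts[keep])

hull = filtered_hull(np.random.randn(200000, 3))
print("hull facets:", len(hull.simplices))
```

Discarding the strict interior is safe because the inner polytope's vertices are themselves input points, so anything strictly inside it is also strictly inside the full hull.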
3. Kronecker product approximations for image restoration with whole-sample symmetric boundary conditions
- Author: Lv, Xiao-Guang, Huang, Ting-Zhu, Xu, Zong-Ben, and Zhao, Xi-Le
- Subjects: IMAGE reconstruction; KRONECKER products; APPROXIMATION theory; BOUNDARY value problems; MATHEMATICAL symmetry; ALGORITHMS; MATRICES (Mathematics); COMPUTATIONAL complexity
- Abstract
Reflexive boundary conditions (BCs) assume that the array values outside the viewable region are given by a symmetry of the array values inside. The reflection guarantees the continuity of the image. There are usually two choices for the symmetry: symmetry around the meshpoint and symmetry around the midpoint. The first is called whole-sample symmetry in signal and image processing, the second half-sample symmetry. Many researchers have developed fast algorithms for image restoration with half-sample symmetric BCs over the years; however, little attention has been given to whole-sample symmetric BCs. In this paper, we consider the use of whole-sample symmetric boundary conditions in image restoration. The blurring matrices constructed from the point spread functions (PSFs) for these BCs have a block Toeplitz-plus-PseudoHankel structure with Toeplitz-plus-PseudoHankel blocks. Recently, regardless of the symmetry properties of the PSFs, a technique of Kronecker product approximations was successfully applied to restore images with zero BCs, half-sample symmetric BCs, and anti-reflexive BCs. All these results extend quite naturally to whole-sample symmetric BCs, since the resulting matrices have similar structures. It is worth noting that when the true PSF is small, the computational cost of the algorithm obtained in this paper for the Kronecker product approximation of the resulting matrix is very low: in this case all calculations in the algorithm involve only the upper-left corner submatrices of the large matrices. Finally, detailed experimental results reporting the performance of the proposed algorithm are presented. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
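The Kronecker-product machinery in the abstract above rests on the identity vec(A_c X A_r^T) = (A_r ⊗ A_c) vec(X). A minimal sketch of that trick, assuming numpy and the simplest zero BCs rather than the paper's whole-sample symmetric construction, with the separable (rank-1) PSF factor obtained by SVD:

```python
import numpy as np

def rank1_psf(psf):
    # Best separable (rank-1) approximation of the PSF via SVD: psf ≈ outer(c, r).
    U, s, Vt = np.linalg.svd(psf)
    return U[:, 0] * np.sqrt(s[0]), Vt[0, :] * np.sqrt(s[0])

def blur_matrix_1d(kernel, n):
    # Dense 1-D convolution matrix with zero BCs (the simplest case).
    k = len(kernel) // 2
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - k), min(n, i + k + 1)):
            A[i, j] = kernel[k + i - j]
    return A

n = 32
t = np.linspace(-3, 3, 7)
psf = np.outer(np.exp(-t**2), np.exp(-t**2 / 2))      # separable test PSF
c, r = rank1_psf(psf / psf.sum())
Ac, Ar = blur_matrix_1d(c, n), blur_matrix_1d(r, n)

X = np.random.rand(n, n)                              # test image
blurred = Ac @ X @ Ar.T                               # cheap: two n-by-n products
full = np.kron(Ar, Ac) @ X.flatten(order="F")         # expensive: n^2-by-n^2 matrix
print(np.allclose(blurred.flatten(order="F"), full))  # True
```

The point of the approximation is exactly this cost gap: the blur is applied with two small matrix products instead of one enormous one.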
4. An agglomerative clustering algorithm using a dynamic k-nearest-neighbor list
- Author: Lai, Jim Z.C. and Huang, Tsung-Jen
- Subjects: ALGORITHMS; COMPUTATIONAL complexity; SET theory; APPROXIMATION theory; MEASUREMENT of distances; AGGLOMERATION (Materials); ELECTRONIC data processing
- Abstract
In this paper, a new algorithm is developed to reduce the computational complexity of Ward's method. The proposed approach uses a dynamic k-nearest-neighbor list to avoid determining a cluster's nearest neighbor at some steps of the cluster merge. The double linked algorithm (DLA) can significantly reduce the computing time of the fast pairwise nearest neighbor (FPNN) algorithm by obtaining an approximate solution of hierarchical agglomerative clustering. In this paper, we propose a method to resolve the problem of a non-optimal solution for DLA while keeping the corresponding advantage of low computational complexity. The computational complexity of the proposed method DKNNA+FS (dynamic k-nearest-neighbor algorithm with a fast search), in terms of the number of distance calculations, is O(N²), where N is the number of data points. Compared to FPNN with a fast search (FPNN+FS), the proposed method using the same fast search algorithm (DKNNA+FS) can reduce the computing time by a factor of 1.90–2.18 for the data set from a real image, and by a factor of 1.92–2.02 for the data set generated from three images. Compared to DLA with a fast search (DLA+FS), DKNNA+FS can decrease the average mean squared error by 1.26% for the same data set. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
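The abstract above concerns bookkeeping for Ward's method. The following is a generic pairwise-nearest-neighbor sketch with cached nearest neighbors, the flavor of computation that DKNNA+FS accelerates, not the authors' algorithm. The cache is exact for Ward's cost because a merge never brings the merged cluster closer to a third cluster than that cluster's current nearest neighbor.

```python
import numpy as np

def ward_cost(c1, n1, c2, n2):
    # Ward merge cost between clusters with centroids c1, c2 and sizes n1, n2.
    return n1 * n2 / (n1 + n2) * np.sum((c1 - c2) ** 2)

def pnn(points, k_final):
    cent = [np.asarray(p, float) for p in points]
    size = [1] * len(points)
    alive = set(range(len(points)))
    def nearest(i):  # cluster i's cheapest merge partner and its cost
        return min(((ward_cost(cent[i], size[i], cent[j], size[j]), j)
                    for j in alive if j != i), key=lambda t: t[0])
    nn = {i: nearest(i) for i in alive}
    while len(alive) > k_final:
        i = min(alive, key=lambda a: nn[a][0])   # globally cheapest merge
        _, j = nn[i]
        cent[i] = (size[i] * cent[i] + size[j] * cent[j]) / (size[i] + size[j])
        size[i] += size[j]
        alive.remove(j)
        del nn[j]
        # Only clusters whose cached neighbor was i or j need recomputation;
        # the paper's dynamic k-NN lists make even this refresh cheaper.
        for a in list(alive):
            if a == i or nn[a][1] in (i, j):
                nn[a] = nearest(a)
    return [size[a] for a in alive]

print(pnn(np.random.rand(300, 2), 5))   # five cluster sizes summing to 300
```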
5. Bridging the gap between theory and practice of approximate Bayesian inference
- Author: Kwisthout, Johan and van Rooij, Iris
- Subjects: APPROXIMATION theory; BAYESIAN analysis; COGNITIVE science; COMPUTATIONAL complexity; PROBABILITY theory; MACHINE theory
- Abstract
In computational cognitive science, many cognitive processes seem to be successfully modeled as Bayesian computations. Yet, many such Bayesian computations have been proven to be computationally intractable (NP-hard) for unconstrained input domains, even if only an approximate solution is sought. This computational complexity result seems to be in strong contrast with the ease and speed with which humans can typically make the inferences that are modeled by Bayesian models. This contrast between theory and practice poses a considerable theoretical challenge for computational cognitive modelers: How can intractable Bayesian computations be transformed into computationally plausible 'approximate' models of human cognition? In this paper, three candidate notions of 'approximation' are discussed, each of which has been suggested in the cognitive science literature. We sketch how (parameterized) computational complexity analyses can yield model variants that are tractable and can serve as the basis of computationally plausible models of cognition. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
6. Finding approximate and constrained motifs in graphs.
- Author: Dondi, Riccardo, Fertin, Guillaume, and Vialette, Stéphane
- Subjects: APPROXIMATION theory; CONSTRAINT programming; GRAPH theory; BIOLOGICAL networks; SYSTEM identification; COMPUTER networks
- Abstract
One of the most relevant topics in the analysis of biological networks is the identification of functional motifs inside a network. A recent approach introduced in the literature, called Graph Motif, represents the network as a vertex-colored graph G, and the motif as a multiset M of colors. An occurrence of a motif in a vertex-colored graph G is a connected induced subgraph of G whose vertex set is colored exactly as M. In this paper we investigate three different variants of the Graph Motif problem. The first two variants, Minimum Adding Motif (Min-Add Graph Motif) and Minimum Substitution Motif (Min-Sub Graph Motif), deal with approximate occurrences of a motif in the graph, while the third variant, Constrained Graph Motif (CGM), constrains the motif to contain a given set of vertices. We investigate the computational and parameterized complexity of the three problems. We show that Min-Add Graph Motif and Min-Sub Graph Motif are both NP-hard, even when M is a set and the graph is a tree of bounded maximum degree in which each color appears at most twice. Then, we show that Min-Sub Graph Motif is fixed-parameter tractable when parameterized by the size of M. Finally, we consider the parameterized complexity of the CGM problem; we give a fixed-parameter algorithm for graphs of bounded treewidth, and show that the problem is W[2]-hard when parameterized by the motif size, even if the input graph has constant diameter. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
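All three variants in the abstract above build on the same notion of occurrence: a connected induced subgraph whose multiset of colors equals the motif. A brute-force checker for tiny instances (the problems are NP-hard in general), on a hypothetical toy graph:

```python
from itertools import combinations
from collections import Counter

def is_connected(vertices, adj):
    vs = set(vertices)
    start = next(iter(vs))
    seen, stack = {start}, [start]
    while stack:
        for u in adj[stack.pop()]:
            if u in vs and u not in seen:
                seen.add(u)
                stack.append(u)
    return seen == vs

def find_motif(adj, colour, motif):
    m = Counter(motif)
    # Try every vertex subset of the right size (exponential: toy graphs only).
    for sub in combinations(adj, len(motif)):
        if Counter(colour[v] for v in sub) == m and is_connected(sub, adj):
            return sub
    return None

adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}, 5: set()}
colour = {1: "r", 2: "g", 3: "b", 4: "g", 5: "r"}
print(find_motif(adj, colour, ["g", "b", "g"]))   # -> (2, 3, 4)
```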
7. 5–6–7 Meshes: Remeshing and analysis
- Author: Aghdaii, Nima, Younesy, Hamid, and Zhang, Hao
- Subjects: ARBITRARY constants; COMPUTATIONAL complexity; TOPOLOGY; ALGORITHMS; APPROXIMATION theory; COMPARATIVE studies
- Abstract
We introduce a new type of mesh called the 5–6–7 mesh. For many mesh processing tasks, low- or high-valence vertices are undesirable, and at the same time it is not always possible to achieve complete vertex-valence regularity, i.e., to have only valence-6 vertices. A 5–6–7 mesh is a closed triangle mesh where each vertex has valence 5, 6, or 7. An intriguing question is whether it is always possible to convert an arbitrary mesh into a 5–6–7 mesh. In this paper, we answer the question in the positive. We present a 5–6–7 remeshing algorithm which converts a closed triangle mesh of arbitrary genus into a 5–6–7 mesh that (a) closely approximates the original mesh geometrically, e.g., in terms of feature preservation, and (b) has a vertex count comparable to that of the original mesh. We demonstrate the results of our remeshing algorithm on meshes with sharp features and of varying topology and complexity. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
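Verifying the 5–6–7 property itself is straightforward; a minimal valence check, assuming a closed triangle mesh given as a list of vertex-index triples (the remeshing algorithm is the paper's contribution and is not reproduced here):

```python
from collections import defaultdict

def valences(faces):
    # Vertex valence = number of distinct neighbors across all incident faces.
    nbrs = defaultdict(set)
    for a, b, c in faces:
        nbrs[a] |= {b, c}
        nbrs[b] |= {a, c}
        nbrs[c] |= {a, b}
    return {v: len(n) for v, n in nbrs.items()}

def is_567(faces):
    return all(5 <= k <= 7 for k in valences(faces).values())

# A regular octahedron: every vertex has valence 4, so it is NOT a 5-6-7 mesh.
octahedron = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
              (5, 2, 1), (5, 3, 2), (5, 4, 3), (5, 1, 4)]
print(valences(octahedron))   # every vertex maps to 4
print(is_567(octahedron))     # False
```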
8. Removing local extrema from imprecise terrains
- Author: Gray, Chris, Kammer, Frank, Löffler, Maarten, and Silveira, Rodrigo I.
- Subjects: GRAPH theory; PROBLEM solving; NUMBER theory; COMPUTATIONAL complexity; APPROXIMATION theory; GRAPH connectivity; ALGORITHMS; PATHS & cycles in graph theory; INTERVAL analysis
- Abstract
In this paper we consider imprecise terrains, that is, triangulated terrains with a vertical error interval at each vertex. In particular, we study the problem of removing as many local extrema (minima and maxima) as possible from the terrain; that is, finding an assignment of one height to each vertex, within its error interval, so that the resulting terrain has the minimum number of local extrema. We show that removing only minima or only maxima can be done optimally in polynomial time, for a terrain with n vertices. Interestingly, however, the problem of finding a height assignment that minimizes the total number of local extrema (minima as well as maxima) is NP-hard, and is even hard to approximate within a certain factor under standard complexity-theoretic assumptions. Moreover, we show that even a simplified version of the problem, in which there are only three different types of intervals for the vertices, is already NP-hard, a result we obtain by proving hardness of a special case of 2-Disjoint Connected Subgraphs, a problem that has lately received considerable attention from the graph-algorithms community. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
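The quantity minimized in the abstract above is easy to evaluate once heights are fixed; the hard part, choosing heights within the intervals, is the paper's subject. A toy counter on a hypothetical terrain graph, using the strict-comparison convention for extrema (boundary vertices count under this naive convention):

```python
def local_extrema(heights, nbrs):
    # A vertex is a local minimum (maximum) if it is strictly below (above)
    # all of its neighbors in the terrain graph.
    minima = [v for v, h in heights.items()
              if all(h < heights[u] for u in nbrs[v])]
    maxima = [v for v, h in heights.items()
              if all(h > heights[u] for u in nbrs[v])]
    return minima, maxima

# Hypothetical 1-D "terrain" path a-b-c-d-e with one chosen height per vertex.
nbrs = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"], "e": ["d"]}
heights = {"a": 3.0, "b": 1.0, "c": 2.0, "d": 1.5, "e": 4.0}
print(local_extrema(heights, nbrs))   # (['b', 'd'], ['a', 'c', 'e'])
```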
9. On the approximability of Dodgson and Young elections
- Author: Caragiannis, Ioannis, Covey, Jason A., Feldman, Michal, Homan, Christopher M., Kaklamanis, Christos, Karanikolas, Nikos, Procaccia, Ariel D., and Rosenschein, Jeffrey S.
- Subjects: ELECTIONS; VOTING; NOTIONS (Philosophy); ALGORITHMS; POLYNOMIALS; SOCIAL choice; APPROXIMATION theory; COMPUTATIONAL complexity
- Abstract
The voting rules proposed by Dodgson and Young are both designed to find an alternative closest to being a Condorcet winner, according to two different notions of proximity; the score of a given alternative is known to be hard to compute under either rule. In this paper, we put forward two algorithms for approximating the Dodgson score: a combinatorial, greedy algorithm and an LP-based algorithm, both of which yield an approximation ratio of H_{m-1}, where m is the number of alternatives and H_{m-1} is the (m-1)st harmonic number. We also prove that our algorithms are optimal within a factor of 2, unless problems in NP have quasi-polynomial-time algorithms. Despite the intuitive appeal of the greedy algorithm, we argue that the LP-based algorithm has an advantage from a social choice point of view. Further, we demonstrate that computing any reasonable approximation of the ranking produced by Dodgson's rule is NP-hard. This result provides a complexity-theoretic explanation of sharp discrepancies that have been observed in the social choice theory literature when comparing Dodgson elections with simpler voting rules. Finally, we show that the problem of calculating the Young score is NP-hard to approximate by any factor. This leads to an inapproximability result for the Young ranking. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
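For reference, the approximation ratio quoted in the abstract above is the harmonic number, whose standard definition shows it grows only logarithmically in the number of alternatives:

```latex
% Approximation ratio achieved by both algorithms for the Dodgson score:
H_{m-1} \;=\; \sum_{i=1}^{m-1} \frac{1}{i} \;=\; \ln m + O(1)
```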
10. A randomized PTAS for the minimum Consensus Clustering with a fixed number of clusters
- Author: Bonizzoni, Paola, Della Vedova, Gianluca, and Dondi, Riccardo
- Subjects: STOCHASTIC processes; APPROXIMATION theory; CLUSTER analysis (Statistics); MICROARRAY technology; PARTITIONS (Mathematics); ALGORITHMS; NP-complete problems; COMPUTATIONAL complexity
- Abstract
The Consensus Clustering problem has been introduced as an effective way to analyze the results of different microarray experiments (Filkov and Skiena, 2004a,b). The problem asks for a partition that summarizes a set of input partitions (each corresponding to a different microarray experiment) under a simple and intuitive cost function. The problem on instances with two input partitions has a simple polynomial-time algorithm, but it becomes APX-hard on instances with three input partitions. The quest for defining the boundary between tractable and intractable instances leads to investigating the restriction of Consensus Clustering in which the output partition contains a fixed number of sets. In this paper, we give a randomized polynomial-time approximation scheme for this restriction, while proving its NP-hardness even for 2 output partitions, thereby definitively settling the approximation complexity of the problem. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
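The "simple and intuitive cost" mentioned above is commonly the symmetric-difference (Mirkin) distance summed over the input partitions: each pair of elements costs 1 per input partition that disagrees with the candidate about whether the pair is co-clustered. The paper's exact notation is not reproduced in this listing, so the following direct evaluator is a generic sketch:

```python
from itertools import combinations

def labels(partition):
    # partition: list of sets -> map element -> block id
    return {e: i for i, block in enumerate(partition) for e in block}

def consensus_cost(candidate, inputs):
    cand = labels(candidate)
    cost = 0
    for p in inputs:
        lab = labels(p)
        for a, b in combinations(cand, 2):
            # One unit of cost whenever the candidate and this input partition
            # disagree on whether a and b belong to the same cluster.
            cost += (cand[a] == cand[b]) != (lab[a] == lab[b])
    return cost

p1 = [{1, 2}, {3, 4}]
p2 = [{1, 2, 3}, {4}]
print(consensus_cost([{1, 2}, {3, 4}], [p1, p2]))
# -> 3: p1 agrees everywhere; p2 disagrees on (1,3), (2,3) and (3,4)
```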
11. Inapproximability of maximal strip recovery
- Author: Jiang, Minghui
- Subjects: BIOINFORMATICS; APPROXIMATION theory; MATHEMATICAL sequences; MATHEMATICAL mappings; MATHEMATICAL optimization; COMPUTATIONAL complexity; ALGORITHMS
- Abstract
In comparative genomics, the first step of sequence analysis is usually to decompose two or more genomes into syntenic blocks, that is, segments of homologous chromosomes. For the reliable recovery of syntenic blocks, noise and ambiguities in the genomic maps need to be removed first. Maximal Strip Recovery (MSR) is an optimization problem proposed by Zheng, Zhu, and Sankoff for reliably recovering syntenic blocks from genomic maps in the midst of noise and ambiguities. Given d genomic maps as sequences of gene markers, the objective of MSR-d is to find d subsequences, one subsequence of each genomic map, such that the total length of the syntenic blocks in these subsequences is maximized. For any constant d ≥ 2, a polynomial-time 2d-approximation for MSR-d was previously known. In this paper, we show that for any d ≥ 2, MSR-d is APX-hard, even for the most basic version of the problem in which all gene markers are distinct and appear in positive orientation in each genomic map. Moreover, we provide the first explicit lower bounds on approximating MSR-d for all d ≥ 2. In particular, we show that MSR-d is NP-hard to approximate within Ω(d/log d). From the other direction, we show that the previous 2d-approximation for MSR-d can be optimized into a polynomial-time algorithm even if d is not a constant but is part of the input. We then extend our inapproximability results to several related problems, including CMSR-d, δ-gap-MSR-d, and δ-gap-CMSR-d. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
12. Metrics for weighted transition systems: Axiomatization and complexity
- Author: Larsen, Kim G., Fahrenberg, Uli, and Thrane, Claus
- Subjects: MACHINE theory; COMPUTATIONAL complexity; AXIOMS; APPROXIMATION theory; SIMULATION methods & models; METRIC spaces; ALGORITHMS
- Abstract
Simulation distances are essentially approximations of simulation which provide a measure of the extent to which behaviors in systems are inequivalent. In this paper, we consider the general quantitative model of weighted transition systems, where transitions are labeled with elements of a finite metric space. We study the so-called point-wise and accumulating simulation distances, which extend the well-known Boolean notion of simulation on labeled transition systems. We introduce weighted process algebras for finite and regular behavior and offer sound and (approximately) complete inference systems for the proposed simulation distances. We also settle the algorithmic complexity of computing the simulation distances. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
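A point-wise simulation distance of the kind studied above can be computed by fixed-point iteration on small systems. The sketch below is a textbook-style illustration, not the paper's algorithm: it uses real-valued transition weights with |w1 - w2| as the metric (an accumulating variant would sum, rather than take the maximum of, the stepwise differences):

```python
INF = float("inf")

def pointwise_sim_distance(trans, states, iters=100):
    # trans: state -> list of (weight, successor).
    # d(s, t) = max over moves of s of the best matching move of t,
    # where a match costs max(|w1 - w2|, d(successors)).
    d = {(s, t): 0.0 for s in states for t in states}
    for _ in range(iters):
        new = {}
        for s in states:
            for t in states:
                worst = 0.0
                for w1, s2 in trans.get(s, []):
                    best = INF   # t cannot match this move at all -> infinite
                    for w2, t2 in trans.get(t, []):
                        best = min(best, max(abs(w1 - w2), d[(s2, t2)]))
                    worst = max(worst, best)
                new[(s, t)] = worst
        if new == d:             # fixed point reached
            break
        d = new
    return d

trans = {"s": [(1.0, "s")], "t": [(1.2, "t")], "u": [(3.0, "u")]}
d = pointwise_sim_distance(trans, ["s", "t", "u"])
print(d[("s", "t")], d[("s", "u")])   # approx. 0.2 and 2.0
```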
13. Bounded approximate decentralised coordination via the max-sum algorithm
- Author: Rogers, A., Farinelli, A., Stranders, R., and Jennings, N.R.
- Subjects: APPROXIMATION theory; ALGORITHMS; CONSTRAINT satisfaction; GRAPH theory; COMPUTATIONAL complexity; ARTIFICIAL intelligence
- Abstract
In this paper we propose a novel approach to decentralised coordination that is able to efficiently compute solutions with a guaranteed approximation ratio. Our approach is based on a factor graph representation of the constraint network. It builds a tree structure by eliminating dependencies between the functions and variables within the factor graph that have the least impact on solution quality. It then uses the max-sum algorithm to optimally solve the resulting tree-structured constraint network, and provides a bounded approximation specific to the particular problem instance. In addition, we present two generic pruning techniques to reduce the amount of computation that agents must perform when using the max-sum algorithm. When these are combined with the above-mentioned approximation algorithm, the agents are able to solve decentralised coordination problems that have very large action spaces with low computation and communication overhead. We empirically evaluate our approach in a mobile sensor domain, where mobile agents are used to monitor and predict the state of spatial phenomena (e.g., temperature or gas concentration). Such sensors need to coordinate their movements with their direct neighbours to maximise the collective information gain, while predicting measurements at unobserved locations. When applied in this domain, our approach is able to provide solutions which are guaranteed to be within 2% of the optimal solution. Moreover, the two pruning techniques are extremely effective, decreasing the computational effort of each agent by reducing the size of the search space by up to 92%. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
14. An efficient computational method for linear fifth-order two-point boundary value problems
- Author: Lv, Xueqin and Cui, Minggen
- Subjects: COMPUTATIONAL complexity; BOUNDARY value problems; LINEAR statistical models; ALGORITHMS; KERNEL functions; APPROXIMATION theory; MATHEMATICAL analysis
- Abstract
In this paper, we present a new algorithm to solve general linear fifth-order boundary value problems (BVPs) in a reproducing kernel space. A representation of the exact solution is given in the reproducing kernel space, and an approximate solution is obtained as an n-term truncation of that representation. Several examples are presented to demonstrate the computational efficiency of the method. [Copyright © Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
15. Single-machine scheduling under the job rejection constraint
- Author: Zhang, Liqi, Lu, Lingfa, and Yuan, Jinjiang
- Subjects: MACHINE theory; PRODUCTION scheduling; CONSTRAINT satisfaction; COMPUTATIONAL complexity; APPROXIMATION theory; ALGORITHMS
- Abstract
In this paper, we consider single-machine scheduling problems under the job rejection constraint. A job is either rejected, in which case a rejection penalty has to be paid, or accepted and processed on the single machine; however, the total rejection penalty of the rejected jobs cannot exceed a given upper bound. The objective is to find a schedule that minimizes a given criterion F, where F is a non-decreasing function of the completion times of the accepted jobs. We analyze the computational complexities of the problems for distinct objective functions and present pseudo-polynomial-time algorithms. In addition, we provide a fully polynomial-time approximation scheme for the makespan problem with release dates. For other objective functions related to due dates, we point out that there is no approximation algorithm with a bounded approximation ratio. [Copyright © Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
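For the simplest objective in the family above (makespan, no release dates), the pseudo-polynomial flavor of such algorithms is a knapsack-style DP over the rejection-penalty budget: rejecting jobs with total penalty at most R should remove as much processing time as possible. A sketch on a hypothetical instance; this illustrates the technique class, not the paper's exact algorithms:

```python
def min_makespan_with_rejection(jobs, R):
    # jobs: list of (processing_time, rejection_penalty); budget R on total penalty.
    # best[b] = maximum processing time removable with penalty sum <= b
    best = [0] * (R + 1)
    for p, e in jobs:
        for b in range(R, e - 1, -1):   # classic 0/1 knapsack recurrence
            best[b] = max(best[b], best[b - e] + p)
    return sum(p for p, _ in jobs) - best[R]

jobs = [(4, 2), (3, 5), (7, 3), (2, 1)]            # (p_j, e_j), hypothetical
print(min_makespan_with_rejection(jobs, R=4))      # 7: reject (7,3) and (2,1)
```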
16. On the minimum corridor connection problem and other generalized geometric problems
- Author: Bodlaender, Hans L., Feremans, Corinne, Grigoriev, Alexander, Penninkx, Eelko, Sitters, René, and Wolle, Thomas
- Subjects: ALGORITHMS; COMPUTATIONAL complexity; POLYGONS; GRAPH connectivity; MATHEMATICAL decomposition; APPROXIMATION theory; MATHEMATICAL optimization
- Abstract
In this paper we discuss the complexity and approximability of the minimum corridor connection problem where, given a rectilinear decomposition of a rectilinear polygon into "rooms", one has to find the minimum-length tree along the edges of the decomposition such that every room is incident to a vertex of the tree. We show that the problem is strongly NP-hard and give a subexponential-time exact algorithm. For the special case when the room connectivity graph is k-outerplanar, the algorithm's running time becomes cubic. We develop a polynomial-time approximation scheme for the case when all rooms are fat and have nearly the same size. When rooms are fat but of varying size, we give a polynomial-time constant-factor approximation algorithm. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
17. Dominating problems in swapped networks.
- Author: Chen, Weidong, Lu, Zaixin, and Wu, Weili
- Subjects: COMPUTER networks; PROBLEM solving; ALGORITHMS; PARALLEL computers; DISTRIBUTED computing; APPROXIMATION theory; COMPUTATIONAL complexity
- Abstract
Swapped Networks (SNs) are a family of two-level interconnection networks, suitable for constructing large parallel and distributed systems. In this paper, the Minimum Dominating Set (MDS) problem and the Minimum Connected Dominating Set (MCDS) problem in SNs are investigated based on the connectivity rule of SNs. We prove that the two problems in SNs are NP-hard, and present two efficient algorithms for building dominating sets and connected dominating sets in SNs. The proposed algorithms take as input a given (connected) dominating set of the factor network, and yield a good approximation of an MDS or MCDS for the SN provided that the input is a good approximation of an MDS or MCDS for the factor network. We also derive several non-trivial bounds on the (connected) domination parameters of SNs. We believe this work is of theoretical interest in graph theory, since SNs form a family of graphs; it may also motivate further research on dominating problems in SNs and their potential applications. [Copyright © Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
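The paper's constructions exploit the two-level structure of SNs and take a dominating set of the factor network as input. As generic context only, not the paper's method, the classic greedy approximation for Minimum Dominating Set on an arbitrary graph (its ratio is logarithmic in the number of vertices):

```python
def greedy_dominating_set(adj):
    # adj: vertex -> set of neighbors. Repeatedly pick the vertex that
    # dominates the most still-undominated vertices (itself plus neighbors).
    undominated = set(adj)
    ds = set()
    while undominated:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        ds.add(v)
        undominated -= {v} | adj[v]
    return ds

# Hypothetical small graph: a star on 0..4 plus a pendant edge 4-5.
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0, 5}, 5: {4}}
print(greedy_dominating_set(adj))   # {0, 4}
```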
18. The design and evaluation of the Simple Self-Similar Sequences Generator
- Author: Inácio, Pedro R.M., Lakic, Branka, Freire, Mário M., Pereira, Manuela, and Monteiro, Paulo P.
- Subjects: APPROXIMATION theory; RANDOM number generators; ALGORITHMS; FRACTIONAL calculus; WIENER processes; COMPUTER simulation; WAVELETS (Mathematics); COMPUTATIONAL complexity
- Abstract
This paper describes a new algorithm for the generation of pseudo-random numbers with an approximately self-similar structure. The Simple Self-Similar Sequences Generator (4SG) elaborates on an intuitive approach to obtain a fast and accurate procedure capable of reproducing series of points exhibiting persistence or anti-persistence. 4SG has low computational complexity and low memory requirements in the number n of points to be generated. The accuracy of the algorithm is evaluated by means of computer-based simulations, using several Hurst parameter estimators, namely the Variance Time (VT) and wavelet-based estimators. The Hosking and wavelet-based methods for the generation of self-similar series were submitted to the same tests as 4SG, providing a basis for comparison of several performance aspects of the algorithm. The results show that the proposal is a good candidate not only for on-demand emulation of arbitrarily long self-similar sequences, but also for fast and efficient online simulations. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
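The 4SG generator itself is specified in the paper; the Variance-Time estimator used in its evaluation, however, is standard and easy to sketch. It recovers the Hurst parameter H from the scaling Var(X^(m)) ∝ m^(2H-2) of the m-aggregated series (numpy assumed):

```python
import numpy as np

def hurst_vt(x, m_values=(1, 2, 4, 8, 16, 32, 64)):
    x = np.asarray(x, dtype=float)
    log_m, log_var = [], []
    for m in m_values:
        n = len(x) // m
        agg = x[: n * m].reshape(n, m).mean(axis=1)   # aggregated series X^(m)
        log_m.append(np.log(m))
        log_var.append(np.log(agg.var()))
    slope = np.polyfit(log_m, log_var, 1)[0]          # slope = 2H - 2
    return 1 + slope / 2

rng = np.random.default_rng(0)
print(round(hurst_vt(rng.standard_normal(2**16)), 2))  # about 0.5 for white noise
```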