802 results
Search Results
2. Unsupervised Attribute Reduction Algorithm for Mixed Data Based on Fuzzy Optimal Approximation Set.
- Author
- Wen, Haotong, Zhao, Shixin, and Liang, Meishe
- Subjects
- ROUGH sets, FUZZY sets, MACHINE learning, ALGORITHMS, APPROXIMATION theory, APPROXIMATION algorithms
- Abstract
Fuzzy rough set theory has been successfully applied to many attribute reduction methods, in which the lower approximation set plays a pivotal role. However, the commonly used definition of the lower approximation ignores the information conveyed by the upper approximation and the boundary region, resulting in an unreasonable relational representation of the target set. Although scholars have proposed numerous enhancements to rough set models, such as the variable precision model, none have fully resolved the issues inherent in the classical models. To address this limitation, this paper proposes an unsupervised attribute reduction algorithm for mixed data based on an improved optimal approximation set. First, the theory of an improved optimal approximation set and its associated algorithm are proposed. Subsequently, we extend the classical theory of optimal approximation sets to fuzzy rough set theory, leading to a fuzzy improved approximation set method. Finally, building on the proposed theory, we introduce a novel fuzzy optimal-approximation-set-based unsupervised attribute reduction algorithm (FOUAR). Comparative experiments with the proposed algorithms indicate the efficacy of FOUAR in selecting fewer attributes while maintaining or improving the performance of the machine learning algorithm. They also highlight the advantage of the improved optimal approximation set algorithm, which offers higher similarity to the target set and a more concise expression. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
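The boundary-region issue described in this abstract is easiest to see in the classical (crisp) Pawlak setting that fuzzy rough sets generalize. The following is an illustrative sketch, not the paper's algorithm: it computes the lower and upper approximations and the boundary region of a target set from a partition of the universe into equivalence classes.

```python
def approximations(equiv_classes, target):
    # Pawlak rough set approximations of `target` under a partition
    # of the universe into equivalence classes.
    lower, upper = set(), set()
    for cls in equiv_classes:
        if cls <= target:        # class fully inside the target
            lower |= cls
        if cls & target:         # class intersects the target
            upper |= cls
    return lower, upper, upper - lower   # boundary region

lower, upper, boundary = approximations(
    [{1, 2}, {3, 4}, {5, 6}], {1, 2, 3})
# lower = {1, 2}, upper = {1, 2, 3, 4}, boundary = {3, 4}
```

The boundary {3, 4} is exactly the information the abstract argues a lower-approximation-only definition discards.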
3. Approximability results for the p-centdian and the converse centdian problems.
- Author
- Yen Hung Chen
- Subjects
- UNDIRECTED graphs, APPROXIMATION theory, MATHEMATICAL functions, INTEGERS, ALGORITHMS
- Abstract
Given an undirected graph G = (V,E) with a nonnegative edge length function and an integer p, 0 < p < |V|, the p-centdian problem is to find p vertices of V (called the centdian set) such that the eccentricity plus the median-distance is minimized, where the eccentricity is the maximum distance from any vertex to its nearest centdian vertex and the median-distance is the total distance of all vertices to their nearest centdian vertex. This sum is called the centdian-distance. The purpose of the p-centdian problem is to locate p facilities (servers) that simultaneously satisfy the quality-of-service criteria of minimum total distance (median-distance) and minimum worst-case distance (eccentricity) to their customers. If we interchange the roles of the two criteria, that is, given a bound on the centdian-distance, the objective becomes minimizing the cardinality of the centdian set; this problem is called the converse centdian problem. In this paper, we prove that the p-centdian problem is NP-complete. We then design the first non-trivial brute-force exact algorithms for the p-centdian problem and the converse centdian problem, respectively. Finally, we design two approximation algorithms for both problems. [ABSTRACT FROM AUTHOR]
- Published
- 2022
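The objective defined in this abstract is simple to state in code. The sketch below (illustrative only, not the authors' algorithms) evaluates the centdian-distance of a candidate set from a precomputed all-pairs shortest-path matrix, and exhaustively searches all p-subsets; this brute force is exponential in p, which is why the NP-completeness result and the approximation algorithms matter.

```python
from itertools import combinations

def centdian_distance(dist, centers):
    # dist: all-pairs shortest-path matrix; centers: candidate centdian set
    nearest = [min(dist[v][c] for c in centers) for v in range(len(dist))]
    eccentricity = max(nearest)    # worst distance to the nearest center
    median = sum(nearest)          # total distance to the nearest center
    return eccentricity + median   # the centdian-distance

def p_centdian_brute_force(dist, p):
    # Exhaustive search over all p-subsets of the vertex set.
    return min(combinations(range(len(dist)), p),
               key=lambda s: centdian_distance(dist, s))
```

On the path graph 0-1-2 with unit edge lengths, the best single centdian vertex is the middle one: its eccentricity is 1 and its median-distance is 2, for a centdian-distance of 3.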
4. A temporal segmentation algorithm for restoring the controllability of networked control systems.
- Author
- Cui, Yulong, Wu, Mincheng, Shao, Cunqi, and He, Shibo
- Subjects
- POLYNOMIALS, APPROXIMATION theory, FEASIBILITY studies, ALGORITHMS
- Abstract
Restoring the controllability of networked control systems is a fundamental issue, especially when malicious attacks or malfunctions occur. Previous studies tried to address it by adding extra driver nodes or rewiring edges, which inevitably changes the original network structure. In this paper, an algorithm is proposed that restores the controllability of networked systems while preserving the integrity of the original network structure. Specifically, a static uncontrollable network is transformed into a controllable temporal network by means of a cactus-based segmentation method. When the least number of segmentations is sought, the original problem becomes equivalent to the classical set cover problem, which is known to be NP-hard. An approximation algorithm with polynomial time complexity is proposed, and its solution is proved to be within a factor of two of optimal. Finally, simulations are carried out to verify the effectiveness and feasibility of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
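The abstract reduces minimum-segmentation to set cover. The paper's own algorithm is reported to be within a factor of two of optimal; for contrast, the textbook greedy heuristic below (an illustration, not the paper's method) achieves the classical logarithmic guarantee by repeatedly picking the subset that covers the most still-uncovered elements.

```python
def greedy_set_cover(universe, subsets):
    # subsets: mapping name -> set of covered elements
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # pick the subset covering the most still-uncovered elements
        best = max(subsets, key=lambda name: len(subsets[name] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe is not coverable")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen
```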
5. EXPOSITORY RESEARCH PAPERS.
- Author
- Ipsen, Ilse
- Subjects
- ALGORITHMS, APPROXIMATION theory, MATHEMATICS problems & exercises, GAUSSIAN quadrature formulas, MATHEMATICAL analysis
- Abstract
The article highlights two papers published within the issue: "Divide-and-Conquer: A Proportional, Minimal-Envy Cake-Cutting Algorithm," by Steven Brams, Michael Jones and Christian Klamler, and "Impossibility of Fast Stable Approximation of Analytic Functions from Equispaced Samples," by Rodrigo Platte, Lloyd Trefethen and Arno Kuijlaars. The first paper discusses the mathematical problem of proportional division. The second concerns potential theory, matrix iterations in the form of Krylov space methods, and quadrature formulae.
- Published
- 2011
- Full Text
- View/download PDF
6. The Conjugate Gradient Viscosity Approximation Algorithm for Split Generalized Equilibrium and Variational Inequality Problems.
- Author
- Li, Meixia, Che, Haitao, and Tan, Jingjing
- Subjects
- VISCOSITY, APPROXIMATION theory, ALGORITHMS, GENERALIZATION, FEASIBILITY studies
- Abstract
In this paper, we study a conjugate gradient viscosity approximation algorithm for finding a common solution of a split generalized equilibrium problem and a variational inequality problem. Under mild conditions, we prove that the sequence generated by the proposed iterative algorithm converges strongly to the common solution. The conclusions presented in this paper generalize, extend, and supplement previously known results in the corresponding references. Numerical results illustrate the feasibility and efficiency of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
7. Determining the Optimal Order Quantity with Compound Erlang Demand under (T,Q) Policy.
- Author
- Jiang, Aiping, Tam, Kwok Leung, Bao, Yingzi, and Lu, Jialing
- Subjects
- ELECTRIC power, ALGORITHMS, APPROXIMATION theory, POWER resources, ALGEBRA
- Abstract
Management of electric equipment has a direct impact on companies' performance and profitability. Considering the critical role that electric power materials play in supporting maintenance operations and preventing equipment failure, it is essential to keep their inventory at a reasonable level. However, a majority of these electric power materials exhibit an intermittent demand pattern characterized by random arrivals interspersed with periods of no demand at all, which makes managing their inventories additionally difficult. In response, this paper, based on electric power material demand data from Shanghai Electric Power Company, develops a new method to determine the optimal order quantity Q⁎ in a cost-oriented periodic review (T,Q) system, in which unsatisfied demands are backordered and demand follows a compound Erlang distribution. Q⁎ is the value of Q that minimizes the expected total inventory holding and backordering cost. An empirical investigation comparing this method with the Newsvendor model verifies its superiority in cost savings. Finally, considering the complicated calculation and low efficiency of that algorithm, the paper proposes an approximation algorithm and a heuristic algorithm with higher utility in a real industrial context: the approximation algorithm simplifies the calculation by reducing the number of iterations, while the heuristic algorithm does so by generalizing the relationship between the optimal order quantity Q⁎ and the mean demand interarrival rate λ. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
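The cost structure described above can be sketched with a small Monte Carlo model. This is an illustrative simulation under assumed parameters (exponentially distributed demand sizes, holding cost h and backorder cost b per period), not the paper's analytic derivation: it estimates the expected holding-plus-backorder cost for a candidate Q and picks the minimizer over a grid.

```python
import random

def period_demand(lam, k, mean_size, horizon, rng):
    # Compound Erlang demand over one review period: interarrival
    # times are Erlang(k) with mean 1/lam; each arrival brings an
    # exponentially distributed demand size (an assumed size model).
    t, total = 0.0, 0.0
    while True:
        t += sum(rng.expovariate(k * lam) for _ in range(k))
        if t > horizon:
            return total
        total += rng.expovariate(1.0 / mean_size)

def expected_cost(Q, lam, k, mean_size, horizon, h, b, n_runs, rng):
    # Monte Carlo estimate of holding (h) plus backordering (b) cost.
    cost = 0.0
    for _ in range(n_runs):
        d = period_demand(lam, k, mean_size, horizon, rng)
        cost += h * max(Q - d, 0.0) + b * max(d - Q, 0.0)
    return cost / n_runs

def best_Q(candidates, **kw):
    return min(candidates, key=lambda q: expected_cost(q, **kw))
```

With backorder cost much larger than holding cost, the minimizer sits near (or above) the mean period demand, which is the qualitative trade-off the paper's Q⁎ formalizes.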
8. APPROXIMATE SOLUTIONS OF INTERVAL-VALUED OPTIMIZATION PROBLEMS.
- Author
- Van Tuyen, Nguyen
- Subjects
- APPROXIMATE solutions (Logic), EXISTENCE theorems, MATHEMATICAL optimization, MATHEMATICAL programming, MULTIPLE criteria decision making, APPROXIMATION theory, ALGORITHMS
- Abstract
This paper deals with approximate solutions of an optimization problem with an interval-valued objective function. Four types of approximate solution concepts are proposed by considering the LU partial ordering on the set of all closed and bounded intervals. We show that these solutions exist under very weak conditions. Under suitable constraint qualifications, we derive Karush-Kuhn-Tucker necessary and sufficient optimality conditions for convex interval-valued optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2021
9. Multi-Objective Optimization Design for Electromagnetic Devices With Permanent Magnet Based on Approximation Model and Distributed Cooperative Particle Swarm Optimization Algorithm.
- Author
- Xuerong, Ye, Hao, Chen, Huimin, Liang, Xinjun, Chen, and Jiaxin, You
- Subjects
- ELECTROMAGNETIC devices, MULTIDISCIPLINARY design optimization, PERMANENT magnet motors, APPROXIMATION theory, PARTICLE swarm optimization, ALGORITHMS, ENERGY consumption, MOTOR design & construction
- Abstract
Electromagnetic devices containing a permanent magnet (PM) are common and typical electromagnetic devices with the advantages of low power consumption, small size, and high sensitivity, and their optimization has attracted widespread attention. The calculation accuracy of electromagnetic properties and the efficiency of the optimization process are two important factors that affect the optimization of electromagnetic devices. This paper proposes a fast calculation method aimed at PM electromagnetic devices, which exhibit strong non-linearity and are difficult to solve precisely. The response surface method was used to derive the basis function, and the error between measured and calculated values was corrected by the Kriging method. On the basis of this calculation model, a distributed collaborative strategy was adopted to execute the fast computation in parallel. Simultaneously, the particle updating strategy was improved, enhancing the robustness of the multi-objective particle swarm optimization algorithm on PM electromagnetic device optimization problems. The effectiveness of the proposed method was verified by a case study of a specific type of electromagnetic relay with a PM. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
10. Stabilization algorithm for the optimal transportation meshfree approximation scheme.
- Author
- Weißenfels, C. and Wriggers, P.
- Subjects
- ALGORITHMS, MESHFREE methods, COMPUTER-aided engineering, APPROXIMATION theory, DIRICHLET problem, BOUNDARY value problems
- Abstract
Meshfree approximation schemes possess high potential in computer-aided engineering due to their large flexibility. In particular, the tremendous progress in processor technology in recent years offsets the increase in computation time caused by the inherent search algorithm. Nevertheless, meshfree approximation schemes still face challenges such as the imposition of Dirichlet boundary conditions, robustness of the algorithm, and accuracy. The recently developed Optimal Transportation Meshfree (OTM) method seemed to overcome most of these problems. In this paper the OTM solution scheme is combined with a standard search algorithm to allow simple and flexible computation. However, this scheme is not stable for some applications. An investigation is conducted which shows that the instability is due to underintegration. Based on this investigation, a remedy to stabilize the algorithm is suggested, built on well-known concepts for controlling hourglass effects in the Finite Element Method. In contrast to the original publication, the OTM algorithm is derived here from the principle of virtual work. Local maximum-entropy shape functions are used, which possess a weak Kronecker-δ property. This enables direct imposition of Dirichlet boundary conditions if the boundary is convex. The limitations of this basis function are also addressed. Additionally, the search algorithm presented fulfills basic topological requirements. Several examples demonstrate the improved behavior of the stabilized OTM algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
11. Adaptive Randomized Dimension Reduction on Massive Data.
- Author
- Darnell, Gregory, Georgiev, Stoyan, Mukherjee, Sayan, and Engelhardt, Barbara E.
- Subjects
- ESTIMATION theory, DIMENSIONAL reduction algorithms, ALGORITHMS, DIMENSION reduction (Statistics), APPROXIMATION theory, PRINCIPAL components analysis, GENOMICS
- Abstract
The scalability of statistical estimators is of increasing importance in modern applications. One approach to implementing scalable algorithms is to compress data into a low dimensional latent space using dimension reduction methods. In this paper, we develop an approach for dimension reduction that exploits the assumption of low rank structure in high dimensional data to gain both computational and statistical advantages. We adapt recent randomized low-rank approximation algorithms to provide an efficient solution to principal component analysis (PCA), and we use this efficient solver to improve estimation in large-scale linear mixed models (LMM) for association mapping in statistical genomics. A key observation in this paper is that randomization serves a dual role, improving both computational and statistical performance by implicitly regularizing the covariance matrix estimate of the random effect in an LMM. These statistical and computational advantages are highlighted in our experiments on simulated data and large-scale genomic studies. [ABSTRACT FROM AUTHOR]
- Published
- 2017
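The randomized low-rank approach mentioned in the abstract follows the sketch-then-solve pattern. Below is a generic randomized SVD/PCA sketch in the style of Halko et al. (illustrative, not the authors' exact solver): project onto a random subspace, optionally apply power iterations, then take an exact SVD of the small projected matrix.

```python
import numpy as np

def randomized_pca(X, k, oversample=10, n_iter=2, seed=0):
    # Sketch-and-solve randomized SVD (Halko-Martinsson-Tropp style).
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((X.shape[1], k + oversample))
    Y = X @ Omega                      # random sketch of the range
    for _ in range(n_iter):            # power iterations sharpen decay
        Y = X @ (X.T @ Y)
    Q, _ = np.linalg.qr(Y)             # orthonormal range estimate
    U_small, s, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]
```

For a matrix of exact rank k the sketch recovers the factorization to near machine precision; the statistical regularization effect discussed in the abstract concerns how the same sketch behaves on noisy, approximately low-rank data.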
12. Bounded perturbation resilience of extragradient-type methods and their applications.
- Author
- Dong, Q-L, Gibali, A, Jiang, D, and Tang, Y
- Subjects
- NUMERICAL solutions to equations, MATHEMATICAL analysis, APPROXIMATION theory, HILBERT space, ALGORITHMS
- Abstract
In this paper we study the bounded perturbation resilience of the extragradient and the subgradient extragradient methods for solving a variational inequality (VI) problem in real Hilbert spaces. This is an important property of algorithms which guarantees the convergence of the scheme under summable errors, meaning that an inexact version of the methods can also be considered. Moreover, once an algorithm is proved to be bounded perturbation resilient, superiorization can be used; this allows flexibility in choosing the bounded perturbations in order to obtain a superior solution, as explained in the paper. We also discuss some inertial extragradient methods. Under mild and standard assumptions of monotonicity and Lipschitz continuity of the VI's associated mapping, convergence of the perturbed extragradient and subgradient extragradient methods is proved. In addition, we show that the perturbed algorithms converge at the rate of $O(1/t)$. Numerical illustrations demonstrate the performance of the algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
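The extragradient method whose perturbation resilience is studied here is itself short. A minimal unperturbed sketch (Korpelevich's scheme, finite-dimensional, fixed step; the paper works in Hilbert spaces with bounded perturbations):

```python
import numpy as np

def extragradient(F, project, x0, step, n_iter=500):
    # Korpelevich extragradient: a projected predictor step followed
    # by a projected corrector step using the operator at the predictor.
    x = x0
    for _ in range(n_iter):
        y = project(x - step * F(x))   # predictor
        x = project(x - step * F(y))   # corrector
    return x
```

For the strongly monotone map F(x) = x − t over the box [0, 1]², the VI solution is simply the projection of t onto the box, which the iteration reaches geometrically.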
13. Fast Cross-Validation for Kernel-Based Algorithms.
- Author
- Liu, Yong, Liao, Shizhong, Jiang, Shali, Ding, Lizhong, Lin, Hailun, and Wang, Weiping
- Subjects
- TAYLOR'S series, APPROXIMATION theory, ALGORITHMS, SUPPORT vector machines
- Abstract
Cross-validation (CV) is a widely adopted approach for selecting the optimal model. However, computing the empirical cross-validation error (CVE) has high complexity because the learner must be trained multiple times. In this paper, we develop a novel approximation theory of the CVE and present an approximate approach to CV based on the Bouligand influence function (BIF) for kernel-based algorithms. We first represent the BIF and higher-order BIFs in Taylor expansions, and approximate CV via these expansions. We then derive an upper bound on the discrepancy between the original and approximate CV. Furthermore, we provide a novel computing method to calculate the BIF for a general distribution, and evaluate the BIF criterion for the sample distribution to approximate CV. The proposed approximate CV requires training on the full data set only once and is suitable for a wide variety of kernel-based algorithms. Experimental results demonstrate that the proposed approximate CV is sound and effective. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
14. Search space-based multi-objective optimization evolutionary algorithm.
- Author
- Medhane, Darshan Vishwasrao and Sangaiah, Arun Kumar
- Subjects
- ALGORITHMS, MATHEMATICAL optimization, REAL-time computing, APPROXIMATION theory, MUTATION (Phonetics)
- Abstract
Evolutionary multi-objective optimization (EMO) algorithms are actively used for solving optimization problems with multiple contradictory objectives and for designing interpretable and precise real-time applications. A majority of existing EMO algorithms perform well on problems with two or three objectives; however, they encounter difficulties in managing and maintaining a set of optimal solutions to multi-objective optimization problems with more objectives. This paper proposes a search space-based multi-objective evolutionary algorithm (SSMOEA). Our key objective is to realize the potential of the search space-based method for solving multi-objective optimization problems and to reinforce the selection procedure toward the ideal direction while sustaining an extensive and uniform distribution of solutions. To the best of our knowledge, this paper is the first to propose a search space-based multi-objective evolutionary algorithm for multi-objective optimization. Experiments show that the proposed algorithm is competitive with existing EMO algorithms in finding a well-spread, accurate approximation of the solution set, and that SSMOEA achieves a good trade-off between search space convergence and search space diversity in the appropriate experimental setup. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
15. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy.
- Author
- Ling, Qing-Hua, Song, Yu-Qing, Han, Fei, Yang, Dan, and Huang, De-Shuang
- Subjects
- PARTICLE swarm optimization, MACHINE learning, STOCHASTIC convergence, LEAST squares, APPROXIMATION theory
- Abstract
For ensemble learning, how to select and how to combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. Random vector functional link networks (RVFL) without direct input-to-output links are suitable base classifiers for ensemble systems because of their fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied both to select and to combine the candidate RVFL. When selecting the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system. When combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFL are pruned, yielding a more compact ensemble. Moreover, theoretical analysis and justification of how to prune the base classifiers on classification problems is presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy while reducing the complexity of the ensemble system. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
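As background for the ARPSO-based selection and combination, a minimal canonical particle swarm optimizer looks as follows. This is plain PSO with assumed inertia and acceleration constants, not the attractive-repulsive variant or the double optimization strategy of the paper:

```python
import random

def pso(f, dim, n_particles=30, n_iter=200, bound=5.0, seed=0):
    # Canonical PSO with assumed constants: inertia 0.7,
    # cognitive and social coefficients 1.5.
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    xs = [[rng.uniform(-bound, bound) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]              # per-particle best positions
    gbest = min(pbest, key=f)[:]            # swarm-wide best position
    for _ in range(n_iter):
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - x[d])
                            + c2 * rng.random() * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
                if f(x) < f(gbest):
                    gbest = x[:]
    return gbest
```

ARPSO augments this scheme with a repulsion phase that restores swarm diversity, which is what the abstract leverages when balancing accuracy against ensemble diversity.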
16. A geometrical stability condition for compressed sensing.
- Author
- Flinth, Axel
- Subjects
- GEOMETRY, STABILITY theory, COMPRESSED sensing, SIGNAL processing, ALGORITHMS, APPROXIMATION theory
- Abstract
During the last decade, the paradigm of compressed sensing has gained significant importance in the signal processing community. While the original idea was to utilize sparsity assumptions to design powerful recovery algorithms for vectors x ∈ ℝ^d, the concept has been extended to cover many other types of problems, a notable example being low-rank matrix recovery. Many recovery methods rely on solving convex programs. A particularly nice trait of compressed sensing is its geometrical intuition. In recent papers, a classical optimality condition has been used together with tools from convex geometry and probability theory to prove beautiful results concerning the recovery of signals from Gaussian measurements. In this paper, we formulate a geometrical condition for stability and robustness, i.e. for the recovery of approximately structured signals from noisy measurements. We investigate the connection between the new condition and the notion of restricted singular values, classical stability and robustness conditions in compressed sensing, and important geometrical concepts from complexity theory. We also prove the perhaps surprising fact that for many convex programs, exact recovery of a signal x₀ immediately implies some stability and robustness when recovering signals close to x₀. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
17. On the Optimal Pre-Computation of Window τNAF for Koblitz Curves.
- Author
- Trost, William R. and Xu, Guangwu
- Subjects
- ELLIPTIC curve cryptography, ALGORITHMS, APPROXIMATION theory, COMPUTATIONAL complexity, COEFFICIENTS (Statistics)
- Abstract
Koblitz curves have been an important subject of consideration for both theoretical and practical interests. The window τ-adic algorithm of Solinas (window τNAF) is the most powerful method for computing point multiplication on Koblitz curves. Pre-computation plays an important role in improving the performance of point multiplication. In this paper, the concept of optimal pre-computation for window τNAF is formulated. In this setting, an optimal pre-computation has some mathematically natural and clean forms, and requires 2^(w−2) − 1 point additions and two evaluations of the Frobenius map τ, where w is the window width. One of the main results of this paper is the construction of an optimal pre-computation scheme for each window width w from 4 to 15 (more than practical needs). These pre-computations can be easily incorporated into implementations of window τNAF, and the ideas in the paper can also be used to construct other suitable pre-computations. The paper includes a discussion of coefficient sets for window τNAF and of divisibility by powers of τ through different approaches. Some implementation issues are also discussed. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
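The count 2^(w−2) − 1 comes from the odd digit set of width-w recoding: nonzero digits lie in {±1, ±3, …, ±(2^(w−1) − 1)}, so besides P itself one precomputes 2^(w−2) − 1 odd multiples. The sketch below shows the ordinary integer width-w NAF; the paper's τNAF replaces doubling by the Frobenius map τ on a Koblitz curve:

```python
def width_w_naf(n, w):
    # Width-w NAF of a positive integer: nonzero digits are odd with
    # |digit| < 2**(w-1), and any w consecutive digits contain at
    # most one nonzero digit.
    digits = []
    while n > 0:
        if n % 2:
            d = n % (1 << w)
            if d >= (1 << (w - 1)):
                d -= 1 << w        # take the signed least residue
            n -= d
        else:
            d = 0
        digits.append(d)           # least-significant digit first
        n >>= 1
    return digits

def reconstruct(digits):
    return sum(d << i for i, d in enumerate(digits))
```

For w = 4 the nonzero digits lie in {±1, ±3, ±5, ±7}, so besides P one stores 3 = 2^(4−2) − 1 precomputed odd multiples.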
18. On nonlinear analysis by the multipoint meshless FDM.
- Author
- Jaworska, Irena and Orkisz, Janusz
- Subjects
- NONLINEAR analysis, APPROXIMATION theory, ALGORITHMS, DIFFERENTIAL equations, FINITE element method, DERIVATIVES (Mathematics)
- Abstract
The main objective of this paper is to present an application of the recently developed higher order multipoint meshless FDM to the analysis of nonlinear problems. The multipoint approach provides a higher order approximation and improves the precision of the solution. Beyond improved solution quality, the essential feature of the multipoint approach is its potentially wide-ranging applicability. This is possible because, in both the multipoint and the standard meshless FDM, the difference formulas are generated at once for the full set of derivatives; using them, one may easily compose any required FD operator. It is worth mentioning that all derivative operators depend on the domain discretization rather than on the specific problem being analysed. Therefore, the solution of a wide class of problems, including nonlinear ones, may be obtained with this method. The numerical algorithm of the multipoint method for nonlinear analysis is presented. Results of selected engineering benchmark problems – deflection of the ideal membrane and large deflection of plates using the von Karman theory – are considered. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
19. A novel approach for efficient updating approximations in dynamic ordered information systems.
- Author
- Wang, Shu, Li, Tianrui, Luo, Chuan, Hu, Jie, Fujita, Hamido, and Huang, Tianqiang
- Subjects
- INFORMATION storage & retrieval systems, APPROXIMATION theory, ROUGH sets, DATA mining, ALGORITHMS
- Abstract
Dynamic data in real-time applications are typically updated in a multi-dimensional manner. In this paper, we introduce a novel approach based on the Dominance-based Rough Set Approach (DRSA) to efficiently deal with multi-dimensional variations of the attribute set and attribute values in dynamic Ordered Information Systems (OIS). We improve the original notion of the P-generalized decision domains to make the feature value matrix dominance-symmetrical, and propose an efficient strategy based on the improved notion to obtain the dominance feature matrix. We then employ a dominance-feature-matrix-based incremental strategy to avoid repeated comparisons between original attributes, so as to efficiently update the rough approximations of DRSA when the attribute set grows and attribute values vary simultaneously. In our approach, the steps based on these two combined strategies can work together or separately, efficiently handling simultaneous as well as individual changes of the attribute set and attribute values in dynamic OIS. An efficient algorithm based on the updating strategies is designed and multiple groups of experiments are conducted. Experimental results on different real-world data sets show that the proposed algorithm is much faster than other algorithms for dealing with multi-dimensional or single-dimensional variations of the attribute set and attribute values. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
20. Deep stacked stochastic configuration networks for lifelong learning of non-stationary data streams.
- Author
- Pratama, Mahardhika and Wang, Dianhui
- Subjects
- DEEP learning, STOCHASTIC processes, INFORMATION storage & retrieval systems, ALGORITHMS, APPROXIMATION theory
- Abstract
The concept of the stochastic configuration network (SCN) offers a fast framework with a universal approximation guarantee for lifelong learning of non-stationary data streams. Its adaptive scope selection property enables proper random generation of hidden unit parameters, advancing conventional randomized approaches constrained to a fixed scope of random parameters. This paper proposes the deep stacked stochastic configuration network (DSSCN) for continual learning of non-stationary data streams, which contributes two major aspects: 1) DSSCN features a self-constructing methodology for the deep stacked network structure, where hidden units and hidden layers are extracted automatically from continuously generated data streams; 2) the SCN concept is developed to randomly assign the inverse covariance matrix of the multivariate Gaussian function in the hidden node addition step, bypassing its computationally prohibitive tuning phase. Numerical evaluation and comparison with prominent data stream algorithms under two procedures, periodic hold-out and prequential test-then-train, demonstrate the advantage of the proposed methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
21. Integrating Coflow and Circuit Scheduling for Optical Networks.
- Author
- Wang, Haibo, Yu, Xiwen, Xu, Hongli, Fan, Jingyuan, Qiao, Chunming, and Huang, Liusheng
- Subjects
- OPTICAL switches, OPTICAL fiber networks, PACKET switching (Data transmission), ALGORITHMS, APPROXIMATION theory
- Abstract
There are more and more structured traffic flows (a.k.a. coflows) in today's data center networks. Completing a coflow quickly is extremely important for various applications, e.g., MapReduce. To reduce the coflow completion time (CCT), one may increase the link capacity by applying advanced optical circuit switches in data center networks. Due to special features of optical circuit switches, both traffic scheduling and circuit scheduling influence the CCT. However, previous solutions have significant limitations: they consider either coflow scheduling alone, or circuit scheduling for only one optical circuit switch, both of which are insufficient. In this paper, we study the integrated coflow and circuit scheduling (GCCS) problem with the objective of minimizing the CCT, and prove its NP-hardness. We present an integrated algorithm with two steps, coflow scheduling and circuit scheduling. We also show that the proposed algorithm achieves an approximation ratio of O(h) in most practical situations, where h is the maximum number of ports among all lightpaths. Through large-scale simulations, we demonstrate that the integrated solution can reduce the CCT by about 43-70 percent compared with the state-of-the-art coflow scheduler for optical networks. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
22. Simulation-Based Travel Time Reliable Signal Control.
- Author
- Chen, Xiao, Osorio, Carolina, and Santos, Bruno Filipe
- Subjects
- TRAVEL time (Traffic engineering), TRAFFIC signal control systems, TRANSPORTATION problems (Programming), COMPUTER simulation, APPROXIMATION theory, ALGORITHMS
- Abstract
This paper addresses a travel time reliable signal control problem. Travel time distributional estimates are obtained from a stochastic microscopic traffic simulator. The estimates are embedded within a simulation-based optimization algorithm. Analytical approximations of the simulated metrics are combined with the simulated data in order to enhance the computational efficiency of the algorithm. The signal control problems are formulated based on the expectation and the standard deviation of travel time metrics. The proposed approach goes beyond the traditional use of first-order simulated information, it addresses a problem that embeds higher-order distributional information. It is used to solve a large-scale signal control problem. The approach addresses these challenging simulation-based optimization problems in a computationally efficient manner. Its performance is compared to that of a traditional simulation-based optimization approach. The proposed method systematically outperforms the traditional approach. Such an approach can be used to inform the design and operations of transportation systems by, for instance, addressing reliable and/or robust formulations of traditional transportation problems. The online appendix is available at https://doi.org/10.1287/trsc.2017.0812. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
23. Iterative Loewner Matrix Macromodeling Approach for Noisy Frequency Responses.
- Author
-
Sahouli, Mohamed, Wahid, Sadia, and Dounavis, Anestis
- Subjects
- *
NONLINEAR theories , *ITERATIVE methods (Mathematics) , *BANDWIDTHS , *MATRICES (Mathematics) , *ALGORITHMS , *APPROXIMATION theory - Abstract
This paper presents a macromodeling technique for modeling distributed circuits characterized by noisy frequency-domain data. The proposed method is based on an iterative Loewner matrix (LM) approach. Using the LM approximation from previous iterations, the singular values and orthonormal matrices of the Loewner pencil are made more accurate. This approach is shown to minimize the biasing effect of the noisy data due to the nonlinearity of the singular value decomposition operation, resulting in more accurate poles and residues. Numerical examples illustrate that for noisy frequency-domain data, the proposed algorithm creates more accurate macromodels when compared with the traditional LM approach. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
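For context on the object the entry above iterates on: the classical one-shot Loewner matrix is built directly from frequency samples, and its numerical rank reveals the model order. The sketch below uses an invented first-order transfer function H(s) = 1/(s + 1) and hand-picked sample points; it shows only the standard construction, not the paper's iterative noise-resistant variant.

```python
import numpy as np

# Hedged sketch: building the (scalar) Loewner matrix from noiseless samples
# of H(s) = 1/(s + 1). L[i, j] = (H(mu_i) - H(lam_j)) / (mu_i - lam_j);
# for a first-order system this matrix has rank 1.

H = lambda s: 1.0 / (s + 1.0)
lam = 1j * np.array([1.0, 2.0, 3.0])      # "left" sample points
mu = 1j * np.array([1.5, 2.5, 3.5])       # "right" sample points

L = np.empty((len(mu), len(lam)), dtype=complex)
for i, m in enumerate(mu):
    for j, l in enumerate(lam):
        L[i, j] = (H(m) - H(l)) / (m - l)

rank = np.linalg.matrix_rank(L, tol=1e-10)
```

With noisy data the small singular values of this pencil no longer vanish cleanly, which is the biasing effect the iterative approach targets.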
24. Randomized algorithms for the approximations of Tucker and the tensor train decompositions.
- Author
-
Che, Maolin and Wei, Yimin
- Subjects
- *
ALGORITHMS , *APPROXIMATION theory , *DECOMPOSITION method - Abstract
Randomized algorithms provide a powerful tool for scientific computing. Compared with standard deterministic algorithms, randomized algorithms are often faster and more robust. The main purpose of this paper is to design adaptive randomized algorithms for computing approximate tensor decompositions. We give an adaptive randomized algorithm for the computation of a low multilinear rank approximation of tensors with unknown multilinear rank and analyze its probabilistic error bound under certain assumptions. Finally, we design an adaptive randomized algorithm for computing the tensor train approximations of tensors. Based on the bounds on the singular values of sub-Gaussian matrices with independent columns or independent rows, we analyze these randomized algorithms. We illustrate our adaptive randomized algorithms via several numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
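The core primitive behind randomized Tucker and tensor-train algorithms is the randomized range finder, shown here in its plain matrix form. This is a minimal sketch with invented sizes, not the paper's adaptive algorithm: for an exactly rank-r matrix, a small Gaussian sketch captures the range essentially exactly.

```python
import numpy as np

# Hedged sketch of the randomized range finder underlying randomized
# Tucker/TT methods, in the matrix case: project A onto a random sketch,
# orthonormalize, and measure how well Q Q^T A reproduces A.

rng = np.random.default_rng(0)
m, n, r = 60, 40, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # exact rank 3

Omega = rng.standard_normal((n, r + 5))        # sketch with small oversampling
Q, _ = np.linalg.qr(A @ Omega)                 # orthonormal basis for range(A)
err = np.linalg.norm(A - Q @ (Q.T @ A))        # projection error
```

For data with decaying (rather than exactly low) rank, the oversampling term controls the probabilistic error bound; the adaptive algorithms in the paper grow the sketch until a target accuracy is met.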
25. On-the-fly backward error estimate for matrix exponential approximation by Taylor algorithm.
- Author
-
Caliari, M. and Zivcovich, F.
- Subjects
- *
APPROXIMATION theory , *MATRIX exponential , *ERROR analysis in mathematics , *ALGORITHMS , *COMPUTATIONAL mathematics , *APPLIED mathematics - Abstract
In this paper we show that it is possible to estimate the backward error for the approximation of the matrix exponential on-the-fly, without the need to precompute in high precision quantities related to specific accuracies. In this way, the scaling parameter s and the degree m of the truncated Taylor series (the underlying method) are adapted to the input matrix and not to a class of matrices sharing some spectral properties, as in the classical backward error analysis. The result is a very flexible method which can approximate the matrix exponential at any desired accuracy. Moreover, the risk of overscaling, that is, the possibility of selecting a value s larger than necessary, is mitigated by a new choice of s as the sum of two powers of two. Finally, several numerical experiments in MATLAB with data in double and variable precision and with different tolerances confirm that the method is accurate and often faster than available good alternatives. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
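The "underlying method" the abstract refers to is a truncated Taylor series combined with scaling and squaring, exp(A) = (exp(A / 2^s))^(2^s). The sketch below fixes s and m by hand; the paper's contribution, the on-the-fly backward error estimate that *chooses* s and m, is not reproduced.

```python
import numpy as np

# Hedged sketch of Taylor scaling-and-squaring with hand-picked (s, m),
# NOT the paper's adaptive parameter selection.

def expm_taylor(A, m=12, s=4):
    B = A / (2 ** s)                   # scale so the Taylor series converges fast
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, m + 1):          # degree-m truncated Taylor series of exp(B)
        term = term @ B / k
        E = E + term
    for _ in range(s):                 # undo the scaling by repeated squaring
        E = E @ E
    return E

A = np.diag([1.0, 2.0])
E = expm_taylor(A)                     # should be close to diag(e, e^2)
```

Overscaling (s larger than needed) costs extra squarings and can amplify rounding error, which is why the paper's choice of s matters.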
26. STAR: A Segmentation-Based Approximation of Point-Based Sampling Milano Retinex for Color Image Enhancement.
- Author
-
Lecca, Michela
- Subjects
- *
IMAGE segmentation , *COLOR image processing , *PIXELS , *APPROXIMATION theory , *ALGORITHMS - Abstract
Milano Retinex is a family of spatial color algorithms inspired by Retinex and mainly devoted to image enhancement. In the so-called point-based sampling Milano Retinex algorithms, this task is accomplished by processing the color of each image pixel based on a set of colors sampled in its surround. This paper presents STAR, a segmentation-based approximation of the point-based sampling Milano Retinex approaches: it replaces the pixel-wise image sampling by a novel, computationally efficient procedure that detects once and for all the color and spatial information relevant to image enhancement from clusters of pixels output by a segmentation. The experiments reported here show that STAR performs similarly to previous point-based sampling Milano Retinex approaches and that STAR enhancement improves the accuracy of the well-known scale-invariant feature transform algorithm on the description and matching of photographs captured under difficult light conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
27. MATI: An efficient algorithm for influence maximization in social networks.
- Author
-
Rossi, Maria-Evgenia G., Shi, Bowen, Tziortziotis, Nikolaos, Malliaros, Fragkiskos D., Giatsidis, Christos, and Vazirgiannis, Michalis
- Subjects
- *
EPIDEMICS , *SOCIAL networks , *SOCIAL movements , *APPROXIMATION theory , *VIRAL marketing - Abstract
Influence maximization has attracted a lot of attention due to its numerous applications, including the diffusion of social movements, the spread of news, viral marketing and the outbreak of diseases. The objective is to discover a group of users that are able to maximize the spread of influence across a network. The greedy algorithm gives a solution to the influence maximization problem with a good approximation ratio. Nevertheless, it does not scale well to large-scale datasets. In this paper, we propose Matrix Influence (MATI), an efficient algorithm that can be used under both the Linear Threshold and Independent Cascade diffusion models. MATI is based on the precalculation of the influence by taking advantage of the simple paths in each node’s neighborhood. An extensive empirical analysis has been performed on multiple real-world datasets showing that MATI has competitive performance when compared to other well-known algorithms with regard to running time and expected influence spread. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
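The greedy baseline that the abstract says "does not scale" can be sketched in a few lines: repeatedly add the node whose Monte Carlo estimated marginal spread is largest under the Independent Cascade model. The graph, probabilities, and simulation counts below are invented; MATI's path-based precomputation is not reproduced here.

```python
import random

# Hedged sketch: Monte Carlo greedy seed selection under Independent Cascade.
# This is the classical baseline, NOT the MATI algorithm itself.

def simulate_ic(graph, seeds, p, rng):
    """One IC cascade: each newly activated node tries each out-edge once."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy(graph, k, p=0.3, runs=300, seed=1):
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    chosen = []
    for _ in range(k):
        best = max((n for n in nodes if n not in chosen),
                   key=lambda n: sum(simulate_ic(graph, chosen + [n], p, rng)
                                     for _ in range(runs)))
        chosen.append(best)
    return chosen

star = {0: [1, 2, 3, 4, 5]}   # hub 0 points at five leaves
picked = greedy(star, 1)      # the hub has the largest expected spread
```

The cost is the problem: every candidate evaluation needs hundreds of cascade simulations, which is exactly the expense MATI's precalculated path influences avoid.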
28. Predicting Model and Algorithm in RNA Folding Structure Including Pseudoknots.
- Author
-
Liu, Zhendong, Zhu, Daming, and Dai, Qionghai
- Subjects
- *
RNA folding , *ALGORITHMS , *ARBITRARY constants , *CHANGE , *APPROXIMATION theory - Abstract
The prediction of RNA structure with pseudoknots is a nondeterministic polynomial-time hard (NP-hard) problem. Based on minimum free energy models and computational methods, we investigate the RNA-pseudoknotted structure. This paper presents an efficient algorithm for predicting RNA structure with pseudoknots that takes O(n³) time and O(n²) space. Experimental tests on Rfam10.1 and PseudoBase indicate that the algorithm is more effective and precise: its prediction accuracy, time complexity and space complexity outperform those of existing algorithms, such as the Maximum Weight Matching (MWM), PKNOTS and Inner Limiting Layer (ILM) algorithms, and the algorithm can predict arbitrary pseudoknots. Moreover, there exists a 1+ε (ε>0) polynomial-time approximation scheme for searching the maximum number of stackings, and we give a proof of this approximation scheme for the RNA-pseudoknotted structure. We also consider several types of pseudoknots in the RNA folding structure and analyze the possible transitions between them. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
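For readers unfamiliar with the complexity figures above: the classical pseudoknot-free baseline with the same O(n³) time / O(n²) space profile is the Nussinov dynamic program, sketched below with base-pair counting instead of free energy. Handling pseudoknots within those bounds, as the paper claims, is the hard part and is not attempted here.

```python
# Hedged sketch: Nussinov-style O(n^3)-time / O(n^2)-space DP maximizing the
# number of nested (pseudoknot-FREE) base pairs. Energy models and pseudoknot
# handling from the paper are omitted.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}

def max_pairs(seq):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]          # dp[i][j]: best for seq[i..j]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]               # base i left unpaired
            for k in range(i + 1, j + 1):     # base i paired with base k
                if (seq[i], seq[k]) in PAIRS:
                    left = dp[i + 1][k - 1] if k - 1 > i else 0
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

A pseudoknot is precisely a set of crossing pairs that this nested recursion cannot represent, which is why naive extensions of it blow up in complexity.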
29. A real-time interpolation strategy for transition tool path with C2 and G2 continuity.
- Author
-
Wang, Hui, Wu, Jianhua, Liu, Chao, and Xiong, Zhenhua
- Subjects
- *
INTERPOLATION , *ALGORITHMS , *MATHEMATICAL optimization , *APPROXIMATION theory , *NUMERICAL analysis - Abstract
A typical interpolation strategy for line segments consists of a transition scheme, a look-ahead ACC/DEC scheduling, and an interpolation algorithm. Of these three parts, the main computation occurs in the first two. Some research work has been carried out to decrease this computation in the previous literature, but these methods occupy a lot of computing resources for the optimization process during the calculation of transition curve parameters and feed rates. Consequently, the computational efficiency of the interpolation strategy is greatly reduced. To deal with this issue, a real-time interpolation strategy is proposed in this paper. In the transition scheme, a Bézier curve is utilized to smooth the line segments. Based on the relationship among the approximation error, the approximation radius, and the transition curve, the curve can be directly generated when the approximation error is given. In the ACC/DEC scheduling, a 3-segment feed rate profile with jerk continuity is constructed. Meanwhile, a look-ahead planning based on the Backward Scanning and Forward Revision (BSFR) algorithm is utilized to eliminate redundant computation. Compared with Zhao’s and Shi’s strategies, the proposed strategy has the merits of C2 and G2 continuity for the tool path, jerk continuity for the tool movement, and distinguished real-time performance for interpolation. The experiments of a 3D pentagram and a 2D butterfly are carried out with different strategies, and their results demonstrate that the interpolation efficiency can be greatly improved with the proposed strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
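A small illustration of the building block named in the abstract: evaluating a cubic Bézier transition curve by de Casteljau's algorithm, the numerically stable way such transition splines are sampled. The control points here are made up; the paper's error-driven construction of the transition curve is not shown.

```python
# Hedged sketch: de Casteljau evaluation of a cubic Bezier transition curve.
# Control points are invented, not derived from an approximation-error bound.

def de_casteljau(points, t):
    """Evaluate a Bezier curve with the given control points at parameter t."""
    pts = list(points)
    while len(pts) > 1:                 # repeated linear interpolation
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

ctrl = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0), (3.0, 1.0)]
start = de_casteljau(ctrl, 0.0)         # equals the first control point
mid = de_casteljau(ctrl, 0.5)
end = de_casteljau(ctrl, 1.0)           # equals the last control point
```

The endpoint interpolation and tangent directions at t = 0 and t = 1 are what let a cubic Bézier blend two line segments with G2/C2 continuity when its control points are chosen appropriately.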
30. Efficient Low-Complexity Phase Noise Resistant Iterative Joint Phase Estimation and Decoding Algorithm.
- Author
-
Kreimer, Andrey and Raphaeli, Dan
- Subjects
- *
APPROXIMATION theory , *LOW density parity check codes , *ALGORITHMS , *GAUSSIAN function , *ITERATIVE methods (Mathematics) - Abstract
In this paper, we present a new low-complexity iterative joint phase estimation and decoding algorithm which can be applied to low-density-parity-check (or turbo) codes for channels affected by strong phase noise. The algorithm exhibits a very good performance and a very low complexity even in strong phase noise, high code rate, and high-order constellations. The proposed algorithm works by first obtaining a preliminary pilots-aided phase estimation. Next, the expressions for the belief propagation messages are approximated around that reference and reduced to a Gaussian canonical form. This results in a simple closed form of a recursive update. Simulations results for phase-shift-keying modulation of size 8 and quadrature-amplitude modulation of size 64 are presented and compared with other algorithms in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
31. Approximate sparse spectral clustering based on local information maintenance for hyperspectral image classification.
- Author
-
Yan, Qing, Ding, Yun, Zhang, Jing-Jing, Xun, Li-Na, and Zheng, Chun-Hou
- Subjects
- *
HYPERSPECTRAL imaging systems , *COMPUTATIONAL complexity , *CLUSTER analysis (Statistics) , *APPROXIMATION theory , *APPLIED mathematics - Abstract
Sparse spectral clustering (SSC) has become one of the most popular clustering approaches in recent years. However, its high computational complexity prevents its application to large-scale datasets such as hyperspectral images (HSIs). In this paper, we propose two efficient approximate sparse spectral clustering methods for HSI clustering in which clustering performance is improved by utilizing local information among the data. Firstly, we construct a smaller representative dataset on which sparse spectral clustering is performed. Then the labels of the ground objects are extended to the whole dataset based on local information, according to two extension strategies. In the first, local interpolation is utilized to improve the extension of the clustering result. In the second, label extension is cast as a subspace embedding problem and solved by locally linear embedding (LLE). Several experiments on HSIs demonstrate that the proposed algorithms are effective for HSI clustering. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
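The overall recipe above (subsample, cluster the representatives, extend labels) can be sketched on toy data. Plain 2-means stands in for sparse spectral clustering and nearest-representative lookup stands in for the interpolation/LLE extension; the data, subsampling rule, and seeding are all invented for illustration.

```python
import numpy as np

# Hedged sketch of: representative subset -> cluster representatives ->
# extend labels to all points. NOT the paper's SSC or LLE machinery.

rng = np.random.default_rng(0)
blob_a = rng.normal(0.0, 0.3, size=(100, 2))      # cluster near (0, 0)
blob_b = rng.normal(5.0, 0.3, size=(100, 2))      # cluster near (5, 5)
X = np.vstack([blob_a, blob_b])

reps = X[::10]                                    # small representative subset

# 2-means on the representatives, seeded with the two farthest ones
d = np.linalg.norm(reps[:, None] - reps[None, :], axis=2)
i, j = np.unravel_index(np.argmax(d), d.shape)
cent = reps[[i, j]].copy()
for _ in range(20):
    lab = np.argmin(np.linalg.norm(reps[:, None] - cent[None, :], axis=2), axis=1)
    cent = np.array([reps[lab == c].mean(axis=0) for c in (0, 1)])

# extend the representative labels to the full dataset (nearest representative)
full = lab[np.argmin(np.linalg.norm(X[:, None] - reps[None, :], axis=2), axis=1)]
```

The point of the two-stage design is that the expensive step runs only on the representatives; extension to the remaining points is cheap and exploits locality, which is what the paper's two strategies refine.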
32. Solving global optimization problems using reformulations and signomial transformations.
- Author
-
Lundell, A. and Westerlund, T.
- Subjects
- *
MATHEMATICAL functions , *MIXED integer linear programming , *LINEAR programming , *ALGORITHMS , *APPROXIMATION theory - Abstract
Highlights • A framework for convexifying MINLP problems containing signomial functions and twice-differentiable functions is presented. • Lifting transformation schemes based on power, exponential and αBB-type reformulations are used. • The transformations used in the framework are obtained by solving an optimization problem in a preprocessing step. • The reformulation technique is included in a global algorithm that can solve any twice-differentiable MINLP problem. In this paper, a framework for reformulating nonconvex mixed-integer nonlinear programming (MINLP) problems containing twice-differentiable (C²) functions to convex relaxed form is discussed. To provide flexibility and to utilize more effective transformation strategies, the twice-differentiable functions can be partitioned into convex, signomial and general nonconvex functions. The latter two can then be convexified using lifting transformations in combination with approximations using piecewise linear functions (PLFs). However, since there are many degrees of freedom in how to select the set of transformations, an optimization-based method is proposed for finding an optimal set. The lifting transformations are based on single-variable power and exponential transformations for signomials. For nonconvex C²-functions, the α-reformulation (αR) technique, as well as, more generally, the method of difference of convex functions, can be applied. In the αR, the αBB convex underestimator can be used. The framework is utilized in the α-signomial global optimization (αSGO) algorithm to find the ϵ-global solution to a nonconvex problem by iteratively updating the approximations provided by the PLFs. The framework can also be used to directly obtain a convex relaxation of any nonconvex MINLP problem of the specified type to a determined accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
33. Provably Correct Algorithms for Matrix Column Subset Selection with Selectively Sampled Data.
- Author
-
Yining Wang and Singh, Aarti
- Subjects
- *
SELECTION theorems , *STATISTICAL sampling , *APPROXIMATION theory , *ALGORITHMS , *POPULATION genetics , *ELECTRONIC circuits - Abstract
We consider the problem of matrix column subset selection, which selects a subset of columns from an input matrix such that the input can be well approximated by the span of the selected columns. Column subset selection has been applied to numerous real-world data applications such as population genetics summarization, electronic circuits testing and recommendation systems. In many applications the complete data matrix is unavailable and one needs to select representative columns by inspecting only a small portion of the input matrix. In this paper we propose the first provably correct column subset selection algorithms for partially observed data matrices. Our proposed algorithms exhibit different merits and limitations in terms of statistical accuracy, computational efficiency, sample complexity and sampling schemes, which provides a nice exploration of the tradeoff between these desired properties for column subset selection. The proposed methods employ the idea of feedback driven sampling and are inspired by several sampling schemes previously introduced for low-rank matrix approximation tasks (Drineas et al., 2008; Frieze et al., 2004; Deshpande and Vempala, 2006; Krishnamurthy and Singh, 2014). Our analysis shows that, under the assumption that the input data matrix has incoherent rows but possibly coherent columns, all algorithms provably converge to the best low-rank approximation of the original data as number of selected columns increases. Furthermore, two of the proposed algorithms enjoy a relative error bound, which is preferred for column subset selection and matrix approximation purposes. We also demonstrate through both theoretical and empirical analysis the power of feedback driven sampling compared to uniform random sampling on input matrices with highly correlated columns. [ABSTRACT FROM AUTHOR]
- Published
- 2018
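The selection objective described above, picking columns whose span approximates the whole matrix, can be illustrated with a fully observed greedy variant: repeatedly take the column with the largest residual norm and deflate. This is a simplified stand-in; the paper's contribution, doing this with *selectively sampled* (partially observed) entries and provable guarantees, is not reproduced.

```python
import numpy as np

# Hedged sketch: greedy residual-norm column subset selection with FULL
# observation of the matrix, not the paper's partially observed setting.

def select_columns(A, k):
    """Pick k column indices greedily by largest residual norm."""
    R = A.astype(float).copy()
    chosen = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        chosen.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(q, q @ R)          # deflate the span of column j
    return chosen

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))  # rank 2

cols = select_columns(A, 2)
C = A[:, cols]
P = C @ np.linalg.lstsq(C, A, rcond=None)[0]   # project A onto span(C)
err = np.linalg.norm(A - P)                    # ~0 for an exactly rank-2 A
```

The feedback-driven sampling schemes in the paper approximate exactly this residual signal while inspecting only a small portion of the entries.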
34. Point Set Denoising Using Bootstrap-Based Radial Basis Function.
- Author
-
Liew, Khang Jie, Ramli, Ahmad, and Abd. Majid, Ahmad
- Subjects
- *
SIGNAL denoising , *STATISTICAL bootstrapping , *RADIAL basis functions , *THREE-dimensional imaging , *SCANNING systems , *APPROXIMATION theory - Abstract
This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
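The surface model named in the entry above, thin-plate spline fitting with a smoothing parameter, can be written down compactly in 2D. The bootstrap test error search that the paper uses to *choose* the smoothing parameter is omitted; the data and the fixed lam below are invented for illustration.

```python
import numpy as np

# Hedged sketch: 2D thin-plate spline fit with smoothing parameter lam.
# The paper's bootstrap-based selection of lam and its k-NN projection
# step for point sets are not reproduced.

def tps_kernel(r):
    """phi(r) = r^2 log r, with the removable singularity phi(0) = 0."""
    out = np.zeros_like(r)
    mask = r > 0
    out[mask] = r[mask] ** 2 * np.log(r[mask])
    return out

def tps_fit(xy, z, lam=0.0):
    n = len(xy)
    K = tps_kernel(np.linalg.norm(xy[:, None] - xy[None, :], axis=2))
    P = np.hstack([np.ones((n, 1)), xy])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + lam * np.eye(n)   # lam > 0 trades fidelity for smoothness
    A[:n, n:] = P
    A[n:, :n] = P.T
    sol = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    return sol[:n], sol[n:]           # RBF weights w, affine part a

def tps_eval(xy, w, a, q):
    phi = tps_kernel(np.linalg.norm(xy - q, axis=1))
    return a[0] + a[1] * q[0] + a[2] * q[1] + phi @ w

xy = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.2]])
z = xy[:, 0] + 2.0 * xy[:, 1]         # noiseless samples of the plane x + 2y
w, a = tps_fit(xy, z)
pred = tps_eval(xy, w, a, np.array([0.25, 0.5]))
```

Planar data lies in the null space of the bending energy, so the fit recovers the plane with (numerically) zero RBF weights; with noisy scans, increasing lam is what produces the denoising behaviour the paper tunes.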
35. On computing the 2-vertex-connected components of directed graphs.
- Author
-
Jaberi, Raed
- Subjects
- *
GEOMETRIC vertices , *MATHEMATICAL connectedness , *DIRECTED graphs , *ALGORITHMS , *APPROXIMATION theory , *GENERALIZATION , *SPANNING trees - Abstract
In this paper we consider the problem of computing the 2-vertex-connected components of directed graphs. We present a new algorithm for solving this problem in O(nm) time. Furthermore, we show that the old algorithm of Erusalimskii and Svetlov runs in O(nm²) time. We also investigate the relationship between 2-vertex-connected components and dominator trees, and consider three applications of our new algorithm: approximation algorithms for problems that generalize the problem of approximating the smallest 2-vertex-connected spanning subgraph of a 2-vertex-connected directed graph. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
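As background for the connectivity notion above: in the *undirected* setting, 2-vertex-connectivity is characterized by the absence of articulation points, found in linear time by Tarjan's DFS. The directed case the paper studies is substantially harder; this sketch shows only the undirected analogue.

```python
# Hedged sketch: Tarjan's O(n + m) DFS for articulation points in an
# UNDIRECTED graph -- the classical analogue of the 2-vertex-connectivity
# questions the paper studies for directed graphs.

def articulation_points(graph):
    """graph: dict node -> list of neighbours (undirected adjacency)."""
    disc, low, cut = {}, {}, set()
    counter = [0]

    def dfs(u, parent):
        disc[u] = low[u] = counter[0]
        counter[0] += 1
        children = 0
        for v in graph.get(u, []):
            if v not in disc:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # no back edge from v's subtree climbs above u => u is a cut vertex
                if parent is not None and low[v] >= disc[u]:
                    cut.add(u)
            elif v != parent:
                low[u] = min(low[u], disc[v])
        if parent is None and children > 1:   # root rule
            cut.add(u)

    for node in graph:
        if node not in disc:
            dfs(node, None)
    return cut
```

On a path 0-1-2 the middle vertex is the unique articulation point; on a triangle there are none. For directed graphs, the analogous role is played by strong articulation points and dominator trees, which is the relationship the paper investigates.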
36. An efficient algorithm to generate random sphere packs in arbitrary domains.
- Author
-
Lozano, Elias, Roehl, Deane, Celes, Waldemar, and Gattass, Marcelo
- Subjects
- *
PARTICLE size distribution , *ALGORITHMS , *APPROXIMATION theory , *SIMULATION methods & models , *STATISTICAL physics , *NUMERICAL analysis - Abstract
Particle-based methods based on material models using spheres can provide good approximations for many physical phenomena at both the micro and macroscales. The point of departure for the simulations, in general, is a dense arrangement of spherical particles (sphere pack) inside a given container. For generic domains, the generation of a sphere pack may be complex and time-consuming, especially if the pack must comply with a prescribed sphere size distribution and the stability requirements of the simulation. The primary goal of this paper is to present an efficient algorithm that is capable of producing packs with millions of spheres following a statistical sphere size distribution inside complex arbitrary domains. This algorithm uses a new strategy to ensure that the sphere size distribution is preserved even when large particles are rejected in the growing process. The paper also presents numerical results that enable an evaluation of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
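To make the constraints in the entry above concrete, here is the naive rejection ("dart throwing") baseline: sample a center and radius, accept only if the sphere fits in the container and overlaps nothing. The two-value size distribution and all parameters are invented; the paper's algorithm is far more sophisticated and, unlike this sketch, preserves the target size distribution under rejection.

```python
import random

# Hedged sketch: naive dart-throwing sphere packing in the unit cube.
# Only illustrates the containment and non-overlap constraints, not the
# paper's efficient distribution-preserving generation.

def pack(n_target, radii=(0.06, 0.1), tries=20000, seed=0):
    rng = random.Random(seed)
    spheres = []                                        # (x, y, z, r)
    for _ in range(tries):
        if len(spheres) == n_target:
            break
        r = rng.choice(radii)
        c = [rng.uniform(r, 1.0 - r) for _ in range(3)]  # stay inside the cube
        ok = all(sum((a - b) ** 2 for a, b in zip(c, s[:3])) >= (r + s[3]) ** 2
                 for s in spheres)                       # no overlap with any sphere
        if ok:
            spheres.append((*c, r))
    return spheres

spheres = pack(30)
```

Rejection sampling degrades badly as the pack densifies, and large radii get rejected disproportionately often, skewing the realized size distribution; both effects motivate the paper's approach.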
37. Efficient evaluation of subdivision schemes with polynomial reproduction property.
- Author
-
Deng, Chongyang and Ma, Weiyin
- Subjects
- *
SUBDIVISION surfaces (Geometry) , *POLYNOMIALS , *INTERPOLATION , *APPROXIMATION theory , *MATHEMATICAL bounds , *ALGORITHMS - Abstract
In this paper we present an efficient framework for the evaluation of subdivision schemes with the polynomial reproduction property. For all rational parameters of interest between 0 and 1 with the same denominator, the exact limit positions on the subdivision curve can be obtained by solving a system of linear equations. When the framework is applied to the binary and ternary 4-point interpolatory subdivision schemes, we find that the corresponding coefficient matrices are strictly diagonally dominant, so the evaluation processes are robust. For any individual irrational parameter between 0 and 1, its approximate value is computed by a recursive algorithm which can attain an arbitrary error bound. For surface schemes generalizing univariate subdivision schemes with the polynomial reproduction property, exact evaluation methods can also be derived by combining Stam’s method with that of this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
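The polynomial reproduction property the entry relies on is easy to demonstrate for the binary 4-point interpolatory (Dubuc-Deslauriers) scheme: one refinement step applied to samples of any cubic returns the cubic's exact values at the half-integers. The evaluation framework of the paper itself (solving linear systems for limit positions) is not shown.

```python
# Hedged illustration: one refinement step of the classical binary 4-point
# interpolatory scheme, whose mask (-1, 9, 9, -1)/16 reproduces cubics exactly.

def refine(values):
    """Given samples at integer positions, return the scheme's new values
    at the interior half-integer positions."""
    return [(-values[i - 1] + 9 * values[i] + 9 * values[i + 1] - values[i + 2]) / 16
            for i in range(1, len(values) - 2)]

f = lambda x: x**3 - 2 * x            # an arbitrary cubic polynomial
samples = [f(x) for x in range(6)]    # samples at x = 0..5
mids = refine(samples)                # scheme's values at x = 1.5, 2.5, 3.5
exact = [f(x + 0.5) for x in range(1, 4)]
```

Because each new value is a fixed linear combination of old ones, repeated refinement is a linear map, which is why exact limit positions can be read off from a linear system as the paper describes.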
38. Iterative Approximation of Basic Belief Assignment Based on Distance of Evidence.
- Author
-
Yang, Yi and Liu, Yuanli
- Subjects
- *
ITERATIVE methods (Mathematics) , *APPROXIMATION theory , *COST analysis , *COMPUTATIONAL complexity , *MATHEMATICAL statistics , *SIMULATION methods & models - Abstract
In the theory of belief functions, the approximation of a basic belief assignment (BBA) is used to reduce the high computational cost, especially when a large number of focal elements is available. In traditional BBA approximation approaches, a focal element’s own characteristics, such as its mass assignment and cardinality, are usually used separately or jointly as criteria for the removal of focal elements. Besides the computational cost, the distance between the original BBA and the approximated one is also of concern, since it represents the loss of information in BBA approximation. In this paper, an iterative approximation approach is proposed based on maximizing the closeness, i.e., minimizing the distance between the approximated BBA in the current iteration and the BBA obtained in the previous iteration, where one focal element is removed in each iteration. The iteration stops when the desired number of focal elements is reached. Performance evaluation approaches for BBA approximations, including the traditional time-based and closeness-based measures as well as newly proposed ones, are also discussed and used to compare traditional BBA approximations with the one proposed in this paper. Experimental results and related analyses are provided to show the rationality and efficiency of the proposed BBA approximation. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
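One of the "traditional" mass-based baselines the entry contrasts itself with can be sketched directly: repeatedly delete the smallest-mass focal element and renormalise until the desired count is reached. The paper's method instead chooses each removal to minimize a distance of evidence; that criterion is not implemented here, and the example BBA is invented.

```python
# Hedged sketch: simple mass-based BBA approximation (smallest mass removed
# first, then renormalise) -- a traditional baseline, NOT the paper's
# distance-driven iteration.

def approximate_bba(bba, k):
    """Reduce a BBA (dict: frozenset focal element -> mass) to k focal elements."""
    bba = dict(bba)
    while len(bba) > k:
        smallest = min(bba, key=bba.get)                 # least-mass focal element
        del bba[smallest]
        total = sum(bba.values())
        bba = {fe: m / total for fe, m in bba.items()}   # renormalise to sum 1
    return bba

bba = {frozenset("a"): 0.4, frozenset("b"): 0.3,
       frozenset("ab"): 0.2, frozenset("abc"): 0.1}
small = approximate_bba(bba, 2)
```

Mass-only criteria ignore how much each removal distorts the belief structure, which is exactly the information loss the paper's closeness-maximizing iteration is designed to control.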
39. VEGAS: Visual influEnce GrAph Summarization on Citation Networks.
- Author
-
Shi, Lei, Tong, Hanghang, Tang, Jie, and Lin, Chuang
- Subjects
- *
CITATION networks , *DATA mining , *ALGORITHMS , *APPROXIMATION theory , *DATA analysis - Abstract
Visually analyzing citation networks poses challenges to many fields of the data mining research. How can we summarize a large citation graph according to the user’s interest? In particular, how can we illustrate the impact of a highly influential paper through the summarization? Can we maintain the sensory node-link graph structure while revealing the flow-based influence patterns and preserving a fine readability? The state-of-the-art influence maximization algorithms can detect the most influential node in a citation network, but fail to summarize a graph structure to account for its influence. On the other hand, existing graph summarization methods fold large graphs into clustered views, but cannot reveal the hidden influence patterns underneath the citation network. In this paper, we first formally define the Influence Graph Summarization (IGS) problem on citation networks. Second, we propose a matrix decomposition based algorithm pipeline to solve the IGS problem. Our method can not only highlight the flow-based influence patterns, but also easily extend to support rich attribute information. A prototype system called VEGAS implementing this pipeline is also developed. Third, we present a theoretical analysis of our main algorithm, which is equivalent to kernel k-means clustering. It can be proved that the matrix decomposition based algorithm can approximate the objective of the proposed IGS problem. Last, we conduct comprehensive experiments with real-world citation networks to compare the proposed algorithm with classical graph summarization methods. Evaluation results demonstrate that our method significantly outperforms the previous ones in optimizing both the quantitative IGS objective and the quality of the visual summarizations. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
40. Separation Of Non-Periodic And Periodic 2D Profile Features Using B-Spline Functions.
- Author
-
Janecki, Dariusz, Cedro, Leszek, and Zwierzchowski, Jarosław
- Subjects
- *
DIGITAL filters (Mathematics) , *ALGORITHMS , *FOURIER transforms , *APPROXIMATION theory , *ROUNDNESS measurement - Abstract
The form, waviness and roughness components of a measured profile are separated by means of digital filters. The aim of the analysis was to develop an algorithm for one-dimensional filtering of profiles using approximation by means of B-splines. The theory of B-spline functions introduced by Schoenberg and extended by Unser et al. was used. Unlike the spline filter proposed by Krystek, which is described in ISO standards, the algorithm does not take into account the bending energy of the filtered profile in the functional whose minimization is the principle of the filter. Appropriate smoothness of the filtered profile is instead achieved by selecting an appropriate distance between the nodes of the spline function. In this paper, we determine the Fourier transforms of the filter impulse response at different impulse positions with respect to the nodes. We show that the filter cutoff length is equal to half the node-to-node distance. The inclination of the filter frequency characteristic in the transition band can be adjusted by selecting an appropriate degree of the B-spline function. The paper includes examples of the separation of 2D roughness, as well as the separation of form and waviness of roundness profiles. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
41. Detecting approximate clones in business process model repositories.
- Author
-
La Rosa, Marcello, Dumas, Marlon, Ekanayake, Chathura C., García-Bañuelos, Luciano, Recker, Jan, and ter Hofstede, Arthur H.M.
- Subjects
- *
BUSINESS process management , *APPROXIMATION theory , *EMPIRICAL research , *ALGORITHMS , *STANDARDIZATION , *PROBLEM solving - Abstract
Empirical evidence shows that repositories of business process models used in industrial practice contain significant amounts of duplication. This duplication arises for example when the repository covers multiple variants of the same processes or due to copy-pasting. Previous work has addressed the problem of efficiently retrieving exact clones that can be refactored into shared subprocess models. This paper studies the broader problem of approximate clone detection in process models. The paper proposes techniques for detecting clusters of approximate clones based on two well-known clustering algorithms: DBSCAN and Hierarchical Agglomerative Clustering (HAC). The paper also defines a measure of standardizability of an approximate clone cluster, meaning the potential benefit of replacing the approximate clones with a single standardized subprocess. Experiments show that both techniques, in conjunction with the proposed standardizability measure, accurately retrieve clusters of approximate clones that originate from copy-pasting followed by independent modifications to the copied fragments. Additional experiments show that both techniques produce clusters that match those produced by human subjects and that are perceived to be standardizable. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
42. Gaussian Process Surrogate Model for Levering Similar Trends Across Concepts.
- Author
-
Eugenio, José, del Río, Valenzuela, and Mavris, Dimitrí
- Subjects
- *
GAUSSIAN processes , *APPROXIMATION theory , *ENGINES , *MATHEMATICAL optimization , *ALGORITHMS - Abstract
The computational budget for designing and optimizing engineering systems has been tightened by the growth in the number of technology alternatives to satisfy the increasingly demanding goals in modern systems and the use of sophisticated, but also computationally burdensome, simulation tools to quantitatively choose the best technology alternative more accurately. This paper develops a new Gaussian process metamodel, which leverages trends in engineering objectives that are similar across derivative concepts, as opposed to current approximation methods that handle concepts independently. This metamodel is based on a multifidelity approach that enhances a new concept's prediction with observations of a previous concept with similar trends. The influence of the sizes of the new and old concept training sets on the performance of the proposed metamodel is explored in this paper. The proposed multifidelity metamodel is tested against traditional independent surrogates in approximating the engine shaft horsepower of a UH-60A modification with a Fenestron tail, where observations from the UH-60A with a conventional tail are available. The proposed metamodels are up to two times as accurate as independent concept modeling for small new concept training sets; then, a multiobjective efficient global optimization algorithm is applied to the proposed metamodel to search for minimal UH-60A power consumption in hover and cruise. This results in Pareto-optimal solutions that are 10% more efficient than the UH-60A baseline. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
43. A simulation algorithm with uncertain random variables.
- Author
-
Dalman, Hasan
- Subjects
- *
RANDOM variables , *SIMULATION methods & models , *APPROXIMATION theory , *ALGORITHMS , *UNCERTAINTY - Abstract
In many situations, uncertainty and randomness occur concurrently in a system. Thus, this paper presents a new concept of an uncertain random variable. A simulation algorithm based on uncertain random variables is also presented to approximate the chance distribution using the pessimistic value and the optimistic value. An example is given to illustrate how to use the presented simulation algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
44. A Dynamic Multistage Stochastic Unit Commitment Formulation for Intraday Markets.
- Author
-
Analui, Bita and Scaglione, Anna
- Subjects
- *
STOCHASTIC analysis , *MATHEMATICAL variables , *APPROXIMATION theory , *DECISION trees , *ALGORITHMS - Abstract
As net-load becomes less predictable, there is increasing pressure to change decision models for power markets so that they account explicitly for future scenarios when making commitment decisions. This paper proposes making commitment decisions with multiple gate closures. Our proposed model also leverages a state-space formulation for the commitment variables, through which the operational constraints of the generation units participating in the market are respected. We also study the problem of constructing scenario tree approximations of stochastic processes and evaluate our algorithms on scenario tree libraries derived from real net-load data. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
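A scenario tree with explicit branch probabilities, the central object in such formulations, can be sketched as a small recursive structure. The names and the linear cost model below are hypothetical, not the paper's formulation:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    net_load: float        # net-load realisation at this tree node
    prob: float            # branch probability conditional on the parent
    children: list = field(default_factory=list)

def expected_cost(node, price=50.0):
    """Probability-weighted cost accumulated stage by stage (linear cost)."""
    stage = price * node.net_load
    return stage + sum(c.prob * expected_cost(c, price) for c in node.children)

# Two-stage tree: a certain first stage, then a high/low net-load branch.
root = Node(1.0, 1.0, [Node(1.4, 0.3), Node(0.9, 0.7)])
cost = expected_cost(root)   # 50*1.0 + 0.3*50*1.4 + 0.7*50*0.9 = 102.5
```

In an intraday setting with multiple gate closures, the commitment decision at each node would additionally be constrained by the unit's state at the parent node, which is what the state-space formulation captures.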
45. Enhanced linear reformulation for engineering optimization models with discrete and bounded continuous variables.
- Author
-
An, Qi, Fang, Shu-Cherng, Li, Han-Lin, and Nie, Tiantian
- Subjects
- *
MATHEMATICAL optimization , *COMPUTATIONAL fluid dynamics , *APPROXIMATION theory , *NUMERICAL analysis , *ALGORITHMS - Abstract
In this paper, we significantly extend the applicability of the state-of-the-art ELDP (equations for linearizing discrete product terms) method by providing a new linearization that handles more complicated nonlinear terms involving both discrete and bounded continuous variables. A general class of “representable programming problems” is formally proposed for a much wider range of engineering applications. Moreover, by exploiting the logarithmic feature embedded in the discrete structure, we present an enhanced linear reformulation model that requires half an order of magnitude fewer equations than the original ELDP. Computational experiments on various engineering design problems support the superior computational efficiency of the proposed linearization in solving engineering optimization problems with discrete and bounded continuous variables. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
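The logarithmic feature mentioned in the abstract can be illustrated independently of the ELDP equations: a discrete variable with k admissible values can be driven by only ⌈log₂ k⌉ binary variables instead of k one-hot binaries. A minimal sketch of that encoding (hypothetical helper names, not the paper's formulation):

```python
import math

def encode(index, k):
    """Binary-variable values selecting choice `index` out of k values."""
    m = max(1, math.ceil(math.log2(k)))   # ceil(log2 k) binaries, not k
    return [(index >> j) & 1 for j in range(m)]

def decode(bits, values):
    """Discrete value selected by the binary variables."""
    return values[sum(b << j for j, b in enumerate(bits))]

values = [0.5, 1.0, 2.5, 4.0, 6.5]   # k = 5 admissible values
bits = encode(3, len(values))        # 3 binary variables instead of 5
```

In a linear reformulation, products of these few binaries with a bounded continuous variable are then linearized with auxiliary variables, which is where the equation-count savings come from.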
46. Analytical best approximate Hermitian and generalized skew-Hamiltonian solution of matrix equation A X A^H + C Y C^H = F.
- Author
-
Xu, Wei-Ru, Chen, Guo-Liang, and Sheng, Xing-Ping
- Subjects
- *
APPROXIMATION theory , *HAMILTON'S equations , *MATRICES (Mathematics) , *DIFFERENTIAL calculus , *SINGULAR value decomposition , *ALGORITHMS - Abstract
In this paper, we consider the matrix equation A X A^H + C Y C^H = F, where A, C and F are given matrices of appropriate sizes and [X, Y] is an unknown Hermitian and generalized skew-Hamiltonian matrix pair. Based on matrix differential calculus and the projection theorem in inner product spaces, we derive the best approximate solution [X̂, Ŷ] in the set S to a given matrix pair [X*, Y*], where S denotes the least-squares Hermitian and generalized skew-Hamiltonian solution set of the matrix equation A X A^H + C Y C^H = F. The analytical expression of the best approximate solution is obtained by applying the canonical correlation decomposition and the generalized singular value decomposition. Finally, a numerical algorithm and an illustrative example are given. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
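Ignoring the Hermitian and generalized skew-Hamiltonian structure, an unconstrained least-squares version of the equation can be sketched by vectorisation, using vec(A X Aᴴ) = (Ā ⊗ A) vec(X) with column-major vec. This is only a baseline illustration; the paper's analytical solution additionally enforces the structural constraints:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 4, 3, 2
A = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
C = rng.standard_normal((n, q)) + 1j * rng.standard_normal((n, q))
X_true = rng.standard_normal((p, p))
Y_true = rng.standard_normal((q, q))
F = A @ X_true @ A.conj().T + C @ Y_true @ C.conj().T

# vec(A X A^H) = (conj(A) kron A) vec(X), column-major (order="F") vec.
M = np.hstack([np.kron(A.conj(), A), np.kron(C.conj(), C)])
sol = np.linalg.lstsq(M, F.flatten(order="F"), rcond=None)[0]
X = sol[: p * p].reshape((p, p), order="F")
Y = sol[p * p :].reshape((q, q), order="F")
residual = np.linalg.norm(A @ X @ A.conj().T + C @ Y @ C.conj().T - F)
```

The Kronecker system grows as n² × (p² + q²), which is why structured analytical solutions like the one in the paper are preferable to brute-force vectorisation for larger sizes.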
47. A near-optimal subdivision algorithm for complex root isolation based on the Pellet test and Newton iteration.
- Author
-
Becker, Ruben, Sagraloff, Michael, Sharma, Vikram, and Yap, Chee
- Subjects
- *
ITERATIVE methods (Mathematics) , *ALGORITHMS , *QUADTREES , *APPROXIMATION theory , *MATHEMATICAL bounds , *COMPUTATIONAL complexity - Abstract
We describe a subdivision algorithm for isolating the complex roots of a polynomial F ∈ C[x]. Given an oracle that provides approximations of each coefficient of F to any absolute error bound, and given an arbitrary square B in the complex plane containing only simple roots of F, our algorithm returns disjoint isolating disks for the roots of F in B. Our complexity analysis bounds the absolute error to which the coefficients of F must be provided, the total number of iterations, and the overall bit complexity. It further shows that the complexity of our algorithm is controlled by the geometry of the roots in a near neighborhood of the input square B, namely the number of roots, their absolute values, and their pairwise distances. The number of subdivision steps is near-optimal. For the benchmark problem of isolating all roots of a polynomial of degree n with integer coefficients of bit size less than τ, our algorithm needs Õ(n³ + n²τ) bit operations, which is comparable to the record bound of Pan (2002). It is the first time such a bound has been achieved using subdivision methods, independent of divide-and-conquer techniques such as Schönhage's splitting circle technique. Our algorithm uses the quadtree construction of Weyl (1924) with two key ingredients: using Pellet's theorem (1881) combined with Graeffe iteration, we derive a “soft test” to count the number of roots in a disk; using Schröder's modified Newton operator combined with bisection, in a form inspired by the quadratic interval method of Abbot (2006), we achieve quadratic convergence towards root clusters. Relative to divide-and-conquer algorithms, our algorithm is quite simple, with the potential of being practical. The paper is self-contained: we provide pseudo-code for all subroutines used by our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
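Pellet's theorem, the first ingredient named in the abstract, is easy to state in code: if one term |a_k| r^k strictly dominates the sum of all the others, the polynomial has exactly k roots in the disk |x| ≤ r. Below is a minimal (hard, not "soft") version of the test; the paper strengthens it with Graeffe iteration, which is not reproduced here:

```python
def pellet_count(coeffs, r):
    """Pellet's test: k if |a_k| r^k > sum of the other |a_j| r^j, else None."""
    terms = [abs(a) * r**j for j, a in enumerate(coeffs)]
    total = sum(terms)
    for k, t in enumerate(terms):
        if t > total - t:        # one term dominates all the others
            return k             # exactly k roots in the disk |x| <= r
    return None                  # test inconclusive at this radius

# p(x) = (x - 0.1)(x - 0.2)(x - 10) = x^3 - 10.3 x^2 + 3.02 x - 0.2,
# coefficients listed in increasing-degree order.
p = [-0.2, 3.02, -10.3, 1.0]
```

For this polynomial the test certifies 2 roots in the unit disk (0.1 and 0.2), 0 roots in a tiny disk of radius 0.01, and all 3 roots in a disk of radius 20.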
48. Probabilistic MIMO Symbol Detection With Expectation Consistency Approximate Inference.
- Author
-
Cespedes, Javier, Olmos, Pablo M., Sanchez-Fernandez, Matilde, and Perez-Cruz, Fernando
- Subjects
- *
MIMO systems , *ALGORITHMS , *APPROXIMATION theory , *LOW density parity check codes , *ANTENNAS (Electronics) - Abstract
In this paper, we explore low-complexity probabilistic algorithms for soft symbol detection in high-dimensional multiple-input multiple-output (MIMO) systems. We present a novel algorithm based on the expectation consistency (EC) framework, which casts the approximate inference problem as optimization of a nonconvex function; EC generalizes algorithms such as belief propagation and expectation propagation. For the MIMO symbol detection problem, we discuss feasible methods for finding stationary points of the EC function and explore their tradeoffs between accuracy and speed of convergence. We first study accuracy in terms of input–output mutual information and show that the proposed EC MIMO detector greatly improves on state-of-the-art methods, with complexity cubic in the number of transmitting antennas. These gains are then corroborated by combining the probabilistic output of the EC detector with a low-density parity-check channel code. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
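For context, the cubic-complexity linear solve the abstract mentions is of the same kind as the one in a baseline LMMSE detector. The sketch below shows that standard baseline, not the EC algorithm itself; the model sizes, noise level, and seed are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n_tx, n_rx, sigma2 = 4, 6, 1e-4
H = rng.standard_normal((n_rx, n_tx))
s = rng.choice([-1.0, 1.0], size=n_tx)          # BPSK symbols, unit energy
y = H @ s + np.sqrt(sigma2) * rng.standard_normal(n_rx)

# LMMSE estimate (H^T H + sigma2 I)^{-1} H^T y: an O(n_tx^3) solve of the
# same kind as the one performed in each EC moment-matching iteration.
s_hat = np.linalg.solve(H.T @ H + sigma2 * np.eye(n_tx), H.T @ y)
detected = np.sign(s_hat)
```

EC iterates beyond this point, matching the moments of a Gaussian approximation to the discrete symbol prior, which is where its accuracy gains over plain LMMSE come from.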
49. EFFICIENT APPROXIMATION FOR COUNTING OF FORMAL CONCEPTS GENERATED FROM FORMAL CONTEXT.
- Author
-
KOVÀCS, L.
- Subjects
- *
APPROXIMATION theory , *COST functions , *ALGORITHMS , *MATHEMATICAL optimization , *MATHEMATICS - Abstract
The number of formal concepts generated from the input context is an important parameter in the cost functions of concept-formation algorithms. Calculating the concept count for an arbitrary context is a hard, NP-complete problem, and only rough approximation methods can be found in the literature. This paper introduces an efficient numerical approximation algorithm for contexts in which attribute probabilities are independent of the object instances. The preconditions required by the approximation method are usually met in FCA applications, so the proposed method also provides an efficient tool for practical complexity analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
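For very small contexts, the concept count being approximated can still be computed exactly by brute force, since every concept intent arises as the closure of some object subset. A sketch of that exact enumeration, which the paper's numerical approximation replaces for large contexts (helper names are hypothetical):

```python
from itertools import combinations

def intent(objs, ctx, n_attr):
    """Attributes shared by every object in objs (the derivation operator ')."""
    return frozenset(a for a in range(n_attr) if all(ctx[o][a] for o in objs))

def concept_count(ctx):
    n_obj, n_attr = len(ctx), len(ctx[0])
    # Every concept intent is the closure of some object subset, so
    # enumerating all 2^n_obj closures counts the concepts exactly.
    intents = {intent(objs, ctx, n_attr)
               for r in range(n_obj + 1)
               for objs in combinations(range(n_obj), r)}
    return len(intents)

ctx = [[1, 1, 0],   # object 0 has attributes {0, 1}
       [1, 0, 1],   # object 1 has attributes {0, 2}
       [0, 1, 1]]   # object 2 has attributes {1, 2}
count = concept_count(ctx)   # 8 concepts in this 3x3 context
```

The 2^n blow-up in the subset enumeration is exactly why a numerical approximation under independent attribute probabilities is valuable in practice.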
50. Fast and accurate two-field reduced basis approximation for parametrized thermoelasticity problems.
- Author
-
Hoang, Khac Chi, Kim, Tae-Yeon, and Song, Jeong-Hoon
- Subjects
- *
THERMOELASTICITY , *APPROXIMATION theory , *ERRORS , *ALGORITHMS , *PREDICTION theory - Abstract
This paper concerns a two-field reduced basis algorithm for the metamodelling of parametrized one-way coupled thermoelasticity problems, based on constitutive relation error (CRE) estimation. The coupled system consists of parametrized thermal diffusion and elastostatic equations that are coupled in a one-way manner: the former can be solved in advance independently, and the latter can then be solved using the solution of the former. For fast and accurate analysis of the coupled system, we developed an algorithm that adaptively chooses the number of reduced basis functions for the temperature field used to approximate the CRE equality of the mechanical field. We compute approximate upper bounds for the true errors of the displacement and stress fields in energy norms. To enable this, a two-field greedy sampling strategy is adopted to construct the displacement and stress fields efficiently. The computational efficiency of the proposed approach is demonstrated by computing the effective coefficient of thermal expansion of heterogeneous materials. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
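The reduced-basis idea can be illustrated with a snapshot-SVD (POD) sketch, a common stand-in for the greedy sampling used in the paper: solutions at a few training parameters span a low-dimensional space that approximates solutions at unseen parameters. The parametrized field below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n_dof, n_snap = 200, 12
x = np.linspace(0.0, 1.0, n_dof)

# Hypothetical parametrized "temperature" fields: two smooth modes with
# parameter-dependent weights, sampled at n_snap training parameters.
params = rng.uniform(0.5, 2.0, size=n_snap)
snapshots = np.column_stack([np.sin(np.pi * x) + mu * np.sin(2 * np.pi * x)
                             for mu in params])

# POD: left singular vectors of the snapshot matrix give the reduced basis.
U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :2]                     # this family is exactly two-dimensional

# Project the solution at an unseen parameter onto the reduced basis.
u_new = np.sin(np.pi * x) + 1.3 * np.sin(2 * np.pi * x)
u_rb = basis @ (basis.T @ u_new)
rel_err = np.linalg.norm(u_rb - u_new) / np.linalg.norm(u_new)
```

In the paper's setting, the interesting question is how many temperature-field basis functions are needed so that the CRE-based error bound on the mechanical fields stays tight, which is what the adaptive selection controls.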