654 results
Search Results
2. Synthesis of flat‐top beampatterns based on simple polynomial transforms of Gaussian excitations.
- Author
- Molnar, Goran, Ljubenko, Dorian, and Šakić, Mile
- Subjects
- *ANTENNA arrays, *ANTENNA design, *ANTENNA radiation patterns, *MATHEMATICAL optimization, *POLYNOMIALS
- Abstract
In the design of antenna arrays that require fast and robust flat‐top beam synthesis, computationally efficient methods are preferred. This requirement is usually met by analytical techniques or simple optimisation procedures. On the other hand, a common requirement in flat‐top beam synthesis is the ability to control the beamwidth or sidelobe level. However, this can result in a high dynamic range ratio (DRR) of the array's excitation coefficients. In this paper, a straightforward method for the design of symmetrical flat‐top arrays with controllable sidelobe level or DRR is proposed. The method is based on quadratic and cubic transforms of Gaussian excitations. In addition, the method utilises zero coefficients whose positions are used to control the DRR, including the ability to achieve its minimum. Compared to other flat‐top arrays with analytically shaped beams, the proposed arrays have lower DRRs for the same sidelobe level. [ABSTRACT FROM AUTHOR]
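As a rough, hypothetical illustration of the kind of design this abstract describes (the transform coefficients `a` and `b` below are made up, not the paper's), a Gaussian taper can be passed through a cubic polynomial and the excitation DRR measured:

```python
import numpy as np

def gaussian_taper(n, alpha=1.5):
    """Symmetric Gaussian excitation for an n-element linear array."""
    m = np.arange(n) - (n - 1) / 2.0
    return np.exp(-(alpha * m / n) ** 2)

def cubic_transform(w, a=0.5, b=0.2):
    """Illustrative cubic polynomial transform of the excitations
    (coefficients a, b are hypothetical placeholders)."""
    return w + a * w**2 + b * w**3

def dynamic_range_ratio(w):
    """DRR: ratio of the largest to the smallest nonzero |coefficient|."""
    mag = np.abs(w[np.abs(w) > 1e-12])
    return float(mag.max() / mag.min())

w = cubic_transform(gaussian_taper(16))
```

A polynomial transform preserves the symmetry of the Gaussian excitation, so the resulting array stays symmetrical.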
- Published
- 2023
3. Relative Entropy of Correct Proximal Policy Optimization Algorithms with Modified Penalty Factor in Complex Environment.
- Author
- Chen, Weimin, Wong, Kelvin Kian Loong, Long, Sifan, and Sun, Zhili
- Subjects
- *MATHEMATICAL optimization, *ENTROPY, *DISTRIBUTION (Probability theory), *REINFORCEMENT learning, *FUNCTION spaces, *KALMAN filtering
- Abstract
In the field of reinforcement learning, we propose a Correct Proximal Policy Optimization (CPPO) algorithm based on a modified penalty factor β and relative entropy, in order to address the robustness and stationarity issues of traditional algorithms. Firstly, this paper establishes a strategy evaluation mechanism through the policy distribution function. Secondly, the state space function is quantified by introducing entropy, whereby an approximation policy is used to approximate the real policy distribution, and kernel function estimation and calculation of relative entropy are used to fit the reward function for complex problems. Finally, through comparative analysis on classic test cases, we demonstrate that our proposed algorithm is effective, with faster convergence and better performance than the traditional PPO algorithm, and that the relative-entropy measure can reveal the differences. In addition, it can use the information of a complex environment more efficiently to learn policies. At the same time, our paper not only explains the rationality of the policy distribution theory; the proposed framework can also balance iteration steps, computational complexity, and convergence speed, and we introduce an effective measure of performance based on the relative entropy concept. [ABSTRACT FROM AUTHOR]
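The relative entropy (KL divergence) that this abstract uses to compare policy distributions has a standard definition that can be computed directly; this is only the definition, not the CPPO algorithm itself:

```python
import numpy as np

def relative_entropy(p, q):
    """KL divergence D(p || q) between two discrete policy distributions.
    Terms with p = 0 contribute nothing by convention."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# illustrative three-action policies before and after an update
old_policy = [0.7, 0.2, 0.1]
new_policy = [0.6, 0.3, 0.1]
d = relative_entropy(new_policy, old_policy)
```

A small value of `d` indicates the update stayed close to the old policy, which is the quantity PPO-style penalty terms keep bounded.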
- Published
- 2022
4. On some properties of linear functionals.
- Author
- Stefanov, Stefan M.
- Subjects
- *MATHEMATICAL optimization, *APPROXIMATION theory, *NUMERICAL analysis, *VECTOR spaces, *LINEAR operators, *FUNCTIONALS, *FUNCTIONAL analysis
- Abstract
In this paper, some properties of linear functionals are studied, which are used in functional analysis, optimization theory, numerical analysis, approximation theory, various applications, etc. [ABSTRACT FROM AUTHOR]
- Published
- 2022
5. Analysis of an efficient parallel implementation of active-set Newton algorithm.
- Author
- San Juan Sebastián, Pablo, Garcia-Molla, Victor M., Vidal, Antonio M., and Virtanen, Tuomas
- Subjects
- *PARALLEL processing, *NEWTON-Raphson method, *MULTICORE processors, *MATHEMATICAL optimization, *APPROXIMATION theory
- Abstract
This paper presents an analysis of an efficient parallel implementation of the active-set Newton algorithm (ASNA), which is used to estimate the nonnegative weights of linear combinations of the atoms in a large-scale dictionary to approximate an observation vector by minimizing the Kullback–Leibler divergence between the observation vector and the approximation. The performance of ASNA has been proved in previous works against other state-of-the-art methods. The implementations analysed in this paper have been developed in C, using parallel programming techniques to obtain better performance on multicore architectures than the original MATLAB implementation. Also, a hardware analysis is performed to check the influence of CPU frequency and the number of CPU cores on the different implementations proposed. The new implementations allow the ASNA algorithm to tackle real-time problems due to the reduction in execution time obtained. [ABSTRACT FROM AUTHOR]
- Published
- 2019
6. Coradiant-Valued Maps and Approximate Solutions in Variable Ordering Structures.
- Author
- Sayadi-bander, Abbas, Basirzadeh, Hadi, and Pourkarimi, Latif
- Subjects
- *MATHEMATICAL optimization, *VECTORS (Calculus), *MATHEMATICAL mappings, *APPROXIMATION theory, *DIFFERENTIABLE mappings
- Abstract
In this paper, coradiant-valued maps are employed to introduce approximate efficiency notions in variable ordering structures. Approximate nondominated and minimal elements are defined and some of their properties are studied. Necessary and sufficient conditions corresponding to these concepts are provided. To obtain such conditions, some scalarization methods are investigated. The paper also investigates possible relationships between the Pascoletti-Serafini radial scalarization, approximate efficiency, approximate nondominance, and minimality utilizing some coradiant-valued maps. [ABSTRACT FROM AUTHOR]
- Published
- 2018
7. Low rank matrix completion using truncated nuclear norm and sparse regularizer.
- Author
- Dong, Jing, Xue, Zhichao, Guan, Jian, Han, Zi-Fa, and Wang, Wenwu
- Subjects
- *SPARSE matrices, *MATHEMATICAL regularization, *IMAGE representation, *APPROXIMATION theory, *MATHEMATICAL optimization
- Abstract
Matrix completion is a challenging problem with a range of real applications. Many existing methods are based on a low-rank prior of the underlying matrix. However, this prior may not be sufficient to recover the original matrix from its incomplete observations. In this paper, we propose a novel matrix completion algorithm by employing the low-rank prior and a sparse prior simultaneously. Specifically, the matrix completion task is formulated as a rank minimization problem with a sparse regularizer. The low-rank property is modeled by the truncated nuclear norm to approximate the rank of the matrix, and the sparse regularizer is formulated as an ℓ1-norm term based on a given transform operator. To address the raised optimization problem, a method alternating between two steps is developed, and the problem involved in the second step is converted to several subproblems with closed-form solutions. Experimental results show the effectiveness of the proposed algorithm and its better performance as compared with the state-of-the-art matrix completion algorithms. Highlights:
• This paper proposes a novel matrix completion algorithm employing a low-rank prior based on the truncated nuclear norm and a sparse prior simultaneously.
• To address the resulting optimization problem, a method alternating between two steps is developed, and the problem involved in the second step is converted to several subproblems with closed-form solutions.
• Experimental results demonstrate the effectiveness of the proposed algorithm and its better performance compared with state-of-the-art matrix completion algorithms. [ABSTRACT FROM AUTHOR]
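The truncated nuclear norm mentioned in this abstract has a simple definition: the sum of all singular values except the r largest, so only the small "noise" part of the spectrum is penalized. A minimal sketch:

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of singular values beyond the r largest: sum_{i > r} sigma_i.
    With r = 0 this reduces to the ordinary nuclear norm."""
    s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    return float(s[r:].sum())

X = np.diag([5.0, 3.0, 1.0])  # singular values 5, 3, 1 by construction
```

Minimizing this quantity drives the small singular values to zero while leaving the top-r ones free, which approximates rank minimization better than the plain nuclear norm.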
- Published
- 2018
8. A Novel Decomposition and Coordination Approach for Large Day-Ahead Unit Commitment With Combined Cycle Units.
- Author
- Sun, Xiaorong, Luh, Peter B., Bragin, Mikhail A., Chen, Yonghong, Wan, Jie, and Wang, Fengyu
- Subjects
- *ELECTRIC power systems, *LAGRANGIAN functions, *RENEWABLE energy sources, *APPROXIMATION theory, *MATHEMATICAL optimization
- Abstract
Day-Ahead Unit Commitment (UC) is an important problem faced by Independent System Operators (ISOs). Midcontinent ISO (MISO), the largest ISO in the US, solves a complicated UC problem involving over 45,000 buses and 1,400 generation resources. With the increasing number of combined cycle units (CCs) represented by configuration-based modeling, solving the problem becomes more challenging. The state-of-the-practice branch-and-cut method suffers from poor performance when there is a large number of CCs. The goal of this paper is to solve such large UC problems to near-optimal solutions within time limits. In this paper, our recently developed Surrogate Lagrangian Relaxation, which overcomes major difficulties of Lagrangian Relaxation by not requiring dual optimal costs, is significantly enhanced by adding quadratic penalties on constraint violations to accelerate convergence. Quadratic penalty terms are linearized through a novel use of absolute value functions. Therefore, resource-level subproblems can be formulated and solved by branch-and-cut. Complicated constraints within a CC unit are thus handled within a subproblem. Subproblem solutions are then effectively coordinated. Computational improvements on key aspects are also incorporated to fine-tune the algorithm. As demonstrated on MISO cases, the method provides near-optimal solutions within a time limit, and significantly outperforms branch-and-cut. [ABSTRACT FROM AUTHOR]
- Published
- 2018
9. APPROXIMATE SOLUTIONS OF INTERVAL-VALUED OPTIMIZATION PROBLEMS.
- Author
- Van Tuyen, Nguyen
- Subjects
- *APPROXIMATE solutions (Logic), *EXISTENCE theorems, *MATHEMATICAL optimization, *MATHEMATICAL programming, *MULTIPLE criteria decision making, *APPROXIMATION theory, *ALGORITHMS
- Abstract
This paper deals with approximate solutions of an optimization problem with interval-valued objective function. Four types of approximate solution concepts of the problem are proposed by considering the partial ordering LU on the set of all closed and bounded intervals. We show that these solutions exist under very weak conditions. Under suitable constraint qualifications, we derive Karush-Kuhn-Tucker necessary and sufficient optimality conditions for convex interval-valued optimization problems. [ABSTRACT FROM AUTHOR]
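The LU partial ordering on closed intervals used in this abstract compares both endpoints; a minimal sketch, with intervals represented as (lower, upper) tuples:

```python
def lu_less_equal(a, b):
    """LU partial order on closed bounded intervals:
    [aL, aU] <= [bL, bU]  iff  aL <= bL and aU <= bU."""
    (al, au), (bl, bu) = a, b
    return al <= bl and au <= bu
```

Because it is only a partial order, some intervals are incomparable, e.g. [1, 5] and [2, 4]; this is why several distinct approximate-solution concepts can coexist for interval-valued objectives.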
- Published
- 2021
10. A bounded-risk mechanism for the kidney exchange game.
- Author
- Esfandiari, Hossein and Kortsarz, Guy
- Subjects
- *RANDOMIZATION (Statistics), *KIDNEY exchange, *GEOMETRIC vertices, *APPROXIMATION theory, *MATHEMATICAL optimization
- Abstract
In this paper, we introduce and study the notion of low-risk mechanisms. Intuitively, we say a mechanism is a low-risk mechanism if the randomization of the mechanism does not affect the utility of agents by a lot. Specifically, we desire to design mechanisms in which the variances of the utility of agents are small. Inspired by this work, Procaccia et al. (0000) later study the approximation–variance tradeoffs in mechanism design. In particular, we present here a low-risk mechanism for the pairwise kidney exchange game. This game naturally appears in situations where some service providers benefit from pairwise allocations on a network, such as kidney exchanges between hospitals. Ashlagi et al. (2013) present a 2-approximation randomized truthful mechanism for this problem, which is the best-known result in this setting with multiple players. However, we note that the variance of the utility of an agent in this mechanism may be as large as Ω(n²), where n is the number of vertices. Indeed, this is not desirable in a real application. In this paper, we resolve this issue by providing a 2-approximation randomized truthful mechanism in which the variance of the utility of each agent is at most 2 + ε. As a side result, we apply our technique to design a deterministic mechanism such that, if an agent deviates from the mechanism, she does not gain more than 2⌈log₂ m⌉, where m is the number of players. [ABSTRACT FROM AUTHOR]
- Published
- 2018
11. Support region estimation of the product polar companded quantizer for Gaussian source.
- Author
- Perić, Zoran, Petković, Marko D., Nikolić, Jelena, and Jovanović, Aleksandra
- Subjects
- *ESTIMATION theory, *GAUSSIAN processes, *MATHEMATICAL optimization, *PARAMETER estimation, *COMPUTATIONAL complexity, *APPROXIMATION theory
- Abstract
This paper describes two approaches to the optimization of the key design parameters, the support region threshold and the number of magnitude representation levels, of a product polar companded quantizer (PPCQ) for a Gaussian source of unit variance. The first approach is based on the exact performance analysis of the PPCQ and on distortion optimization with respect to the key design parameters. Due to the complexity of the optimization problem encountered with the first approach, some suitable approximations are introduced in the second one. As a result, a much simpler asymptotic closed-form formula for the distortion of the PPCQ is derived as a function of the support region threshold. Although a closed-form formula for the support region threshold cannot be derived with this approach, its results indicate a useful form for the support region threshold. By combining the results of the two approaches, closed-form formulas for the support region threshold and the signal-to-quantization-noise ratio of a nearly optimal PPCQ are provided. Moreover, from the results of both analyses, lower and upper bound expressions for the number of magnitude representation levels are provided. The analysis presented in the paper is useful for designing a PPCQ and is of great theoretical and practical importance. [ABSTRACT FROM AUTHOR]
- Published
- 2018
12. Piecewise Linear Approximation Based MILP Method for PVC Plant Planning Optimization.
- Author
- Xiaoyong Gao, Zhenhui Feng, Yuhong Wang, Xiaolin Huang, Dexian Huang, Tao Chen, and Xue Lian
- Subjects
- *POLYVINYL chloride, *LINEAR statistical models, *NONLINEAR programming, *APPROXIMATION theory, *MATHEMATICAL optimization
- Abstract
This paper presents a new piecewise linear modeling method for the planning of polyvinyl chloride (PVC) plants. In our previous study (Ind. Eng. Chem. Res., 2016, 55, 12430-12443, DOI: 10.1021/acs.iecr.6b02825), a multiperiod mixed-integer nonlinear programming (MINLP) model was developed to demonstrate the importance of integrating both the material processing and the utility systems. However, the optimization problem is difficult to solve due to the intrinsic process nonlinearities, i.e., the operating cost and energy-consumption characteristics of calcium carbide furnaces, electrolytic cells, and CHP units. The present paper addresses this challenge by using a piecewise linear modeling approach that provides a good approximation of the global nonlinearity with locally linear models. Specifically, a hinging hyperplanes (HH) model is introduced to approximate the nonlinear terms in the original MINLP model. The HH model is a kind of continuous piecewise linear (CPWL) model, which can approximate continuous functions of arbitrary dimension on compact sets to any given precision, and it is the basis for linearizing the MINLP model. As a result, with the help of auxiliary variables, the original MINLP can be transformed into a mixed-integer linear programming (MILP) model, which can then be solved by many established, efficient, and mature algorithms. Computational results show that the proposed model can reduce the solving time by up to 97%, while the planning results are close to, or even better than, those obtained by the MINLP approach. [ABSTRACT FROM AUTHOR]
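A hinging hyperplanes model is a sum of a linear term and signed hinge terms max(0, wᵀx + c); the toy coefficients below are illustrative, not from the paper's PVC model:

```python
import numpy as np

def hinging_hyperplanes(x, base, hinges):
    """Evaluate a continuous piecewise linear HH model:
    f(x) = w0.x + c0 + sum_k s_k * max(0, w_k.x + c_k), with s_k in {+1, -1}."""
    w0, c0 = base
    y = float(np.dot(w0, x)) + c0
    for s, w, c in hinges:
        y += s * max(0.0, float(np.dot(w, x)) + c)
    return y

# one-dimensional example: f(x) = x + max(0, x - 1), slope 1 then slope 2
f = lambda x: hinging_hyperplanes(np.array([x]),
                                  (np.array([1.0]), 0.0),
                                  [(+1, np.array([1.0]), -1.0)])
```

Each hinge term becomes a linear constraint plus a binary variable in the MILP reformulation, which is what makes the transformed planning model solvable by standard branch-and-cut codes.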
- Published
- 2018
13. Simulated annealing based GRASP for Pareto-optimal dissimilar paths problem.
- Author
- Liu, Linzhong, Mu, Haibo, and Yang, Juhua
- Subjects
- *APPROXIMATION theory, *METAHEURISTIC algorithms, *APPROXIMATION algorithms, *MATHEMATICAL optimization, *SIMULATION methods & models, *SIMULATED annealing
- Abstract
This paper investigates a meta-heuristic (MH) for the Pareto-optimal version (PDPP) of the dissimilar path problem (DPP), whose solution is a set of at least two different paths. The objective vector of a PDPP includes some conflicting objectives: on the one hand, the average path measures, such as the length and risk of the paths in a solution, must be kept low; on the other hand, the dissimilarity among these paths should be kept high. The dissimilarity of the DPP is a measure over a set of paths with cardinality no less than two. However, only one path can be extracted from a chromosome in the existing MHs for various path problems. This makes it very difficult to evaluate a chromosome when applying the existing MHs to the DPP and, consequently, no MH for solving the DPP has existed so far. In this paper, a new decoding approach for a chromosome is first explored, with which a set of paths can be extracted from a chromosome. By combining simulated annealing (SA), in which the new decoding approach is adopted, with the well-known greedy randomized adaptive search procedure (GRASP), an SA-based GRASP for the PDPP is proposed. The proposed algorithm is compared against the most recent heuristic for the PDPP, whose performance is better than all of the earlier approaches, and the experimental results show that the proposed algorithm quickly creates a better approximation of the efficient set of the PDPP than the existing solution approaches. [ABSTRACT FROM AUTHOR]
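One common way to quantify the dissimilarity between two paths in a DPP (an assumption here for illustration; the paper's exact measure may differ) is one minus the Jaccard similarity of their edge sets:

```python
def edge_set(path):
    """Undirected edge set of a path given as a vertex sequence."""
    return {frozenset(e) for e in zip(path, path[1:])}

def dissimilarity(p1, p2):
    """1 - Jaccard similarity of the two paths' edge sets:
    0 for identical paths, 1 for edge-disjoint ones."""
    e1, e2 = edge_set(p1), edge_set(p2)
    return 1.0 - len(e1 & e2) / len(e1 | e2)
```

A solution to the PDPP then trades the average path cost of the set against the pairwise dissimilarities within it.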
- Published
- 2017
14. Disjoint convex shell and its applications in mesh unfolding.
- Author
- Kim, Yun-hyeong, Xi, Zhonghua, and Lien, Jyh-Ming
- Subjects
- *APPROXIMATION theory, *MATHEMATICAL optimization, *MESH networks, *CONVEX bodies, *POLYGONS
- Abstract
In this work, we study a geometric structure called the disjoint convex shell, or simply DC-shell. A DC-shell of a polyhedron is a set of pairwise interior-disjoint convex objects that collectively approximate the given polyhedron. Preventing convex objects from overlapping enables faster and more robust collision response and more realistic fracturing simulation. Without the disjointness constraint, a physical realization of the approximation becomes impossible. This paper investigates multiple approaches that construct DC-shells from shapes that are either composed of overlapping components or segmented into parts. We show theoretically that, even under this rather simplified setting, constructing a DC-shell is difficult. To demonstrate the power of the DC-shell, we studied how it can be used in mesh unfolding, an important computational method for manufacturing 3D shapes from 2D materials. Approximating a given polyhedral model by DC-shells provides two major benefits. First, the shells are much easier to unfold using existing unfolding methods. Second, they can be folded easily by both human folders and self-folding machines. Consequently, the DC-shell makes paper craft creation and design more accessible to younger children and provides chances to enrich their educational experiences. [ABSTRACT FROM AUTHOR]
- Published
- 2017
15. DEGENERATE FIRST-ORDER QUASI-VARIATIONAL INEQUALITIES: AN APPROACH TO APPROXIMATE THE VALUE FUNCTION.
- Author
- EL FAROUQ, NAÏMA
- Subjects
- *COST functions, *MATHEMATICAL optimization, *APPROXIMATION theory, *DERIVATIVES (Mathematics), *PRODUCTION management (Manufacturing)
- Abstract
The originality of this paper is to deal with the particular case of null infimum jump costs in the infinite-horizon impulse control problem. The value function of such problems is a viscosity solution of the associated classic quasi-variational inequality (QVI), but not the unique one, which is a drawback when characterizing it. In this paper, a new QVI for which the value function is the unique viscosity solution is given. This allows us to approximate the value function. We thus give some discrete approximations of the new QVI and prove that the approximate value function converges locally uniformly toward the value function of the impulse control problem with zero lower bound on the impulse cost. We choose the classic example of continuous-time inventory control in Rⁿ to illustrate the results of this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2017
16. Why Gradient Descent, Not the Best Optimization Technique, Works Best in Neural Networks: Qualitative Explanation.
- Author
- CONTRERAS, JONATAN, CEBERIO, MARTINE, KOSHELEVA, OLGA, and KREINOVICH, VLADIK
- Subjects
- *ARTIFICIAL neural networks, *MATHEMATICAL optimization, *COMPUTER algorithms, *MACHINE learning, *APPROXIMATION theory
- Abstract
In a usual Numerical Methods class, students learn that gradient descent is not an efficient optimization algorithm, and that more efficient algorithms exist, algorithms which are actually used in state-of-the-art numerical optimization packages. On the other hand, in solving optimization problems related to machine learning, and in particular in deep learning, currently the most efficient approach, gradient descent (in the form of backpropagation) is much more efficient than any of the alternatives that have been tried. How can we reconcile these two statements? In this paper, we explain that, in reality, there is no contradiction here. Namely, in usual applications of numerical optimization, we want to attain the smallest possible value of the objective function. Thus, after a few iterations, it is necessary to switch from gradient descent, which only works efficiently when we are sufficiently far away from the actual minimum, to more sophisticated techniques. On the other hand, in machine learning, as we show, attaining the actual minimum is not what we want; this would be over-fitting. We actually need to stop well before we reach the actual minimum. Thus, we do not need to get too close to the actual minimum, and so there is no need to switch from gradient descent to any more sophisticated (and more time-consuming) optimization technique. This explains why, contrary to what students learn in Numerical Methods, gradient descent is the most efficient optimization technique in machine learning applications. [ABSTRACT FROM AUTHOR]
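The early-stopping argument above can be sketched on a toy least-squares problem: plain gradient descent with the best-so-far validation weights kept, rather than iterating to the exact training minimizer (illustrative only, with made-up data):

```python
import numpy as np

rng = np.random.default_rng(0)

# noisy 1-D linear data, y = 2x + noise, split into train and validation
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 0.3 * rng.standard_normal(200)
xt, yt, xv, yv = x[:150], y[:150], x[150:], y[150:]

w, lr = 0.0, 0.1
best_w, best_val = 0.0, float("inf")
for step in range(200):
    grad = np.mean(2 * (w * xt - yt) * xt)   # d/dw of the training MSE
    w -= lr * grad
    val = np.mean((w * xv - yv) ** 2)
    if val < best_val:                       # early-stopping bookkeeping:
        best_w, best_val = w, val            # keep the best validation weights
```

Because training stops (in effect) at the best validation point rather than at the exact training minimum, the cheap gradient step is all that is ever needed, which is the paper's qualitative point.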
- Published
- 2020
17. An experimental method of modeling “Line-Type” and “Area-Type” connections.
- Author
- Wang, Mian and Zheng, Gangtie
- Subjects
- *APPROXIMATION theory, *MATHEMATICAL functions, *MATHEMATICAL optimization, *INFORMATION theory, *DATA analysis
- Abstract
A method for experimentally modeling complex connections, such as “Line-Type” connection (LTC) and “Area-Type” connection (ATC), is proposed in this paper. Unlike traditional methods, instead of treating the junction forces as presumed approximate functions, a new strategy proposed in the present paper is to estimate them from experimentally measured accelerations. Along with them, the junction motion information is also estimated from the measured data. Based on this estimated information, the connection’s model is built. In this way, two potential disadvantages of traditional methods in modeling complex connections, i.e. the difficulty in finding appropriate functions for complex connections and the computation burden in identifying too many parameters through optimization process, can be avoided. In the proposed method, the LTC is modeled as a series of independent junction node pairs, and the ATC is modeled as a combination of a presumed Virtual Structure and a series of independent Virtual Node Pairs. Numerical and experimental results with the constructed example have verified the effectiveness of the proposed method in modeling both LTC and ATC. [ABSTRACT FROM AUTHOR]
- Published
- 2017
18. A Parallel Proximal Algorithm for Anisotropic Total Variation Minimization.
- Author
- Kamilov, Ulugbek S.
- Subjects
- *PARALLEL algorithms, *INVERSE problems, *APPROXIMATION theory, *SIGNAL processing, *MATHEMATICAL optimization
- Abstract
Total variation (TV) is one of the most popular regularizers for stabilizing the solution of ill-posed inverse problems. This paper proposes a novel proximal-gradient algorithm for minimizing TV-regularized least-squares cost functionals. Unlike traditional methods that require nested iterations for computing the proximal step of TV, our algorithm approximates the latter with several simple proximals that have closed-form solutions. We theoretically prove that the proposed parallel proximal method achieves the TV solution with arbitrarily high precision at a global rate of convergence equivalent to that of the fast proximal-gradient methods. The results in this paper have the potential to enhance the applicability of TV for solving very large-scale imaging inverse problems. [ABSTRACT FROM AUTHOR]
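The anisotropic TV term being minimized is simply the sum of the absolute horizontal and vertical finite differences of the image; a minimal sketch of the regularizer itself (the paper's parallel proximal algorithm is not reproduced):

```python
import numpy as np

def anisotropic_tv(img):
    """Anisotropic total variation: sum of |horizontal| and |vertical|
    first-order finite differences of a 2-D array."""
    dh = np.abs(np.diff(img, axis=1)).sum()
    dv = np.abs(np.diff(img, axis=0)).sum()
    return float(dh + dv)

flat = np.ones((4, 4))                                   # constant image
step = np.hstack([np.zeros((4, 2)), np.ones((4, 2))])    # one vertical edge
```

A constant image has zero TV, while each unit-height jump contributes one unit per row or column, which is why TV regularization favors piecewise-constant reconstructions.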
- Published
- 2017
19. Search space-based multi-objective optimization evolutionary algorithm.
- Author
- Medhane, Darshan Vishwasrao and Sangaiah, Arun Kumar
- Subjects
- *ALGORITHMS, *MATHEMATICAL optimization, *REAL-time computing, *APPROXIMATION theory, *MUTATION (Phonetics)
- Abstract
Evolutionary multi-objective optimization (EMO) algorithms are actively used for solving optimization problems with multiple contradictory objectives and for designing interpretable and precise real-time applications. A majority of existing EMO algorithms perform well on two- or three-objective non-dominated problems; however, they meet complications in managing and maintaining a set of optimal solutions to multi-objective optimization problems. This paper proposes a search space-based multi-objective evolutionary algorithm (SSMOEA) for multi-objective optimization problems. Our key objective is to realize the potential of the search space-based method for solving multi-objective optimization problems and to reinforce the selection procedure toward the ideal direction while sustaining an extensive and uniform distribution of solutions. To the best of our knowledge, this paper is the first attempt to propose a search space-based multi-objective evolutionary algorithm for multi-objective optimization. The experiments showed that the proposed algorithm is competitive with the existing EMO algorithms from the viewpoint of finding a well-spread and well-estimated solution set for multi-objective optimization problems. SSMOEA can achieve a good trade-off between search space convergence and search space diversity in the appropriate experimental setup. [ABSTRACT FROM AUTHOR]
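The core primitive of any EMO algorithm, including the SSMOEA described here, is the Pareto-dominance test used to maintain the non-dominated set (minimization convention assumed; this is the standard definition, not the paper's selection operator):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the Pareto-optimal subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = nondominated([(1, 4), (2, 2), (3, 3), (4, 1)])
```

Here (3, 3) is dominated by (2, 2), so the non-dominated front keeps the other three points.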
- Published
- 2017
20. Optimization problems with fixed volume constraints and stability results related to rearrangement classes.
- Author
- Liu, Yichen, Emamizadeh, Behrouz, and Farjudian, Amin
- Subjects
- *MATHEMATICAL optimization, *CONSTRAINTS (Physics), *STABILITY theory, *PROOF theory, *MATHEMATICAL inequalities, *CHARACTERISTIC functions, *APPROXIMATION theory
- Abstract
The material in this paper has been divided into two main parts. In the first part, we describe two optimization problems, one maximization and one minimization, related to a sharp trace inequality that was recently obtained by G. Auchmuty. In both problems the admissible set is the one comprising characteristic functions whose supports have a fixed measure. We prove the maximization to be solvable, whilst the minimization will turn out not to be solvable in general. We will also discuss the case of radial domains. In the second part of the paper, we study approximation and stability results regarding rearrangement optimization problems. First, we show that if a sequence of the generators of rearrangement classes converges, then the corresponding sequence of the optimal solutions will also converge. Second, a stability result regarding the Hausdorff distance between the weak closures of two rearrangement classes is presented. [ABSTRACT FROM AUTHOR]
- Published
- 2016
21. Optimization of third-order discrete and differential inclusions described by polyhedral set-valued mappings.
- Author
- Mahmudov, Elimhan N., Demir, Sevilay, and Değer, Özkan
- Subjects
- *DIFFERENTIAL inclusions, *MATHEMATICAL optimization, *DISCRETE systems, *SET-valued maps, *APPROXIMATION theory, *MATHEMATICAL equivalence
- Abstract
The present paper is concerned with the necessary and sufficient conditions of optimality for third-order polyhedral optimization described by polyhedral discrete and differential inclusions (PDIs). In the first part of the paper, the discrete polyhedral problem is reduced to a convex minimization problem and the necessary and sufficient condition for optimality is derived. Then the necessary and sufficient conditions of optimality for the discrete-approximation problem are formulated using the transversality condition and an approximation method for the continuous polyhedral problem governed by a PDI. On the basis of the results obtained in Section 3, we prove the sufficient conditions of optimality for the continuous problem. It turns out that the method requires a special equivalence theorem, which allows us to build a bridge between the discrete and continuous problems. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
22. Noisy Accelerated Power Method for Eigenproblems With Applications.
- Author
- Mai, Vien V. and Johansson, Mikael
- Subjects
- *APPROXIMATION theory, *SYMMETRIC matrices, *LANCZOS method, *MATHEMATICAL optimization, *STATISTICAL learning
- Abstract
This paper introduces an efficient algorithm for finding the dominant generalized eigenvectors of a pair of symmetric matrices. Combining tools from approximation theory and convex optimization, we develop a simple scalable algorithm with strong theoretical performance guarantees. More precisely, the algorithm retains the simplicity of the well-known power method but enjoys the asymptotic iteration complexity of the powerful Lanczos method. Unlike these classic techniques, our algorithm is designed to decompose the overall problem into a series of subproblems that only need to be solved approximately. The combination of good initializations, fast iterative solvers, and appropriate error control in solving the subproblems lead to a linear running time in the input sizes compared to the superlinear time for the traditional methods. The improved running time immediately offers acceleration for several applications. As an example, we demonstrate how the proposed algorithm can be used to accelerate canonical correlation analysis, which is a fundamental statistical tool for learning of a low-dimensional representation of high-dimensional objects. Numerical experiments on real-world datasets confirm that our approach yields significant improvements over the current state of the art. [ABSTRACT FROM AUTHOR]
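The baseline that this paper accelerates is the classic power method; a minimal sketch for an ordinary symmetric eigenproblem (the generalized, noisy, and accelerated aspects of the paper are not reproduced):

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Power iteration for the dominant eigenpair of a symmetric matrix:
    repeatedly apply A and renormalize, then return the Rayleigh quotient."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return float(v @ A @ v), v  # (eigenvalue estimate, eigenvector)

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 3 and 1
lam, v = power_method(A)
```

The iteration converges at a rate governed by the eigenvalue gap, which is exactly the dependence the paper's Lanczos-like acceleration improves.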
- Published
- 2019
23. Efficient conic fitting with an analytical Polar-N-Direction geometric distance.
- Author
- Wu, Yihong, Wang, Haoren, Tang, Fulin, and Wang, Zhiheng
- Subjects
- *MATHEMATICAL transformations, *COST functions, *APPROXIMATION theory, *REPRESENTATION theory, *MATHEMATICAL optimization
- Abstract
Highlights:
• The Sampson distance is revisited and its geometric meaning is exhibited.
• A new geometric distance between a point and a conic, called the Polar-N-Direction distance, is given. It can be adapted to a projective transformation because it is computed along the normal direction of the polar line of the point, making conic fitting more robust and accurate.
• The Polar-N-Direction distance is represented explicitly and analytically, so fitting based on this geometric distance is very easy to implement.
• A new cost function based on the Polar-N-Direction distance is constructed, and the conic fitting optimization that minimizes this cost function is very efficient.
Abstract: Fitting conics from images is a preliminary step for many applications. It is widely accepted that geometric distance based fitting methods are better than algebraic distance based ones. However, for a long time, there has not been a geometric distance between a point and a general conic that allows easy computation and achieves high accuracy simultaneously. Though the Sampson distance is widely accepted, it is only a first-order approximation. For other geometric distances, the computations are too complex to be popular in practice. In this paper, we derive a new geometric distance between a point and a general conic, called the Polar-N-Direction distance. The distance can be adapted to a projective transformation because it is computed along the normal direction of the polar line of the point, making conic fitting more robust. Moreover, the Polar-N-Direction distance is accurate and at the same time analytical, with an explicit representation that is easy to implement. Then, based on this distance, a new cost function is constructed. The conic fitting optimization that minimizes this cost function has all the merits of geometric distance based methods while avoiding their limitations. Experiments show that the conic fitting method is highly efficient. [ABSTRACT FROM AUTHOR]
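The Sampson distance that the abstract revisits is the algebraic distance divided by its gradient norm, a first-order approximation of the true geometric distance; a sketch for a conic given by a symmetric 3×3 matrix C (the paper's Polar-N-Direction distance is not reproduced here):

```python
import numpy as np

def sampson_distance(point, C):
    """First-order (Sampson) distance from a 2-D point to the conic
    x^T C x = 0, with C the symmetric 3x3 conic matrix."""
    x = np.array([point[0], point[1], 1.0])
    alg = float(x @ C @ x)            # algebraic distance
    g = 2.0 * (C @ x)[:2]             # gradient w.r.t. the 2-D point
    return abs(alg) / np.linalg.norm(g)

# unit circle: x^2 + y^2 - 1 = 0
circle = np.diag([1.0, 1.0, -1.0])
```

For the point (2, 0) and the unit circle, the Sampson distance is 0.75 while the true geometric distance is 1, showing the first-order error that a more accurate geometric distance is designed to avoid.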
- Published
- 2019
- Full Text
- View/download PDF
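The abstract contrasts its Polar-N-Direction distance with the widely used Sampson distance. For orientation, the first-order Sampson approximation of the distance from a point to a conic Q(x, y) = 0 is just |Q| divided by the gradient norm; a minimal sketch (the unit-circle conic and test point are illustrative, not from the paper):

```python
import math

def sampson_distance(Q, grad_Q, x, y):
    """First-order (Sampson) approximation of the geometric distance
    from point (x, y) to the conic Q(x, y) = 0:
    |Q(x, y)| / ||grad Q(x, y)||."""
    gx, gy = grad_Q(x, y)
    return abs(Q(x, y)) / math.hypot(gx, gy)

# Unit circle x^2 + y^2 - 1 = 0 as the conic (illustrative choice).
Q = lambda x, y: x * x + y * y - 1.0
grad_Q = lambda x, y: (2.0 * x, 2.0 * y)

print(sampson_distance(Q, grad_Q, 2.0, 0.0))  # → 0.75
```

The true distance from (2, 0) to the unit circle is 1, while Sampson gives 0.75; this is the kind of first-order error that motivates the search for better analytical geometric distances.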
24. Edit Distance with Multiple Block Operations.
- Author
-
Gonen, Mira, Shapira, Dana, and Storer, James A
- Subjects
- *
APPROXIMATION theory , *PARSING (Computer grammar) , *GEOMETRIC vertices , *GRAPH theory , *MATHEMATICAL optimization - Abstract
In this paper, we consider the edit distance with block moves, block copies, and block deletions, which is shown to be NP-hard, and employ a simple left-to-right greedy sliding-window algorithm that achieves a constant-factor approximation ratio of 5. This improves on the constant approximation of 12 presented by Ergün, Muthukrishnan, and Sahinalp (Ergün, F., Muthukrishnan, S., and Sahinalp, S. C., Comparing sequences with segment rearrangements, FST TCS 2003: Foundations of Software Technology and Theoretical Computer Science), and is achieved by a proof that introduces two non-trivial kinds of substrings for different purposes, so that recursive and non-recursive operations can be treated at the same time. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
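The authors' sliding-window algorithm and its 5-approximation proof are not reproduced here, but the flavor of a left-to-right greedy parse with block copies can be sketched in a few lines (a toy model, assuming single characters as the fallback when no block of the source matches):

```python
def greedy_block_parse(source, target):
    """Parse `target` left to right into phrases, each either the longest
    prefix of the remaining target that occurs somewhere in `source`
    (a block copy) or, failing that, a single character."""
    phrases = []
    i = 0
    while i < len(target):
        j = len(target) - i
        # Shrink the window until the prefix is a block of the source.
        while j > 1 and target[i:i + j] not in source:
            j -= 1
        phrases.append(target[i:i + j])
        i += j
    return phrases

print(greedy_block_parse("abcdef", "abcabcx"))  # → ['abc', 'abc', 'x']
```

The number of phrases produced by such greedy parses is what approximation arguments of this kind relate to the optimal number of block operations.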
25. A nonconvex penalty function with integral convolution approximation for compressed sensing.
- Author
-
Wang, Jianjun, Zhang, Feng, Huang, Jianwen, Wang, Wendong, and Yuan, Changan
- Subjects
- *
COMPRESSED sensing , *APPROXIMATION theory , *SIGNAL reconstruction , *SIGNAL-to-noise ratio , *MATHEMATICAL optimization - Abstract
Highlights • We propose a novel penalty function for CS using integral convolution approximation. • Our criterion does not underestimate the large components in signal recovery. • Our methods perform well under both the Gaussian random sensing matrix satisfying the RIP and the highly coherent sensing matrix. • We carry out a series of experiments to verify our analysis. Abstract In this paper, we propose a novel nonconvex penalty function for compressed sensing using integral convolution approximation. It is well known that an unconstrained optimization criterion based on the ℓ1-norm easily underestimates the large components in signal recovery. Moreover, most methods perform well only under either a Gaussian random measurement matrix satisfying the restricted isometry property or a highly coherent measurement matrix, but not both. We introduce a new solver that addresses both of these concerns by adopting a framework based on the difference of two convex functions with integral convolution approximation. Moreover, to further boost the recovery performance, a weighted version is also provided. Experimental results suggest the effectiveness and robustness of our methods through several signal reconstruction examples in terms of success rate and signal-to-noise ratio. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
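The underestimation the highlights refer to is easy to see concretely: the proximal operator of the ℓ1 penalty (soft thresholding) shrinks every surviving component by the threshold, while a nonconvex ℓ0-style penalty (hard thresholding) leaves large components intact. These standard operators stand in for the paper's specific penalty:

```python
def soft_threshold(x, lam):
    """Proximal operator of lam * |x| (the l1 penalty):
    shrinks every surviving component by lam."""
    if abs(x) <= lam:
        return 0.0
    return x - lam if x > 0 else x + lam

def hard_threshold(x, lam):
    """Prox of a nonconvex l0-style penalty: keeps large
    components exactly, zeroes small ones."""
    return x if abs(x) > lam else 0.0

# A large coefficient of 5.0 with threshold 1.0:
print(soft_threshold(5.0, 1.0))  # → 4.0  (biased low)
print(hard_threshold(5.0, 1.0))  # → 5.0  (unbiased)
```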
26. Partition-based optimization model for generative anatomy modeling language (POM-GAML).
- Author
-
Demirel, Doga, Cetinsaya, Berk, Halic, Tansel, Kockara, Sinan, and Ahmadi, Shahryar
- Subjects
- *
MATHEMATICAL optimization , *NONLINEAR analysis , *ERRORS , *APPROXIMATION theory , *K-means clustering - Abstract
Background: This paper presents a novel approach for the Generative Anatomy Modeling Language (GAML). The approach automatically detects geometric partitions in 3D anatomy, which in turn speeds up the integrated non-linear optimization model in GAML for 3D anatomy modeling with constraints (e.g. joints). This integrated non-linear optimization model requires exponential execution time. However, our approach effectively computes the solution for the non-linear optimization model and reduces the computation time from exponential to linear. This is achieved by grouping the 3D geometric constraints into communities. Methods: Various community detection algorithms (k-means clustering, Clauset-Newman-Moore, and Density-Based Spatial Clustering of Applications with Noise) were used to find communities and partition the non-linear optimization problem into sub-problems. GAML was used to create a case study for a 3D shoulder model to benchmark our approach with up to 5000 constraints. Results: Our results show that the computation time was reduced from exponential to linear, and that the error rate between the partitioned and non-partitioned approaches decreases as the number of constraints increases. For the largest constraint set (5000 constraints), the speed-up was over 2689-fold, whereas the error was as low as 2.2%. Conclusion: This study presents a novel approach to grouping anatomical constraints in a 3D human shoulder model using community detection algorithms. A case study of 3D modeling for shoulder models developed for arthroscopic rotator cuff simulation was presented. Our results significantly reduced the computation time, in conjunction with a decrease in error, using the constrained optimization by linear approximation (COBYLA) non-linear optimization solver. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
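A minimal illustration of the partitioning step, assuming k-means over 1-D "constraint coordinates" (the data and cluster count are invented; the paper also evaluates Clauset-Newman-Moore and DBSCAN):

```python
def kmeans_1d(points, centers, iters=20):
    """Plain Lloyd's algorithm in 1-D: assign each point to its
    nearest center, then move each center to its cluster mean."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

# Toy "constraint coordinates": two obvious communities.
positions = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
groups = kmeans_1d(positions, centers=[0.0, 12.0])
print(groups)  # → [[0.0, 1.0, 2.0], [10.0, 11.0, 12.0]]
```

Each resulting group becomes an independent optimization sub-problem, which is where the reported exponential-to-linear reduction in computation time comes from.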
27. Distributionally robust shortfall risk optimization model and its approximation.
- Author
-
Guo, Shaoyan and Xu, Huifu
- Subjects
- *
PROBABILITY theory , *FUNCTIONAL analysis , *APPROXIMATION theory , *MATHEMATICAL functions , *MATHEMATICAL optimization - Abstract
Utility-based shortfall risk measures (SR) have received increasing attention over the past few years for their potential to quantify the risk of large tail losses more effectively than conditional value at risk. In this paper, we consider a distributionally robust version of the shortfall risk measure (DRSR) where the true probability distribution is unknown and the worst distribution from an ambiguity set of distributions is used to calculate the SR. We start by showing that the DRSR is a convex risk measure and, under some special circumstances, a coherent risk measure. We then move on to study an optimization problem with the objective of minimizing the DRSR of a random function and investigate the numerical tractability of the optimization problem with the ambiguity set constructed through a ϕ-divergence ball and a Kantorovich ball. In the case when the nominal distribution in the balls is an empirical distribution constructed from iid samples, we quantify the convergence of the ambiguity sets to the true probability distribution as the sample size increases under the Kantorovich metric, and consequently of the optimal values of the corresponding DRSR problems. Specifically, we show that the error of the optimal value is linearly bounded by the error of each of the approximate ambiguity sets, and we subsequently derive a confidence interval for the optimal value under each of the approximation schemes. Some preliminary numerical test results are reported for the proposed modeling and computational schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
28. New Exact Penalty Function Methods with ε-approximation and Perturbation Convergence for Solving Nonlinear Bilevel Programming Problems.
- Author
-
Qiang Tuo and Heng-you Lan
- Subjects
- *
PERTURBATION theory , *MATHEMATICAL functions , *BILEVEL programming , *APPROXIMATION theory , *DIFFERENTIAL equations , *MATHEMATICAL optimization - Abstract
In this paper, in order to solve a class of nonlinear bilevel programming problems, we equivalently transform the nonlinear bilevel programming problems into corresponding single-level nonlinear programming problems by using the Karush-Kuhn-Tucker optimality condition. Then, based on penalty function theory, we construct a smooth approximation method for obtaining optimal solutions of the classic ℓ1-exact penalty function optimization problems, which are equivalent to the single-level nonlinear programming problems. Furthermore, using ε-approximate optimal solution theory, we prove convergence of a simple ε-approximate optimal algorithm. Finally, by adding parameters to the constraint set of the objective function, we prove some perturbation convergence results for solving the nonlinear bilevel programming problems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
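The abstract does not spell out the paper's smoothing; one standard choice, shown here purely as an assumed stand-in, replaces |x| with sqrt(x² + ε), which is differentiable everywhere and overestimates |x| by at most sqrt(ε):

```python
import math

def smooth_abs(x, eps):
    """Smooth, everywhere-differentiable surrogate for |x|.
    Satisfies 0 <= smooth_abs(x, eps) - |x| <= sqrt(eps)."""
    return math.sqrt(x * x + eps)

def exact_penalty(g_values, rho, eps):
    """Smoothed l1-style exact penalty for equality constraints
    g_i(x) = 0: rho * sum_i smooth_abs(g_i(x), eps)."""
    return rho * sum(smooth_abs(g, eps) for g in g_values)

eps = 1e-4
print(smooth_abs(0.0, eps))        # → 0.01 (the worst-case gap, sqrt(eps))
print(smooth_abs(3.0, eps) - 3.0)  # tiny for large |x|
```

Driving ε to zero recovers the nonsmooth ℓ1-exact penalty, which is the usual mechanism behind the ε-approximation convergence arguments the abstract describes.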
29. A high sparse response surface method based on combined bases for complex products optimization.
- Author
-
Li, Pu, Li, Haiyan, Huang, Yunbao, Yang, Senquan, Yang, Haitian, and Liu, Yuesheng
- Subjects
- *
RESPONSE surfaces (Statistics) , *MATHEMATICAL optimization , *ORTHOGONAL polynomials , *APPROXIMATION theory , *ACCURACY - Abstract
Highlights • A high sparse response surface method based on combined bases is proposed. • The sparsest solution is relaxed to the ℓp-norm (p = 1/2) minimum solution. • A cross-validation method is proposed to select the initial value. • The high sparse representation decreases the number of samplings and improves the accuracy of the response surface. Abstract Product optimization requires many simulations, which is often time-consuming. The sparse response surface, which is constructed over single orthogonal polynomial bases with sparse coefficients from a few samplings, is employed to reduce the number of simulations. However, it still requires many samplings for the response surface of complex products. In this paper, a High Sparse Response Surface (HSRS) method based on combined bases is proposed with the following main contributions: (1) compared with a single base, a base dictionary combining a variety of different base functions may construct a sparser response surface with fewer expressive bases, which reduces the number of samplings and improves the approximation accuracy; (2) the ℓp-norm (p = 1/2) minimum solution, computed by the Conjugate Gradient-FOCal Underdetermined System Solver (CG-FOCUSS) method, is used to approximate the sparsest solution through a cost versus coefficient-sparsity trade-off; and (3) cross-validation is employed to select a good initial value to obtain an approximately optimal solution, which reduces the influence of the initial value on the CG-FOCUSS result. Finally, HSRS is applied to three benchmark test functions and two engineering problems, and the results are compared with the single-base sparse response surface. The results show that (1) about 14.3% to 44.4% of sample points can be saved for HSRS to achieve the same accuracy as the single-base sparse response surface, and (2) the accuracy of HSRS with cross-validation can be increased by about 20.31% to 40.81%. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
30. Pan-GGF: A probabilistic method for pan-sharpening with gradient domain guided image filtering.
- Author
-
Zhuang, Peixian, Liu, Qingshan, and Ding, Xinghao
- Subjects
- *
IMAGE reconstruction , *MULTISPECTRAL imaging , *MATHEMATICAL optimization , *APPROXIMATION theory , *MATHEMATICAL variables - Abstract
Highlights • We propose a probabilistic pan-sharpening method. • We use GGIF to enforce better fusion of PAN and MS images. • We impose ℓ1-norm priors on the image and gradient errors of PAN and MS images. • We derive an efficient optimization scheme for the proposed objective function. Abstract In this paper, we develop a novel probabilistic pan-sharpening method with gradient domain guided image filtering (named Pan-GGF). We employ gradient domain guided image filtering (GGIF) to enforce effective spatial fusion of panchromatic and multispectral images, and the proposed scheme shows better spatial and spectral fusion than other fusion methods, such as projection substitution, detail injection, and weighted combination models. A maximum a posteriori (MAP) formulation is then presented, imposing ℓ1-norm priors on the errors between panchromatic and multispectral images in both the image and gradient domains, and the proposed objective function is addressed through an efficient optimization scheme that alternates between auxiliary-variable approximation and high-resolution multispectral image reconstruction. Numerous experiments demonstrate the satisfactory performance of the proposed method in spatial and spectral fusion, and the proposed method outperforms several classical and state-of-the-art pan-sharpening methods in both subjective results and objective assessments. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
31. Analysis, Modeling and Optimization of Equal Segment Based Approximate Adders.
- Author
-
Dutt, Sunil, Dash, Satyabrata, Nandi, Sukumar, and Trivedi, Gaurav
- Subjects
- *
ADDERS (Digital electronics) , *MATHEMATICAL optimization , *APPROXIMATION theory , *COMPUTER architecture , *PARAMETERS (Statistics) - Abstract
Over the past decade, several approximate adders have been proposed in the literature based on the design concept of the Equal Segment Adder (ESA). In this approach, an $N$-bit adder is segmented into several smaller, independent, equally sized accurate sub-adders. An $N$-bit ESA has two primary design parameters: (i) segment size ($k$), which represents the maximum length of carry propagation; and (ii) overlapping bits ($l$), which represents the minimum number of bits used in carry prediction, where $1 \leq k < N$ and $0 \leq l < k$. Based on the combinations of $k$ and $l$, an $N$-bit ESA has $N(N-1)/2$ possible configurations. In this paper, we analyse ESAs and propose analytical models to estimate the accuracy, delay, power and area of ESAs. The key features of the proposed analytical models are that: (i) they are generalized, i.e., they work for all possible configurations of an $N$-bit ESA; and (ii) they are superior to (i.e., estimate more accurately than) or on par with the existing analytical models. From the proposed analytical models, we observe that in an $N$-bit ESA, there exist multiple (more than one) configurations which exhibit similar accuracy. However, these configurations exhibit different delay, power and area. Therefore, for a given accuracy, the configurations which provide minimal delay, power and/or area need to be known a priori for efficient, intelligent and goal-oriented implementations of ESAs. In this regard, we present an optimization framework that exploits the proposed analytical models to find the optimal configurations of an $N$-bit ESA. Further, the accuracy of an ESA does not depend on the adder architecture used to implement it; however, its delay, power and area depend on it significantly. Consequently, the optimal configurations vary with the adder architecture used to implement the ESA. 
In order to cover a wide range of adders, we consider three types of adder architecture in our analysis: (i) Architectures having smaller area ($O(N)$ ); (ii) Architectures having smaller delay ($O(log_2N)$ ); and (iii) Architectures having in-between delay ($O(N/4)$ ) and area ($O(2N)$ ). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
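The segmentation scheme is concrete enough to model directly: cut the operands into k-bit sub-adders and let each one predict its carry-in from the l bits just below it. This is a generic equal-segment model in the spirit of the ESA family (assuming k divides N), not the authors' analytical models:

```python
def esa_add(a, b, n=8, k=4, l=2):
    """Equal-segment approximate adder: the n-bit operands are cut into
    independent k-bit sub-adders; each sub-adder (except the lowest)
    predicts its carry-in from the l overlap bits just below it.
    Assumes k divides n."""
    assert 1 <= k < n and 0 <= l < k
    mask = (1 << k) - 1
    result = 0
    for pos in range(0, n, k):
        lo = max(pos - l, 0)                         # include l overlap bits
        sa = (a >> lo) & ((1 << (k + pos - lo)) - 1)
        sb = (b >> lo) & ((1 << (k + pos - lo)) - 1)
        seg = ((sa + sb) >> (pos - lo)) & mask       # drop the predictor bits
        result |= seg << pos
    return result

print(esa_add(17, 17))  # → 34 (short carry chain: exact)
print(esa_add(63, 1))   # → 48 (carry born below the l-bit window: error, exact is 64)
```

Errors occur exactly when a carry is generated more than l bits below a segment boundary, which is why larger l trades delay for accuracy.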
32. Enhanced Morris method for global sensitivity analysis: good proxy of Sobol' index.
- Author
-
Feng, Kaixuan, Lu, Zhenzhou, and Yang, Caiqiong
- Subjects
- *
SENSITIVITY analysis , *STRUCTURAL optimization , *MATHEMATICAL optimization , *DISTRIBUTION (Probability theory) , *APPROXIMATION theory - Abstract
Global sensitivity analysis (GSA) aims at quantifying the effects of inputs on the output response globally. GSA is useful for identifying a few important inputs in a model with a large number of inputs, which is critical for structural design and optimization. The method of Sobol' and the Morris method are two popular GSA techniques, and they have been widely used in many areas of science and engineering. Several papers have proved that the Morris index is a good proxy for the Sobol' index. However, some of the quantitative relationships between the Morris index and the Sobol' index were established only for inputs with a standard uniform distribution. When an input does not follow the standard uniform distribution, some of these relationships are no longer valid. Therefore, an enhanced Morris method is developed as a better proxy of the Sobol' index for inputs with arbitrary distributions, and it does not increase the number of model evaluations compared with the original Morris method. Test examples show the performance of the approximation and its usefulness in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
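The original Morris screening that the paper enhances averages absolute elementary effects EE_i = (f(x + Δe_i) − f(x))/Δ over random base points; a minimal version for an invented test function (the paper's enhancement itself is not reproduced):

```python
import random

def morris_mu_star(f, dim, delta=0.1, trajectories=200, seed=0):
    """Original Morris screening: mean of absolute elementary effects
    EE_i = (f(x + delta * e_i) - f(x)) / delta over random base points
    drawn so that x + delta * e_i stays inside [0, 1]^dim."""
    rng = random.Random(seed)
    mu = [0.0] * dim
    for _ in range(trajectories):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        fx = f(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta
            mu[i] += abs((f(xp) - fx) / delta)
    return [m / trajectories for m in mu]

# Invented test function: the effect of x1 is exactly 2 everywhere,
# while the averaged effect of x2 is about 2*E[x2] + delta ≈ 1.
f = lambda x: 2.0 * x[0] + x[1] ** 2
print(morris_mu_star(f, dim=2))
```

Here the screening ranks x1 above x2, matching the analytical sensitivities; the paper's enhancement corrects this kind of ranking for inputs that are not standard-uniform.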
33. Scalable Approximations for Generalized Linear Problems.
- Author
-
Erdogdu, Murat A., Bayati, Mohsen, and Dicker, Lee H.
- Subjects
- *
MATHEMATICAL optimization , *APPROXIMATION theory , *REGRESSION analysis , *ITERATIVE methods (Mathematics) , *LEAST squares - Abstract
In stochastic optimization, the population risk is generally approximated by the empirical risk, which is in turn minimized by an iterative algorithm. However, in the large-scale setting, empirical risk minimization may be computationally restrictive. In this paper, we design an efficient algorithm to approximate the population risk minimizer in generalized linear problems such as binary classification with surrogate losses and generalized linear regression models. We focus on large-scale problems where the iterative minimization of the empirical risk is computationally intractable, i.e., the number of observations n is much larger than the dimension of the parameter p (n >> p >> 1). We show that under random sub-Gaussian design, the true minimizer of the population risk is approximately proportional to the corresponding ordinary least squares (OLS) estimator. Using this relation, we design an algorithm that achieves the same accuracy as the empirical risk minimizer through iterations that attain up to a quadratic convergence rate, and that are computationally cheaper than any batch optimization algorithm by at least a factor of O(p). We provide theoretical guarantees for our algorithm, and analyze the convergence behavior in terms of data dimensions. Finally, we demonstrate the performance of our algorithm on well-known classification and regression problems, through extensive numerical studies on large-scale datasets, and show that it achieves the highest performance compared to several other widely used optimization algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2019
34. Dynamic Approximation of JPEG Hardware.
- Author
-
Sharmin Snigdha, Farhana, Sengupta, Deepashree, Hu, Jiang, and Sapatnekar, Sachin S.
- Subjects
- *
JPEG (Image coding standard) , *IMAGE compression , *APPROXIMATION theory , *DISCRETE cosine transforms , *COMPUTER input-output equipment , *MATHEMATICAL optimization - Abstract
JPEG compression based on the discrete cosine transform is a key building block in low-power multimedia applications. Approximate computation techniques are used to exploit the error tolerance of JPEG. An image-dependent framework is proposed in this paper to design optimized approximate hardware with variable approximate bit-widths for a user-specified error budget. The proposed method can dynamically adjust the extent of approximation in the system depending on the pixel values of the input image, thus leveraging the inherent sparsity of certain images. This novel technique not only improves the power-delay product by $3.4\times$ over the base case, i.e., where the JPEG hardware is accurate, but also significantly outperforms the image-independent approximation case, which is based solely on the error tolerance of the JPEG algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
35. Conditional gradient type methods for composite nonlinear and stochastic optimization.
- Author
-
Ghadimi, Saeed
- Subjects
- *
STOCHASTIC programming , *STOCHASTIC analysis , *APPROXIMATION theory , *MATHEMATICAL optimization , *MATHEMATICS theorems - Abstract
In this paper, we present a conditional gradient type (CGT) method for solving a class of composite optimization problems where the objective function consists of a (weakly) smooth term and a (strongly) convex regularization term. While including a strongly convex term in the subproblems of the classical conditional gradient method improves its rate of convergence, it does not cost per iteration as much as general proximal type algorithms. More specifically, we present a unified analysis for the CGT method in the sense that it achieves the best known rate of convergence when the weakly smooth term is nonconvex and possesses (nearly) optimal complexity if it turns out to be convex. While implementation of the CGT method requires explicitly estimating problem parameters like the level of smoothness of the first term in the objective function, we also present a few variants of this method which relax such estimation. Unlike general proximal type parameter free methods, these variants of the CGT method do not require any additional effort for computing (sub)gradients of the objective function and/or solving extra subproblems at each iteration. We then generalize these methods under stochastic setting and present a few new complexity results. To the best of our knowledge, this is the first time that such complexity results are presented for solving stochastic weakly smooth nonconvex and (strongly) convex optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
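The classical conditional gradient step that the CGT method builds on solves a linear minimization over the feasible set and takes a diminishing step; here is a plain Frank-Wolfe iteration for a smooth quadratic over the probability simplex (an illustrative problem, not the paper's composite setting):

```python
def frank_wolfe_simplex(grad, x, steps=20000):
    """Classical conditional gradient over the probability simplex:
    the linear minimization oracle just picks the vertex (coordinate)
    with the smallest partial derivative."""
    for t in range(steps):
        g = grad(x)
        i = min(range(len(x)), key=lambda j: g[j])  # LMO: best vertex e_i
        gamma = 2.0 / (t + 2.0)                     # standard step size
        x = [(1 - gamma) * xj for xj in x]
        x[i] += gamma
    return x

# Minimize f(x) = 0.5 * ||x - b||^2 with b inside the simplex,
# so the optimum is x* = b (b is an illustrative target).
b = [0.3, 0.7]
grad = lambda x: [x[j] - b[j] for j in range(len(b))]
x = frank_wolfe_simplex(grad, [1.0, 0.0])
print(x)  # close to [0.3, 0.7]
```

Each iteration needs only a gradient and a linear oracle, no projection; adding a strongly convex term to the subproblem, as the CGT method does, is what improves the rate beyond the O(1/t) of this plain version.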
36. On bounding the Thompson metric by Schatten norms.
- Author
-
Snyder, David A. and Srivastava, Hari M.
- Subjects
- *
GEOMETRIC analysis , *APPROXIMATION theory , *FROBENIUS groups , *METRIC geometry , *MATHEMATICAL optimization - Abstract
The Thompson metric provides key geometric insights in the study of non-linear matrix equations and in many optimization problems. However, knowing that an approximate solution is within d units, in the Thompson metric, of the actual solution provides little insight into how good the approximation is as a matrix or vector approximation. That is, bounding the Thompson metric between an approximate and accurate solution to a problem does not provide obvious bounds either for the spectral or the Frobenius norm, both Schatten norms, of the difference between the approximation and accurate solution. This paper reports such an upper bound, namely that ‖X − Y‖_p ≤ 2^{1/p} (e^d − 1)/e^d · max(‖X‖_p, ‖Y‖_p), where ‖·‖_p denotes the Schatten p-norm and d denotes the Thompson metric between X and Y. Furthermore, a more geometric proof leads to a slightly better bound in the case of the Frobenius norm: ‖X − Y‖_2 ≤ (e^d − 1)/√(e^{2d} + 1) · √(‖X‖_2² + ‖Y‖_2²) ≤ 2^{1/2} (e^d − 1)/√(e^{2d} + 1) · max(‖X‖_2, ‖Y‖_2). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
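For diagonal positive-definite matrices, every quantity in the Schatten-norm bound ‖X − Y‖_p ≤ 2^{1/p} (e^d − 1)/e^d · max(‖X‖_p, ‖Y‖_p) is easy to compute, so the inequality can be sanity-checked numerically (the example matrices below are arbitrary):

```python
import math

def thompson_diag(x, y):
    """Thompson metric between positive-definite diagonal matrices,
    given as lists of positive diagonal entries."""
    return max(abs(math.log(xi / yi)) for xi, yi in zip(x, y))

def schatten_diag(x, p):
    """Schatten p-norm of a diagonal matrix (p-norm of its diagonal)."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

# Check the bound on a diagonal example.
X, Y, p = [1.0, 2.0], [2.0, 1.0], 2
d = thompson_diag(X, Y)  # = log 2
lhs = schatten_diag([xi - yi for xi, yi in zip(X, Y)], p)
rhs = 2 ** (1 / p) * (math.exp(d) - 1) / math.exp(d) * max(
    schatten_diag(X, p), schatten_diag(Y, p))
print(lhs, rhs)  # lhs ≈ 1.414 <= rhs ≈ 1.581
```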
37. An improved adaptive detail enhancement algorithm for infrared images based on guided image filter.
- Author
-
Zhou, Bo, Luo, Yin, Yang, Mei, Chen, Baoguo, Wang, Mingchang, Peng, Li, and Liang, Kun
- Subjects
- *
MATHEMATICAL optimization , *IMAGING systems , *HISTOGRAMS , *ALGORITHMS , *APPROXIMATION theory - Abstract
Detail enhancement algorithms are important for raw infrared images to improve their overall contrast and highlight the important information in them. To solve the problems of current algorithms such as GF&DDE, an improved adaptive detail enhancement algorithm for infrared images based on a guided image filter is proposed in this paper. It chooses the threshold for the base-layer image adaptively according to the histogram statistics and adjusts the mapping range of the histograms according to the dynamic range of the image. In addition, the detail layer is handled by a simpler adaptive gain control method to achieve a good detail enhancement effect. Finally, the base layer and the detail layer are merged according to the approximate proportion of the background and the details. Experimental results show that the proposed algorithm can adaptively and efficiently enhance images with different dynamic ranges in different scenarios. Moreover, the algorithm has high real-time performance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
38. A NEW NUMERICAL APPROACH TO INVERSE TRANSPORT EQUATION WITH ERROR ANALYSIS.
- Author
-
QIN LI, RUIWEN SHU, and LI WANG
- Subjects
- *
MATHEMATICAL analysis , *MATHEMATICAL models , *APPROXIMATION theory , *MATHEMATICAL optimization , *OPTICAL properties - Abstract
The inverse radiative transfer problem finds broad applications in medical imaging, atmospheric science, astronomy, and many other areas. This problem intends to recover optical properties, namely the absorption and scattering coefficients of the media, through source-measurement pairs. A typical computational approach is to formulate the inverse problem as a PDE-constrained optimization, with the minimizer being the to-be-recovered coefficients. The method is efficient in practice, but it lacks analytical justification: there is no guarantee of the existence or uniqueness of the minimizer, and the error is hard to quantify. In this paper, we provide a different algorithm by leveraging ideas from singular decomposition analysis. Our approach is to decompose the measurements into three components, two of which encode the information of the two coefficients, respectively. We then split the optimization problem into two subproblems and use those two components to recover the absorption and scattering coefficients separately. In this regard, we prove the well-posedness of the new optimization, and the error can be quantified with better precision. In the end, we incorporate the diffusive scaling and show that the error is harder to control in the diffusive limit. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
39. Sparse signal recovery via non-convex optimization and overcomplete dictionaries.
- Author
-
Huang, Wei, Liu, Lu, Yang, Zhuo, and Zhao, Yao
- Subjects
- *
CONVEX domains , *MATHEMATICAL optimization , *MATRICES (Mathematics) , *PROBLEM solving , *LINEAR operators , *APPROXIMATION theory - Abstract
In this paper, we address the problem of recovering signals from undersampled data where such signals are not sparse in an orthonormal basis, but in an overcomplete dictionary. We show that if the combined matrix obeys a certain restricted isometry property and if the signal is sufficiently sparse, the reconstruction that relies on ℓp minimization with 0 < p < 1 is exact. In addition, under a mild assumption about the dictionary D, we use a similar method [H. Rauhut et al., Compressed sensing and redundant dictionaries, IEEE Trans. Inf. Theory 54(5) (2008) 2210–2219] to derive an estimate of the restricted isometry constant of the composed matrix AD. Finally, the performance of the ℓp minimization is verified by some numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
40. A Linear Model for Dynamic Generation Expansion Planning Considering Loss of Load Probability.
- Author
-
Rashidaee, Seyyed A., Amraee, Turaj, and Fotuhi-Firuzabad, Mahmud
- Subjects
- *
ELECTRIC power production , *MATHEMATICAL optimization , *NONLINEAR analysis , *APPROXIMATION theory , *ENERGY consumption - Abstract
Computation of the Loss of Load Probability (LOLP) is a challenge in the Generation Expansion Planning (GEP) problem. This paper presents a dynamic GEP model considering LOLP as a reliability criterion. The objective function of the proposed dynamic GEP model includes the discounted total investment, operation, and maintenance costs. The proposed LOLP-constrained GEP problem is formulated as a mixed integer nonlinear programming (MINLP) problem. The MINLP formulation of the GEP problem is then converted to a mixed integer linear programming (MIP) problem by applying several approximations to the LOLP constraint. The utilized approximations are divided into two clusters for low- and high-order outages. A test case containing 12 types of installed and 5 types of candidate plants with a 14-year planning horizon is used to validate the efficiency of the proposed dynamic GEP problem. The developed MIP-based GEP model is solved using the CPLEX solver, and the obtained results are compared with the conventional LOLP-constrained GEP model. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
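LOLP itself has a classical computation that approximation schemes like the paper's target: convolve the units' forced outage rates into a capacity outage probability table and sum the probability mass where available capacity falls short of the load. A minimal version with invented unit data:

```python
def lolp(units, load):
    """Loss-of-load probability from a capacity outage table.
    `units` is a list of (capacity_mw, forced_outage_rate) pairs."""
    # dist maps available capacity -> probability; start with 0 MW certain.
    dist = {0.0: 1.0}
    for cap, for_rate in units:
        new = {}
        for avail, p in dist.items():
            # Unit is up with probability 1 - FOR, down with probability FOR.
            new[avail + cap] = new.get(avail + cap, 0.0) + p * (1 - for_rate)
            new[avail] = new.get(avail, 0.0) + p * for_rate
        dist = new
    return sum(p for avail, p in dist.items() if avail < load)

# Two 100 MW units, each unavailable 10% of the time, serving 150 MW:
# the load is met only when both are up, so LOLP = 1 - 0.9 * 0.9 = 0.19.
print(lolp([(100.0, 0.1), (100.0, 0.1)], load=150.0))  # ≈ 0.19
```

The table's size grows with the number of distinct outage states, which is why MIP formulations approximate the LOLP constraint rather than enumerating it exactly.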
41. Top-k Critical Vertices Query on Shortest Path.
- Author
-
Ma, Jing, Yao, Bin, Gao, Xiaofeng, Shen, Yanyan, and Guo, Minyi
- Subjects
- *
GEOMETRIC vertices , *GRAPH theory , *MATHEMATICAL optimization , *APPROXIMATION theory , *SOCIAL networks - Abstract
Shortest path query is one of the most fundamental and classic problems in graph analytics, which returns the complete shortest path between any two vertices. However, in many real-life scenarios, only critical vertices on the shortest path are desirable, and it is unnecessary to search for the complete path. This paper investigates the shortest path sketch by defining a top-$k$ critical vertices ($k$CV) query on the shortest path. Given a source vertex $s$ and a target vertex $t$ in a graph, a $k$CV query returns the top-$k$ significant vertices on the shortest path $SP(s,t)$. The significance of the vertices can be predefined. The key strategy for seeking the sketch is to apply an off-line preprocessed distance oracle to accelerate on-line real-time queries. This allows us to omit unnecessary vertices and obtain the most representative sketch of the shortest path directly. We further explore a series of methods and optimizations to answer $k$CV queries on both centralized and distributed platforms, using exact and approximate approaches, respectively. We evaluate our methods in terms of time and space complexity and approximation quality. Experiments on large-scale real-world networks validate that our algorithms achieve high efficiency and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
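A naive reference implementation of the kCV query simply materializes SP(s, t) with Dijkstra and ranks its vertices by the predefined significance; the paper's contribution, the off-line distance oracle that avoids building the full path, is not sketched here (the graph and significance scores are invented):

```python
import heapq

def shortest_path(graph, s, t):
    """Plain Dijkstra; graph is {u: [(v, weight), ...]}. Returns the
    vertex list of the shortest path from s to t."""
    dist, prev, seen = {s: 0.0}, {}, set()
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == t:
            break
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path, u = [t], t
    while u != s:
        u = prev[u]
        path.append(u)
    return path[::-1]

def top_k_critical(graph, significance, s, t, k):
    """kCV emulated the slow way: build SP(s, t), then keep the k
    vertices with the highest predefined significance."""
    path = shortest_path(graph, s, t)
    return sorted(path, key=lambda v: -significance[v])[:k]

graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 1), ("t", 5)], "b": [("t", 1)]}
significance = {"s": 0, "a": 3, "b": 7, "t": 1}
print(top_k_critical(graph, significance, "s", "t", k=2))  # → ['b', 'a']
```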
42. Optimal group route query: Finding itinerary for group of users in spatial databases.
- Author
-
Fan, Liyue, Bonomi, Luca, Shahabi, Cyrus, and Xiong, Li
- Subjects
- *
DATABASES , *LOCATION-based services , *QUERY (Information retrieval system) , *MATHEMATICAL optimization , *APPROXIMATION theory - Abstract
The increasing popularity of location-based applications creates new opportunities for users to travel together. In this paper, we study a novel spatio-social optimization problem, i.e., Optimal Group Route, for multi-user itinerary planning. With our problem formulation, users can individually specify sources and destinations, preferences on the Point-of-Interest (POI) categories, as well as distance constraints. The goal is to find an itinerary that can be traversed by all the users while maximizing the group's preference for the POI categories in the itinerary. Our work advances existing group trip planning studies by maximizing the group's social experience. To this end, individual preferences for POI categories are aggregated by considering the agreement and disagreement among group members. Furthermore, planning a multi-user itinerary on large road networks is computationally challenging. We propose two efficient greedy algorithms with bounded approximation ratios, one exact solution which computes the optimal itinerary by exploring a limited number of paths in the road network, and a scaled approximation algorithm to speed up the dynamic programming employed by the exact solution. We conduct extensive empirical evaluations on two real-world road network/POI datasets, and our results confirm the effectiveness and efficiency of our solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
43. A real-time interpolation strategy for transition tool path with C2 and G2 continuity.
- Author
-
Wang, Hui, Wu, Jianhua, Liu, Chao, and Xiong, Zhenhua
- Subjects
- *
INTERPOLATION , *ALGORITHMS , *MATHEMATICAL optimization , *APPROXIMATION theory , *NUMERICAL analysis - Abstract
A typical interpolation strategy for line segments consists of a transition scheme, a look-ahead ACC/DEC scheduling, and an interpolation algorithm. Of these three parts, the main computation occurs in the first and second. Some research has been carried out to decrease this computation in the previous literature, but these methods occupy a lot of computing resources in the optimization process during the calculation of transition curve parameters and feed rates. Consequently, the computational efficiency of the interpolation strategy is greatly reduced. To deal with this issue, a real-time interpolation strategy is proposed in this paper. In the transition scheme, a Bézier curve is utilized to smooth the line segments. Based on the relationship among the approximation error, the approximation radius, and the transition curve, the curve can be directly generated when the approximation error is given. In the ACC/DEC scheduling, a 3-segment feed rate profile with jerk continuity is constructed. Meanwhile, a look-ahead planning based on the Backward Scanning and Forward Revision (BSFR) algorithm is utilized to eliminate redundant computation. Compared with Zhao’s and Shi’s strategies, the proposed strategy has the merits of C2 and G2 continuity for the tool path, jerk continuity for the tool movement, and distinguished real-time performance for interpolation. Experiments on a 3D pentagram and a 2D butterfly are carried out with different strategies, and their results demonstrate that the interpolation efficiency can be greatly improved with the proposed strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
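The idea of generating a transition curve directly from a given approximation error can be illustrated with a generic quadratic-Bézier corner-smoothing sketch (not the paper's exact C2/G2 scheme): for control points at distance `d` from the corner along each segment, the curve's maximum deviation from the corner is `d*|u0+u2|/4`, so `d` follows directly from the error bound:

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def unit(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def smooth_corner(p0, p1, p2, eps):
    """Quadratic Bezier transition at corner p1 between segments p0-p1 and p1-p2.

    Returns the transition endpoints on each segment, chosen so the curve's
    maximum deviation from the corner equals eps. Illustrative sketch only.
    """
    u0 = unit(sub(p0, p1))  # unit vector from corner back toward p0
    u2 = unit(sub(p2, p1))  # unit vector from corner toward p2
    # Max deviation occurs at B(0.5) = (A + 2*p1 + C)/4, at distance
    # d*|u0 + u2|/4 from p1; solve for d given the error bound eps.
    s = math.hypot(u0[0] + u2[0], u0[1] + u2[1])
    d = 4.0 * eps / s
    a = (p1[0] + d * u0[0], p1[1] + d * u0[1])
    c = (p1[0] + d * u2[0], p1[1] + d * u2[1])
    return a, c
```

For a right-angle corner with `eps = 0.1`, the resulting curve's midpoint sits exactly 0.1 from the corner, so no iterative error-checking loop is needed, which is the computational advantage the abstract points to.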
44. Distributionally Robust Chance-Constrained Approximate AC-OPF With Wasserstein Metric.
- Author
-
Duan, Chao, Fang, Wanliang, Jiang, Lin, Yao, Li, and Liu, Jun
- Subjects
- *
ELECTRIC power systems , *RENEWABLE energy sources , *APPROXIMATION theory , *MATHEMATICAL optimization , *ELECTRICAL engineering - Abstract
Chance constrained optimal power flow (OPF) has been recognized as a promising framework to manage the risk from variable renewable energy (VRE). In the presence of VRE uncertainties, this paper discusses a distributionally robust chance constrained approximate ac-OPF. The power flow model employed in the proposed OPF formulation combines an exact ac power flow model at the nominal operation point and an approximate linear power flow model to reflect the system response under uncertainties. The ambiguity set employed in the distributionally robust formulation is the Wasserstein ball centered at the empirical distribution. The proposed OPF model minimizes the expectation of the quadratic cost function w.r.t. the worst-case probability distribution and guarantees that the chance constraints are satisfied for any distribution in the ambiguity set. The whole method is data-driven in the sense that the ambiguity set is constructed from historical data without any presumption on the type of the probability distribution, and more data leads to a smaller ambiguity set and a less conservative strategy. Moreover, special problem structures of the proposed formulation are exploited to develop an efficient and scalable solution approach. Case studies are carried out on the IEEE 14- and 118-bus systems to show the accuracy and necessity of the approximate ac model and the attractive features of the distributionally robust optimization approach compared with other methods to deal with uncertainties. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
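The Wasserstein-ambiguity idea can be illustrated with a well-known special case: for an L-Lipschitz loss, the worst-case expectation over a 1-Wasserstein ball of radius `r` around the empirical distribution is bounded by the empirical mean plus `L*r`. This is a generic sketch of the ambiguity-set mechanism, not the paper's OPF formulation:

```python
def worst_case_expectation(samples, loss, lipschitz, radius):
    """Upper bound on sup over the Wasserstein ball of E_Q[loss].

    For an L-Lipschitz loss, transporting empirical mass a total
    (average) distance of at most `radius` can raise the expected
    loss by at most L * radius above the empirical mean.
    """
    empirical = sum(loss(s) for s in samples) / len(samples)
    return empirical + lipschitz * radius

# Linear loss f(x) = 2x has Lipschitz constant 2; empirical mean is 4.0.
print(worst_case_expectation([1.0, 3.0], lambda x: 2 * x, 2.0, 0.5))  # 5.0
```

Note how the bound collapses to the plain empirical mean as `radius` shrinks, which mirrors the abstract's point that more data (a smaller ambiguity set) yields a less conservative strategy.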
45. A new isogeometric topology optimization using moving morphable components based on R-functions and collocation schemes.
- Author
-
Xie, Xianda, Wang, Shuting, Xu, Manman, and Wang, Yingjun
- Subjects
- *
TOPOLOGY , *MATHEMATICAL optimization , *SUBSTITUTE products , *FINITE element method , *APPROXIMATION theory - Abstract
This paper presents a new isogeometric topology optimization (TO) method based on moving morphable components (MMC), where the R-functions are used to represent the topology description functions (TDF) to overcome the C1 discontinuity problem of the overlapping regions of components. Three new ersatz material models based on uniform, Gauss and Greville abscissae collocation schemes are used to represent both the Young’s modulus of material and the density field based on the Heaviside values of collocation points. Three benchmark examples are tested to evaluate the proposed method, where the collocation schemes are compared as well as the difference between isogeometric analysis (IGA) and finite element method (FEM). The results show that the convergence rate using R-functions has been improved in a range of 17%–60% for different cases in both FEM and IGA frameworks, and the Greville collocation scheme outperforms the other two schemes in the MMC-based TO. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
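The C1-discontinuity issue the abstract mentions arises because merging component TDFs with a plain `max` has a gradient kink wherever components overlap. A standard R-disjunction (the R0 union) gives a smooth analogue with the same sign behavior; a minimal sketch, not the paper's exact construction:

```python
import math

def r_union(f1, f2):
    """R0-disjunction: smooth analogue of max(f1, f2).

    Positive iff at least one argument is positive (i.e., the point is
    covered by at least one component), but differentiable across the
    overlap region where a plain max is not.
    """
    return f1 + f2 + math.sqrt(f1 * f1 + f2 * f2)

# Sign agrees with max: covered by component 1 only -> positive.
print(r_union(1.0, -2.0) > 0)   # True
# Outside both components -> negative.
print(r_union(-1.0, -2.0) > 0)  # False
```

The smoothness of the merged TDF is what makes gradient-based updates of the component parameters well behaved in the MMC framework.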
46. Distributed MMAS for weapon target assignment based on Spark framework.
- Author
-
Cao, Ming and Fang, Weiguo
- Subjects
- *
VECTORS (Calculus) , *VIRTUAL machine systems , *MATHEMATICAL optimization , *APPROXIMATION theory , *APPROXIMATE solutions (Logic) - Abstract
Weapon target allocation (WTA) is a classic NP-complete problem in the field of military operations research. In this paper, we address multi-constraint WTA problems in a multilayer defense scenario. To solve large-scale WTA problems effectively, a distributed MAX-MIN Ant System (MMAS) algorithm based on the distributed computing framework Spark was developed and improved. An experiment environment comprising virtual machines was built for implementing the distributed MMAS. First, a small-scale WTA example, whose theoretical optimal solution can be obtained by existing optimization software, was taken as a benchmark problem to assess the performance of distributed MMAS. The result shows that it can find high-quality and robust approximate solutions. Then a large-scale WTA problem was constructed and used to further evaluate the performance of distributed MMAS in the experiment environment. The result shows that the distributed MMAS can also achieve high-quality approximate solutions with high robustness and computational efficiency even for large-scale WTA problems. Our study demonstrates that incorporating heuristic optimization algorithms such as Ant Colony Optimization into a distributed computing framework is a promising approach for solving large-scale iteration-dependent optimization problems like WTA. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
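The pheromone rule that distinguishes MAX-MIN Ant System from the basic Ant System can be sketched sequentially as follows: evaporate all trails, deposit only on the best solution's edges, then clamp every trail into `[tau_min, tau_max]`. This is a generic single-machine sketch; the paper's Spark variant distributes solution construction, not this update:

```python
def mmas_update(tau, best_solution, evaporation, deposit, tau_min, tau_max):
    """One MAX-MIN Ant System pheromone update (sequential sketch).

    tau: dict mapping (weapon, target) edges to pheromone levels.
    best_solution: edges of the iteration-best assignment.
    """
    for edge in tau:                       # global evaporation
        tau[edge] *= (1.0 - evaporation)
    for edge in best_solution:             # deposit on best solution only
        tau[edge] = tau.get(edge, 0.0) + deposit
    for edge in tau:                       # MMAS clamping step
        tau[edge] = min(tau_max, max(tau_min, tau[edge]))
    return tau

tau = {('w1', 't1'): 1.0, ('w2', 't2'): 1.0}
mmas_update(tau, [('w1', 't1')], evaporation=0.1, deposit=0.5,
            tau_min=0.95, tau_max=1.3)
print(tau)  # {('w1', 't1'): 1.3, ('w2', 't2'): 0.95}
```

The lower bound `tau_min` keeps unused weapon-target pairs from being starved of pheromone, preserving exploration, while `tau_max` prevents premature convergence on early best solutions.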
47. Optimal power diagrams via function approximation.
- Author
-
Xiao, Yanyang, Chen, Zhonggui, Cao, Juan, Zhang, Yongjie Jessica, and Wang, Cheng
- Subjects
- *
APPROXIMATION theory , *MONTE Carlo method , *CENTROIDAL Voronoi tessellations , *ANISOTROPY , *MATHEMATICAL optimization - Abstract
In this paper, we present a novel method for generating cell complexes with anisotropy conforming to the Hessian of an arbitrary given function. This is done by variationally optimizing the discontinuous piecewise linear approximation of the given function over power diagrams. The resulting cell complexes corresponding to the approximations are referred to as Optimal Power Diagrams (OPD). A hybrid optimization technique, coupling a modified Monte Carlo method with a local search strategy, is tailored for effectively solving the specific optimization task. In contrast to the Optimal Voronoi Tessellation (OVT) method (Budninskiy et al., 2016), our OPD method does not restrict the target functions to be convex, providing more diverse classes of tessellations of the domain. Furthermore, our OPD method generally yields smaller approximation errors than the OVT method, which uses underlaid approximants. We conduct several experiments to demonstrate the efficacy of our optimization algorithm in finding good local minima and generating high-quality anisotropic polytopal meshes. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
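The power diagram underlying the OPD method generalizes the Voronoi diagram by giving each site a weight: a point belongs to the cell of the site minimizing the power distance `||x - s_i||^2 - w_i`. A minimal sketch of that cell-assignment rule (the diagram itself, not the paper's variational optimization):

```python
def power_cell_owner(x, sites, weights):
    """Return the index of the power-diagram cell containing point x.

    With all weights equal this reduces to ordinary Voronoi assignment;
    raising a site's weight enlarges its cell.
    """
    def power(i):
        sx, sy = sites[i]
        return (x[0] - sx) ** 2 + (x[1] - sy) ** 2 - weights[i]
    return min(range(len(sites)), key=power)

sites = [(0.0, 0.0), (2.0, 0.0)]
# Equal weights: the Voronoi bisector sits at x = 1.
print(power_cell_owner((0.9, 0.0), sites, [0.0, 0.0]))  # 0
# Raising site 1's weight pushes its cell past the midpoint.
print(power_cell_owner((0.9, 0.0), sites, [0.0, 1.0]))  # 1
```

The extra per-cell weight is exactly the added degree of freedom that lets OPD fit non-convex target functions where OVT cannot.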
48. Separations and Optimality of Constrained Multiobjective Optimization via Improvement Sets.
- Author
-
Chen, Jiawei, Huang, La, and Li, Shengjie
- Subjects
- *
MATHEMATICAL optimization , *APPROXIMATION theory , *MATHEMATICAL functions , *NONLINEAR equations , *SET theory - Abstract
In this paper, we investigate the separations and optimality conditions for the optimal solution defined by the improvement set of a constrained multiobjective optimization problem. We introduce a vector-valued regular weak separation function and a scalar weak separation function via a nonlinear scalarization function defined in terms of an improvement set. The nonlinear separation between the image of the multiobjective optimization problem and an improvement set in the image space is established by the scalar weak separation function. Saddle point type optimality conditions for the optimal solution of the multiobjective optimization problem are established, respectively, by the nonlinear and linear separation methods. We also obtain the relationships between the optimal solution and approximate efficient solution of the multiobjective optimization problem. Finally, sufficient and necessary conditions for the (regular) linear separation between the approximate image of the multiobjective optimization problem and a convex cone are also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
49. When are static and adjustable robust optimization problems with constraint-wise uncertainty equivalent?
- Author
-
Marandi, Ahmadreza and den Hertog, Dick
- Subjects
- *
MATHEMATICAL optimization , *APPROXIMATION theory , *QUADRATIC programming , *QUADRATIC equations , *MATHEMATICAL functions - Abstract
Adjustable robust optimization (ARO) generally produces better worst-case solutions than static robust optimization (RO). However, ARO is computationally more difficult than RO. In this paper, we provide conditions under which the worst-case objective values of ARO and RO problems are equal. We prove that if the uncertainty is constraint-wise, the problem is convex with respect to the adjustable variables and concave with respect to the uncertain parameters, the adjustable variables lie in a convex and compact set, and the uncertainty set is convex and compact, then robust solutions are also optimal for the corresponding ARO problem. Furthermore, we prove that if some of the uncertain parameters are constraint-wise and the rest are not, then under a similar set of assumptions there is an optimal decision rule for the ARO problem that does not depend on the constraint-wise uncertain parameters. Also, we show for a class of problems that using affine decision rules that depend on all of the uncertain parameters yields the same optimal objective value as when the rules depend solely on the non-constraint-wise uncertain parameters. Finally, we illustrate the usefulness of these results by applying them to convex quadratic and conic quadratic problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
50. Approximations and solution estimates in optimization.
- Author
-
Royset, Johannes O.
- Subjects
- *
MATHEMATICAL optimization , *APPROXIMATION theory , *CONSTRAINED optimization , *CONTINUOUS functions , *METRIC spaces , *STOCHASTIC convergence - Abstract
Approximation is central to many optimization problems and the supporting theory provides insight as well as foundation for algorithms. In this paper, we lay out a broad framework for quantifying approximations by viewing finite- and infinite-dimensional constrained minimization problems as instances of extended real-valued lower semicontinuous functions defined on a general metric space. Since the Attouch-Wets distance between such functions quantifies epi-convergence, we are able to obtain estimates of optimal solutions and optimal values through bounds of that distance. In particular, we show that near-optimal and near-feasible solutions are effectively Lipschitz continuous with modulus one in this distance. Under additional assumptions on the underlying metric space, we construct approximating functions involving only a finite number of parameters that are still close to an arbitrary extended real-valued lower semicontinuous function. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF