103 results
Search Results
2. High-Dimensional Gaussian Sampling: A Review and a Unifying Approach Based on a Stochastic Proximal Point Algorithm.
- Author
-
Vono, Maxime, Dobigeon, Nicolas, and Chainais, Pierre
- Subjects
MARKOV chain Monte Carlo ,NUMERICAL solutions for linear algebra ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
Efficient sampling from a high-dimensional Gaussian distribution is an old but high-stakes issue. Vanilla Cholesky samplers imply a computational cost and memory requirements that can rapidly become prohibitive in high dimensions. To tackle these issues, multiple methods have been proposed from different communities ranging from iterative numerical linear algebra to Markov chain Monte Carlo (MCMC) approaches. Surprisingly, no complete review and comparison of these methods has been conducted. This paper aims to review all these approaches by pointing out their differences, close relations, benefits, and limitations. In addition to reviewing the state of the art, this paper proposes a unifying Gaussian simulation framework by deriving a stochastic counterpart of the celebrated proximal point algorithm in optimization. This framework offers a novel and unifying revisiting of most of the existing MCMC approaches while also extending them. Guidelines for choosing the appropriate Gaussian simulation method for a given sampling problem in high dimensions are proposed and illustrated with numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
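To make the abstract's starting point concrete, here is a minimal sketch (Python/NumPy, not the authors' code) of the vanilla Cholesky sampler whose cost motivates the review; the function name and the tridiagonal example precision matrix are illustrative assumptions.

```python
import numpy as np

def cholesky_sampler(mu, Q, rng=None):
    """Draw one sample from N(mu, Q^{-1}) given a precision matrix Q.

    The 'vanilla' Cholesky approach: O(d^3) time and O(d^2) memory for a
    dense d-by-d factorization, which is what becomes prohibitive in high
    dimension.
    """
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(Q)          # Q = L L^T
    z = rng.standard_normal(mu.shape)  # z ~ N(0, I)
    x = np.linalg.solve(L.T, z)        # x = L^{-T} z has covariance Q^{-1}
    return mu + x

# Example: a 1000-dimensional Gaussian with a tridiagonal precision matrix.
d = 1000
Q = np.eye(d) + 0.4 * (np.eye(d, k=1) + np.eye(d, k=-1))
sample = cholesky_sampler(np.zeros(d), Q)
```

Every draw pays for a full factorization of Q; the iterative linear-algebra and MCMC samplers surveyed in the paper are, in essence, different ways of avoiding that step.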
3. RIEMANNIAN CONJUGATE GRADIENT METHODS: GENERAL FRAMEWORK AND SPECIFIC ALGORITHMS WITH CONVERGENCE ANALYSES.
- Author
-
HIROYUKI SATO
- Subjects
RIEMANNIAN manifolds ,CONJUGATE gradient methods ,EUCLIDEAN algorithm ,ALGORITHMS ,MATHEMATICAL optimization - Abstract
Conjugate gradient methods are important first-order optimization algorithms both in Euclidean spaces and on Riemannian manifolds. However, while various types of conjugate gradient methods have been studied in Euclidean spaces, there are relatively fewer studies for those on Riemannian manifolds (i.e., Riemannian conjugate gradient methods). This paper proposes a novel general framework that unifies existing Riemannian conjugate gradient methods such as the ones that utilize a vector transport or inverse retraction. The proposed framework also develops other methods that have not been covered in previous studies. Furthermore, conditions for the convergence of a class of algorithms in the proposed framework are clarified. Moreover, the global convergence properties of several specific types of algorithms are extensively analyzed. The analysis provides the theoretical results for some algorithms in a more general setting than the existing studies and new developments for other algorithms. Numerical experiments are performed to confirm the validity of the theoretical results. The experimental results are used to compare the performances of several specific algorithms in the proposed framework. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. GENERALIZING THE OPTIMIZED GRADIENT METHOD FOR SMOOTH CONVEX MINIMIZATION.
- Author
-
DONGHWAN KIM and FESSLER, JEFFREY A.
- Subjects
CONVEX functions ,MATHEMATICAL bounds ,SMOOTHNESS of functions ,ALGORITHMS ,MATHEMATICAL optimization - Abstract
This paper generalizes the optimized gradient method (OGM) [Y. Drori and M. Teboulle, Math. Program., 145 (2014), pp. 451-482], [D. Kim and J. A. Fessler, Math. Program., 159 (2016), pp. 81-107], [D. Kim and J. A. Fessler, J. Optim. Theory Appl., 172 (2017), pp. 187-205] that achieves the optimal worst-case cost function bound of first-order methods for smooth convex minimization [Y. Drori, J. Complexity, 39 (2017), pp. 1-16]. Specifically, this paper studies a generalized formulation of OGM and analyzes its worst-case rates in terms of both the function value and the norm of the function gradient. This paper also develops a new algorithm called OGM-OG that is in the generalized family of OGM and that has the best known analytical worst-case bound with rate O(1/N^1.5) on the decrease of the gradient norm among fixed-step first-order methods. This paper also proves that Nesterov's fast gradient method [Y. Nesterov, Dokl. Akad. Nauk. USSR, 269 (1983), pp. 543-547], [Y. Nesterov, Math. Program., 103 (2005), pp. 127-152] has an O(1/N^1.5) worst-case gradient norm rate but with constant larger than OGM-OG. The proof is based on the worst-case analysis called the Performance Estimation Problem in [Y. Drori and M. Teboulle, Math. Program., 145 (2014), pp. 451-482]. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
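For orientation, the OGM iteration being generalized can be written, in one common presentation (assuming an L-smooth convex f and a fixed iteration budget N; the notation is ours, not the paper's), as

```latex
\theta_0 = 1, \qquad
y_{k+1} = x_k - \tfrac{1}{L}\nabla f(x_k), \qquad
\theta_{k+1} =
\begin{cases}
\frac{1+\sqrt{1+4\theta_k^2}}{2}, & k \le N-2,\\
\frac{1+\sqrt{1+8\theta_k^2}}{2}, & k = N-1,
\end{cases}
\\[4pt]
x_{k+1} = y_{k+1}
  + \frac{\theta_k - 1}{\theta_{k+1}}\,(y_{k+1} - y_k)
  + \frac{\theta_k}{\theta_{k+1}}\,(y_{k+1} - x_k).
```

The generalized formulation studied in the paper varies these momentum coefficients; per the abstract, OGM-OG is the member of the family tuned for the gradient-norm criterion.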
5. DISTRIBUTED SUBGRADIENT-FREE STOCHASTIC OPTIMIZATION ALGORITHM FOR NONSMOOTH CONVEX FUNCTIONS OVER TIME-VARYING NETWORKS.
- Author
-
YINGHUI WANG, WENXIAO ZHAO, YIGUANG HONG, and MOHSEN ZAMANI
- Subjects
NONSMOOTH optimization ,TIME-varying networks ,CONVEX functions ,DISTRIBUTED algorithms ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
In this paper we consider a distributed stochastic optimization problem without gradient/subgradient information for local objective functions and subject to local convex constraints. Objective functions may be nonsmooth and observed with stochastic noises, and the network for the distributed design is time-varying. By adding stochastic dithers to local objective functions and constructing randomized differences motivated by the Kiefer–Wolfowitz algorithm, we propose a distributed subgradient-free algorithm for finding the global minimizer with local observations. Moreover, we prove that the consensus of estimates and global minimization can be achieved with probability one over the time-varying network, and we obtain the convergence rate of the mean average of estimates as well. Finally, we give numerical examples to illustrate the performance of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
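A hedged sketch (Python/NumPy) of a two-point randomized-difference estimator in the spirit of the Kiefer–Wolfowitz idea the abstract invokes; the paper's actual construction adds stochastic dithers, local constraint sets, and consensus over a time-varying network, none of which appear in this toy.

```python
import numpy as np

def randomized_difference_gradient(f, x, c, rng=None):
    """Two-point randomized-difference estimate of a (sub)gradient of f at x.

    delta is a Rademacher (+/-1) perturbation and c > 0 the difference
    stepsize; only two (possibly noisy) function evaluations are needed,
    and no gradient/subgradient oracle.
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    # With +/-1 entries, multiplying by delta equals dividing by it.
    return (f(x + c * delta) - f(x - c * delta)) / (2.0 * c) * delta

# Example on a nonsmooth convex function.
f = lambda x: np.abs(x).sum()
g = randomized_difference_gradient(f, np.array([1.0, -2.0]), c=1e-3)
```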
6. STOCHASTIC MULTILEVEL COMPOSITION OPTIMIZATION ALGORITHMS WITH LEVEL-INDEPENDENT CONVERGENCE RATES.
- Author
-
BALASUBRAMANIAN, KRISHNAKUMAR, GHADIMI, SAEED, and NGUYEN, ANTHONY
- Subjects
MATHEMATICAL optimization ,PROBLEM solving ,ALGORITHMS ,MOVING average process - Abstract
In this paper, we study smooth stochastic multilevel composition optimization problems, where the objective function is a nested composition of T functions. We assume access to noisy evaluations of the functions and their gradients, through a stochastic first-order oracle. For solving this class of problems, we propose two algorithms using moving-average stochastic estimates, and analyze their convergence to an ε-stationary point of the problem. We show that the first algorithm, which is a generalization of [S. Ghadimi, A. Ruszczynski, and M. Wang, SIAM J. Optim., 30 (2020), pp. 960-979] to the T level case, can achieve a sample complexity of O_T(1/ε^6) by using minibatches of samples in each iteration, where O_T hides constants that depend on T. By modifying this algorithm using linearized stochastic estimates of the function values, we improve the sample complexity to O_T(1/ε^4). This modification not only removes the requirement of having a minibatch of samples in each iteration, but also makes the algorithm parameter-free and easy to implement. To the best of our knowledge, this is the first time that such an online algorithm designed for the (un)constrained multilevel setting obtains the same sample complexity as the smooth single-level setting, under standard assumptions (unbiasedness and boundedness of the second moments) on the stochastic first-order oracle. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
7. THE CONVEX FEASIBLE SET ALGORITHM FOR REAL TIME OPTIMIZATION IN MOTION PLANNING.
- Author
-
CHANGLIU LIU, CHUNG-YEN LIN, and MASAYOSHI TOMIZUKA
- Subjects
COMPUTER programming ,MATHEMATICAL optimization ,ARTIFICIAL intelligence ,OPTIMAL control theory ,ALGORITHMS - Abstract
With the development of robotics, there are growing needs for real time motion planning. However, due to obstacles in the environment, the planning problem is highly nonconvex, which makes it difficult to achieve real time computation using existing nonconvex optimization algorithms. This paper introduces the convex feasible set algorithm, which is a fast algorithm for nonconvex optimization problems that have convex costs and nonconvex constraints. The idea is to find a convex feasible set for the original problem and iteratively solve a sequence of subproblems using the convex constraints. The feasibility and the convergence of the proposed algorithm are proved in the paper. The application of this method on motion planning for mobile robots is discussed. The simulations demonstrate the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
8. A LAGRANGE MULTIPLIER EXPRESSION METHOD FOR BILEVEL POLYNOMIAL OPTIMIZATION.
- Author
-
JIAWANG NIE, LI WANG, YE, JANE J., and ZHONG, SUHAN
- Subjects
BILEVEL programming ,ALGORITHMS ,MATHEMATICAL optimization ,LAGRANGE multiplier ,POLYNOMIALS - Abstract
This paper studies bilevel polynomial optimization. We propose a method to solve it globally by using polynomial optimization relaxations. Each relaxation is obtained from the Karush-Kuhn-Tucker (KKT) conditions for the lower level optimization and the exchange technique for semi-infinite programming. For KKT conditions, Lagrange multipliers are represented as polynomial or rational functions. The Moment-sum-of-squares relaxations are used to solve the polynomial optimization relaxations. Under some general assumptions, we prove the convergence of the algorithm for solving bilevel polynomial optimization problems. Numerical experiments are presented to show the efficiency of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. What Tropical Geometry Tells Us about the Complexity of Linear Programming.
- Author
-
Allamigeon, Xavier, Benchimol, Pascal, Gaubert, Stéphane, and Joswig, Michael
- Subjects
GEOMETRY ,MATHEMATICAL optimization ,LINEAR programming ,GAME theory ,INTERIOR-point methods ,ALGORITHMS - Abstract
Tropical geometry has been recently used to obtain new complexity results in convex optimization and game theory. In this paper, we present an application of this approach to a famous class of algorithms for linear programming, i.e., log-barrier interior point methods. We show that these methods are not strongly polynomial by constructing a family of linear programs with 3r + 1 inequalities in dimension 2r for which the number of iterations performed is in Ω(2^r). The total curvature of the central path of these linear programs is also exponential in r, disproving a continuous analogue of the Hirsch conjecture proposed by Deza, Terlaky, and Zinchenko. These results are obtained by analyzing the tropical central path, which is the piecewise linear limit of the central paths of parameterized families of classical linear programs viewed through "logarithmic glasses." This allows us to provide combinatorial lower bounds for the number of iterations and the total curvature in a general setting. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
10. PRIMAL-DUAL ALGORITHMS FOR OPTIMIZATION WITH STOCHASTIC DOMINANCE.
- Author
-
HASKELL, WILLIAM B., SHANTHIKUMAR, J. GEORGE, and SHEN, Z. MAX
- Subjects
ITERATIVE methods (Mathematics) ,MATHEMATICAL optimization ,ALGORITHMS ,STOCHASTIC dominance ,MATHEMATICAL bounds - Abstract
Stochastic dominance, a pairwise comparison between random variables, is an effective tool for expressing risk aversion in stochastic optimization. In this paper, we develop a family of primal-dual algorithms for optimization problems with stochastic dominance constraints. First, we develop an offline primal-dual algorithm and bound its optimality gap as a function of the number of iterations. Then, we extend this algorithm to the online setting where only one random sample is given in each decision epoch. We give probabilistic bounds on the optimality gap in this setting. This technique also yields an online algorithm for the stochastic dominance-constrained multiarmed bandit with partial feedback. The paper concludes by discussing a dual approach for a batch learning problem with robust stochastic dominance constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
11. AN IMPROVED AND SIMPLIFIED FULL-NEWTON STEP O(N) INFEASIBLE INTERIOR-POINT METHOD FOR LINEAR OPTIMIZATION.
- Author
-
ROOS, C.
- Subjects
MATHEMATICAL optimization ,NEWTON-Raphson method ,FUNCTION algebras ,ALGORITHMS ,PERTURBATION theory ,MATHEMATICAL inequalities - Abstract
We present an improved version of an infeasible interior-point method for linear optimization published in 2006. In the earlier version each iteration consisted of one so-called feasibility step and a few (at most three) centering steps. In this paper each iteration consists of only a feasibility step, whereas the iteration bound improves the earlier bound by a factor of 2√2. The improvements are due to a new lemma that gives a much tighter upper bound for the proximity after the feasibility step. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
12. DISTRIBUTED CONTINUOUS-TIME ALGORITHMS FOR NONSMOOTH EXTENDED MONOTROPIC OPTIMIZATION PROBLEMS.
- Author
-
XIANLIN ZENG, PENG YI, YIGUANG HONG, and LIHUA XIE
- Subjects
CONTINUOUS time systems ,MATHEMATICAL optimization ,ALGORITHMS ,CONVEX functions ,DIFFERENTIAL inclusions ,VARIATIONAL inequalities (Mathematics) - Abstract
This paper studies distributed algorithms for the nonsmooth extended monotropic optimization problem, which is a general convex optimization problem with a certain separable structure. The considered nonsmooth objective function is the sum of local objective functions assigned to agents in a multiagent network, with local set constraints and affine equality constraints. Each agent only knows its local objective function, local set constraint, and the information exchanged between neighbors. To solve the constrained convex optimization problem, we propose two novel distributed continuous-time subgradient-based algorithms, with projected output feedback and derivative feedback, respectively. Moreover, we prove the convergence of the proposed algorithms to the optimal solutions under some mild conditions and analyze convergence rates, with the help of the techniques of variational inequalities, decomposition methods, and differential inclusions. Finally, we give an example to illustrate the efficacy of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
13. The Singular Value Decomposition: Anatomy of Optimizing an Algorithm for Extreme Scale.
- Author
-
Gates, Mark, Haidar, Azzam, Kurzak, Jakub, Luszczek, Piotr, Tomov, Stanimire, Ichitaro Yamazaki, and Dongarra, Jack
- Subjects
SINGULAR value decomposition ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
The computation of the singular value decomposition, or SVD, has a long history with many improvements over the years, both in its implementations and algorithmically. Here, we survey the evolution of SVD algorithms for dense matrices, discussing the motivation and performance impacts of changes. There are two main branches of dense SVD methods: bidiagonalization and Jacobi. Bidiagonalization methods started with the implementation by Golub and Reinsch in Algol60, which was subsequently ported to Fortran in the EISPACK library, and was later more efficiently implemented in the LINPACK library, targeting contemporary vector machines. To address cache-based memory hierarchies, the SVD algorithm was reformulated to use Level 3 BLAS in the LAPACK library. To address new architectures, ScaLAPACK was introduced to take advantage of distributed computing, and MAGMA was developed for accelerators such as GPUs. Algorithmically, the divide and conquer and MRRR algorithms were developed to reduce the number of operations. Still, these methods remained memory bound, so two-stage algorithms were developed to reduce memory operations and increase the computational intensity, with efficient implementations in PLASMA, DPLASMA, and MAGMA. Jacobi methods started with the two-sided method of Kogbetliantz and the one-sided method of Hestenes. They have likewise had many developments, including parallel and block versions and preconditioning to improve convergence. In this paper, we investigate the impact of these changes by testing various historical and current implementations on a common, modern multicore machine and a distributed computing platform. We show that algorithmic and implementation improvements have increased the speed of the SVD by several orders of magnitude, while using up to 40 times less energy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
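To make the "one-sided method of Hestenes" concrete, here is a minimal textbook sketch (Python/NumPy; a teaching version assuming a tall matrix with full column rank, not one of the benchmarked implementations): pairs of columns are repeatedly orthogonalized by plane rotations until the rotated matrix has mutually orthogonal columns, whose norms are the singular values.

```python
import numpy as np

def hestenes_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi (Hestenes) SVD of a tall matrix A with full column rank."""
    U = A.astype(float).copy()
    m, n = U.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0  # largest remaining normalized inner product
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                off = max(off, abs(gamma) / np.sqrt(alpha * beta))
                if gamma == 0.0:
                    continue
                # Rotation angle that makes columns p and q orthogonal.
                zeta = (beta - alpha) / (2.0 * gamma)
                if zeta == 0.0:
                    t = 1.0
                else:
                    t = np.sign(zeta) / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                rot = np.array([[c, s], [-s, c]])
                U[:, [p, q]] = U[:, [p, q]] @ rot
                V[:, [p, q]] = V[:, [p, q]] @ rot
        if off < tol:
            break
    sigma = np.linalg.norm(U, axis=0)
    return U / sigma, sigma, V  # A = U diag(sigma) V^T

A = np.random.default_rng(0).standard_normal((50, 8))
U, s, V = hestenes_svd(A)
assert np.allclose((U * s) @ V.T, A)
```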
14. COMPLEXITY ANALYSIS OF SECOND-ORDER LINE-SEARCH ALGORITHMS FOR SMOOTH NONCONVEX OPTIMIZATION.
- Author
-
ROYER, CLÉMENT W. and WRIGHT, STEPHEN J.
- Subjects
ALGORITHMS ,MATHEMATICAL regularization ,MACHINE learning ,ROBUST statistics ,SMOOTHNESS of functions ,MATHEMATICAL optimization - Abstract
There has been much recent interest in finding unconstrained local minima of smooth functions, due in part to the prevalence of such problems in machine learning and robust statistics. A particular focus is algorithms with good complexity guarantees. Second-order Newton-type methods that make use of regularization and trust regions have been analyzed from such a perspective. More recent proposals, based chiefly on first-order methodology, have also been shown to enjoy optimal iteration complexity rates, while providing additional guarantees on computational cost. In this paper, we present an algorithm with favorable complexity properties that differs in two significant ways from other recently proposed methods. First, it is based on line searches only: Each step involves computation of a search direction, followed by a backtracking line search along that direction. Second, its analysis is rather straightforward, relying for the most part on the standard technique for demonstrating sufficient decrease in the objective from backtracking. In the latter part of the paper, we consider inexact computation of the search directions, using iterative methods in linear algebra: the conjugate gradient and Lanczos methods. We derive modified convergence and complexity results for these more practical methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
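A minimal sketch (Python/NumPy) of the backtracking mechanism the complexity analysis leans on; the Armijo constant c1 and the halving factor rho are conventional choices, and the paper's actual search directions (Newton-type and negative-curvature steps) are abstracted into the argument d here.

```python
import numpy as np

def backtracking_line_search(f, x, d, g, alpha0=1.0, rho=0.5, c1=1e-4):
    """Backtracking (Armijo) line search along a descent direction d.

    Shrinks the step until the sufficient-decrease condition
        f(x + a*d) <= f(x) + c1 * a * g^T d
    holds; assumes g @ d < 0 so the loop terminates.
    """
    alpha, fx, slope = alpha0, f(x), g @ d
    while f(x + alpha * d) > fx + c1 * alpha * slope:
        alpha *= rho
    return alpha

# Example: one gradient step on a strongly convex quadratic.
f = lambda x: 0.5 * x @ x
x = np.array([3.0, -4.0])
g = x.copy()  # gradient of f at x
step = backtracking_line_search(f, x, -g, g)
```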
15. A DERIVATIVE-FREE TRUST-REGION ALGORITHM FOR THE OPTIMIZATION OF FUNCTIONS SMOOTHED VIA GAUSSIAN CONVOLUTION USING ADAPTIVE MULTIPLE IMPORTANCE SAMPLING.
- Author
-
MAGGIAR, ALVARO, WÄCHTER, ANDREAS, DOLINSKAYA, IRINA S., and STAUM, JEREMY
- Subjects
DERIVATIVES (Mathematics) ,ALGORITHMS ,MATHEMATICAL optimization ,SMOOTHNESS of functions ,GAUSSIAN processes ,MATHEMATICAL convolutions ,STATISTICAL sampling - Abstract
In this paper we consider the optimization of a functional F defined as the convolution of a function f with a Gaussian kernel. We propose this type of objective function for the optimization of the output of complex computational simulations, which often present some form of deterministic noise and need to be smoothed for the results to be meaningful. We introduce a derivative-free algorithm that computes trial points from the minimization of a regression model of the noisy function f over a trust region. The regression model is constructed from function values at sample points that are chosen randomly around iterates and trial points of the algorithm. The weights given to the individual sample points in the regression problem are obtained according to an adaptive multiple importance sampling strategy. This has two advantages. First, it makes it possible to reuse all noisy function values collected over the course of the optimization. Second, the resulting regression model converges to the second-order Taylor approximation of the convolution functional F. We prove that, with probability one, each limit point of the iterates is a stationary point of F. Computational experiments on a set of benchmark problems with noisy functions compare the proposed algorithm with the deterministic derivative-free trust-region method the proposed method is based on. It is demonstrated that the proposed algorithm performs similarly efficiently in the early stages of the optimization and is able to overcome convergence problems of the original method, where the algorithm might get trapped in spurious local minima induced by the noise. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
16. CONVERGENCE AND CYCLING IN WALKER-TYPE SADDLE SEARCH ALGORITHMS.
- Author
-
LEVITT, ANTOINE and ORTNER, CHRISTOPH
- Subjects
ALGORITHMS ,MAXIMA & minima ,MATHEMATICAL optimization ,STOCHASTIC convergence ,METHODOLOGY - Abstract
Algorithms for computing local minima of smooth objective functions enjoy a mature theory as well as robust and efficient implementations. By comparison, the theory and practice of saddle search is destitute. In this paper, we present results for idealized versions of the dimer and gentlest ascent (GAD) saddle search algorithms that showcase the limitations of what is theoretically achievable within the current class of saddle search algorithms: (1) we present an improved estimate on the region of attraction of saddles, (2) we give explicit examples of potential energy wells from which GAD-type dynamics are unable to escape, and (3) we present a local analysis of "singular points" around which the dynamics gets trapped and prove the existence of quasi-periodic solutions. These results indicate that it is impossible to obtain globally convergent variants of dimer- and GAD-type algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
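For orientation, one idealized form of the GAD dynamics analyzed in the paper can be written as follows (a commonly stated version; the notation is ours). The force component along the direction v is reversed, so index-1 saddles of the energy E become stable stationary points, while v tracks a direction of least curvature:

```latex
\dot{x} = -\Big(I - 2\,\frac{v v^{\top}}{v^{\top} v}\Big)\nabla E(x),
\qquad
\dot{v} = -\Big(I - \frac{v v^{\top}}{v^{\top} v}\Big)\nabla^{2}E(x)\,v.
```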
17. AN OPTIMAL CONTROL PROBLEM GOVERNED BY A REGULARIZED PHASE-FIELD FRACTURE PROPAGATION MODEL.
- Author
-
NEITZEL, I., WICK, T., and WOLLNER, W.
- Subjects
OPTIMAL control theory ,ALGORITHMS ,APPROXIMATION theory ,MATHEMATICAL optimization ,CALCULUS of variations - Abstract
This paper is concerned with an optimal control problem governed by a regularized fracture model using a phase-field technique. To avoid the nondifferentiability due to the irreversibility constraint on the fracture growth, the phase-field fracture model is relaxed using a penalization approach. The existence of a solution to the penalized fracture model is shown, and the existence of at least one solution for the regularized optimal control problem is established. Moreover, the linearized fracture model is considered and used to establish first order necessary conditions as well as to discuss QP-approximations to the nonlinear optimization problem. A numerical example suggests that these can be used to obtain a fast convergent algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
18. THE WATERMELON ALGORITHM FOR THE BILEVEL INTEGER LINEAR PROGRAMMING PROBLEM.
- Author
-
LIZHI WANG and PAN XU
- Subjects
ALGORITHMS ,INTEGERS ,LINEAR programming ,GENETIC algorithms ,MATHEMATICAL optimization - Abstract
This paper presents an exact algorithm for the bilevel integer linear programming (BILP) problem. The proposed algorithm, which we call the watermelon algorithm, uses a multiway disjunction cut to remove bilevel infeasible solutions from the search space, which was motivated by how watermelon seeds can be carved out by a scoop. Serving as the scoop, a polyhedron is designed to enclose as many bilevel infeasible solutions as possible, and then the complement of this polyhedron is applied to the search space as a multiway disjunction cut in a branch-and-bound framework. We have proved that the watermelon algorithm is able to solve all BILP instances finitely and correctly, providing either a global optimal solution or a certificate of infeasibility or unboundedness. Computational experiment results on two sets of small- to medium-sized instances suggest that the watermelon algorithm could be significantly more efficient than previous branch-and-bound based BILP algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
19. A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR DUAL-DEGENERATE PARAMETRIC OPTIMIZATION PROBLEMS.
- Author
-
VYACHESLAV KUNGURTSEV and JÄSCHKE, JOHANNES
- Subjects
MATHEMATICAL optimization ,NONLINEAR theories ,ALGORITHMS ,PARAMETRIC modeling ,MATHEMATICAL functions ,STOCHASTIC convergence - Abstract
Most path-following algorithms for tracing a solution path of a parametric nonlinear optimization problem are only certifiably convergent under strong regularity assumptions about the problem functions. In particular, linear independence of the constraint gradients at the solutions is typically assumed, which implies unique multipliers. In this paper we propose a procedure designed to solve problems satisfying a weaker set of conditions, allowing for nonunique (but bounded) multipliers. Each iteration along the path consists of three parts: (1) a Newton corrector step for the primal and dual variables, which is obtained by solving a linear system of equations, (2) a predictor step for the primal and dual variables, which is found as the solution of a quadratic programming problem, and (3) a jump step for the dual variables, which is found as the solution of a linear programming problem. We present a convergence proof and demonstrate the successful solution tracking of the algorithm numerically on a couple of illustrative examples. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
20. GLOBALLY CONVERGENT PRIMAL-DUAL ACTIVE-SET METHODS WITH INEXACT SUBPROBLEM SOLVES.
- Author
-
CURTIS, FRANK E. and ZHENG HAN
- Subjects
CONVEX domains ,MATHEMATICAL optimization ,ALGORITHMS ,SET theory ,PARTITIONS (Mathematics) - Abstract
We propose primal-dual active-set (PDAS) methods for solving large-scale instances of an important class of convex quadratic optimization problems (QPs). The iterates of the algorithms are partitions of the index set of variables, where corresponding to each partition there exist unique primal-dual variables that can be obtained by solving a (reduced) linear system. Algorithms of this type have recently received attention when solving certain QPs and linear complementarity problems since, with rapid changes in the active set estimate, they often converge in few iterations. Indeed, as discussed in this paper, convergence in a finite number of iterations is guaranteed when a basic PDAS method is employed to solve certain QPs for which a reduced Hessian of the objective function is (a perturbation of) an M-matrix. We propose three PDAS algorithms. The novelty of the algorithms is that they allow inexactness in the (reduced) linear system solves at all partitions except optimal ones. Such a feature is particularly important in large-scale settings when one employs iterative Krylov subspace methods to solve these systems. Our first algorithm is convergent when solving problems for which properties of the Hessian can be exploited to derive explicit bounds to be enforced on the (reduced) linear system residuals, whereas our second and third algorithms employ dynamic parameters to obviate the need of such explicit bounds. We prove that when applied to solve an important class of convex QPs, our algorithms converge from any initial partition. We also illustrate their practical behavior by providing the results of numerical experiments on a set of discretized optimal control problems, some of which are explicitly formulated to exhibit degeneracy. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
21. PROJECTION ONTO A POLYHEDRON THAT EXPLOITS SPARSITY.
- Author
-
HAGER, WILLIAM W. and HONGCHAO ZHANG
- Subjects
NONLINEAR theories ,POLYHEDRA ,ALGORITHMS ,STOCHASTIC convergence ,MATHEMATICAL optimization - Abstract
An algorithm is developed for projecting a point onto a polyhedron. The algorithm solves a dual version of the projection problem and then uses the relationship between the primal and dual to recover the projection. The techniques in the paper exploit sparsity. Sparse reconstruction by separable approximation (SpaRSA) is used to approximately identify active constraints in the polyhedron, and the dual active set algorithm (DASA) is used to compute a high precision solution. A linear convergence result is established for SpaRSA that does not require the strong concavity of the dual to the projection problem, and an earlier R-linear convergence rate is strengthened to a Q-linear convergence property. An algorithmic framework is developed for combining SpaRSA with an asymptotically preferred algorithm such as DASA. It is shown that only the preferred algorithm is executed asymptotically. Numerical results are given using the polyhedra associated with the Netlib LP test set. A comparison is made to the interior point method contained in the general purpose open source software package IPOPT for nonlinear optimization, and to the commercial package CPLEX, which contains an implementation of the barrier method that is targeted to problems with the structure of the polyhedral projection problem. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
22. A FILTER ACTIVE-SET ALGORITHM FOR BALL/SPHERE CONSTRAINED OPTIMIZATION PROBLEM.
- Author
-
CHUNGEN SHEN, LEI-HONG ZHANG, and WEI HONG YANG
- Subjects
FACTOR structure ,ALGORITHMS ,SET theory ,STATISTICAL correlation ,JACOBIAN matrices ,MATHEMATICAL optimization - Abstract
In this paper, we propose a filter active-set algorithm for the minimization problem over a product of multiple ball/sphere constraints. By making effective use of the special structure of the ball/sphere constraints, a new limited memory BFGS (L-BFGS) scheme is presented. The new L-BFGS implementation takes advantage of the sparse structure of the Jacobian of the constraints and generates curvature information of the minimization problem. At each iteration, only two or three reduced linear systems are required to solve for the search direction. The filter technique combined with the backtracking line search strategy ensures the global convergence, and the local superlinear convergence can also be established under mild conditions. The algorithm is applied to two specific applications, the nearest correlation matrix with factor structure and the maximal correlation problem. Our numerical experiments indicate that the proposed algorithm is competitive with some recently custom-designed methods for each individual application. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
23. PENALTY METHODS FOR A CLASS OF NON-LIPSCHITZ OPTIMIZATION PROBLEMS.
- Author
-
XIAOJUN CHEN, ZHAOSONG LU, and TING KEI PONG
- Subjects
LIPSCHITZ spaces ,MATHEMATICAL optimization ,SET theory ,POLYHEDRA ,ALGORITHMS - Abstract
We consider a class of constrained optimization problems with a possibly nonconvex non-Lipschitz objective and a convex feasible set being the intersection of a polyhedron and a possibly degenerate ellipsoid. Such problems have a wide range of applications in data science, where the objective is used for inducing sparsity in the solutions while the constraint set models the noise tolerance and incorporates other prior information for data fitting. To solve this class of constrained optimization problems, a common approach is the penalty method. However, there is little theory on exact penalization for problems with nonconvex and non-Lipschitz objective functions. In this paper, we study the existence of exact penalty parameters regarding local minimizers, stationary points, and ε-minimizers under suitable assumptions. Moreover, we discuss a penalty method whose subproblems are solved via a nonmonotone proximal gradient method with a suitable update scheme for the penalty parameters and prove the convergence of the algorithm to a KKT point of the constrained problem. Preliminary numerical results demonstrate the efficiency of the penalty method for finding sparse solutions of underdetermined linear systems. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
24. CONDITIONAL GRADIENT SLIDING FOR CONVEX OPTIMIZATION.
- Author
-
GUANGHUI LAN and YI ZHOU
- Subjects
MATHEMATICAL optimization ,MATHEMATICAL bounds ,ALGORITHMS ,SADDLEPOINT approximations ,CONVEX functions - Abstract
In this paper, we present a new conditional gradient type method for convex optimization by calling a linear optimization (LO) oracle to minimize a series of linear functions over the feasible set. Different from the classic conditional gradient method, the conditional gradient sliding (CGS) algorithm developed herein can skip the computation of gradients from time to time and, as a result, can achieve the optimal complexity bounds in terms of not only the number of calls to the LO oracle but also the number of gradient evaluations. More specifically, we show that the CGS method requires O(1/√ε) and O(log(1/ε)) gradient evaluations, respectively, for solving smooth and strongly convex problems, while still maintaining the optimal O(1/ε) bound on the number of calls to the LO oracle. We also develop variants of the CGS method which can achieve the optimal complexity bounds for solving stochastic optimization problems and an important class of saddle point optimization problems. To the best of our knowledge, this is the first time that these types of projection-free optimal first-order methods have been developed in the literature. Some preliminary numerical results have also been provided to demonstrate the advantages of the CGS method. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
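For contrast with CGS, here is a minimal sketch (Python/NumPy) of the classic conditional gradient (Frank-Wolfe) method it refines; lo_oracle stands in for the LO oracle, and the simplex example is an illustrative assumption. CGS's actual contribution, skipping gradient evaluations via an inner "sliding" procedure, is not shown.

```python
import numpy as np

def conditional_gradient(grad, lo_oracle, x0, n_iters=200):
    """Classic conditional gradient: each step calls a linear optimization
    (LO) oracle over the feasible set instead of computing a projection."""
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        v = lo_oracle(g)          # argmin over the feasible set of <g, v>
        gamma = 2.0 / (k + 2.0)   # standard open-loop stepsize
        x = (1.0 - gamma) * x + gamma * v
    return x

# Example: minimize 0.5*||x - b||^2 over the probability simplex,
# whose LO oracle just returns a best vertex.
b = np.array([0.2, 0.9, -0.3])
grad = lambda x: x - b
lo_oracle = lambda g: np.eye(len(g))[np.argmin(g)]
x_star = conditional_gradient(grad, lo_oracle, np.ones(3) / 3)
```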
25. AN AUGMENTED LAGRANGIAN BASED ALGORITHM FOR DISTRIBUTED NONCONVEX OPTIMIZATION.
- Author
-
HOUSKA, BORIS, FRASCH, JANICK, and DIEHL, MORITZ
- Subjects
MATHEMATICAL optimization ,NONLINEAR programming ,NONCONVEX programming ,QUADRATIC programming ,ALGORITHMS ,MATHEMATICAL decomposition - Abstract
This paper is about distributed derivative-based algorithms for solving optimization problems with a separable (potentially nonconvex) objective function and coupled affine constraints. A parallelizable method is proposed that combines ideas from the fields of sequential quadratic programming and augmented Lagrangian algorithms. The method negotiates shared dual variables that may be interpreted as prices, a concept employed in dual decomposition methods and the alternating direction method of multipliers (ADMM). Here, each agent solves its own small-scale nonlinear programming problem and communicates with other agents by solving coupled quadratic programming problems. These coupled quadratic programming problems have equality constraints for which parallelizable methods are available. The use of techniques associated with standard sequential quadratic programming methods gives a method with superlinear or quadratic convergence rate under suitable conditions. This is in contrast to existing decomposition methods, such as ADMM, which have a linear convergence rate. It is shown how the proposed algorithm may be extended using globalization techniques that guarantee convergence to a local minimizer from any initial starting point. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
26. NOISE-TO-STATE EXPONENTIALLY STABLE DISTRIBUTED CONVEX OPTIMIZATION ON WEIGHT-BALANCED DIGRAPHS.
- Author
-
MATEOS-NÚÑEZ, DAVID and CORTÉS, JORGE
- Subjects
EXPONENTS ,STABILITY theory ,CONVEX domains ,MATHEMATICAL optimization ,DIRECTED graphs ,ALGORITHMS ,ADDITION (Mathematics) ,ESTIMATION theory - Abstract
This paper studies the robustness properties against additive persistent noise of a class of distributed continuous-time algorithms for convex optimization. A team of agents, each with its own private objective function, seeks to collectively determine the global decision vector that minimizes the sum of all the objectives. The team communicates over a weight-balanced, strongly connected digraph and both interagent communication and agent computation are corrupted by noise. Under the proposed distributed algorithm, each agent updates its estimate of the global solution using the gradient of its local objective function while, at the same time, seeking to agree with its neighbors' estimates via proportional-integral feedback on their disagreement. Under mild conditions on the local objective functions, we show that this strategy is noise-to-state exponentially stable in the second moment with respect to the optimal solution. Our technical approach combines notions and tools from graph theory, stochastic differential equations, Lyapunov stability analysis, and cocoercivity of vector fields. Simulations illustrate our results. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
27. ANALYSIS AND DESIGN OF OPTIMIZATION ALGORITHMS VIA INTEGRAL QUADRATIC CONSTRAINTS.
- Author
-
LESSARD, LAURENT, RECHT, BENJAMIN, and PACKARD, ANDREW
- Subjects
INTEGRAL quadratic constraints ,MATHEMATICAL optimization ,ROBUST control ,SEMIDEFINITE programming ,STOCHASTIC convergence ,CONVEX functions ,ALGORITHMS ,MATHEMATICAL inequalities - Abstract
This paper develops a new framework to analyze and design iterative optimization algorithms built on the notion of integral quadratic constraints (IQCs) from robust control theory. IQCs provide sufficient conditions for the stability of complicated interconnected systems, and these conditions can be checked by semidefinite programming. We discuss how to adapt IQC theory to study optimization algorithms, proving new inequalities about convex functions and providing a version of IQC theory adapted for use by optimization researchers. Using these inequalities, we derive numerical upper bounds on convergence rates for the Gradient method, the Heavy-ball method, Nesterov's accelerated method, and related variants by solving small, simple semidefinite programming problems. We also briefly show how these techniques can be used to search for optimization algorithms with desired performance characteristics, establishing a new methodology for algorithm design. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
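The three fixed-step updates whose rates the paper certifies can be written as follows (α a stepsize, β a momentum coefficient; a standard presentation, not copied from the paper). The IQC viewpoint treats each one as a linear system in feedback with the static nonlinearity ∇f, so that verifying a convergence rate reduces to a small semidefinite feasibility problem:

```latex
\text{Gradient:}\quad x_{k+1} = x_k - \alpha\,\nabla f(x_k), \\
\text{Heavy-ball:}\quad x_{k+1} = x_k - \alpha\,\nabla f(x_k) + \beta\,(x_k - x_{k-1}), \\
\text{Nesterov:}\quad y_k = x_k + \beta\,(x_k - x_{k-1}), \qquad x_{k+1} = y_k - \alpha\,\nabla f(y_k).
```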
28. AN ACCELERATED HPE-TYPE ALGORITHM FOR A CLASS OF COMPOSITE CONVEX-CONCAVE SADDLE-POINT PROBLEMS.
- Author
-
YUNLONG HE and MONTEIRO, RENATO D. C.
- Subjects
SADDLEPOINT approximations ,ALGORITHMS ,CONVEX functions ,MATHEMATICAL inequalities ,MATHEMATICAL decomposition ,MATHEMATICAL optimization ,NASH equilibrium - Abstract
This paper proposes a new algorithm for solving a class of composite convex-concave saddle-point problems. The new algorithm is a special instance of the hybrid proximal extragradient framework in which a Nesterov accelerated variant is used to approximately solve the prox subproblems. One of the advantages of the new method is that it works for any constant choice of proximal stepsize. Moreover, a suitable choice of the latter stepsize yields a method with the best known (accelerated inner) iteration complexity for the aforementioned class of saddle-point problems. In contrast to the smoothing technique of [Y. Nesterov, Math. Program., 103 (2005), pp. 127-152], our accelerated method does not assume that a feasible set is bounded due to its proximal point nature. Experimental results on three problem sets show that the new method outperforms Nesterov's smoothing technique of [Y. Nesterov, Math. Program., 103 (2005), pp. 127-152]. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
29. THE CYCLIC BLOCK CONDITIONAL GRADIENT METHOD FOR CONVEX OPTIMIZATION PROBLEMS.
- Author
-
BECK, AMIR, PAUWELS, EDOUARD, and SABACH, SHOHAM
- Subjects
MATHEMATICAL optimization ,CONVEX domains ,PROBLEM solving ,SMOOTHNESS of functions ,ALGORITHMS ,STOCHASTIC convergence - Abstract
In this paper we study the convex problem of optimizing the sum of a smooth function and a compactly supported nonsmooth term with a specific separable form. We analyze the block version of the generalized conditional gradient method when the blocks are chosen in a cyclic order. A global sublinear rate of convergence is established for two different stepsize strategies commonly used in this class of methods. Numerical comparisons of the proposed method to both the classical conditional gradient algorithm and its random block version demonstrate the effectiveness of the cyclic block update rule. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
30. IMPROVED ANALYSIS OF A MAX-CUT ALGORITHM BASED ON SPECTRAL PARTITIONING.
- Author
-
SOTO, JOSÉ A.
- Subjects
PARALLEL algorithms ,ALGORITHMS ,SEMIDEFINITE programming ,MATHEMATICAL programming ,MATHEMATICAL optimization - Abstract
Trevisan [SIAM J. Comput., 41 (2012), pp. 1769-1786] presented an algorithm for Max-Cut based on spectral partitioning techniques. This is the first algorithm for Max-Cut with an approximation guarantee strictly larger than 1/2 that is not based on semidefinite programming. Trevisan showed that its approximation ratio is at least 0.531. In this paper we improve this bound to 0.614247. We also define and extend this result for the more general maximum colored cut problem. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
31. CONVERGENCE RESULTS FOR PROJECTED LINE-SEARCH METHODS ON VARIETIES OF LOW-RANK MATRICES VIA ŁOJASIEWICZ INEQUALITY.
- Author
-
SCHNEIDER, REINHOLD and USCHMAJEW, ANDRÉ
- Subjects
STOCHASTIC convergence ,MATRICES (Mathematics) ,MATHEMATICAL inequalities ,METHOD of steepest descent (Numerical analysis) ,ALGORITHMS ,MATHEMATICAL optimization - Abstract
The aim of this paper is to derive convergence results for projected line-search methods on the real-algebraic variety M_{≤k} of real m×n matrices of rank at most k. Such methods extend Riemannian optimization methods, which are successfully used on the smooth manifold M_k of rank-k matrices, to its closure by taking steps along gradient-related directions in the tangent cone, and afterwards projecting back to M_{≤k}. Considering such a method circumvents the difficulties which arise from the nonclosedness and the unbounded curvature of M_k. The pointwise convergence is obtained for real-analytic functions on the basis of a Łojasiewicz inequality for the projection of the antigradient to the tangent cone. If the derived limit point lies on the smooth part of M_{≤k}, i.e., in M_k, this boils down to more or less known results, but with the benefit that asymptotic convergence rate estimates (for specific step-sizes) can be obtained without an a priori curvature bound, simply from the fact that the limit lies on a smooth manifold. At the same time, one can give a convincing justification for assuming critical points to lie in M_k: if X is a critical point of f on M_{≤k}, then either X has rank k, or ∇f(X) = 0. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
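A simplified stand-in (Python/NumPy) for the methods in the abstract: this sketch steps in the ambient matrix space and projects back to rank at most k by truncated SVD, whereas the paper steps along gradient-related directions in the tangent cone; the stepsize and the example objective are illustrative assumptions.

```python
import numpy as np

def truncated_svd_step(X, G, step, k):
    """One projected gradient step on the variety of rank-<=k matrices:
    move against the gradient G, then project back via truncated SVD
    (the Eckart-Young best rank-k approximation)."""
    U, s, Vt = np.linalg.svd(X - step * G, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

# Example: rank-2 approximation of B via gradient steps on f(X) = 0.5*||X - B||^2.
rng = np.random.default_rng(1)
B = rng.standard_normal((20, 10))
X = np.zeros((20, 10))
for _ in range(50):
    X = truncated_svd_step(X, X - B, step=0.5, k=2)  # gradient of f is X - B
```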
32. CONSTRAINED BEST EUCLIDEAN DISTANCE EMBEDDING ON A SPHERE: A MATRIX OPTIMIZATION APPROACH.
- Author
-
SHUANGHUA BAI, HOU-DUO QI, and NAIHUA XIU
- Subjects
EUCLIDEAN distance ,MATHEMATICAL optimization ,ALGORITHMS ,STOCHASTIC convergence ,NEWTON-Raphson method ,MULTIDIMENSIONAL scaling - Abstract
The problem of data representation on a sphere of unknown radius arises from various disciplines such as statistics (spatial data representation), psychology (constrained multidimensional scaling), and computer science (machine learning and pattern recognition). The best representation often needs to minimize a distance function of the data on a sphere as well as to satisfy some Euclidean distance constraints. It is those spherical and Euclidean distance constraints that present an enormous challenge to the existing algorithms. In this paper, we reformulate the problem as a Euclidean distance matrix optimization problem with a low rank constraint. We then propose an iterative algorithm that uses a quadratically convergent Newton-CG method at each step. We study fundamental issues including constraint nondegeneracy and the nonsingularity of generalized Jacobian that ensure the quadratic convergence of the Newton method. We use some classic examples from the spherical multidimensional scaling to demonstrate the flexibility of the algorithm in incorporating various constraints. We also present an interesting application to the circle fitting problem. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
33. DUALITY BETWEEN SUBGRADIENT AND CONDITIONAL GRADIENT METHODS.
- Author
-
BACH, FRANCIS
- Subjects
SUBGRADIENT methods ,MATHEMATICAL optimization ,ALGORITHMS ,MACHINE learning ,NONSMOOTH optimization ,NEWTON-Raphson method - Abstract
Given a convex optimization problem and its dual, there are many possible first-order algorithms. In this paper, we show the equivalence between mirror descent algorithms and algorithms generalizing the conditional gradient method. This is done through convex duality and implies notably that for certain problems, such as for supervised machine learning problems with nonsmooth losses or problems regularized by nonsmooth regularizers, the primal subgradient method and the dual conditional gradient method are formally equivalent. The dual interpretation leads to a form of line search for mirror descent, as well as guarantees of convergence for primal-dual certificates. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
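For reference, the generic mirror descent step that the stated equivalence concerns can be written as follows (ψ is the mirror map, D_ψ its Bregman divergence, C the feasible set; the notation is ours):

```latex
x_{k+1} = \operatorname*{arg\,min}_{x \in C}\;
  \langle g_k, x\rangle + \tfrac{1}{\alpha_k}\,D_\psi(x, x_k),
\qquad g_k \in \partial f(x_k),
\\
\text{where}\quad
D_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla\psi(y),\, x - y\rangle.
```

Per the abstract, through convex duality this step applied to one problem corresponds to a generalized conditional gradient step applied to its dual.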
34. REORDERING STRATEGY FOR BLOCKING OPTIMIZATION IN SPARSE LINEAR SOLVERS.
- Author
-
PICHON, GREGOIRE, FAVERGE, MATHIEU, RAMET, PIERRE, and ROMAN, JEAN
- Subjects
LINEAR systems ,ITERATIVE methods (Mathematics) ,MATHEMATICAL optimization ,FACTORIZATION ,ALGORITHMS - Abstract
Solving sparse linear systems is a problem that arises in many scientific applications, and sparse direct solvers are a time-consuming and key kernel for those applications and for more advanced solvers such as hybrid direct-iterative solvers. For this reason, optimizing their performance on modern architectures is critical. The preprocessing steps of sparse direct solvers, ordering and block-symbolic factorization, are two major steps that lead to a reduced amount of computation and memory and to a better task granularity to reach a good level of performance when using BLAS kernels. With the advent of GPUs, the granularity of the block computation has become more important than ever. In this paper, we present a reordering strategy that increases this block granularity. This strategy relies on block-symbolic factorization to refine the ordering produced by tools such as Metis or Scotch, but it does not impact the number of operations required to solve the problem. We integrate this algorithm in the PaStiX solver and show an important reduction of the number of off-diagonal blocks on a large spectrum of matrices. This improvement leads to an increase in efficiency of up to 20% on GPUs. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
35. FEASIBLE METHOD FOR SEMI-INFINITE PROGRAMS.
- Author
-
SHUXIONG WANG and YAXIANG YUAN
- Subjects
MATHEMATICAL optimization ,PROBLEM solving ,ALGORITHMS ,MATHEMATICAL bounds ,APPROXIMATION theory ,MATHEMATICAL functions - Abstract
A new numerical method is presented for semi-infinite optimization problems which guarantees that each iterate is feasible for the original problem. The basic idea is to construct concave relaxations of the lower level problem, to compute the optimal values of the relaxation problems explicitly, and to solve the resulting approximate problems with finitely many constraints. The concave relaxations are constructed by replacing the objective function of the lower level problem by its concave upper bound functions. Under mild conditions, we prove that every accumulation point of the solutions of the approximate problems is an optimal solution of the original problem. An adaptive subdivision algorithm is proposed to solve semi-infinite optimization problems. It is shown that the Karush-Kuhn-Tucker points of the approximate problems converge to a Karush-Kuhn-Tucker point of the original problem within arbitrarily given tolerances. Numerical experiments show that our algorithm is much faster than the existing adaptive convexification algorithm in computation time. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
36. SURVEY and REVIEW.
- Author
-
Sanz-Serna, J. M.
- Subjects
SCATTERING (Mathematics) ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
An introduction is presented in which the editor discusses articles in the issue including the Survey and Review article "Looking Back on Inverse Scattering Theory" by David Colton and Rainer Kress, and "The Singular Value Decomposition: Anatomy of Optimizing an Algorithm for Extreme Scale."
- Published
- 2018
- Full Text
- View/download PDF
37. Regularized Kaczmarz Algorithms for Tensor Recovery.
- Author
-
Xuemei Chen and Jing Qin
- Subjects
LOW-rank matrices ,ALGORITHMS ,REMOTE sensing ,MATHEMATICAL regularization ,MACHINE learning ,MATHEMATICAL optimization ,INPAINTING - Abstract
Tensor recovery has recently arisen in many application fields, such as transportation, medical imaging, and remote sensing. Under the assumption that signals possess sparse and/or low-rank structures, many tensor recovery methods have been developed to apply various regularization techniques together with the operator-splitting type of algorithms. Due to the unprecedented growth of data, it becomes increasingly desirable to use streamlined algorithms to achieve real-time computation, such as stochastic optimization algorithms that have recently emerged as an efficient family of methods in machine learning. In this work, we propose a novel algorithmic framework based on the Kaczmarz algorithm for tensor recovery. We provide thorough convergence analysis and its applications from the vector case to the tensor one. Numerical results on a variety of tensor recovery applications, including sparse signal recovery, low-rank tensor recovery, image inpainting, and deconvolution, illustrate the enormous potential of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
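The vector-case building block behind the proposed tensor methods, sketched here (Python/NumPy) as the standard randomized Kaczmarz iteration with row-norm sampling; the regularized and tensor variants developed in the paper are more elaborate.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iters=5000, rng=None):
    """Randomized Kaczmarz for a consistent system A x = b: each step
    projects the iterate onto the hyperplane of one row, chosen with
    probability proportional to ||a_i||^2."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    row_norms = np.einsum("ij,ij->i", A, A)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(n_iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Example on a consistent overdetermined system.
rng = np.random.default_rng(2)
A = rng.standard_normal((100, 20))
x_true = rng.standard_normal(20)
x = randomized_kaczmarz(A, A @ x_true)
assert np.allclose(x, x_true, atol=1e-6)
```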
38. THE SWITCH POINT ALGORITHM.
- Author
-
AGHAEE, MAHYA and HAGER, WILLIAM W.
- Subjects
QUASI-Newton methods ,CONJUGATE gradient methods ,ALGORITHMS ,MATHEMATICAL optimization ,FEEDBACK control systems - Abstract
The switch point algorithm is a new approach for solving optimal control problems whose solutions are either singular or bang-bang or both singular and bang-bang, and which possess a finite number of jump discontinuities in an optimal control at the points in time where the solution structure changes. Problems in this class can often be reduced to an optimization over the switching points. Formulas are derived for the derivative of the objective with respect to the switch points, the initial costate, and the terminal time. All these derivatives can be computed simultaneously in just one integration of the state and costate dynamics. Hence, gradient-based unconstrained optimization techniques, including the conjugate gradient method or quasi-Newton methods, can be used to compute an optimal control. The performance of the algorithm is illustrated using test problems with known solutions and comparisons with other algorithms from the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
39. STOCHASTIC DYNAMIC LINEAR PROGRAMMING: A SEQUENTIAL SAMPLING ALGORITHM FOR MULTISTAGE STOCHASTIC LINEAR PROGRAMMING.
- Author
-
GANGAMMANAVAR, HARSHA and SEN, SUVRAJEET
- Subjects
LINEAR programming ,DYNAMIC programming ,STOCHASTIC programming ,ALGORITHMS ,MATHEMATICAL optimization ,STOCHASTIC processes - Abstract
Multistage stochastic programming deals with operational and planning problems that involve a sequence of decisions over time while responding to an uncertain future. Algorithms designed to address multistage stochastic linear programming (MSLP) problems often rely upon scenario trees to represent the underlying stochastic process. When this process exhibits stagewise independence, sampling-based techniques, particularly the stochastic dual dynamic programming algorithm, have received wide acceptance. However, these sampling-based methods still operate with a deterministic representation of the problem which uses the so-called sample average approximation. In this work, we present a sequential sampling approach for MSLP problems that allows the decision process to assimilate newly sampled data recursively. We refer to this method as the stochastic dynamic linear programming (SDLP) algorithm. Since we use sequential sampling, the algorithm does not necessitate a priori representation of uncertainty, through either a scenario tree or sample average approximation, both of which require a knowledge/estimation of the underlying distribution. This method constitutes a generalization of the stochastic decomposition algorithm for two-stage stochastic linear programming models. The approximations used within SDLP may be viewed either through the lens of proximal methods or via regularization. Furthermore, we introduce the notion of basic feasible policies which provide a piecewise affine solution discovery scheme, which is embedded within the optimization algorithm to identify incumbent solutions used in the context of proximal iterations. Finally, we show that the SDLP algorithm provides a sequence of decisions and corresponding value function estimates along a sequence of state trajectories that asymptotically converge to their optimal counterparts, with probability one. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
40. A SUPERVISED LEARNING APPROACH INVOLVING ACTIVE SUBSPACES FOR AN EFFICIENT GENETIC ALGORITHM IN HIGH-DIMENSIONAL OPTIMIZATION PROBLEMS.
- Author
-
DEMO, NICOLA, TEZZELE, MARCO, and ROZZA, GIANLUIGI
- Subjects
SUPERVISED learning ,GENETIC algorithms ,MATHEMATICAL optimization ,STRUCTURAL optimization ,ALGORITHMS ,FUNCTION spaces ,TEST methods - Abstract
In this work, we present an extension of the genetic algorithm (GA) which exploits the supervised learning technique called active subspaces (AS) to evolve the individuals on a lower-dimensional space. In many cases, GA in fact requires more function evaluations than other optimization methods to converge to the global optimum. Thus, complex and high-dimensional functions can become extremely demanding (from the computational point of view) to optimize with the standard algorithm. To address this issue, we propose to linearly map the input parameter space of the original function onto its AS before the evolution, performing the mutation and mate processes in a lower-dimensional space. In this contribution, we describe the novel method called ASGA, presenting differences and similarities with the standard GA method. We test the proposed method over n-dimensional benchmark functions (Rosenbrock, Ackley, Bohachevsky, Rastrigin, Schaffer N. 7, and Zakharov), and finally we apply it to an aeronautical shape optimization problem. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
41. SEQUENTIAL QUADRATIC OPTIMIZATION FOR NONLINEAR EQUALITY CONSTRAINED STOCHASTIC OPTIMIZATION.
- Author
-
BERAHAS, ALBERT S., CURTIS, FRANK E., ROBINSON, DANIEL, and BAOYU ZHOU
- Subjects
CONSTRAINED optimization ,ALGORITHMS ,NONSMOOTH optimization ,NONLINEAR equations ,MATHEMATICAL optimization ,POINT set theory - Abstract
Sequential quadratic optimization algorithms are proposed for solving smooth nonlinear optimization problems with equality constraints. The main focus is an algorithm proposed for the case when the constraint functions are deterministic, and constraint function and derivative values can be computed explicitly, but the objective function is stochastic. It is assumed in this setting that it is intractable to compute objective function and derivative values explicitly, although one can compute stochastic function and gradient estimates. As a starting point for this stochastic setting, an algorithm is proposed for the deterministic setting that is modeled after a state-of-the-art line-search SQP algorithm but uses a stepsize selection scheme based on Lipschitz constants (or adaptively estimated Lipschitz constants) in place of the line search. This sets the stage for the proposed algorithm for the stochastic setting, for which it is assumed that line searches would be intractable. Under reasonable assumptions, convergence (resp., convergence in expectation) from remote starting points is proved for the proposed deterministic (resp., stochastic) algorithm. The results of numerical experiments demonstrate the practical performance of our proposed techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
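A minimal sketch of one equality-constrained SQP step with a Lipschitz-based stepsize, in the spirit of entry 41 above. The KKT system is standard; the stepsize rule shown here (a plain clip by 1/L) is a deliberately simplified stand-in for the paper's adaptive scheme, and every name below is an assumption of the sketch.

    import numpy as np

    def sqp_step(x, g, c, J, H, L, beta=1.0):
        # g: (possibly stochastic) objective gradient at x
        # c: equality-constraint values, J: their Jacobian
        # H: positive-definite Hessian model, L: Lipschitz-constant estimate
        n, m = len(x), len(c)
        K = np.block([[H, J.T], [J, np.zeros((m, m))]])
        d = np.linalg.solve(K, -np.concatenate([g, c]))[:n]  # search direction
        alpha = min(1.0, beta / L)   # stepsize from the Lipschitz estimate,
        return x + alpha * d         # replacing a line search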
42. APPROXIMATION LIMITATIONS OF PURE DYNAMIC PROGRAMMING.
- Author
-
JUKNA, STASYS and SEIWERT, HANNES
- Subjects
DYNAMIC programming ,MATHEMATICAL optimization ,MATHEMATICAL models ,GREEDY algorithms ,RECURSION theory ,ALGORITHMS - Abstract
We prove the first, even superpolynomial, lower bounds on the size of tropical (min,+) and (max,+) circuits approximating given optimization problems. Many classical dynamic programming (DP) algorithms for optimization problems are pure in that they only use the basic min, max, and + operations in their recursion equations. Tropical circuits constitute a rigorous mathematical model for this class of algorithms. An algorithmic consequence of our lower bounds for tropical circuits is that the approximation powers of pure DP algorithms and greedy algorithms are incomparable. It has long been known that pure DP algorithms can hardly beat greedy algorithms in approximation; what is new in this consequence is that the converse also holds. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
43. AN INEXACT VARIABLE METRIC PROXIMAL POINT ALGORITHM FOR GENERIC QUASI-NEWTON ACCELERATION.
- Author
-
LIN, HONGZHOU, MAIRAL, JULIEN, and HARCHAOUI, ZAID
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,MACHINE learning ,MATHEMATICAL regularization - Abstract
We propose an inexact variable-metric proximal point algorithm to accelerate gradient-based optimization algorithms. The proposed scheme, called QNing, can notably be applied to incremental first-order methods such as the stochastic variance-reduced gradient descent algorithm and other randomized incremental optimization algorithms. QNing is also compatible with composite objectives, meaning that it has the ability to provide exactly sparse solutions when the objective involves a sparsity-inducing regularization. When combined with limited-memory BFGS rules, QNing is particularly effective at solving high-dimensional optimization problems while enjoying a worst-case linear convergence rate for strongly convex problems. We present experimental results where QNing gives significant improvements over competing methods for training machine learning methods on large samples and in high dimensions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
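The core mechanism in entry 43 above, smoothing the objective with a proximal term and running a quasi-Newton method on the resulting Moreau envelope, can be sketched as follows. The inner solver, the fixed iteration counts, and the use of SciPy's L-BFGS-B are assumptions of this sketch; QNing's inexactness control is considerably more careful.

    import numpy as np
    from scipy.optimize import minimize

    def prox_point(f_grad, y, kappa, steps=200, lr=0.01):
        # Inexactly solve min_x f(x) + (kappa/2)||x - y||^2 by gradient descent.
        x = y.copy()
        for _ in range(steps):
            x -= lr * (f_grad(x) + kappa * (x - y))
        return x

    def envelope(f, f_grad, kappa):
        # F(y) = min_x f(x) + (kappa/2)||x - y||^2, with grad F(y) = kappa (y - x*).
        def F(y):
            xs = prox_point(f_grad, y, kappa)
            return f(xs) + 0.5 * kappa * np.sum((y - xs) ** 2), kappa * (y - xs)
        return F

    # Toy least-squares instance; L-BFGS runs on the smoothed envelope F.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    f_grad = lambda x: A.T @ (A @ x - b)
    res = minimize(envelope(f, f_grad, kappa=1.0), np.zeros(5),
                   jac=True, method="L-BFGS-B")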
44. ONLINE BUY-AT-BULK NETWORK DESIGN.
- Author
-
CHAKRABARTY, DEEPARNAB, ENE, ALINA, KRISHNASWAMY, RAVISHANKAR, and PANIGRAHI, DEBMALYA
- Subjects
MATHEMATICAL optimization ,COMPUTER programming ,COMPUTER algorithms ,APPROXIMATION algorithms ,SEARCH algorithms - Abstract
We present the first online algorithms for the nonuniform, multicommodity buy-at-bulk (MC-BB) network design problem. Our competitive ratios qualitatively match the best known approximation factors for the corresponding offline problems. In particular, we show (a) a polynomial time online algorithm with a polylogarithmic competitive ratio for the MC-BB problem in undirected edge-weighted graphs, (b) a quasi-polynomial time online algorithm with a polylogarithmic competitive ratio for the MC-BB problem in undirected node-weighted graphs, (c) for any fixed ε > 0, a polynomial time online algorithm with a competitive ratio of Õ(k^{1/2+ε}) (where k is the number of demands, and the tilde hides polylog factors) for MC-BB in directed graphs, and (d) algorithms with matching competitive ratios for the prize-collecting variant of all the preceding problems. Prior to our work, a logarithmic competitive ratio was known for undirected, edge-weighted graphs only for the special case of uniform costs [B. Awerbuch and Y. Azar, Proceedings of FOCS, 1997, pp. 542-547], and a polylogarithmic-competitive algorithm was known for the edge-weighted single-sink problem [A. Meyerson, Proceedings of SPAA, 2004, pp. 275-280]. We believe no online algorithm was known in the node-weighted and directed settings, even for uniform costs. Our main technical contribution is an online reduction theorem of MC-BB problems to their single-sink counterparts. We use the concept of junction-tree solutions from [C. Chekuri, M. T. Hajiaghayi, G. Kortsarz, and M. R. Salavatipour, Proceedings of FOCS, 2006, pp. 677-686], which play an important role in solving the offline versions of the problem via a greedy subroutine, an inherently offline procedure. We use just the existence of good junction-trees for our reduction. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
45. ON REGULARIZATION AND ACTIVE-SET METHODS WITH COMPLEXITY FOR CONSTRAINED OPTIMIZATION.
- Author
-
BIRGIN, E. G. and MARTÍNEZ, J. M.
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,LIPSCHITZ spaces ,MATHEMATICAL bounds ,MATHEMATICAL regularization - Abstract
The main objective of this research is to introduce a practical method for smooth bound-constrained optimization that possesses worst-case evaluation complexity O(ε^{-3/2}) for finding an ε-approximate first-order stationary point when the Hessian of the objective function is Lipschitz continuous. Like other well-established algorithms for optimization with box constraints, the algorithm proceeds by visiting the different faces of the domain, aiming to reduce the norm of an internal projected gradient and abandoning active constraints when no additional progress is expected in the current face. The introduced method emerges as a particular case of a method for minimization with linear constraints. Moreover, the linearly constrained minimization algorithm is an instance of a minimization algorithm with general constraints whose implementation may be unaffordable when the constraints are complicated. As a procedure for leaving faces, a different method is employed that may be regarded as an independent device for constrained optimization. Such an independent algorithm may be employed to solve linearly constrained optimization problems on its own, without relying on the active-set strategy. A careful implementation and numerical experiments show that the algorithm that combines active sets with leaving-face iterations is more effective than the independent algorithm on which the leaving-face iterations are based, although both exhibit similar complexities O(ε^{-3/2}). [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
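For context on the complexity claim in entry 45 above, the ε-approximate first-order stationarity criterion for box constraints, which the abstract's O(ε^{-3/2}) evaluation bound refers to, can be written as follows; this is the standard measure, not a quotation from the paper.

    \bigl\lVert P_{\Omega}\bigl(x - \nabla f(x)\bigr) - x \bigr\rVert \le \epsilon,
    \qquad \Omega = \{\, x : \ell \le x \le u \,\},

where P_{\Omega} is the Euclidean projection onto the box. The method is guaranteed to reach such a point within O(\epsilon^{-3/2}) function and gradient evaluations when \nabla^2 f is Lipschitz continuous.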
46. QUASI-NONEXPANSIVE ITERATIONS ON THE AFFINE HULL OF ORBITS: FROM MANN'S MEAN VALUE ALGORITHM TO INERTIAL METHODS.
- Author
-
COMBETTES, PATRICK L. and GLAUDIN, LILIAN E.
- Subjects
NONEXPANSIVE mappings ,ITERATIVE methods (Mathematics) ,ORBITS (Astronomy) ,MEAN value theorems ,ALGORITHMS ,FIXED point theory ,MATHEMATICAL optimization - Abstract
Fixed point iterations play a central role in the design and the analysis of a large number of optimization algorithms. We study a new iterative scheme in which the update is obtained by applying a composition of quasi-nonexpansive operators to a point in the affine hull of the orbit generated up to the current iterate. This investigation unifies several algorithmic constructs, including Mann's mean value method, inertial methods, and multilayer memoryless methods. It also provides a framework for the development of new algorithms, such as those we propose for solving monotone inclusion and minimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
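The unifying scheme in entry 46 above admits a compact schematic form: the next iterate applies a quasi-nonexpansive operator to an affine combination of the orbit generated so far. The display below is a simplified rendering under that description, not the authors' exact formulation.

    x_{n+1} = T_n\Bigl(\textstyle\sum_{j=0}^{n} \mu_{n,j}\, x_j\Bigr),
    \qquad \sum_{j=0}^{n} \mu_{n,j} = 1.

Choosing \mu_{n,n} = 1 + \alpha_n, \mu_{n,n-1} = -\alpha_n, and all other coefficients zero feeds T_n the point x_n + \alpha_n (x_n - x_{n-1}), which recovers the inertial methods mentioned in the abstract; Mann's mean value method corresponds to averaging over the whole orbit.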
47. A MULTILEVEL PROXIMAL GRADIENT ALGORITHM FOR A CLASS OF COMPOSITE OPTIMIZATION PROBLEMS.
- Author
-
PARPAS, PANOS
- Subjects
ALGORITHMS ,MATHEMATICAL optimization - Abstract
Composite optimization models consist of the minimization of the sum of a smooth (not necessarily convex) function and a nonsmooth convex function. Such models arise in many applications where, in addition to the composite nature of the objective function, a hierarchy of models is readily available. It is common to take advantage of this hierarchy by first solving a low-fidelity model and then using the solution as a starting point for a high-fidelity model. We adopt an optimization point of view and show how to take advantage of the availability of a hierarchy of models in a consistent manner. We do not use the low-fidelity model just for the computation of promising starting points but also for the computation of search directions. We establish the convergence and convergence rate of the proposed algorithm. Our numerical experiments on large-scale image restoration problems and the transition path problem suggest that, for certain classes of problems, the proposed algorithm is significantly faster than the state of the art. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
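One standard way to use a coarse model for search directions, in the spirit of entry 47 above (and of MG/OPT-type constructions generally), is sketched below; R and P denote assumed restriction and prolongation operators, and this is not necessarily the paper's exact scheme.

    y_k \approx \text{a few proximal-gradient steps on the coarse model, started at } R\,x_k,
    \qquad d_k = P\,(y_k - R\,x_k),

after which the fine-level iterate is updated along d_k, e.g., x_{k+1} = \operatorname{prox}_{\tau g}(x_k + \tau\, d_k), interleaved with ordinary fine-level proximal-gradient steps. The coarse model thus supplies search directions, not merely a starting point, which is the distinction the abstract emphasizes.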
48. A SAMPLING KACZMARZ-MOTZKIN ALGORITHM FOR LINEAR FEASIBILITY.
- Author
-
DE LOERA, JESÚS A., HADDOCK, JAMIE, and NEEDELL, DEANNA
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,ITERATIVE methods (Mathematics) - Abstract
We combine two iterative algorithms for solving large-scale systems of linear inequalities: the relaxation method of Agmon, Motzkin, et al. and the randomized Kaczmarz method. We obtain a family of algorithms that generalize and extend both projection-based techniques. We prove several convergence results, and our computational experiments show our algorithms often outperform the original methods. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
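The combination described in entry 48 above has a compact implementation: each iteration samples a block of rows of Ax <= b, picks the most violated sampled constraint (the Motzkin rule), and applies a relaxed Kaczmarz projection to it. The sketch below follows that description; parameter names and defaults are illustrative.

    import numpy as np

    def skm(A, b, x0, beta=10, lam=1.0, iters=1000, seed=0):
        # Sampling Kaczmarz-Motzkin for the linear feasibility problem Ax <= b.
        rng = np.random.default_rng(seed)
        x, m = x0.astype(float).copy(), A.shape[0]
        for _ in range(iters):
            S = rng.choice(m, size=min(beta, m), replace=False)  # sample rows
            i = S[np.argmax(A[S] @ x - b[S])]        # most violated sampled row
            residual = max(float(A[i] @ x - b[i]), 0.0)  # 0 if already feasible
            x -= lam * residual * A[i] / (A[i] @ A[i])   # relaxed projection
        return x

With lam = 1 the step is an exact projection onto the violated half-space; lam in (0, 2) reproduces the relaxation family of Agmon and Motzkin, and beta = 1 versus beta = m interpolates between randomized Kaczmarz and the pure Motzkin method.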
49. CLOSED-FORM EXPRESSIONS FOR PROJECTORS ONTO POLYHEDRAL SETS IN HILBERT SPACES.
- Author
-
RUTKOWSKI, KRZYSZTOF E.
- Subjects
POLYHEDRAL functions ,STOCHASTIC partial differential equations ,BANACH spaces ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
We provide formulas for projectors onto a polyhedral set, i.e., the intersection of a finite number of half-spaces. To this aim, we formulate the problem of finding the projection as a convex optimization problem and explicitly solve the necessary and sufficient optimality conditions. This approach has already been applied successfully to derive formulas for the projection onto the intersection of two half-spaces. We also discuss possible generalizations to Banach spaces. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
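The simplest closed-form instance behind entry 49 above is the projection onto a single half-space; the paper's contribution is the far less obvious case of finitely many half-spaces, but the base formula below is standard and gives the flavor.

    import numpy as np

    def project_halfspace(x, a, b):
        # Euclidean projection onto {z : <a, z> <= b}:
        #   P(x) = x - max(0, <a, x> - b) / ||a||^2 * a
        excess = max(float(a @ x - b), 0.0)
        return x - (excess / (a @ a)) * a

Points already in the half-space are fixed; violating points are moved orthogonally onto the bounding hyperplane, which is exactly the optimality condition mentioned in the abstract, specialized to one constraint.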
50. A DISTRIBUTED BOYLE–DYKSTRA–HAN SCHEME.
- Author
-
PHADE, SOHAM R. and BORKAR, VIVEK S.
- Subjects
CONVEX sets ,DISTRIBUTED algorithms ,ALGORITHMS ,SUBSPACES (Mathematics) ,MATHEMATICAL optimization - Abstract
We present a provably convergent distributed analog of Han's algorithm for computing the projection of a point on the intersection of a finite collection of convex sets. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF