263 results
Search Results
2. On the Parallelization Upper Bound for Asynchronous Stochastic Gradients Descent in Non-convex Optimization.
- Author
-
Wang, Lifu and Shen, Bo
- Subjects
ASYNCHRONOUS learning ,DISTRIBUTED algorithms ,DEEP learning ,PARALLEL algorithms ,ALGORITHMS ,MATHEMATICAL optimization - Abstract
In deep learning, asynchronous parallel stochastic gradient descent (APSGD) is a broadly used algorithm to speed up the training process. In asynchronous systems, the time delay of stale gradients is generally proportional to the total number of workers. When the number of workers is too large, the delayed gradients introduce additional deviation from the accurate gradient and may have a negative influence on the convergence speed of the algorithm. One may ask: how many workers can be used at most while achieving both convergence and speedup? In this paper, we consider the asynchronous training problem in the non-convex case. We theoretically study the problem of finding an approximate second-order stationary point using asynchronous algorithms in non-convex optimization and investigate the behavior of APSGD near saddle points. This work gives the first theoretical guarantee for finding an approximate second-order stationary point with asynchronous algorithms, together with a provable upper bound on the time delay. The techniques we provide can be generalized to analyze other types of asynchronous algorithms and to understand their behavior in distributed asynchronous parallel training. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
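The stale-gradient mechanism this abstract describes can be illustrated with a minimal simulation. This is a hedged sketch, not the paper's APSGD: the quadratic toy objective, the single sequential loop, and the fixed delay `delay` are assumptions chosen only to show how an update can consume parameters read several steps earlier.

```python
import numpy as np

def delayed_sgd(grad, x0, lr, delay, steps):
    """Gradient descent where each update uses a gradient evaluated at the
    parameters from `delay` steps ago, mimicking stale gradients produced
    by asynchronous workers."""
    history = [np.array(x0, dtype=float)]
    x = history[0].copy()
    for t in range(steps):
        stale = history[max(0, t - delay)]  # parameters an old worker read
        x = x - lr * grad(stale)
        history.append(x.copy())
    return x

# Toy objective f(x) = 0.5 * ||x||^2, so grad f(x) = x.
grad = lambda x: x
fresh = delayed_sgd(grad, [1.0], lr=0.1, delay=0, steps=100)
stale = delayed_sgd(grad, [1.0], lr=0.1, delay=5, steps=100)
# The delayed run follows a different (typically oscillatory) trajectory.
```

With a small enough step size relative to the delay the delayed iteration still converges, which is the regime the parallelization upper bound in the abstract is concerned with.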
3. A Hybrid Differential Dynamic Programming Algorithm for Constrained Optimal Control Problems. Part 2: Application.
- Author
-
Lantoine, Gregory and Russell, Ryan
- Subjects
DIFFERENTIABLE dynamical systems ,CONTROL theory (Engineering) ,MATHEMATICAL optimization ,NONLINEAR theories ,ALGORITHMS - Abstract
In the first part of this paper series, a new solver, called HDDP, was presented for solving constrained, nonlinear optimal control problems. In the present paper, the algorithm is extended to include practical safeguards to enhance robustness, and four illustrative examples are used to evaluate the main algorithm and some variants. The experiments involve both academic and applied problems to show that HDDP is capable of solving a wide class of constrained, nonlinear optimization problems. First, the algorithm is verified to converge in a single iteration on a simple multi-phase quadratic problem with trivial dynamics. Successively, more complicated constrained optimal control problems are then solved demonstrating robust solutions to problems with as many as 7 states, 25 phases, 258 stages, 458 constraints, and 924 total control variables. The competitiveness of HDDP, with respect to general-purpose, state-of-the-art NLP solvers, is also demonstrated. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
4. Weiszfeld's Method: Old and New Results.
- Author
-
Beck, Amir and Sabach, Shoham
- Subjects
MATHEMATICAL optimization ,STOCHASTIC convergence ,LOCATION problems (Programming) ,ALGORITHMS - Abstract
In 1937, the 16-year-old Hungarian mathematician Endre Weiszfeld, in a seminal paper, devised a method for solving the Fermat-Weber location problem, a problem whose origins can be traced back to the seventeenth century. Weiszfeld's method stirred up an enormous amount of research in the optimization and location communities and is still discussed and used to this day. In this paper, we review both the past and the ongoing research on Weiszfeld's method. The existing results are presented in a self-contained and concise manner; some are derived by new and simplified techniques. We also establish two new results using modern tools of optimization. First, we establish a non-asymptotic sublinear rate of convergence of Weiszfeld's method, and second, using an exact smoothing technique, we present a modification of the method with a provably better rate of convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
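The classical Weiszfeld iteration discussed in this abstract is well known and easy to sketch: each iterate is a distance-weighted average of the anchor points. This is an illustrative implementation of the textbook iteration, not of the paper's modified smoothed variant; the mean starting point and the anchor-collision guard are standard but still assumptions of this sketch.

```python
import numpy as np

def weiszfeld(points, iters=200, tol=1e-10):
    """Weiszfeld iteration for the Fermat-Weber problem:
    minimize the sum of Euclidean distances to the given points."""
    pts = np.asarray(points, dtype=float)
    x = pts.mean(axis=0)  # common starting point
    for _ in range(iters):
        d = np.linalg.norm(pts - x, axis=1)
        if np.any(d < tol):  # iterate landed on an anchor point
            break
        w = 1.0 / d
        x_new = (w[:, None] * pts).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x

# By symmetry, the geometric median of the four corners of a square
# is its center.
square = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
center = weiszfeld(square)
```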
5. A Sampling-and-Discarding Approach to Chance-Constrained Optimization: Feasibility and Optimality.
- Author
-
Campi, M. C. and Garatti, S.
- Subjects
MATHEMATICAL optimization ,MODULES (Algebra) ,MATHEMATICAL analysis ,CONSTRAINTS (Physics) ,ALGORITHMS - Abstract
In this paper, we study the link between a Chance-Constrained optimization Problem (CCP) and its sample counterpart (SP). SP has a finite number, say N, of sampled constraints. Further, some of these sampled constraints, say k, are discarded, and the final solution is indicated by $x^{\ast}_{N,k}$. Extending previous results on the feasibility of sample convex optimization programs, we establish the feasibility of $x^{\ast}_{N,k}$ for the initial CCP problem. Constraint removal allows one to improve the cost function at the price of decreased feasibility. The cost improvement can be inspected directly from the optimization result, while the theory developed here permits one to keep control of the other side of the coin, the feasibility of the obtained solution. In this way, trading feasibility for performance is put on solid mathematical grounds in this paper. The feasibility result obtained here applies to a vast class of chance-constrained optimization problems and has the distinctive feature that it holds true irrespective of the algorithm used to discard k constraints in the SP problem. For constraint discarding, one can thus, e.g., resort to one of the many methods introduced in the literature to solve chance-constrained problems with discrete distributions, or even use a greedy algorithm, which is computationally very undemanding, and the feasibility result remains intact. We further prove that, if constraints in the SP problem are optimally removed, i.e., one deletes those constraints leading to the largest possible cost improvement, then a precise optimality link to the original chance-constrained problem CCP holds in addition. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
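The feasibility-for-performance trade described in this abstract has a very simple one-dimensional instance that can be sketched in code. This is not the paper's general theory, only an illustration under assumptions: minimize x subject to x >= xi for each sampled xi, and greedily discard the k most restrictive (largest) samples.

```python
import numpy as np

def sampled_solution_with_discarding(samples, k):
    """Scenario solution of: minimize x s.t. x >= xi for each sampled xi,
    after greedily discarding the k most restrictive (largest) samples.
    Discarding constraints lowers the cost at the price of a higher
    probability of violating the original chance constraint."""
    s = np.sort(np.asarray(samples, dtype=float))
    return s[-(k + 1)]  # largest remaining sample after dropping the top k

rng = np.random.default_rng(0)
xi = rng.standard_normal(1000)
x_full = sampled_solution_with_discarding(xi, k=0)    # all constraints kept
x_relaxed = sampled_solution_with_discarding(xi, k=50)
# x_relaxed < x_full: better cost, reduced feasibility.
```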
6. Combinatorial Optimization Algorithms for detecting Collapse Mechanisms of Concrete Slabs.
- Author
-
De Filippo, Michele and Kuang, Jun Shang
- Subjects
CONCRETE slabs ,COMBINATORIAL optimization ,MATHEMATICAL optimization ,CONSTRUCTION slabs ,ALGORITHMS ,APPLIED mathematics - Abstract
Nowadays, a large part of the technical knowledge associated with collapses of slabs is based on past failures of bridges, floors, flat roofs, and balconies. Collapse mechanisms often differ from each other due to unique features, which makes it difficult to derive a generalized technique that can predict the right mechanism. This paper proposes a novel algorithm for tackling the problem of detecting collapse mechanisms, which is part of a pseudo-lower-bound method for assessing concrete slabs in bridges and buildings. The problem is generalized to a combinatorial one, and the solution is based on a set of well-known combinatorial optimization algorithms. The proposed approach enables identification of the domain of existence of yield-lines potentially leading to collapse. The output provides an estimate of a narrowed domain of feasible yield-lines through which engineers can quickly identify zones of the slab and directions in which yield-lines leading to collapse are more likely to occur. Numerical applications of the algorithm are presented herein. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Sequential Penalty Algorithm for Nonlinear Constrained Optimization.
- Author
-
Zhang, J.L. and Zhang, X.S.
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,NONLINEAR theories ,STOCHASTIC convergence ,MATHEMATICS ,MATHEMATICAL analysis - Abstract
In this paper, a new sequential penalty algorithm, based on the L∞ exact penalty function, is proposed for a general nonlinear constrained optimization problem. The algorithm has the following characteristics: it can start from an arbitrary initial point; the feasibility of the subproblem is guaranteed; the penalty parameter is adjusted automatically; and global convergence without any regularity assumption is proved. The update formula of the penalty parameter is new. It is proved that the algorithm behaves equivalently to the standard SQP method after sufficiently many iterations; hence, the local convergence results of the standard SQP method apply to this algorithm. Preliminary numerical experiments show the efficiency and stability of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
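The L∞ exact penalty function named in this abstract, P(x) = f(x) + σ·max(0, max_i g_i(x)), has the classical exactness property: once σ exceeds the magnitude of the optimal multiplier, the penalized minimizer coincides with the constrained one. The sketch below demonstrates that property on a toy problem by brute-force grid search; it is not the paper's sequential algorithm, and the toy problem, grid solver, and threshold value are assumptions for illustration.

```python
import numpy as np

def linf_penalty(f, gs, sigma):
    """L-infinity exact penalty: f(x) + sigma * max(0, max_i g_i(x))."""
    return lambda x: f(x) + sigma * max(0.0, max(g(x) for g in gs))

def grid_argmin(fun, lo, hi, n=200001):
    """Crude 1-D minimizer by exhaustive grid search."""
    xs = np.linspace(lo, hi, n)
    vals = np.array([fun(x) for x in xs])
    return xs[vals.argmin()]

# Toy problem: minimize x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
# The optimal Lagrange multiplier is 2, so sigma > 2 makes the penalty exact.
f = lambda x: x * x
g = lambda x: 1.0 - x
x_small = grid_argmin(linf_penalty(f, [g], sigma=1.0), -2, 2)  # infeasible minimizer
x_large = grid_argmin(linf_penalty(f, [g], sigma=4.0), -2, 2)  # constrained minimizer
```

For σ = 1 the penalized minimizer sits at x = 0.5, violating the constraint; for σ = 4 it lands exactly on the constrained solution x = 1, which is what a sequential penalty scheme exploits by increasing σ automatically.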
8. Proximal Point Algorithms for Multi-criteria Optimization with the Difference of Convex Objective Functions.
- Author
-
Ji, Ying, Goh, Mark, and Souza, Robert
- Subjects
MATHEMATICAL optimization ,CONVEX functions ,SCALAR field theory ,NUMERICAL analysis ,ALGORITHMS - Abstract
This paper focuses on solving a class of multi-criteria optimization problems whose objectives are differences of convex functions. Proximal point algorithms, extensively studied for scalar optimization, are extended to our setting. We show that the proposed algorithms are well posed and globally convergent to a critical point. As an application, the new methods are applied to a multi-criteria model arising in portfolio optimization. The numerical results show the efficiency of our methods. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
9. A Decentralized Multi-objective Optimization Algorithm.
- Author
-
Blondin, Maude J. and Hale, Matthew
- Subjects
MATHEMATICAL optimization ,DISTRIBUTED algorithms ,SCIENTIFIC community ,MULTIAGENT systems ,ALGORITHMS ,STOCHASTIC matrices - Abstract
During the past few decades, multi-agent optimization problems have drawn increased attention from the research community. When multiple objective functions are present among agents, many works optimize the sum of these objective functions. However, this formulation implies a decision regarding the relative importance of each objective: optimizing the sum is a special case of a multi-objective problem in which all objectives are prioritized equally. To enable more general prioritizations, we present a distributed optimization algorithm that explores Pareto optimal solutions for non-homogeneously weighted sums of objective functions. This exploration is performed through a new rule based on agents' priorities that generates edge weights in the agents' communication graph. These weights determine how agents update their decision variables with information received from other agents in the network. Agents initially disagree on the priorities of objective functions, though they are driven to agree upon them as they optimize. As a result, agents still reach a common solution. The network-level weight matrix is (non-doubly) stochastic, contrasting with many works on the subject in which the network-level weight matrix is doubly stochastic. New theoretical analyses are therefore developed to ensure convergence of the proposed algorithm. This paper provides a gradient-based optimization algorithm, proof of convergence to solutions, and convergence rates of the proposed algorithm. It is shown that agents' initial priorities influence the convergence rate of the proposed algorithm and that these initial choices affect its long-run behavior. Numerical experiments performed with different numbers of agents illustrate the performance and effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
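The distinction the abstract draws between doubly stochastic and merely row-stochastic weight matrices can be seen already in plain consensus averaging. This sketch is not the paper's gradient-based algorithm; the line-graph weights and initial values are assumptions chosen so the limit is easy to verify by hand.

```python
import numpy as np

def consensus(W, x0, iters=200):
    """Agents repeatedly average neighbors' values with a row-stochastic
    weight matrix W. On a connected graph all agents converge to a common
    value: the convex combination of initial values given by W's left
    Perron vector."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x

# 3 agents on a line graph; W is row-stochastic but NOT doubly stochastic.
W = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
vals = consensus(W, [0.0, 3.0, 9.0])
# Left Perron vector of this W is pi = [0.25, 0.5, 0.25], so the common
# limit is pi . x0 = 3.75, not the simple average 4.0.
```

With a doubly stochastic W the limit would be the plain average; the bias toward the Perron weights is exactly why the non-doubly-stochastic setting needs the new analysis the abstract mentions.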
10. On the Linear Convergence of Two Decentralized Algorithms.
- Author
-
Li, Yao and Yan, Ming
- Subjects
ALGORITHMS ,PROBLEM solving ,MATHEMATICAL optimization ,MATRIX functions - Abstract
Decentralized algorithms solve multi-agent problems over a connected network, where the information can only be exchanged with the accessible neighbors. Though there exist several decentralized optimization algorithms, there are still gaps in convergence conditions and rates between decentralized and centralized algorithms. In this paper, we fill some gaps by considering two decentralized algorithms: EXTRA and NIDS. They both converge linearly with strongly convex objective functions. We will answer two questions regarding them. What are the optimal upper bounds for their stepsizes? Do decentralized algorithms require more properties on the functions for linear convergence than centralized ones? More specifically, we relax the required conditions for linear convergence of both algorithms. For EXTRA, we show that the stepsize is comparable to that of centralized algorithms. For NIDS, the upper bound of the stepsize is shown to be exactly the same as the centralized ones. In addition, we relax the requirement for the objective functions and the mixing matrices. We provide the linear convergence results for both algorithms under the weakest conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. Perfect Duality in Solving Geometric Programming Problems Under Uncertainty.
- Author
-
Bricker, Dennis and Kortanek, K.
- Subjects
GEOMETRIC programming ,DUALITY theory (Mathematics) ,UNCERTAINTY (Information theory) ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
We examine computational solutions to all of the geometric programming problems published in a recent paper in the Journal of Optimization Theory and Applications. We employed three implementations of published algorithms interchangeably to obtain 'perfect duality' for all of these problems. Perfect duality is taken to mean that a computed solution of an optimization problem achieves two properties: (1) primal and dual feasibility and (2) equality of primal and dual objective function values, all within the accuracy of the machine employed. Perfect duality was introduced by Duffin (Math Program 4:125-143, 1973). When primal and dual objective values differ, we say there is a duality gap. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
12. Solving the Maximum Clique Problem with Symmetric Rank-One Non-negative Matrix Approximation.
- Author
-
Belachew, Melisew and Gillis, Nicolas
- Subjects
SUBGRAPHS ,COMBINATORIAL optimization ,MATHEMATICAL optimization ,GRAPH theory ,ALGORITHMS - Abstract
Finding complete subgraphs in a graph, that is, cliques, is a key problem and has many real-world applications, e.g., finding communities in social networks, clustering gene expression data, modeling ecological niches in food webs, and describing chemicals in a substance. The problem of finding the largest clique in a graph is a well-known difficult combinatorial optimization problem and is called the maximum clique problem. In this paper, we formulate a very convenient continuous characterization of the maximum clique problem based on the symmetric rank-one non-negative approximation of a given matrix and build a one-to-one correspondence between stationary points of our formulation and cliques of a given graph. In particular, we show that the local (resp. global) minima of the continuous problem correspond to the maximal (resp. maximum) cliques of the given graph. We also propose a new and efficient clique finding algorithm based on our continuous formulation and test it on the DIMACS data sets to show that the new algorithm outperforms other existing algorithms based on the Motzkin-Straus formulation and can compete with a sophisticated combinatorial heuristic. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
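The Motzkin-Straus formulation this abstract compares against is the classical continuous characterization: maximizing x^T A x over the simplex gives 1 - 1/ω(G), attained by the uniform distribution over a maximum clique. The sketch below only verifies that value on a tiny graph; it is not the paper's rank-one NMF method, and the example graph is an assumption for illustration.

```python
import numpy as np

def motzkin_straus_value(adj, support):
    """Value of x^T A x when x is uniform on `support` (a candidate clique).
    By the Motzkin-Straus theorem, the maximum over the simplex equals
    1 - 1/omega(G), attained at the uniform vector on a maximum clique."""
    k = len(support)
    x = np.zeros(adj.shape[0])
    x[list(support)] = 1.0 / k
    return x @ adj @ x

# Small graph: a triangle {0,1,2} plus a pendant vertex 3 attached to 2.
n = 4
adj = np.zeros((n, n))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0

# Uniform weights on the maximum clique {0,1,2} give 1 - 1/3 = 2/3;
# a smaller clique such as the edge {2,3} gives only 1 - 1/2 = 1/2.
v_clique = motzkin_straus_value(adj, (0, 1, 2))
v_edge = motzkin_straus_value(adj, (2, 3))
```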
13. Homogeneous Self-dual Algorithms for Stochastic Semidefinite Programming.
- Author
-
Jin, S., Ariyawansa, K., and Zhu, Y.
- Subjects
SEMIDEFINITE programming ,STOCHASTIC analysis ,ALGORITHMS ,MATHEMATICAL decomposition ,HOMOGENEOUS polynomials ,MATHEMATICAL optimization - Abstract
Ariyawansa and Zhu have proposed a new class of optimization problems termed stochastic semidefinite programs to handle data uncertainty in applications leading to (deterministic) semidefinite programs. For stochastic semidefinite programs with finite event space, they have also derived a class of volumetric barrier decomposition algorithms, and proved polynomial complexity of certain members of the class. In this paper, we consider homogeneous self-dual algorithms for stochastic semidefinite programs with finite event space. We show how the structure in such problems may be exploited so that the algorithms developed in this paper have complexity similar to those of the decomposition algorithms mentioned above. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
14. On the Solution of Generalized Multiplicative Extremum Problems.
- Author
-
Ashtiani, Alireza M. and Ferreira, Paulo A. V.
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,EIGENVALUES ,MATHEMATICAL inequalities ,STOCHASTIC convergence ,QUADRATIC programming - Abstract
The paper addresses the problem of maximizing a sum of products of positive and concave real-valued functions over a convex feasible set. A reformulation based on the image of the feasible set under the vector-valued function that describes the problem, combined with an adequate application of convex analysis results, leads to an equivalent indefinite quadratic extremum problem with infinitely many linear constraints. Special properties of this latter problem allow it to be solved by an efficient relaxation algorithm. Some numerical tests illustrate the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
15. Applications of Variational Analysis to a Generalized Fermat-Torricelli Problem.
- Author
-
Mordukhovich, Boris and Nguyen Mau Nam
- Subjects
BANACH spaces ,MEASUREMENT of distances ,PLANE geometry ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
In this paper we develop new applications of variational analysis and generalized differentiation to the following optimization problem and its specifications: given n closed subsets of a Banach space, find such a point for which the sum of its distances to these sets is minimal. This problem can be viewed as an extension of the celebrated Fermat-Torricelli problem: given three points on the plane, find another point that minimizes the sum of its distances to the designated points. The generalized Fermat-Torricelli problem formulated and studied in this paper is of undoubted mathematical interest and is promising for various applications including those frequently arising in location science, optimal networks, etc. Based on advanced tools and recent results of variational analysis and generalized differentiation, we derive necessary as well as necessary and sufficient optimality conditions for the extended version of the Fermat-Torricelli problem under consideration, which allow us to completely solve it in some important settings. Furthermore, we develop and justify a numerical algorithm of the subgradient type to find optimal solutions in convex settings and provide its numerical implementations. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
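The subgradient-type algorithm mentioned at the end of this abstract has a standard convex-setting form that can be sketched: for dist(·, C_i) the subgradient at a point outside C_i is the unit vector from the projection onto C_i. This is an illustrative sketch under assumptions (projection-defined sets, diminishing 1/t steps, singleton sets as the test case), not the authors' algorithm or its analysis.

```python
import numpy as np

def subgradient_fermat_torricelli(projections, x0, steps=3000):
    """Subgradient method for min_x sum_i dist(x, C_i), where each closed
    convex set C_i is given by its projection operator. A subgradient of
    dist(., C_i) at x outside C_i is (x - P_i(x)) / ||x - P_i(x)||."""
    x = np.array(x0, dtype=float)
    for t in range(1, steps + 1):
        g = np.zeros_like(x)
        for proj in projections:
            r = x - proj(x)
            nr = np.linalg.norm(r)
            if nr > 1e-12:           # zero subgradient inside the set
                g += r / nr
        x -= (1.0 / t) * g           # diminishing step size
    return x

# Three singleton sets: the classical Fermat-Torricelli problem for a
# triangle with vertices (0,0), (4,0), (2,3). By symmetry the optimal
# point lies on x = 2, at height 2/sqrt(3) (each pair of vertices is
# seen under a 120-degree angle).
pts = [np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([2.0, 3.0])]
projs = [lambda x, p=p: p for p in pts]
x_star = subgradient_fermat_torricelli(projs, x0=[1.0, 1.0])
```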
16. Pseudotransient Continuation for Solving Systems of Nonsmooth Equations with Inequality Constraints.
- Author
-
Chen, J. and Qi, L.
- Subjects
NUMERICAL solutions to nonlinear evolution equations ,NONSMOOTH optimization ,MATHEMATICAL inequalities ,ALGORITHMS ,STOCHASTIC convergence ,CONTINUATION methods ,MATHEMATICAL optimization ,CONSTRAINT satisfaction - Abstract
This paper investigates a pseudotransient continuation algorithm for solving a system of nonsmooth equations with inequality constraints. We first transform the inequality constrained system of nonlinear equations to an augmented nonsmooth system, and then employ the pseudotransient continuation algorithm for solving the corresponding augmented nonsmooth system. The method gets its global convergence properties from the dynamics, and inherits its local convergence properties from the semismooth Newton method. Finally, we illustrate the behavior of our approach by some numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
17. Convex Semi-Infinite Programming: Implicit Optimality Criterion Based on the Concept of Immobile Indices.
- Author
-
Kostyukova, O., Tchemisova, T., and Yermalinskaya, S.
- Subjects
MATHEMATICAL programming ,INDEXES ,FUNCTIONAL equations ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
We state a new implicit optimality criterion for convex semi-infinite programming (SIP) problems. This criterion does not require any constraint qualification and is based on the concepts of immobile index and immobility order. Given a convex SIP problem with a continuum of constraints, we use information about its immobile indices to construct a nonlinear programming (NLP) problem of a special form. We prove that a feasible point of the original infinite SIP problem is optimal if and only if it is optimal in the corresponding finite NLP problem. This fact allows us to obtain new efficient optimality conditions for convex SIP problems using known results of the optimality theory of NLP. To construct the NLP problem, we use the DIO algorithm. A comparison of the optimality conditions obtained in the paper with known results is provided. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
18. Optimal Control of Differential—Algebraic Equations of Higher Index, Part 1: First-Order Approximations.
- Author
-
Pytlak, R.
- Subjects
DIFFERENTIAL-algebraic equations ,DIFFERENTIAL equations ,APPROXIMATION theory ,ADJOINT differential equations ,FUNCTIONAL analysis ,MATHEMATICAL optimization ,ALGORITHMS ,FIRST-order logic ,CONSTRAINED optimization - Abstract
This paper deals with optimal control problems described by higher index DAEs. We introduce a class of these problems which can be transformed to index one control problems. For this class of higher index DAEs, we derive first-order approximations and adjoint equations for the functionals defining the problem. These adjoint equations are then used to state, in the accompanying paper, the necessary optimality conditions in the form of a weak maximum principle. The constructive way used to prove these optimality conditions leads to globally convergent algorithms for control problems with state constraints and defined by higher index DAEs. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
19. Global Convergence of a Class of Discrete-Time Interconnected Pendulum-Like Systems.
- Author
-
Yang, Y., Duan, Z. S., and Huang, L.
- Subjects
STOCHASTIC convergence ,DISCRETE-time systems ,DIGITAL control systems ,SYSTEM analysis ,MATRIX inequalities ,LARGE scale systems ,ALGORITHMS ,MATHEMATICAL optimization ,MATHEMATICAL analysis - Abstract
This paper focuses on a class of discrete-time interconnected pendulum-like systems. Sufficient conditions for the global convergence of systems with both structured and unstructured uncertainties in the linear part are established in terms of linear matrix inequalities (LMIs). In the traditional decentralized control of large scale systems, the effects of interconnections were seldom studied. In this paper, a square matrix specifying the interconnecting structure is introduced in order to discuss the effects of nonlinear interconnections. It is shown that the global convergence of the whole interconnected system can be achieved by designing an appropriate interconnection matrix. In order to solve the nonlinear matrix inequalities (NMIs) arising in the synthesis problem, a global optimization algorithm is presented which can handle a class of NMI problems. An illustrative example is given to demonstrate the applicability and validity of the main results and the presented algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
20. On the Method of Shortest Residuals for Unconstrained Optimization.
- Author
-
Pytlak, R. and Tarnawski, T.
- Subjects
CONJUGATE gradient methods ,APPROXIMATION theory ,NUMERICAL solutions to equations ,ITERATIVE methods (Mathematics) ,MATHEMATICAL optimization ,ALGORITHMS ,STOCHASTIC convergence ,NUMERICAL analysis ,CALCULUS of variations - Abstract
The paper discusses several versions of the method of shortest residuals, a specific variant of the conjugate gradient algorithm, first introduced by Lemaréchal and Wolfe and discussed by Hestenes in a quadratic case. In the paper we analyze the global convergence of the versions considered. Numerical comparison of these versions of the method of shortest residuals and an implementation of a standard Polak–Ribière conjugate gradient algorithm is also provided. It supports the claim that the method of shortest residuals is a viable technique, competitive to other conjugate gradient algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
21. Globally Convergent Optimization Algorithms on Riemannian Manifolds: Uniform Framework for Unconstrained and Constrained Optimization.
- Author
-
Yang, Y.
- Subjects
RIEMANNIAN manifolds ,MANIFOLDS (Mathematics) ,DIFFERENTIAL geometry ,MATHEMATICAL optimization ,MATHEMATICAL analysis ,ACCELERATION of convergence in numerical analysis ,STOCHASTIC convergence ,ALGORITHMS ,MATHEMATICS - Abstract
This paper proposes several globally convergent geometric optimization algorithms on Riemannian manifolds, which extend some existing geometric optimization techniques. Since any set of smooth constraints in the Euclidean space R^n (corresponding to constrained optimization) and the R^n space itself (corresponding to unconstrained optimization) are both special Riemannian manifolds, and since these algorithms are developed on general Riemannian manifolds, the techniques discussed in this paper provide a uniform framework for constrained and unconstrained optimization problems. Unlike some earlier works, the new algorithms have fewer restrictions both in convergence results and in practice. For example, global minimization in the one-dimensional search is not required. All the algorithms addressed in this paper are globally convergent. For some special Riemannian manifolds other than R^n, the new algorithms are very efficient. Convergence rates are obtained. Applications are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
22. Optimal Planetary Orbital Transfers via Chemical Engines and Electrical Engines.
- Author
-
Miele, A. and Wang, T.
- Subjects
ORBITAL transfer (Space flight) ,ALGORITHMS ,PROPELLANTS ,THRUST of rocket engines ,MATHEMATICAL optimization ,COST control ,SPACE flight ,ENGINES ,ALGEBRA - Abstract
Because the orbital periods for planetary orbital transfers are on the order of hours, the primary objective of an optimal trajectory is to minimize the propellant consumption. In this paper, we present a systematic investigation of optimal trajectories for planetary orbital transfer. Major results on thrust control, propellant consumption, and flight time are presented with particular reference to LEO-to-GEO transfer. The following results were obtained with the sequential gradient-restoration algorithm in either single-subarc or multiple-subarc form: (i) For minimum propellant consumption, the thrust direction is tangent to the flight path. The thrust magnitude has a three-subarc form: powered flight with maximum thrust is followed by coasting flight, which is followed by powered flight with maximum thrust. (ii) The flight time is determined mainly by the thrust-to-weight ratio. A transfer via chemical engines is relatively short: usually, it requires less than one cycle to achieve the mission, which involves a large portion of coasting flight. A transfer via electrical engines is relatively long: usually, it requires a multicycle spiral trajectory to achieve the mission, which involves a large portion of powered flight, mostly in the first subarc. (iii) The propellant consumption is determined mainly by the specific impulse: the electrical engine is more efficient than the chemical engine, resulting in lower propellant consumption and higher payload. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
23. Scaling Damped Limited-Memory Updates for Unconstrained Optimization.
- Author
-
Biglari, Fahimeh and Mahmoodpur, Farideh
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,NONLINEAR programming ,QUASI-Newton methods ,HESSIAN matrices - Abstract
This paper investigates scaling a modified limited-memory algorithm to solve unconstrained optimization problems. The basic idea is to combine damped techniques for the limited-memory update with the technique of equilibrating the inverse Hessian matrix. Enhanced curvature information about the objective function is stored in the form of a diagonal matrix, which plays the dual roles of providing an initial matrix and equilibrating the damped limited-memory iterations. Numerical experiments indicate that the new algorithm is very effective. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
24. Optimal Thickness of a Cylindrical Shell Subject to Stochastic Forces.
- Author
-
Keyanpour, Mohammad and Nehrani, Ali
- Subjects
CYLINDRICAL shells ,STOCHASTIC processes ,VARIATIONAL principles ,STOCHASTIC partial differential equations ,ELASTICITY ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
In this paper, sizing the thickness of a cylindrical shell subject to a stochastic force is considered. The variational principle of stochastic partial differential equations (PDEs) is applied to derive the necessary optimality conditions. The goal is to determine the optimal thickness of a cylindrical shell such that, subject to a stochastic force, it does not deform; because of the elasticity of a cylindrical shell, occasional small deformations that do not destroy the structure are allowable. The sizing problem under a stochastic force is cast as a one-dimensional stochastic PDE-constrained optimization problem. Test examples are solved using a self-adjoint gradient algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
25. General Split Equality Equilibrium Problems with Application to Split Optimization Problems.
- Author
-
Chang, Shih-sen, Wang, Lin, Wang, Xiong, and Wang, Gang
- Subjects
MATHEMATICAL optimization ,HILBERT space ,STOCHASTIC convergence ,ALGORITHMS ,MATHEMATICAL sequences - Abstract
The purpose of this paper is to introduce and study the general split equality equilibrium problem and the general split equilibrium problem in Hilbert spaces. In order to solve these problems, a new simultaneous iterative algorithm is proposed and several strong convergence theorems for the sequences generated by the algorithm are proved. As applications, we utilize our results to study the general split equality optimization problem and the general split optimization problem. The results presented in the paper extend and improve some recent results. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
26. A Projected Primal-Dual Method for Solving Constrained Monotone Inclusions.
- Author
-
Briceño-Arias, Luis and López Rivera, Sergio
- Subjects
MONOTONE operators ,ALGORITHMS ,CONVEX sets ,MATHEMATICAL optimization ,LAGRANGE multiplier - Abstract
In this paper, we provide an algorithm for solving constrained composite primal-dual monotone inclusions, i.e., monotone inclusions in which a priori information on primal-dual solutions is represented via closed and convex sets. The proposed algorithm incorporates a projection step onto the a priori information sets and generalizes methods proposed in the literature for solving monotone inclusions. Moreover, under the presence of strong monotonicity, we derive an accelerated scheme inspired by the primal-dual algorithm, applied to the more general context of constrained monotone inclusions. In the particular case of convex optimization, our algorithm generalizes several primal-dual optimization methods by allowing a priori information on solutions. In addition, we provide an accelerated scheme under strong convexity. An application of our approach with a priori information is constrained convex optimization problems, in which available primal-dual methods impose constraints via Lagrange multiplier updates, usually leading to slow algorithms with infeasible primal iterates. The proposed modification forces primal iterates to satisfy a selection of constraints onto which we can project, obtaining a faster method, as numerical examples exhibit. The obtained results extend and improve several results in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
27. A New Nonsmooth Trust Region Algorithm for Locally Lipschitz Unconstrained Optimization Problems.
- Author
-
Akbari, Z., Yousefpour, R., and Reza Peyghami, M.
- Subjects
NONSMOOTH optimization ,ALGORITHMS ,MATHEMATICAL optimization ,LIPSCHITZ spaces ,FUNCTION spaces - Abstract
In this paper, a new nonsmooth trust region algorithm is proposed for solving unconstrained minimization problems with locally Lipschitz objective functions. At first, by using an approximation of the steepest descent direction, a local model is presented for locally Lipschitz functions. More precisely, in the quadratic model of classical trust region methods, the gradient vector is replaced by an approximation of the steepest descent direction. We then apply one of the efficient approaches of classical trust region methods in order to solve the obtained model. Using the BFGS updating formula for the Hessian approximation of the model, we show that the proposed algorithm is convergent under some mild and standard conditions on the objective function. Finally, the presented algorithm is implemented in the MATLAB environment and applied on some nonsmooth test problems. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
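The classical smooth trust-region iteration that entry 27 adapts can be sketched as follows. Only the textbook framework is shown: the paper's actual contribution, replacing the gradient with an approximation of the steepest descent direction for locally Lipschitz objectives, is not reproduced, and the function names and test problem are illustrative assumptions.

```python
# Minimal trust-region sketch with a Cauchy (steepest-descent) step and a
# unit-Hessian quadratic model. In the paper, grad(x) below would be
# replaced by an approximate steepest descent direction for nonsmooth f.

def trust_region(f, grad, x, delta=1.0, tol=1e-8, max_iter=500):
    for _ in range(max_iter):
        g = grad(x)
        gnorm = sum(gi * gi for gi in g) ** 0.5
        if gnorm < tol:
            break
        step = min(delta, gnorm)  # Cauchy step length for the unit model
        trial = [xi - step * gi / gnorm for xi, gi in zip(x, g)]
        pred = step * gnorm - 0.5 * step ** 2   # model (predicted) reduction
        rho = (f(x) - f(trial)) / pred          # actual / predicted ratio
        if rho > 0.25:
            x = trial                           # accept the step
            if rho > 0.75:
                delta = min(2.0 * delta, 10.0)  # expand the region
        else:
            delta *= 0.5                        # shrink the region
    return x

# Smooth test problem: f(x) = 0.25*((x0-1)^2 + (x1+2)^2), minimizer (1, -2).
f = lambda x: 0.25 * ((x[0] - 1) ** 2 + (x[1] + 2) ** 2)
grad = lambda x: [0.5 * (x[0] - 1), 0.5 * (x[1] + 2)]
x_min = trust_region(f, grad, [5.0, 5.0])
```

The acceptance ratio rho and the expand/shrink rule are the same mechanism the paper reuses; only the model's first-order information changes in the nonsmooth setting.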
28. Derivative-Free Methods for Mixed-Integer Constrained Optimization Problems.
- Author
-
Liuzzi, Giampaolo, Lucidi, Stefano, and Rinaldi, Francesco
- Subjects
CONSTRAINED optimization ,ENGINEERING ,MATHEMATICAL optimization ,INDUSTRIAL arts ,ALGORITHMS - Abstract
Methods which do not use any derivative information are becoming popular among researchers, since they make it possible to solve many real-world engineering problems. Such problems are frequently characterized by the presence of discrete variables, which can further complicate the optimization process. In this paper, we propose derivative-free algorithms for solving continuously differentiable Mixed Integer NonLinear Programming problems with general nonlinear constraints and explicit handling of bound constraints on the problem variables. We use an exterior penalty approach to handle the general nonlinear constraints and a local search approach to take into account the presence of discrete variables. We show that the proposed algorithms globally converge to points satisfying different necessary optimality conditions. We report computational experience and a comparison with a well-known derivative-free optimization software package, NOMAD, on a set of test problems. Furthermore, we employ the proposed methods and NOMAD to solve a real problem concerning the optimal design of an industrial electric motor. This allows us to show that the method converging to the better extended stationary points also obtains the best solution from an applied point of view. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
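The abstract of entry 28 names two ingredients: an exterior penalty for the general nonlinear constraints and a local search over the discrete variables. The toy sketch below combines a derivative-free coordinate search on a continuous variable with a unit-neighborhood search on an integer variable; the objective, penalty weight, and all names are made-up illustrations, not the paper's algorithms.

```python
# Hypothetical 2-variable example: minimize (x - 2.1)^2 + (z - 3.4)^2
# subject to x + z <= 5 with z integer. An exterior quadratic penalty
# handles the constraint; a derivative-free pattern search handles x and
# a unit-neighborhood search handles z. Illustrative only.

def penalized(x, z, rho=100.0):
    obj = (x - 2.1) ** 2 + (z - 3.4) ** 2
    viol = max(0.0, x + z - 5.0)          # exterior penalty term
    return obj + rho * viol ** 2

def solve(x0=0.0, z0=0, step=1.0, tol=1e-6):
    x, z = x0, z0
    while step > tol:
        moved = False
        # derivative-free coordinate search on the continuous variable
        for cand_x in (x - step, x + step):
            if penalized(cand_x, z) < penalized(x, z):
                x, moved = cand_x, True
        # local search on the discrete variable (unit neighborhood)
        for cand_z in (z - 1, z + 1):
            if penalized(x, cand_z) < penalized(x, z):
                z, moved = cand_z, True
        if not moved:
            step *= 0.5                   # refine the continuous step
    return x, z

x_opt, z_opt = solve()
```

With the penalty weight 100 the continuous variable settles slightly above the constraint boundary x = 2 (at roughly 2.001) while z stops at the integer 3, illustrating how the penalty trades a small violation against the objective.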
29. A Line-Search Algorithm Inspired by the Adaptive Cubic Regularization Framework and Complexity Analysis.
- Author
-
Bergou, El Houcine, Diouane, Youssef, and Gratton, Serge
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,NONCONVEX programming ,MATHEMATICAL regularization ,STOCHASTIC convergence - Abstract
The adaptive regularized framework using cubics has emerged as an alternative to line-search and trust-region algorithms for smooth nonconvex optimization, with an optimal complexity among second-order methods. In this paper, we propose and analyze the use of an iteration-dependent scaled norm in the adaptive regularized framework using cubics. Within such a scaled norm, the obtained method behaves as a line-search algorithm along the quasi-Newton direction with a special backtracking strategy. Under appropriate assumptions, the new algorithm enjoys the same convergence and complexity properties as the adaptive regularized algorithm using cubics. The complexity for finding an approximate first-order stationary point can be improved to be optimal whenever a second-order version of the proposed algorithm is considered. In a similar way, using the same scaled norm to define the trust-region neighborhood, we show that the trust-region algorithm behaves as a line-search algorithm. The good potential of the obtained algorithms is shown on a set of large-scale optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
30. Envelope Functions: Unifications and Further Properties.
- Author
-
Giselsson, Pontus and Fält, Mattias
- Subjects
MATHEMATICAL functions ,MATHEMATICAL optimization ,NONSMOOTH optimization ,CONVEX domains ,ALGORITHMS - Abstract
Forward-backward and Douglas-Rachford splitting are methods for structured nonsmooth optimization. With the aim of using smooth optimization techniques for nonsmooth problems, the forward-backward and Douglas-Rachford envelopes were recently proposed. Under specific problem assumptions, these envelope functions have favorable smoothness and convexity properties, and their stationary points coincide with the fixed points of the underlying algorithm operators. This allows for solving such nonsmooth optimization problems by minimizing the corresponding smooth convex envelope function. In this paper, we present a general envelope function that unifies and generalizes existing ones. We provide properties of the general envelope function that sharpen corresponding known results for the special cases. We also present a new interpretation of the underlying methods as being majorization-minimization algorithms applied to their respective envelope functions. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
31. Highly Accurate Analytic Approximation to the Gaussian Q-function Based on the Use of Nonlinear Least Squares Optimization Algorithm.
- Author
-
Develi, I. and Basturk, A.
- Subjects
APPROXIMATION theory ,GAUSSIAN function ,LEAST squares ,NONLINEAR analysis ,ALGORITHMS ,MATHEMATICAL optimization ,COMPUTER simulation ,ACCURACY - Abstract
In this paper, as an extension of a previous study, an improved approximation for the Gaussian Q-function is presented. The nonlinear least squares algorithm is employed to optimize the coefficients of the proposed approximation. The accuracy of the presented approximation is evaluated using extensive computer simulations. Results show that the proposed approximation has superior accuracy in the high-argument region when compared to the performance of other approaches introduced in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
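The abstract of entry 31 does not state the functional form of the approximation or the fitting details, so nothing of the paper's method is reproduced here. As a stand-in illustration of fitting Q-function approximation coefficients by least squares, the sketch below fits the simple two-parameter model Q(x) ≈ a·exp(-b·x²) via ordinary least squares in the log domain; the model and sample grid are assumptions for the example.

```python
import math

# Gaussian Q-function, Q(x) = 0.5*erfc(x/sqrt(2)).
def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Sample the high-argument region, where the abstract reports the gains.
xs = [1.0 + 0.1 * i for i in range(50)]       # x in [1.0, 5.9]
ys = [math.log(q_func(x)) for x in xs]        # log Q(x)
ts = [-x * x for x in xs]                     # regressor: log Q ~ log a + b*(-x^2)

# Closed-form ordinary least squares for the linearized model.
n = len(xs)
tbar = sum(ts) / n
ybar = sum(ys) / n
b = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
    sum((t - tbar) ** 2 for t in ts)
log_a = ybar - b * tbar
a = math.exp(log_a)

def q_approx(x):
    return a * math.exp(-b * x * x)
```

The fitted exponent b comes out a little above 0.5, consistent with the leading exp(-x²/2) behavior of Q; a genuinely accurate approximation, as in the paper, needs a richer model and a nonlinear fit.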
32. Study of a New Global Optimization Algorithm Based on the Standard PSO.
- Author
-
Yang, B. and Cheng, L.
- Subjects
GLOBAL optimization ,ALGORITHMS ,PARTICLE swarm optimization ,COMPARATIVE studies ,NUMERICAL calculations ,MATHEMATICAL optimization - Abstract
In this paper, a new global optimization algorithm is developed, which is named Particle Swarm Optimization combined with Particle Generator (PSO-PG). Based on a series of comparable numerical experiments, we show that the calculation accuracy of the new algorithm is greatly improved and optimization efficiency is increased as well, in comparison with those of the standard PSO. It is also found that the optimization results obtained from PSO-PG are almost independent of the coefficients adopted in the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
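Entry 32 builds on the standard PSO, which can be sketched as below. Only the baseline algorithm is shown; the paper's particle-generator component (PSO-PG) is not reproduced, and the coefficients are common textbook defaults, not the paper's.

```python
import random

random.seed(0)

def sphere(x):
    """Simple test objective: sum of squares, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def pso(f, dim=3, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # per-particle best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(sphere)
```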
33. A Polynomial Arc-Search Interior-Point Algorithm for Linear Programming.
- Author
-
Yang, Yaguang
- Subjects
POLYNOMIALS ,INTERIOR-point methods ,ALGORITHMS ,LINEAR programming ,MATHEMATICAL optimization ,ELLIPSES (Geometry) ,COMPUTATIONAL complexity - Abstract
In this paper, ellipsoidal estimations are used to track the central path of linear programming. A higher-order interior-point algorithm is devised to search the optimizers along the ellipse. The algorithm is proved to be polynomial with the best complexity bound for all polynomial algorithms and better than the best known bound for higher-order algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
34. A Double-Parameter Scaling Broyden-Fletcher-Goldfarb-Shanno Method Based on Minimizing the Measure Function of Byrd and Nocedal for Unconstrained Optimization.
- Author
-
Andrei, Neculai
- Subjects
MATHEMATICAL optimization ,EIGENVALUES ,HESSIAN matrices ,ALGORITHMS ,NONLINEAR programming - Abstract
In this paper, the first two terms on the right-hand side of the Broyden-Fletcher-Goldfarb-Shanno update are scaled with a positive parameter, while the third one is scaled with another positive parameter. These scaling parameters are determined by minimizing the measure function introduced by Byrd and Nocedal (SIAM J Numer Anal 26:727-739, 1989). The obtained algorithm is close to the algorithm based on clustering the eigenvalues of the Broyden-Fletcher-Goldfarb-Shanno approximation of the Hessian and on shifting its large eigenvalues to the left, but it is not superior to it. Under classical assumptions, the convergence is proved by using the trace and the determinant of the iteration matrix. By using a set of 80 unconstrained optimization test problems, it is shown that the algorithm minimizing the measure function of Byrd and Nocedal is more efficient and more robust than some other scaling Broyden-Fletcher-Goldfarb-Shanno algorithms, including the variants of Biggs (J Inst Math Appl 12:337-338, 1973), Yuan (IMA J Numer Anal 11:325-332, 1991), Oren and Luenberger (Manag Sci 20:845-862, 1974) and of Nocedal and Yuan (Math Program 61:19-37, 1993). However, it is less efficient than the algorithms based on clustering the eigenvalues of the iteration matrix and on shifting its large eigenvalues to the left, as shown by Andrei (J Comput Appl Math 332:26-44, 2018; Numer Algorithms 77:413-432, 2018). [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
35. Adaptive Restart of the Optimized Gradient Method for Convex Optimization.
- Author
-
Kim, Donghwan and Fessler, Jeffrey A.
- Subjects
MATHEMATICAL optimization ,CONVEX functions ,ALGORITHMS ,EIGENVALUES ,MATRICES (Mathematics) - Abstract
First-order methods with momentum, such as Nesterov’s fast gradient method, are very useful for convex optimization problems, but can exhibit undesirable oscillations yielding slow convergence rates for some applications. An adaptive restarting scheme can improve the convergence rate of the fast gradient method, when the parameter of a strongly convex cost function is unknown or when the iterates of the algorithm enter a locally strongly convex region. Recently, we introduced the optimized gradient method, a first-order algorithm that has an inexpensive per-iteration computational cost similar to that of the fast gradient method, yet has a worst-case cost function rate that is twice as fast as that of the fast gradient method and that is optimal for large-dimensional smooth convex problems. Building upon the success of accelerating the fast gradient method using adaptive restart, this paper investigates similar heuristic acceleration of the optimized gradient method. We first derive a new first-order method that resembles the optimized gradient method for strongly convex quadratic problems with known function parameters, yielding a linear convergence rate that is faster than that of the analogous version of the fast gradient method. We then provide a heuristic analysis and numerical experiments that illustrate that adaptive restart can accelerate the convergence of the optimized gradient method. Numerical results also illustrate that adaptive restart is helpful for a proximal version of the optimized gradient method for nonsmooth composite convex functions. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
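Entry 35 concerns adding adaptive restart to the optimized gradient method (OGM). The sketch below pairs the published OGM iteration with a simple function-value restart rule (reset the momentum parameter when the objective increases), in the spirit of the paper; the test problem and the particular restart rule are illustrative choices, not the paper's experiments.

```python
import math

def ogm_restart(grad, f, x0, L, iters=500):
    """OGM with a function-value adaptive restart heuristic (sketch)."""
    x, y, theta = list(x0), list(x0), 1.0
    fy = f(y)
    for _ in range(iters):
        g = grad(x)
        y_new = [xi - gi / L for xi, gi in zip(x, g)]        # gradient step
        theta_new = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * theta * theta))
        # OGM momentum: combines (y_new - y) and (y_new - x) terms.
        x = [yn + ((theta - 1.0) / theta_new) * (yn - yo)
                + (theta / theta_new) * (yn - xi)
             for yn, yo, xi in zip(y_new, y, x)]
        fy_new = f(y_new)
        if fy_new > fy:          # restart: momentum is overshooting
            theta = 1.0
            x = y_new[:]
        else:
            theta = theta_new
        y, fy = y_new, fy_new
    return y

# Ill-conditioned quadratic: f(x) = 0.5*(x0^2 + 100*x1^2), L = 100.
f = lambda x: 0.5 * (x[0] ** 2 + 100.0 * x[1] ** 2)
grad = lambda x: [x[0], 100.0 * x[1]]
sol = ogm_restart(grad, f, [10.0, 1.0], L=100.0)
```

On this condition-number-100 quadratic, the restarted iteration converges far faster than plain gradient descent with the same step size, which is exactly the behavior the adaptive-restart heuristic targets.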
36. A New Double-Projection Method for Solving Variational Inequalities in Banach Spaces.
- Author
-
Cai, Gang, Gibali, Aviv, Iyiola, Olaniyi S., and Shehu, Yekini
- Subjects
VARIATIONAL inequalities (Mathematics) ,BANACH spaces ,ALGORITHMS ,HILBERT space ,MATHEMATICAL optimization - Abstract
In this paper, we study variational inequalities involving monotone and Lipschitz continuous mappings in Banach spaces. A new and simple iterative method, which combines Halpern’s technique and the subgradient extragradient idea, is given. Under mild and standard assumptions, we establish the strong convergence of our algorithm in uniformly smooth and convex Banach spaces. We also present a modification of our method using a line-search approach, which enables us to obtain strong convergence in real and reflexive Banach spaces without prior knowledge of the Lipschitz constant. Numerical experiments illustrate the performance of our new algorithm and provide a comparison with related algorithms. Our results generalize and extend some of the existing works in Hilbert spaces to Banach spaces, as well as provide an extension from weak to strong convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
37. On a Characterization of Evasion Strategies for Pursuit-Evasion Games on Graphs.
- Author
-
Ibragimov, Gafurjan and Luckraz, Shravan
- Subjects
MATHEMATICAL optimization ,GRAPH theory ,ALGORITHMS - Abstract
We give a characterization of robber-win strategies for general pursuit-evasion games with one evader and any finite number of pursuers on a finite graph. We also give an algorithm that solves robber-win games. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
38. An Algorithm for Minimum L-Infinity Solution of Under-determined Linear Systems.
- Author
-
Earle, Adam, Ali, M., and Fannuchi, Dario
- Subjects
LINEAR systems ,STOCHASTIC convergence ,NUMERICAL analysis ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
This paper presents a primal method for finding the minimum L-infinity solution to under-determined linear systems of equations. The method proceeds in two phases, with a line search performed in each phase. We establish a condition for a direction to be a descent direction. A convergence proof for the method is given. Expedient numerical schemes can be used whenever appropriate. Results are presented which show the superiority of the method over some well-known methods. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
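The two-phase primal method of entry 38 is not reproduced here. As a small, verifiable illustration of the problem it solves, the minimum-infinity-norm solution of a single under-determined equation a·x = b (with b > 0) has a closed form: every component sits at its extreme value t·sign(a_i), with t = b / Σ|a_i|, since any feasible x satisfies b ≤ Σ|a_i|·max|x_i|. The function name below is an assumption for the example.

```python
def min_linf_single_equation(a, b):
    """Minimum-infinity-norm x with sum(a[i]*x[i]) == b, assuming b > 0.

    Optimality: b = sum(a_i*x_i) <= sum(|a_i|) * max|x_i|, so any feasible
    x has max|x_i| >= b/sum(|a_i|), and the returned x attains that bound.
    """
    t = b / sum(abs(ai) for ai in a)
    sign = lambda v: (v > 0) - (v < 0)
    return [t * sign(ai) for ai in a]

a = [3.0, -1.0, 2.0]
b = 12.0
x = min_linf_single_equation(a, b)
# Here t = 12/6 = 2, so x = [2.0, -2.0, 2.0] and max|x_i| = 2.
```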
39. Copositivity Detection of Tensors: Theory and Algorithm.
- Author
-
Chen, Haibin, Huang, Zheng-Hai, and Qi, Liqun
- Subjects
MATHEMATICAL symmetry ,ALGORITHMS ,MULTIVARIATE analysis ,POLYNOMIALS ,MATHEMATICAL optimization - Abstract
A symmetric tensor is called copositive if it generates a multivariate form taking nonnegative values over the nonnegative orthant. Copositive tensors have found important applications in polynomial optimization, tensor complementarity problems and vacuum stability of a general scalar potential. In this paper, we consider copositivity detection of tensors from both theoretical and computational points of view. After giving several necessary conditions for copositive tensors, we propose several new criteria for copositive tensors based on the representation of the multivariate form in barycentric coordinates with respect to the standard simplex and simplicial partitions. It is verified that, as the partition gets finer and finer, the concerned conditions eventually capture all strictly copositive tensors. Based on the obtained theoretical results with the help of simplicial partitions, we propose a numerical method to judge whether a tensor is copositive or not. The preliminary numerical results confirm our theoretical findings. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
40. Facial Reduction Algorithms for Conic Optimization Problems.
- Author
-
Waki, Hayato and Muramatsu, Masakazu
- Subjects
ALGORITHMS ,PROBLEM solving ,MATHEMATICAL optimization ,FEASIBILITY studies ,MATHEMATICAL sequences ,NUMERICAL analysis - Abstract
In conic optimization problems, it is well known that a positive duality gap may occur and that solving such a problem is numerically difficult or unstable. For such a case, we propose a facial reduction algorithm to find a primal-dual pair of conic optimization problems having zero duality gap and an optimal value equal to that of one of the original primal or dual problems. The conic expansion approach is also known as a method to find such a primal-dual pair, and in this paper we clarify the relationship between our facial reduction algorithm and the conic expansion approach. Our analysis shows that, although they can be regarded as dual to each other, our facial reduction algorithm has the ability to produce a finer sequence of faces of the cone containing the feasible region. A simple proof of the convergence of our facial reduction algorithm for conic optimization is presented. We also observe that our facial reduction algorithm has a practical impact by presenting numerical experiments for graph partition problems; our facial reduction algorithm in fact enhances the numerical stability in those problems. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
41. Global Optimality Conditions and Optimization Methods for Quadratic Knapsack Problems.
- Author
-
Wu, Z., Yang, Y., Bai, F., and Mammadov, M.
- Subjects
KNAPSACK problems ,MATHEMATICAL optimization ,QUADRATIC programming ,DYNAMIC programming ,ALGORITHMS - Abstract
The quadratic knapsack problem (QKP) maximizes a quadratic objective function subject to a binary and linear capacity constraint. Due to its simple structure and challenging difficulty, it has been studied intensively during the last two decades. This paper first presents some global optimality conditions for (QKP), which include necessary conditions and sufficient conditions. Then a local optimization method for (QKP) is developed using the necessary global optimality condition. Finally a global optimization method for (QKP) is proposed based on the sufficient global optimality condition, the local optimization method and an auxiliary function. Several numerical examples are given to illustrate the efficiency of the presented optimization methods. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
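The problem of entry 41, maximize x'Qx subject to w'x ≤ c with x binary, admits a very simple local-improvement baseline: repeatedly flip single bits while the flip is feasible and improving. This is a generic local search, not the paper's optimality-condition-based method, and the instance data below is made up for the example.

```python
def qkp_value(Q, x):
    """Quadratic objective x'Qx for a 0/1 vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def qkp_local_search(Q, w, c, x):
    """Single-bit-flip local search; stops at a flip-local optimum."""
    best = qkp_value(Q, x)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] ^= 1                       # tentatively flip bit i
            feasible = sum(wi * xi for wi, xi in zip(w, x)) <= c
            if feasible and qkp_value(Q, x) > best:
                best, improved = qkp_value(Q, x), True  # keep the flip
            else:
                x[i] ^= 1                   # undo the flip
    return x, best

# Tiny made-up instance: 3 items, capacity 5.
Q = [[3, 1, 0],
     [1, 4, 2],
     [0, 2, 5]]
w = [2, 3, 4]
c = 5
x_best, v_best = qkp_local_search(Q, w, c, [0, 0, 0])
```

On this instance the search selects items 0 and 1 (weight 5, value 9), which happens to be the global optimum; in general a flip-local optimum need not be global, which is why the paper develops global optimality conditions.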
42. Proximal-Point Algorithm Using a Linear Proximal Term.
- Author
-
He, B., Fu, X., and Jiang, Z.
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,MATHEMATICAL inequalities ,LAGRANGE equations ,MATHEMATICAL analysis ,MATHEMATICS - Abstract
Proximal-point algorithms (PPAs) are classical solvers for convex optimization problems and monotone variational inequalities (VIs). The proximal term in existing PPAs usually is the gradient of a certain function. This paper presents a class of PPA-based methods for monotone VIs. For a given current point, a proximal point is obtained via solving a PPA-like subproblem whose proximal term is linear but may not be the gradient of any functions. The new iterate is updated via an additional slight calculation. Global convergence of the method is proved under the same mild assumptions as the original PPA. Finally, profiting from the less restrictions on the linear proximal terms, we propose some parallel splitting augmented Lagrangian methods for structured variational inequalities with separable operators. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
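For contrast with entry 42's linear proximal term, the classical proximal-point iteration with the standard quadratic proximal term can be sketched as follows. The objective f(x) = |x - 3| is an illustrative choice whose proximal step has the closed-form soft-threshold solution used below; none of this reproduces the paper's method.

```python
def prox_abs_shift(xk, lam, center=3.0):
    """argmin_x |x - center| + (1/(2*lam)) * (x - xk)^2 (soft threshold)."""
    d = xk - center
    if d > lam:
        return xk - lam
    if d < -lam:
        return xk + lam
    return center

def ppa(x0, lam=0.5, iters=50):
    """Classical PPA: repeatedly apply the proximal map of f."""
    x = x0
    for _ in range(iters):
        x = prox_abs_shift(x, lam)
    return x

x_star = ppa(10.0)
```

Each iteration moves at most lam toward the minimizer and then locks onto it exactly, a behavior specific to the sharp absolute-value objective.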
43. Min-Max Regret Robust Optimization Approach on Interval Data Uncertainty.
- Author
-
Assavapokee, T., Realff, M. J., and Ammons, J. C.
- Subjects
ROBUST optimization ,DISTRIBUTION (Probability theory) ,MATHEMATICAL optimization ,ALGORITHMS ,UNCERTAINTY (Information theory) ,LINEAR programming ,DECISION theory - Abstract
This paper presents a three-stage optimization algorithm for solving two-stage deviation robust decision making problems under uncertainty. The structure of the first-stage problem is a mixed integer linear program and the structure of the second-stage problem is a linear program. Each uncertain model parameter can independently take its value from a real compact interval with unknown probability distribution. The algorithm coordinates three mathematical programming formulations to iteratively solve the overall problem. This paper provides the application of the algorithm on the robust facility location problem and a counterexample illustrating the insufficiency of the solution obtained by considering only a finite number of scenarios generated by the endpoints of all intervals. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
44. Equivalence of Two Nondegeneracy Conditions for Semidefinite Programs.
- Author
-
Flegel, M. L. and Kanzow, C.
- Subjects
STOCHASTIC convergence ,ALGORITHMS ,SENSITIVITY theory (Mathematics) ,MATRICES (Mathematics) ,EIGENVALUES ,VECTOR analysis ,MATHEMATICAL optimization - Abstract
Nondegeneracy assumptions are often needed in order to prove the local fast convergence of suitable algorithms as well as in the sensitivity analysis for semidefinite programs. One of the more standard nondegeneracy conditions is the geometric condition used by Alizadeh et al. (Math. Program. 77:111-128, 1997). On the other hand, Kanzow and Nagel (SIAM J. Optim. 15:654-672, 2005) recently introduced an algebraic condition that was used in order to prove, for the first time, the local quadratic convergence of a suitable algorithm for the solution of semidefinite programs without using the strict complementarity assumption. The aim of this paper is to show that these two nondegeneracy conditions are equivalent. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
45. New Approach to Stochastic Optimal Control.
- Author
-
Josa-Fombellida, R. and Rincón-Zapatero, J. P.
- Subjects
MATHEMATICAL optimization ,MATHEMATICAL analysis ,SIMULATION methods & models ,FRACTIONAL integrals ,FRACTIONAL powers ,CONCAVE functions ,REAL variables ,ALGORITHMS ,STOCHASTIC control theory - Abstract
This paper provides new insights into the solution of optimal stochastic control problems by means of a system of partial differential equations, which characterize directly the optimal control. This new system is obtained by the application of the stochastic maximum principle at every initial condition, assuming that the optimal controls are smooth enough. The type of problems considered are those where the diffusion coefficient is independent of the control variables, which are supposed to be interior to the control region. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
46. Complementarity Active-Set Algorithm for Mathematical Programming Problems with Equilibrium Constraints.
- Author
-
Júdice, J. J., Sherali, H. D., Ribeiro, I. M., and Faustino, A. M.
- Subjects
MATHEMATICAL optimization ,LINEAR complementarity problem ,MATHEMATICAL programming ,LINEAR programming ,ALGORITHMS ,FINITE differences ,INFINITE-dimensional manifolds ,MATHEMATICAL analysis ,MATHEMATICS education - Abstract
In this paper, an algorithm for solving a mathematical programming problem with complementarity (or equilibrium) constraints (MPEC) is introduced, which uses the active-set methodology while maintaining the complementarity restrictions throughout the procedure. Finite convergence of the algorithm to a strongly stationary point of the MPEC is established under reasonable hypotheses. The algorithm can be easily implemented by adopting any active-set code for nonlinear programming. Computational experience is included to highlight the efficacy of the proposed method in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
47. Case Studies for a First-Order Robust Nonlinear Programming Formulation.
- Author
-
Hale, E. T. and Zhang, Y.
- Subjects
ROBUST optimization ,MATHEMATICAL optimization ,NONLINEAR programming ,FIRST-order logic ,MATHEMATICAL programming ,CASE studies ,MATHEMATICAL analysis ,CONSTRAINED optimization ,ALGORITHMS - Abstract
In this paper, we conduct three case studies to assess the effectiveness of a recently proposed first-order method for robust nonlinear programming [Zhang, Y.: J. Optim. Theory Appl. 132, 111-124 (2007)]. Three robust nonlinear programming problems were chosen from the literature using the criteria that results calculated using other methods must be available and the problems should be realistic, but fairly simple. Our studies show that the first-order method produced reasonable solutions when the level of uncertainty was small to moderate. In addition, we demonstrate a method for leveraging a theoretical result to eliminate constraint violations. Since the first-order method is relatively inexpensive in comparison to other robust optimization techniques, our studies indicate that, under moderate uncertainty, the first-order approach may be more suitable than other methods for large problems. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
48. Adaptive Large-Neighborhood Self-Regular Predictor-Corrector Interior-Point Methods for Linear Optimization.
- Author
-
Salahi, M. and Terlaky, T.
- Subjects
INTERIOR-point methods ,MATHEMATICAL programming ,MATHEMATICAL optimization ,LINEAR programming ,ITERATIVE methods (Mathematics) ,FUNCTIONAL equations ,ALGORITHMS ,PROXIMITY spaces ,AFFINE algebraic groups - Abstract
It is known that predictor-corrector methods in a large neighborhood of the central path are among the most efficient interior-point methods (IPMs) for linear optimization (LO) problems. The best iteration bound based on the classical logarithmic barrier function is O(n log(n/ϵ)). In this paper, we propose a family of self-regular proximity-based predictor-corrector (SRPC) IPMs for LO in a large neighborhood of the central path. In the predictor step, we use either an affine scaling or a self-regular direction; in the corrector step, we always use a self-regular direction. Our new algorithms use a special proximity function with different search directions, which allows us to improve the so far best theoretical iteration complexity for a family of SRPC IPMs. An O(√n exp((1 - q + log n)/2) log n log(n/ϵ)) worst-case iteration bound with superlinear convergence is established, where q is the barrier degree of the SR proximity function. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
49. Convex Optimization Approach to Dynamic Output Feedback Control for Delay Differential Systems of Neutral Type.
- Author
-
Park, J. H.
- Subjects
MATHEMATICAL optimization ,EXTERIOR differential systems ,CALCULUS of variations ,MATHEMATICS ,DIFFERENTIAL inequalities ,MATHEMATICAL analysis ,ALGORITHMS ,MATRICES (Mathematics) ,MATHEMATICAL inequalities - Abstract
In this paper, the design problem of the dynamic output feedback controller for the asymptotic stabilization of a class of linear delay differential systems of neutral type is considered. A criterion for the existence of such a controller is derived based on the matrix inequality approach combined with the Lyapunov method. A parametrized characterization of the controller is given in terms of the feasible solutions to certain matrix inequalities, which can be solved by various convex optimization algorithms. A numerical example is given to illustrate the proposed design method. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
50. On Central-Path Proximity Measures in Interior-Point Methods.
- Author
-
Gonzalez-Lima, M. D. and Roos, C.
- Subjects
INTERIOR-point methods ,MATHEMATICAL programming ,MATHEMATICAL optimization ,MATHEMATICAL analysis ,ALGORITHMS ,MAXIMA & minima ,SIMULATION methods & models ,DYNAMIC programming ,INTEGER programming - Abstract
One of the main ingredients of interior-point methods is the generation of iterates in a neighborhood of the central path. Measuring how close the iterates are to the central path is an important aspect of such methods, and it is accomplished by using proximity measure functions. In this paper, we propose a unified presentation of the proximity measures and a study of their relationships and computational role when using a generic primal-dual interior-point method for computing the analytic center of a standard linear optimization problem. We demonstrate that the choice of the proximity measure can greatly affect the performance of the method. It is shown that we may be able to choose the algorithmic parameters and the central-path neighborhood radius (size) in such a way as to obtain comparable results for several measures. We discuss briefly how to relate some of these results to nonlinear programming problems. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF