Search Results (29 results)
2. ON MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHMS.
- Author
-
Salahi, M., Peng, J., and Terlaky, T.
- Subjects
ALGORITHMS, FOUNDATIONS of arithmetic, MATHEMATICAL optimization, MATHEMATICAL analysis, NONSMOOTH optimization, CONSTRAINED optimization, LINEAR algebra, STOCHASTIC convergence - Abstract
In this paper we discuss the polynomiality of a feasible version of Mehrotra's predictor-corrector algorithm, variants of which have been widely used in several interior point method (IPM)-based optimization packages. A numerical example shows that the adaptive choice of centering parameter and correction terms in this algorithm may lead to small steps being taken in order to keep the iterates in a large neighborhood of the central path, which is important for proving polynomial complexity properties of this method. Motivated by this example, we introduce a safeguard in Mehrotra's algorithm that keeps the iterates in the prescribed neighborhood and allows us to obtain a positive lower bound on the step size. This safeguard strategy is also used when the affine scaling direction performs poorly. We prove that the safeguarded algorithm will terminate after at most O(n^2 log((x^0)^T s^0/ε)) iterations. By modestly modifying the corrector direction, we reduce the iteration complexity to O(n log((x^0)^T s^0/ε)). To ensure fast asymptotic convergence of the algorithm, we slightly change Mehrotra's updating scheme of the centering parameter while keeping the safeguard. The new algorithms have the same order of iteration complexity as the safeguarded algorithms but enjoy superlinear convergence as well. Numerical results using the McIPM and LIPSOL software packages are reported. [ABSTRACT FROM AUTHOR]
- Published
- 2007
3. MESH ADAPTIVE DIRECT SEARCH ALGORITHMS FOR CONSTRAINED OPTIMIZATION.
- Author
-
Audet, Charles and Dennis Jr., J. E.
- Subjects
ALGORITHMS, NONSMOOTH optimization, MATHEMATICAL optimization, MATHEMATICAL analysis, MATHEMATICS - Abstract
This paper addresses the problem of minimization of a nonsmooth function under general nonsmooth constraints when no derivatives of the objective or constraint functions are available. We introduce the mesh adaptive direct search (MADS) class of algorithms, which extends the generalized pattern search (GPS) class by allowing local exploration, called polling, in an asymptotically dense set of directions in the space of optimization variables. This means that under certain hypotheses, including a weak constraint qualification due to Rockafellar, MADS can treat constraints by the extreme barrier approach of setting the objective to infinity for infeasible points and treating the problem as unconstrained. The main GPS convergence result is to identify limit points x̂ where the Clarke generalized derivatives are nonnegative in a finite set of directions, called refining directions. Although in the unconstrained case nonnegative combinations of these directions span the whole space, the fact that there can only be finitely many GPS refining directions limits rigorous justification of the barrier approach to finitely many linear constraints for GPS. The main result of this paper is that the general MADS framework is flexible enough to allow the generation of an asymptotically dense set of refining directions along which the Clarke derivatives are nonnegative. We propose an instance of MADS for which the refining directions are dense in the hypertangent cone at x̂ with probability 1 whenever the iterates associated with the refining directions converge to a single x̂. The instance of MADS is compared to versions of GPS on some test problems. We also illustrate the limitation of our results with examples. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
4. ANALYSIS OF GENERALIZED PATTERN SEARCHES.
- Author
-
Audet, Charles and Dennis Jr., J. E.
- Subjects
ALGORITHMS, MATHEMATICAL optimization, MATHEMATICAL analysis, STOCHASTIC convergence, MATHEMATICS - Abstract
This paper contains a new convergence analysis for the Lewis and Torczon generalized pattern search (GPS) class of methods for unconstrained and linearly constrained optimization. This analysis is motivated by a desire to understand the successful behavior of the algorithm under hypotheses that are satisfied by many practical problems. Specifically, even if the objective function is discontinuous or extended-valued, the methods find a limit point with some minimizing properties. Simple examples show that the strength of the optimality conditions at a limit point depends not only on the algorithm, but also on the directions it uses and on the smoothness of the objective at the limit point in question. The contribution of this paper is to provide a simple convergence analysis that supplies detail about the relation of optimality conditions to objective smoothness properties and to the defining directions for the algorithm, and it gives previous results as corollaries. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
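The GPS class analyzed in the entry above has very simple concrete instances. As a quick illustration (not code from the paper), the following hypothetical Python sketch implements coordinate search, the textbook GPS instance that polls the 2n directions ±e_i and halves the step after an unsuccessful poll:

```python
def coordinate_search(f, x0, step=1.0, tol=1e-8):
    """Coordinate search: a bare-bones instance of generalized pattern
    search (GPS).  Polls the 2n directions +/- e_i at the current step
    size; a successful poll moves and keeps the step, an unsuccessful
    cycle halves it, and the search stops once the step drops below tol.
    """
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                y = x[:]
                y[i] += sign * step      # poll point on the current mesh
                fy = f(y)
                if fy < fx:              # successful poll: accept and keep step
                    x, fx = y, fy
                    improved = True
        if not improved:
            step *= 0.5                  # refine the mesh
    return x

# Smooth convex test function with minimizer at (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x = coordinate_search(f, [0.0, 0.0])
```

On a smooth function this bare variant converges to a stationary point as the mesh shrinks, which is the kind of limit-point result the paper makes precise.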
5. CONVERGENCE PROPERTIES OF THE BFGS ALGORITHM.
- Author
-
Yu-Hong Dai
- Subjects
ALGORITHMS, MATHEMATICAL optimization, MATHEMATICAL analysis, CONJUGATE gradient methods, NUMERICAL solutions to equations, APPROXIMATION theory - Abstract
The BFGS method is one of the most famous quasi-Newton algorithms for unconstrained optimization. In 1984, Powell presented an example of a function of two variables that shows that the Polak-Ribière-Polyak (PRP) conjugate gradient method and the BFGS quasi-Newton method may cycle around eight nonstationary points if each line search picks a local minimum that provides a reduction in the objective function. In this paper, a new technique of choosing parameters is introduced, and an example with only six cyclic points is provided. It is also noted through the examples that the BFGS method with Wolfe line searches need not converge for nonconvex objective functions. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
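For readers who want to experiment with the method this entry analyzes, here is a minimal, illustrative BFGS implementation in Python. It is a sketch, not the paper's construction: it uses a simple Armijo backtracking search rather than the Wolfe line searches the abstract discusses, and skips the update when the curvature condition fails.

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-8, max_iter=200):
    """Minimal BFGS with a backtracking (Armijo) line search.

    Illustrative only: under Wolfe line searches the update below is
    guaranteed to keep H positive definite; with plain backtracking we
    simply skip the update whenever y^T s is not safely positive.
    """
    x = np.asarray(x0, dtype=float)
    n = len(x)
    H = np.eye(n)                        # inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                       # quasi-Newton direction
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):   # Armijo test
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        if y @ s > 1e-12:                # curvature condition; else skip
            rho = 1.0 / (y @ s)
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Convex quadratic test: 0.5 x^T A x - b^T x, minimizer at A^{-1} b = (1, -2)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -3.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = bfgs(f, grad, np.zeros(2))
```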
6. ERRATUM: MESH ADAPTIVE DIRECT SEARCH ALGORITHMS FOR CONSTRAINED OPTIMIZATION.
- Author
-
Audet, Charles, Custódio, A. L., and Dennis, Jr., J. E.
- Subjects
ALGORITHMS, NONSMOOTH optimization, MATHEMATICAL optimization, MATHEMATICAL analysis - Abstract
In [SIAM J. Optim., 17 (2006), pp. 188-217] Audet and Dennis proposed the class of mesh adaptive direct search (MADS) algorithms for minimization of a nonsmooth function under general nonsmooth constraints. The notation used in the paper evolved since the preliminary versions, and, unfortunately, even though the statement of Proposition 4.2 is correct, it is not compatible with the final notation. The purpose of this note is to show that the proposition is valid. [ABSTRACT FROM AUTHOR]
- Published
- 2007
7. TEAM THEORY AND PERSON-BY-PERSON OPTIMIZATION WITH BINARY DECISIONS.
- Author
-
BAUSO, D. and PESENTI, R.
- Subjects
MATHEMATICAL optimization, MAXIMA & minima, OPERATIONS research, MATHEMATICAL analysis, ALGORITHMS - Abstract
In this paper, we extend the notion of person-by-person (pbp) optimization to binary decision spaces. The novelty of our approach is the adaptation to a dynamic team context of notions borrowed from the pseudo-Boolean optimization field, such as completely local-global or unimodal functions and submodularity. We also generalize the concept of pbp optimization to the case where groups of m decision makers make joint decisions sequentially, which we refer to as mbm optimization. The main contribution is a description of sufficient conditions, verifiable in polynomial time, under which a pbp or an mbm optimization algorithm converges to the team-optimum. As a second contribution, we present a local and greedy algorithm characterized by approximate decision strategies (i.e., strategies based on a local state vector) that return the same decisions as in the complete information framework (where strategies are based on the full state vector). As a last contribution, we also show that there exists a subclass of submodular team problems, recognizable in polynomial time, for which pbp optimization converges for at least one opportune initialization of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
8. FINDING OPTIMAL ALGORITHMIC PARAMETERS USING DERIVATIVE-FREE OPTIMIZATION.
- Author
-
Audet, Charles and Orban, Dominique
- Subjects
MATHEMATICAL optimization, ALGORITHMS, PARAMETER estimation, NUMERICAL grid generation (Numerical analysis), MATHEMATICAL analysis - Abstract
The objectives of this paper are twofold. We devise a general framework for identifying locally optimal algorithmic parameters. Algorithmic parameters are treated as decision variables in a problem for which no derivative knowledge or existence is assumed. A derivative-free method for optimization seeks to minimize some measure of performance of the algorithm being fine-tuned. This measure is treated as a black box and may be chosen by the user. Examples are given in the text. The second objective is to illustrate this framework by specializing it to the identification of locally optimal trust-region parameters in unconstrained optimization. The derivative-free method chosen to guide the process is the mesh adaptive direct search, a generalization of pattern search methods. We illustrate the flexibility of the latter and in particular make provision for surrogate objectives. Locally optimal parameters with respect to overall computational time on a set of test problems are identified. Each function call may take several hours and may not always return a predictable result. A tailored surrogate function is used to guide the search towards a local solution. The parameters thus identified differ from traditionally used values, and allow one to solve a problem that remained otherwise unsolved in a reasonable time using traditional values. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
9. A FEASIBLE SEQUENTIAL LINEAR EQUATION METHOD FOR INEQUALITY CONSTRAINED OPTIMIZATION.
- Author
-
Yu-Fei Yang, Dong-Hui Li, and Liqun Qi
- Subjects
ALGORITHMS, MATRICES (Mathematics), EQUATIONS, MATHEMATICS, MATHEMATICAL optimization, MATHEMATICAL analysis - Abstract
In this paper, by means of the concept of the working set, which is an estimate of the active set, we propose a feasible sequential linear equation algorithm for solving inequality constrained optimization problems. At each iteration of the proposed algorithm, we first solve one system of linear equations with a coefficient matrix of size m × m (where m is the number of constraints) to compute the working set; we then solve a subproblem which consists of four reduced systems of linear equations with a common coefficient matrix. Unlike existing QP-free algorithms, the subproblem is concerned with only the constraints corresponding to the working set; the constraints not in the working set are neglected. Consequently, each subproblem is not of full dimension. Without assuming the isolatedness of the stationary points, we prove that every accumulation point of the sequence generated by the proposed algorithm is a KKT point of the problem. Moreover, after finitely many iterations, the working set becomes independent of the iterates and is essentially the same as the active set of the KKT point. In other words, after finitely many steps, only those constraints which are active at the solution will be involved in the subproblem. Under some additional conditions, we show that the convergence rate is two-step superlinear or even Q-superlinear. We also report some preliminary numerical experiments to show that the proposed algorithm is practicable and effective for the test problems. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
10. MESH PARTITIONING: A MULTILEVEL BALANCING AND REFINEMENT ALGORITHM.
- Author
-
Walshaw, C. and Cross, M.
- Subjects
ALGORITHMS, MATHEMATICAL optimization, EQUATIONS, MATHEMATICS, TRANSITION flow, MATHEMATICAL analysis - Abstract
Multilevel algorithms are a successful class of optimization techniques that address the mesh partitioning problem. They usually combine a graph contraction algorithm with a local optimization method that refines the partition at each graph level. In this paper we present an enhancement of the technique which uses imbalance to achieve higher quality partitions. We also present a formulation of the Kernighan-Lin partition optimization algorithm which incorporates load-balancing. The resulting algorithm is tested against a different but related state-of-the-art partitioner and shown to provide improved results. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
11. SECOND-ORDER CONE RELAXATIONS FOR BINARY QUADRATIC POLYNOMIAL PROGRAMS.
- Subjects
POLYNOMIALS, RELAXATION methods (Mathematics), MATHEMATICAL programming, NUMERICAL analysis, MATHEMATICAL optimization, ALGORITHMS, MATHEMATICAL analysis - Published
- 2011
- Full Text
- View/download PDF
12. AN INEXACT SQP METHOD FOR EQUALITY CONSTRAINED OPTIMIZATION.
- Author
-
Byrd, Richard H., Curtis, Frank E., and Nocedal, Jorge
- Subjects
ALGORITHMS, STOCHASTIC convergence, QUADRATIC programming, LINEAR systems, MATHEMATICAL analysis, LINEAR algebra, NONLINEAR programming, LINEAR differential equations, MATHEMATICAL optimization - Abstract
We present an algorithm for large-scale equality constrained optimization. The method is based on a characterization of inexact sequential quadratic programming (SQP) steps that can ensure global convergence. Inexact SQP methods are needed for large-scale applications for which the iteration matrix cannot be explicitly formed or factored and the arising linear systems must be solved using iterative linear algebra techniques. We address how to determine when a given inexact step makes sufficient progress toward a solution of the nonlinear program, as measured by an exact penalty function. The method is globalized by a line search. An analysis of the global convergence properties of the algorithm and numerical results are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
13. On the Equivalence between the Primal-Dual Schema and the Local Ratio Technique.
- Author
-
Bar-Yehuda, Reuven and Rawitz, Dror
- Subjects
ALGORITHMS, MATHEMATICS, LINEAR programming, MATHEMATICAL optimization, ALGEBRAIC number theory, MATHEMATICAL analysis - Abstract
We discuss two approximation paradigms that were used to construct many approximation algorithms during the last two decades, the primal-dual schema and the local ratio technique. Recently, primal-dual algorithms were devised by first constructing a local ratio algorithm and then transforming it into a primal-dual algorithm. This was done in the case of the 2-approximation algorithms for the feedback vertex set problem and in the case of the first primal-dual algorithms for maximization problems. Subsequently, the nature of the connection between the two paradigms was posed as an open question by Williamson [Math. Program., 91 (2002), pp. 447-478]. In this paper we answer this question by showing that the two paradigms are equivalent. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
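The local ratio technique mentioned in the entry above is easy to demonstrate on weighted vertex cover, one of its classic applications. Here is a hypothetical Python sketch (the names `weights` and `edges` are illustrative; `weights` maps vertices to nonnegative weights and `edges` is a list of vertex pairs):

```python
def vertex_cover_local_ratio(weights, edges):
    """2-approximation for weighted vertex cover via the local ratio
    technique: for each edge, subtract the smaller remaining endpoint
    weight from both endpoints; the vertices driven to zero form a
    cover whose weight is at most twice the optimum.  Read through the
    primal-dual lens, the per-edge subtraction is a dual raise; the
    equivalence of the two views is the subject of the paper above.
    """
    w = dict(weights)
    for u, v in edges:
        eps = min(w[u], w[v])    # local ratio: "pay" eps on this edge
        w[u] -= eps
        w[v] -= eps
    # Each processed edge drove at least one endpoint to zero, and
    # weights never increase, so the zero-weight vertices cover all edges.
    return {v for v in w if w[v] == 0}

# Triangle with unit weights: the optimal cover has weight 2.
cover = vertex_cover_local_ratio({"a": 1, "b": 1, "c": 1},
                                 [("a", "b"), ("b", "c"), ("a", "c")])
```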
14. COMPARISON AND AUTOMATED SELECTION OF LOCAL OPTIMIZATION SOLVERS FOR INTERVAL GLOBAL OPTIMIZATION METHODS.
- Author
-
Markót, Mihály Csaba and Schichl, Hermann
- Subjects
ALGORITHMS, MATHEMATICAL optimization, MATHEMATICAL analysis, SIMULATION methods & models, MATHEMATICS - Abstract
We compare six state-of-the-art local optimization solvers, with a focus on their efficiency when invoked within an interval-based global optimization algorithm. For comparison purposes we design three special performance indicators: a solution check indicator (measuring whether the local minimizers found are good candidates for near-optimal verified feasible points), a function value indicator (measuring the contribution to the progress of the global search), and a running time indicator (estimating the computational cost of the local search within the global search). The solvers are compared on the COCONUT Environment test set consisting of 1307 problems. Our main goal is to predict the behavior of the solvers in terms of the three performance indicators on a new problem. For this we introduce a k-nearest neighbor method applied over a feature space consisting of several categorical and numerical features of the optimization problems. The quality and robustness of the prediction is demonstrated by various quality measurements with detailed comparative tests. In particular, we found that on the test set we are able to pick a "best" solver in 66-89% of the cases and avoid picking all "useless" solvers in 95-99% of the cases (when a useful alternative exists). The resulting automated solver selection method is implemented as an inference engine of the COCONUT Environment. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
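The k-nearest-neighbor prediction step at the heart of the selection method above is simple to sketch. Here is a hypothetical, minimal version over a purely numeric feature space (the paper's actual feature space mixes categorical and numerical features and uses its own distance and quality measures):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Predict a label for `query` by majority vote among the k
    training points nearest in Euclidean distance.  `train` is a list
    of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy example: two clusters of problem features, each labelled with the
# solver that performed best on those problems (names are made up).
train = [((0.0, 0.0), "solverA"), ((0.1, 0.2), "solverA"),
         ((0.2, 0.1), "solverA"),
         ((1.0, 1.0), "solverB"), ((0.9, 1.1), "solverB")]
label = knn_predict(train, (0.05, 0.1), k=3)
```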
15. UNIFIED EMBEDDED PARALLEL FINITE ELEMENT COMPUTATIONS VIA SOFTWARE-BASED FRÉCHET DIFFERENTIATION.
- Author
-
LONG, KEVIN, KIRBY, ROBERT, and VAN BLOEMEN WAANDERS, BART
- Subjects
FINITE element method, PARTIAL differential equations, NUMERICAL calculations, SENSITIVITY theory (Mathematics), MATHEMATICAL optimization, ALGORITHMS, EMBEDDINGS (Mathematics), MATHEMATICAL analysis - Abstract
Computational analysis of systems governed by partial differential equations (PDEs) requires not only the calculation of a solution but the extraction of additional information such as the sensitivity of that solution with respect to input parameters or the inversion of the system in an optimization or design loop. Moving beyond the automation of discretization of PDEs by finite element methods, we present a mathematical framework that unifies the discretization of PDEs with these additional analysis requirements. In particular, Fréchet differentiation on a class of functionals together with a high-performance finite element framework has led to a code, called Sundance, that provides high-level programming abstractions for the automatic, efficient evaluation of finite variational forms together with the derived operators required by engineering analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
16. ACTIVE SET IDENTIFICATION FOR LINEARLY CONSTRAINED MINIMIZATION WITHOUT EXPLICIT DERIVATIVES.
- Author
-
LEWIS, ROBERT MICHAEL and TORCZON, VIRGINIA
- Subjects
MATHEMATICAL optimization, SET theory, ALGORITHMS, ITERATIVE methods (Mathematics), MATHEMATICAL analysis - Abstract
We consider active set identification for linearly constrained optimization problems in the absence of explicit information about the derivative of the objective function. We begin by presenting some general results on active set identification that are not tied to any particular algorithm. These general results are sufficiently strong that, given a sequence of iterates converging to a Karush–Kuhn–Tucker point, it is possible to identify binding constraints for which there are nonzero multipliers. We then focus on generating set search methods, a class of derivative-free direct search methods. We discuss why these general results, which are posed in terms of the direction of steepest descent, apply to generating set search, even though these methods do not have explicit recourse to derivatives. Nevertheless, there is a clearly identifiable subsequence of iterations at which we can reliably estimate the set of constraints that are binding at a solution. We discuss how active set estimation can be used to accelerate generating set search methods and illustrate the appreciable improvement that can result using several examples from the CUTEr test suite. We also introduce two algorithmic refinements for generating set search methods. The first expands the subsequence of iterations at which we can make inferences about stationarity. The second is a more flexible step acceptance criterion. [ABSTRACT FROM AUTHOR]
- Published
- 2009
17. THE EUCLIDEAN ORIENTEERING PROBLEM REVISITED.
- Author
-
Ke Chen and Har-Peled, Sariel
- Subjects
PLANE geometry, AREA measurement, EUCLID'S elements, MATHEMATICAL analysis, ALGORITHMS, POLYNOMIALS, APPROXIMATION theory, MATHEMATICAL optimization, MATHEMATICS - Abstract
We consider the rooted orienteering problem: Given a set P of n points in the plane, a starting point r ∊ P, and a length constraint B, one needs to find a path starting from r that visits as many points of P as possible and of length not exceeding B. We present a (1 - ϵ)-approximation algorithm for this problem that runs in n^O(1/ϵ) time; the computed path visits at least (1 - ϵ)k_opt points of P, where k_opt is the number of points visited by an optimal solution. This is the first polynomial time approximation scheme for this problem. The algorithm also works in higher dimensions. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
18. WEAK SHARP MINIMA FOR SEMI-INFINITE OPTIMIZATION PROBLEMS WITH APPLICATIONS.
- Author
-
Xi Yin Zheng and Xiao Qi Yang
- Subjects
MATHEMATICAL optimization, MATHEMATICAL inequalities, ALGORITHMS, MATHEMATICS, MATHEMATICAL analysis, MAXIMA & minima - Abstract
We study local weak sharp minima and sharp minima for smooth semi-infinite optimization problems (SIP). We provide several dual and primal characterizations for a point to be a sharp minimum or a weak sharp minimum of SIP. As applications, we present several sufficient and necessary conditions of calmness for infinitely many smooth inequalities. In particular, we improve some calmness results in [R. Henrion and J. Outrata, Math. Program., 104 (2005), pp. 437-464]. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
19. A NEW ACTIVE SET ALGORITHM FOR BOX CONSTRAINED OPTIMIZATION.
- Author
-
Hager, William W. and Hongchao Zhang
- Subjects
ALGORITHMS, MATHEMATICAL optimization, STOCHASTIC convergence, NUMERICAL analysis, MATHEMATICAL analysis - Abstract
An active set algorithm (ASA) for box constrained optimization is developed. The algorithm consists of a nonmonotone gradient projection step, an unconstrained optimization step, and a set of rules for branching between the two steps. Global convergence to a stationary point is established. For a nondegenerate stationary point, the algorithm eventually reduces to unconstrained optimization without restarts. Similarly, for a degenerate stationary point, where the strong second order sufficient optimality condition holds, the algorithm eventually reduces to unconstrained optimization without restarts. A specific implementation of the ASA is given which exploits the recently developed cyclic Barzilai-Borwein (CBB) algorithm for the gradient projection step and the recently developed conjugate gradient algorithm CG_DESCENT for unconstrained optimization. Numerical experiments are presented using box constrained problems in the CUTEr and MINPACK-2 test problem libraries. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
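The gradient projection step driving the first phase of the ASA above is straightforward for box constraints, because the projection onto a box is a componentwise clip. Here is a minimal illustrative sketch (a plain, monotone, fixed-step version, not the nonmonotone CBB variant used in the paper):

```python
import numpy as np

def projected_gradient_box(grad, lo, hi, x0, t=0.4, tol=1e-10, max_iter=10000):
    """Fixed-step gradient projection for min f(x) s.t. lo <= x <= hi.
    Each iterate is the box projection (componentwise clip) of a
    steepest-descent step; the loop stops when the iterate is
    numerically a fixed point of the projected step, i.e. stationary.
    The fixed step t should be below 1/L for an L-Lipschitz gradient.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.clip(x - t * grad(x), lo, hi)   # project onto the box
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# min ||x - c||^2 over the box [0,1]^2 with c = (2, -3):
# the solution is simply c clipped to the box, namely (1, 0).
c = np.array([2.0, -3.0])
grad = lambda x: 2.0 * (x - c)
x = projected_gradient_box(grad, 0.0, 1.0, np.array([0.5, 0.5]))
```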
20. LOWER BOUNDS FOR ON-LINE GRAPH PROBLEMS WITH APPLICATION TO ON-LINE CIRCUIT AND OPTICAL ROUTING.
- Author
-
Bartal, Yair, Fiat, Amos, and Leonardi, Stefano
- Subjects
COMPUTER assisted instruction, ALGORITHMS, MATHEMATICAL optimization, MACHINE theory, MATHEMATICAL analysis, NUMERICAL analysis - Abstract
We present lower bounds on the competitive ratio of randomized algorithms for a wide class of on-line graph optimization problems, and we apply such results to on-line virtual circuit and optical routing problems. Lund and Yannakakis [The approximation of maximum subgraph problems, in Proceedings of the 20th International Colloquium on Automata, Languages and Programming, 1993, pp. 40-51] give inapproximability results for the problem of finding the largest vertex induced subgraph satisfying any nontrivial, hereditary property π, e.g., independent set, planar, acyclic, bipartite. We consider the on-line version of this family of problems, where some graph G is fixed and some subgraph H of G is presented on-line, vertex by vertex. The on-line algorithm must choose a subset of the vertices of H, choosing or rejecting a vertex when it is presented, whose vertex induced subgraph satisfies property π. Furthermore, we study the on-line version of graph coloring, whose off-line version has also been shown to be inapproximable [C. Lund and M. Yannakakis, On the hardness of approximating minimization problems, in Proceedings of the 25th ACM Symposium on Theory of Computing, 1993], on-line max edge-disjoint paths, and on-line path coloring problems. Irrespective of the time complexity, we show an Ω(n^ϵ) lower bound on the competitive ratio of randomized on-line algorithms for any of these problems. As a consequence, we obtain an Ω(n^ϵ) lower bound on the competitive ratio of randomized on-line algorithms for virtual circuit routing on general networks, in contrast to the known results for some specific networks. Similar lower bounds are obtained for on-line optical routing as well. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
21. Revisiting Asynchronous Parallel Pattern Search for Nonlinear Optimization.
- Author
-
Kolda, Tamara G.
- Subjects
COMPUTER systems, MATHEMATICAL optimization, ALGORITHMS, STOCHASTIC convergence, MATHEMATICAL analysis - Abstract
We present a new asynchronous parallel pattern search (APPS) method which is different from that developed previously by Hough, Kolda, and Torczon. APPS efficiently uses parallel and distributed computing platforms to solve science and engineering design optimization problems where derivatives are unavailable and cannot be approximated. The original APPS was designed to be fault-tolerant as well as asynchronous and was based on a peer-to-peer design. Each process was in charge of a single, fixed search direction. Our new version is based instead on a manager-worker paradigm. Though less fault-tolerant, the resulting algorithm is more flexible in its use of distributed computing resources. We further describe how to incorporate a zero-order sufficient decrease condition and handle bound constraints. Convergence theory for all situations (unconstrained and bound constrained as well as simple and sufficient decrease) is developed. We close with a discussion of how the new APPS will better facilitate the future incorporation of linear and nonlinear constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
22. Minimizing within Convex Bodies Using a Convex Hull Method.
- Author
-
Lachand-Robert, Thomas and Oudet, Édouard
- Subjects
MATHEMATICAL optimization, CONVEX functions, ALGORITHMS, CONVEX domains, MATHEMATICAL analysis - Abstract
We present numerical methods to solve optimization problems on the space of convex functions or among convex bodies. Hence convexity is a constraint on the admissible objects, whereas the functionals are not required to be convex. To deal with this, our method mixes geometrical and numerical algorithms. We give several applications arising from classical problems in geometry and analysis: Alexandrov's problem of finding a convex body of prescribed surface function; Cheeger's problem of a subdomain minimizing the ratio of surface area to volume; Newton's problem of the body of minimal resistance. In particular for the latter application, the minimizers are still unknown, except in some particular classes. We give approximate solutions better than the theoretically known ones, hence demonstrating that the minimizers do not belong to these classes. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
23. A Filter-Trust-Region Method for Unconstrained Optimization.
- Author
-
Gould, Nick I. M., Sainvitu, Caroline, and Toint, Philippe L.
- Subjects
ALGORITHMS, MATHEMATICAL optimization, EQUATIONS, LEAST squares, MATHEMATICAL analysis - Abstract
A new filter-trust-region algorithm for solving unconstrained nonlinear optimization problems is introduced. Based on the filter technique introduced by Fletcher and Leyffer, it extends an existing technique of Gould, Leyffer, and Toint [SIAM J. Optim., 15 (2004), pp. 17-38] for nonlinear equations and nonlinear least-squares to the fully general unconstrained optimization problem. The new algorithm is shown to be globally convergent to at least one second-order critical point, and numerical experiments indicate that it is very competitive with more classical trust-region algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
24. Model Problems for the Multigrid Optimization of Systems Governed by Differential Equations.
- Author
-
Lewis, Robert Michael and Nash, Stephen G.
- Subjects
DIFFERENTIAL equations, MATHEMATICAL optimization, BESSEL functions, CALCULUS, ALGORITHMS, MATHEMATICAL analysis, OPERATIONS research, TRANSCENDENTAL functions - Abstract
We discuss a multigrid approach to the optimization of systems governed by differential equations. Such optimization problems appear in many applications and are of a different nature than systems of equations. Our approach uses an optimization-based multigrid algorithm in which the multigrid algorithm relies explicitly on nonlinear optimization models as subproblems on coarser grids. Our goal is not to argue for a particular optimization-based multigrid algorithm, but instead to demonstrate how multigrid can be used to accelerate nonlinear programming algorithms. Furthermore, using several model problems we give evidence (both theoretical and numerical) that the optimization setting is well suited to multigrid algorithms. Some of the model problems show that the optimization problem may be more amenable to multigrid than the governing differential equation. In addition, we relate the multigrid approach to more traditional optimization methods as further justification for the use of an optimization-based multigrid algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
25. A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization.
- Author
-
Burke, James V., Lewis, Adrian S., and Overton, Michael L.
- Subjects
ALGORITHMS, ALGEBRA, MATHEMATICS, MATHEMATICAL optimization, MATHEMATICAL analysis - Abstract
Let f be a continuous function on R^n, and suppose f is continuously differentiable on an open dense subset. Such functions arise in many applications, and very often minimizers are points at which f is not differentiable. Of particular interest is the case where f is not convex, and perhaps not even locally Lipschitz, but is a function whose gradient is easily computed where it is defined. We present a practical, robust algorithm to locally minimize such functions, based on gradient sampling. No subgradient information is required by the algorithm. When f is locally Lipschitz and has bounded level sets, and the sampling radius ϵ is fixed, we show that, with probability 1, the algorithm generates a sequence with a cluster point that is Clarke ϵ-stationary. Furthermore, we show that if f has a unique Clarke stationary point x̄, then the set of all cluster points generated by the algorithm converges to x̄ as ϵ is reduced to zero. Numerical results are presented demonstrating the robustness of the algorithm and its applicability in a wide variety of contexts, including cases where f is not locally Lipschitz at minimizers. We report approximate local minimizers for functions in the applications literature which have not, to our knowledge, been obtained previously. When the termination criteria of the algorithm are satisfied, a precise statement about nearness to Clarke ϵ-stationarity is available. A MATLAB implementation of the algorithm is posted at http://www.cs.nyu.edu/overton/papers/gradsamp/alg. [ABSTRACT FROM AUTHOR]
- Published
- 2005
26. ROBUST STOPPING CRITERIA FOR DYKSTRA'S ALGORITHM.
- Author
-
Birgin, Ernesto G. and Raydan, Marcos
- Subjects
ALGORITHMS ,OPTIMAL stopping (Mathematical statistics) ,MATHEMATICAL optimization ,GRAPHICAL projection ,MATHEMATICAL analysis ,SIMULATION methods & models - Abstract
Dykstra's algorithm is a suitable alternating projection scheme for solving the optimization problem of finding the point, in the intersection of a finite number of closed and convex sets, that is closest to a given point. It has been recently used in a wide variety of applications. However, in practice, the commonly used stopping criteria are not robust and could stop the iterative process prematurely at a point that does not solve the optimization problem. In this work we present a counterexample to illustrate the weakness of the commonly used criteria, and then we develop robust stopping rules. Additional experimental results are shown to illustrate the advantages of these new stopping criteria, including their associated computational cost. [ABSTRACT FROM AUTHOR]
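Dykstra's scheme itself is compact; a minimal sketch follows, run here for a fixed iteration count rather than with a stopping rule, which is precisely the issue the paper studies. The two example sets (a unit ball and a half-plane) and their projection operators are assumed for illustration:

```python
import numpy as np

def dykstra(x0, projections, iters=200):
    # Dykstra's alternating projection scheme: unlike plain alternating
    # projections, the correction terms p[i] make the iterates converge
    # to the *closest* point of the intersection, not just any point.
    x = np.asarray(x0, dtype=float)
    p = [np.zeros_like(x) for _ in projections]
    for _ in range(iters):
        for i, proj in enumerate(projections):
            y = proj(x + p[i])
            p[i] = x + p[i] - y   # update the outer correction term
            x = y
    return x

# Closest point to (2, 2) in the intersection of the unit ball and the
# half-plane x1 <= 0 (both closed and convex); the answer is (0, 1).
proj_ball = lambda z: z / max(1.0, np.linalg.norm(z))
proj_half = lambda z: np.array([min(z[0], 0.0), z[1]])
x = dykstra(np.array([2.0, 2.0]), [proj_ball, proj_half])
```

In practice the fixed `iters` above would be replaced by a termination test, and the counterexample in the paper shows why naive tests on successive iterates can stop this loop too early.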
- Published
- 2005
27. GLOBAL MINIMIZATION OF NORMAL QUARTIC POLYNOMIALS BASED ON GLOBAL DESCENT DIRECTIONS.
- Author
-
LIQUN QI, ZHONG WAN, and YU-FEI YANG
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,MATHEMATICAL analysis ,POLYNOMIALS ,ALGEBRA ,MOMENT spaces - Abstract
A normal quartic polynomial is a quartic polynomial whose fourth-degree term coefficient tensor is positive definite. Its minimization problem is one of the simplest cases of nonconvex global optimization, and has engineering applications. We call a direction a global descent direction of a function at a point if there is another point with a lower function value along this direction. For a normal quartic polynomial, we present a criterion to find a global descent direction at a noncritical point, a saddle point, or a local maximizer. We give sufficient conditions to judge whether a local minimizer is global and give a method for finding a global descent direction at a local, but not global, minimizer. We also give a formula at a critical point and a method at a noncritical point to find a one-dimensional global minimizer along a global descent direction. Based upon these, we propose a global descent algorithm for finding a global minimizer of a normal quartic polynomial when n = 2. For the case n ≥ 3, we propose an algorithm for finding an ε-global minimizer. At each iteration of the second algorithm, a system of constrained nonlinear equations is solved. Numerical tests show that these two algorithms are promising. [ABSTRACT FROM AUTHOR]
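The definition of a global descent direction can be checked directly by brute force on a small example. The sketch below uses an assumed one-variable normal quartic (positive leading coefficient) with two local minima; at the local, non-global minimizer the direction toward the global minimizer is a global descent direction while the opposite direction is not. This only illustrates the definition, not the paper's criterion:

```python
import numpy as np

# An assumed normal quartic in one variable: positive leading
# coefficient, with a global minimizer near t = -1.06 and a local,
# non-global minimizer near t = 0.93.
f = lambda t: t**4 - 2.0 * t**2 + 0.5 * t

def is_global_descent_direction(f, x, d, t_max=5.0, samples=10000):
    # Check the definition directly: d is a global descent direction
    # at x if some point x + t*d with t > 0 has a lower value.
    ts = np.linspace(1e-6, t_max, samples)
    return bool(np.any(f(x + ts * d) < f(x)))

# Critical points are the real roots of f'(t) = 4t^3 - 4t + 0.5;
# the largest root is the local (but not global) minimizer.
crit = np.roots([4.0, 0.0, -4.0, 0.5]).real
x_local = crit.max()
```

At `x_local`, the check confirms that d = -1 is a global descent direction (it passes the global minimizer) while d = +1 is not, since f increases monotonically to the right of its largest critical point.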
- Published
- 2004
28. NEWTON-TYPE METHODS FOR OPTIMIZATION PROBLEMS WITHOUT CONSTRAINT QUALIFICATIONS.
- Author
-
IZMAILOV, A. F. and SOLODOV, M. V.
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,STOCHASTIC convergence ,MATHEMATICAL analysis ,LINEAR complementarity problem ,NEWTON-Raphson method - Abstract
We consider equality-constrained optimization problems, where a given solution may not satisfy any constraint qualification but satisfies the standard second-order sufficient condition for optimality. Based on local identification of the rank of the constraints' degeneracy via the singular-value decomposition, we derive a modified primal-dual optimality system whose solution is locally unique, nondegenerate, and thus can be found by standard Newton-type techniques. Using identification of active constraints, we further extend our approach to mixed equality- and inequality-constrained problems, and to mathematical programs with complementarity constraints (MPCC). In particular, for MPCC we obtain a local algorithm with quadratic convergence under the second-order sufficient condition only, without any constraint qualifications, not even the special MPCC constraint qualifications. [ABSTRACT FROM AUTHOR]
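The SVD-based identification step mentioned above rests on estimating the numerical rank of the constraint Jacobian. A minimal sketch of that ingredient (the tolerance and example Jacobian are assumptions; the paper's full construction of the modified optimality system is not reproduced here):

```python
import numpy as np

def numerical_rank(J, tol=1e-8):
    # Estimate the rank of the constraint Jacobian from its singular
    # values: count those above a relative threshold.
    s = np.linalg.svd(J, compute_uv=False)
    if s.size == 0 or s[0] == 0:
        return 0
    return int(np.sum(s > tol * s[0]))

# Degenerate constraints h1(x) = x1 and h2(x) = x1 (duplicated): the
# Jacobian has two rows but only rank 1, so no constraint
# qualification holds at a solution with x1 = 0.
J = np.array([[1.0, 0.0],
              [1.0, 0.0]])
```

Once the rank (here 1) is known, a well-posed reduced primal-dual system can be formed in place of the degenerate one, which is the role this step plays in the abstract's approach.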
- Published
- 2004
29. REOPTIMIZATION WITH THE PRIMAL-DUAL INTERIOR POINT METHOD.
- Author
-
Gondzio, Jacek and Grothey, Andreas
- Subjects
MATHEMATICAL optimization ,MATHEMATICAL analysis ,MATHEMATICS ,ALGORITHMS ,ALGEBRA - Abstract
Reoptimization techniques for an interior point method applied to solving a sequence of linear programming problems are discussed. Conditions are given for problem perturbations that can be absorbed in merely one Newton step. The analysis is performed for both short-step and long-step feasible path-following methods. A practical procedure is then derived for an infeasible path-following method. It is applied in the context of crash start for several large-scale structured linear programs. Numerical results with OOPS, a new object-oriented parallel solver, demonstrate the efficiency of the approach. For large structured linear programs, crash start leads to about a 40% reduction in the number of iterations and translates into a 25% reduction of the solution time. The crash procedure parallelizes well, and speedups between 3.1 and 3.8 on four processors are achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2002