55 results
Search Results
2. Convergence analysis of a general iterative algorithm for finding a common solution of split variational inclusion and optimization problems.
- Author
-
Sitthithakerngkiet, Kanokwan, Deepho, Jitsupa, Martínez-Moreno, Juan, and Kumam, Poom
- Subjects
- *
STOCHASTIC convergence , *ALGORITHMS , *MATHEMATICAL optimization , *FIXED point theory , *HILBERT space , *NONEXPANSIVE mappings - Abstract
The purpose of this paper is to introduce a general iterative method for finding a common element of the set of common fixed points of an infinite family of nonexpansive mappings and the set of solutions of the split variational inclusion problem in the framework of Hilbert spaces. A strong convergence theorem for the sequences generated by the proposed iterative scheme is obtained. In the last section, we present some computational examples to illustrate the assumptions of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
3. Modified multiple search cooperative foraging strategy for improved artificial bee colony optimization with robustness analysis.
- Author
-
Harfouchi, F., Habbi, H., Ozturk, C., and Karaboga, D.
- Subjects
- *
BEE swarming , *ALGORITHMS , *MATHEMATICAL optimization , *STOCHASTIC convergence , *GROUP work in education - Abstract
Extending the concept of partitioning bees into subgroups of foragers to the onlooker phase of the cooperative learning artificial bee colony (CLABC) strategy is not only feasible from an algorithmic viewpoint but may also reflect the real foraging behavior of bee swarms. Motivated by this, the paper studies whether the performance of CLABC can be enhanced by developing a new model for the proposed cooperative foraging scheme, and designs a modified cooperative learning artificial bee colony algorithm, referred to as mCLABC. The design procedure is built upon a partitioning scheme for onlookers that generates subgroups of foragers which may evolve differently by using specific solution search rules. To improve the interplay of local and global search at both the employed and onlooker levels, the multiple search mechanism is further tuned and scheduled according to a random selection strategy defined on the evolving parameters. Moreover, a detailed performance and robustness study of the proposed algorithm, analyzing the impact of different structural and parametric settings, is conducted on benchmark optimization problems. Superior convergence performance, better solution quality, and strong robustness are the main features of the proposed strategy in comparison with recent ABC variants and advanced methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
4. A Line-Search Algorithm Inspired by the Adaptive Cubic Regularization Framework and Complexity Analysis.
- Author
-
Bergou, El Houcine, Diouane, Youssef, and Gratton, Serge
- Subjects
- *
ALGORITHMS , *MATHEMATICAL optimization , *NONCONVEX programming , *MATHEMATICAL regularization , *STOCHASTIC convergence - Abstract
The adaptive regularization framework using cubics has emerged as an alternative to line-search and trust-region algorithms for smooth nonconvex optimization, with an optimal complexity among second-order methods. In this paper, we propose and analyze the use of an iteration-dependent scaled norm in the adaptive regularization framework using cubics. Within such a scaled norm, the obtained method behaves as a line-search algorithm along the quasi-Newton direction with a special backtracking strategy. Under appropriate assumptions, the new algorithm enjoys the same convergence and complexity properties as the adaptive regularization algorithm using cubics. The complexity for finding an approximate first-order stationary point can be improved to be optimal whenever a second-order version of the proposed algorithm is considered. In a similar way, using the same scaled norm to define the trust-region neighborhood, we show that the trust-region algorithm behaves as a line-search algorithm. The good potential of the obtained algorithms is shown on a set of large-scale optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
5. General Split Equality Equilibrium Problems with Application to Split Optimization Problems.
- Author
-
Chang, Shih-sen, Wang, Lin, Wang, Xiong, and Wang, Gang
- Subjects
- *
MATHEMATICAL optimization , *HILBERT space , *STOCHASTIC convergence , *ALGORITHMS , *MATHEMATICAL sequences - Abstract
The purpose of this paper is to introduce and study the general split equality equilibrium problem and the general split equilibrium problem in Hilbert spaces. In order to solve these problems, a new simultaneous iterative algorithm is proposed and several strong convergence theorems for the sequences generated by the algorithm are proved. As applications, we utilize our results to study the general split equality optimization problem and the general split optimization problem. The results presented in the paper extend and improve some recent results. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
6. Weiszfeld's Method: Old and New Results.
- Author
-
Beck, Amir and Sabach, Shoham
- Subjects
- *
MATHEMATICAL optimization , *STOCHASTIC convergence , *LOCATION problems (Programming) , *ALGORITHMS - Abstract
In 1937, the 16-year-old Hungarian mathematician Endre Weiszfeld, in a seminal paper, devised a method for solving the Fermat-Weber location problem, a problem whose origins can be traced back to the seventeenth century. Weiszfeld's method stirred up an enormous amount of research in the optimization and location communities, and is still discussed and used to this day. In this paper, we review both the past and the ongoing research on Weiszfeld's method. The existing results are presented in a self-contained and concise manner; some are derived by new and simplified techniques. We also establish two new results using modern tools of optimization. First, we establish a non-asymptotic sublinear rate of convergence of Weiszfeld's method, and second, using an exact smoothing technique, we present a modification of the method with a proven better rate of convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
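The classical Weiszfeld iteration reviewed in the entry above has a simple closed form: each step replaces the current point with a distance-weighted average of the anchor points. A minimal NumPy sketch of the basic method only (not the smoothed modification the paper proposes); the tolerances and the centroid starting point are illustrative choices:

```python
import numpy as np

def weiszfeld(points, tol=1e-9, max_iter=1000):
    """Weiszfeld iteration for the Fermat-Weber (geometric median) problem:
    x <- sum_i (p_i / d_i) / sum_i (1 / d_i), where d_i = ||x - p_i||."""
    x = points.mean(axis=0)                    # start from the centroid
    for _ in range(max_iter):
        d = np.linalg.norm(points - x, axis=1)
        if np.any(d < 1e-12):                  # iterate landed on an anchor point
            return x
        w = 1.0 / d
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

pts = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
median = weiszfeld(pts)
```

At the geometric median of a triangle with all angles below 120 degrees, the unit vectors toward the anchors sum to zero, which gives a cheap optimality check.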
7. A skin membrane-driven membrane algorithm for many-objective optimization.
- Author
-
Li, Zhangxiao, Zhang, Lei, Su, Yansen, Li, Jun, and Wang, Xun
- Subjects
- *
MATHEMATICAL optimization , *STOCHASTIC convergence , *ALGORITHMS , *NATURAL computation , *CELL membranes - Abstract
Many-objective optimization problems refer to problems that hold more than three conflicting objectives, which are more challenging than those with two or three objectives. Membrane computing models, usually termed P systems, are a class of living-cell-inspired computing models that provide a rich framework for solving a variety of challenging problems. In this paper, a membrane computing model-based algorithm is proposed for many-objective optimization. Specifically, the population in the skin membrane is divided into two subpopulations: one guides the convergence of populations in the internal membrane, while the other takes charge of the diversity of populations. Experimental results on benchmark test problems for many-objective optimization indicate the superiority of the developed algorithm over existing evolutionary many-objective optimization algorithms and P systems based multi-objective optimization algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
8. Self-adaptive differential evolution algorithm with improved mutation strategy.
- Author
-
Wang, Shihao, Li, Yuzhen, Yang, Hongyu, and Liu, Hong
- Subjects
- *
DIFFERENTIAL evolution , *ALGORITHMS , *GENETIC mutation , *MATHEMATICAL optimization , *STOCHASTIC convergence - Abstract
Different mutation strategies and control parameter settings directly affect the performance of the differential evolution (DE) algorithm. In this paper, a self-adaptive differential evolution algorithm with an improved mutation strategy (IMSaDE) is proposed to improve the optimization performance of DE. IMSaDE improves the "DE/rand/2" mutation strategy by incorporating an elite archive strategy and a control parameter adaptation strategy. Both strategies diversify the population and improve the convergence performance of the algorithm. IMSaDE was compared with eleven DE algorithms and six non-DE algorithms on a set of 20 benchmark functions taken from the literature. Experimental results show that the overall performance of IMSaDE is better than that of the other competitors. In addition, the size of the elite population has a significant impact on the performance of IMSaDE. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
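For reference, the "DE/rand/2" mutation that IMSaDE builds on combines one base vector with two scaled difference vectors. A minimal sketch of plain DE with this mutation (the elite archive and parameter adaptation of IMSaDE are not reproduced; the population size, F, and CR values are illustrative):

```python
import numpy as np

def de_rand_2(fobj, bounds, pop_size=30, F=0.5, CR=0.9, gens=300, seed=0):
    """Minimal differential evolution with the classical 'DE/rand/2' mutation:
    v = x_r1 + F*(x_r2 - x_r3) + F*(x_r4 - x_r5), followed by binomial
    crossover and greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([fobj(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3, r4, r5 = rng.choice(
                [j for j in range(pop_size) if j != i], 5, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])
            v = np.clip(v, lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True    # ensure at least one mutated gene
            trial = np.where(cross, v, pop[i])
            f_trial = fobj(trial)
            if f_trial <= fit[i]:              # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

# Sphere function: global minimum 0 at the origin.
x_best, f_best = de_rand_2(lambda x: float(np.sum(x**2)),
                           (np.full(5, -5.0), np.full(5, 5.0)))
```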
9. An Algorithm for Minimum L-Infinity Solution of Under-determined Linear Systems.
- Author
-
Earle, Adam, Ali, M., and Fannuchi, Dario
- Subjects
- *
LINEAR systems , *STOCHASTIC convergence , *NUMERICAL analysis , *MATHEMATICAL optimization , *ALGORITHMS - Abstract
This paper presents a primal method for finding the minimum L-infinity solution to under-determined linear systems of equations. The method proceeds in two phases, with a line search performed in each phase. We establish a condition for a direction to be a descent direction. A convergence proof for the method is given. Expedient numerical schemes can be used whenever appropriate. Results are presented which show the superiority of the method over some well-known methods. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
10. Restricted Normal Cones and Sparsity Optimization with Affine Constraints.
- Author
-
Bauschke, Heinz, Luke, D., Phan, Hung, and Wang, Xianfu
- Subjects
- *
MATHEMATICAL optimization , *PROBLEM solving , *PARALLEL computers , *STOCHASTIC convergence , *ALGORITHMS , *ESTIMATION theory - Abstract
The problem of finding a vector with the fewest nonzero elements that satisfies an underdetermined system of linear equations is an NP-complete problem that is typically solved numerically via convex heuristics or nicely behaved nonconvex relaxations. In this paper we consider the elementary method of alternating projections (MAP) for solving the sparsity optimization problem without employing convex heuristics. In a parallel paper we recently introduced the restricted normal cone which generalizes the classical Mordukhovich normal cone and reconciles some fundamental gaps in the theory of sufficient conditions for local linear convergence of the MAP algorithm. We use the restricted normal cone together with the notion of superregularity, which is inherently satisfied for the affine sparse optimization problem, to obtain local linear convergence results with estimates for the radius of convergence of the MAP algorithm applied to sparsity optimization with an affine constraint. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
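The elementary MAP scheme the entry describes alternates two projections: onto the affine set {x : Ax = b} and onto the nonconvex set of k-sparse vectors (keep the k largest-magnitude entries). A minimal sketch, started near a planted solution to stay inside the local-convergence regime the paper analyzes; the problem sizes and starting perturbation are illustrative:

```python
import numpy as np

def map_sparsity(A, b, k, x0, iters=1000):
    """Alternating projections for k-sparse solutions of Ax = b:
    project onto {x : Ax = b}, then keep only the k largest entries."""
    pinv = A.T @ np.linalg.inv(A @ A.T)        # assumes A has full row rank
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x = x - pinv @ (A @ x - b)             # projection onto the affine set
        x[np.argsort(np.abs(x))[:-k]] = 0.0    # projection onto the sparsity set
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]                   # planted 2-sparse solution
b = A @ x_true
x_hat = map_sparsity(A, b, k=2, x0=x_true + 0.05 * rng.standard_normal(10))
```

Once the correct support is identified, the iteration reduces to alternating projections between two affine sets and converges linearly, as the paper's local analysis predicts.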
11. A Version of the Mirror descent Method to Solve Variational Inequalities.
- Author
-
Semenov, V.
- Subjects
- *
VARIATIONAL inequalities (Mathematics) , *MATHEMATICAL optimization , *ALGORITHMS , *OPERATOR theory , *STOCHASTIC convergence - Abstract
Nemirovski and Yudin proposed the mirror descent algorithm in the late 1970s to solve convex optimization problems. The method is suitable for huge-scale optimization problems. In the paper, we describe a new version of the mirror descent method for solving variational inequalities with pseudomonotone operators. The method can be interpreted as a modification of Popov's two-step algorithm with the use of Bregman projections onto the feasible set. We prove the convergence of the sequences generated by the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
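A sketch in the spirit of the method described above: Popov's two-step scheme evaluates the operator only once per iteration (at a leading point) and reuses that value for both updates, and with the entropic Bregman divergence the projection onto the probability simplex becomes a multiplicative update. Here it is specialized to the monotone bilinear variational inequality of a matrix game; the game matrix, step size, and averaging are illustrative choices, not the paper's setting:

```python
import numpy as np

def popov_mirror_descent(A, steps=2000, lam=0.1):
    """Popov-style two-step mirror descent for the matrix-game VI
    min_x max_y x^T A y over probability simplices. The operator
    F(x, y) = (A y, -A^T x) is evaluated once per iteration, at the
    leading point, and reused; entropic Bregman projections reduce
    to multiplicative updates followed by normalization."""
    m, n = A.shape
    x, y = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    xl, yl = x.copy(), y.copy()                 # leading (extrapolation) points
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for _ in range(steps):
        gx, gy = A @ yl, A.T @ xl               # F at the leading point
        x = x * np.exp(-lam * gx); x /= x.sum()
        y = y * np.exp(lam * gy);  y /= y.sum()
        xl = x * np.exp(-lam * gx); xl /= xl.sum()
        yl = y * np.exp(lam * gy);  yl /= yl.sum()
        x_avg += x; y_avg += y
    return x_avg / steps, y_avg / steps

# Rock-paper-scissors: the unique equilibrium is the uniform strategy.
A = np.array([[0.0, 1.0, -1.0], [-1.0, 0.0, 1.0], [1.0, -1.0, 0.0]])
x_star, y_star = popov_mirror_descent(A)
```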
12. Many-objective optimization based on information separation and neighbor punishment selection.
- Author
-
Shen, Ruimin, Zheng, Jinhua, Li, Miqing, and Zou, Juan
- Subjects
- *
MATHEMATICAL optimization , *PARETO analysis , *STOCHASTIC convergence , *ALGORITHMS , *COORDINATES - Abstract
Many-objective optimization refers to optimizing a multi-objective optimization problem (MOP) where the number of objectives is more than 3. Most classical evolutionary multi-objective optimization (EMO) algorithms use the Pareto dominance relation to guide the search, which usually performs poorly in the many-objective optimization scenario. This paper proposes an EMO algorithm based on information separation and neighbor punishment selection (ISNPS) to deal with many-objective optimization problems. ISNPS separates an individual's behavior in the population into convergence information and distribution information by rotating the original coordinates in the objective space. Specifically, the proposed algorithm employs one coordinate to reflect an individual's convergence and the remaining m-1 coordinates to reflect its distribution, where m is the number of objectives in a given MOP. In addition, a neighborhood punishment strategy is developed to prevent individuals from being crowded. From a series of experiments on 42 test instances with 3-10 objectives, ISNPS has been found to be very competitive against six representative algorithms in the EMO area. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
13. A Two-Stage Active-Set Algorithm for Bound-Constrained Optimization.
- Author
-
Cristofari, Andrea, Santis, Marianna, Lucidi, Stefano, and Rinaldi, Francesco
- Subjects
- *
MATHEMATICAL optimization , *STOCHASTIC convergence , *ALGORITHMS , *MATHEMATICAL variables , *NONMONOTONIC logic - Abstract
In this paper, we describe a two-stage method for solving optimization problems with bound constraints. It combines the active-set estimate described in Facchinei and Lucidi (J Optim Theory Appl 85(2):265-289, 1995) with a modification of the non-monotone line search framework recently proposed in De Santis et al. (Comput Optim Appl 53(2):395-423, 2012). In the first stage, the algorithm exploits a property of the active-set estimate that ensures a significant reduction in the objective function when setting to the bounds all those variables estimated active. In the second stage, a truncated-Newton strategy is used in the subspace of the variables estimated non-active. In order to properly combine the two phases, a proximity check is included in the scheme. This new tool, together with the other theoretical features of the two stages, enables us to prove global convergence. Furthermore, under additional standard assumptions, we can show that the algorithm converges at a superlinear rate. Promising experimental results demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
14. Optimized first-order methods for smooth convex minimization.
- Author
-
Kim, Donghwan and Fessler, Jeffrey
- Subjects
- *
CONVEX functions , *STOCHASTIC convergence , *CONJUGATE gradient methods , *ALGORITHMS , *MATHEMATICAL optimization - Abstract
We introduce new optimized first-order methods for smooth unconstrained convex minimization. Drori and Teboulle (Math Program 145(1-2):451-482, 2014) recently described a numerical method for computing the N-iteration optimal step coefficients in a class of first-order algorithms that includes gradient methods, heavy-ball methods (Polyak in USSR Comput Math Math Phys 4(5):1-17, 1964), and Nesterov's fast gradient methods (Nesterov in Sov Math Dokl 27(2):372-376, 1983; Math Program 103(1):127-152, 2005). However, the numerical method in Drori and Teboulle (2014) is computationally expensive for large N, and the corresponding numerically optimized first-order algorithm in Drori and Teboulle (2014) requires impractical memory and computation for large-scale optimization problems. In this paper, we propose optimized first-order algorithms that achieve a convergence bound that is two times smaller than that of Nesterov's fast gradient methods; our bound is found analytically and refines the numerical bound in Drori and Teboulle (2014). Furthermore, the proposed optimized first-order methods have efficient forms that are remarkably similar to Nesterov's fast gradient methods. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
15. Optimum design of large-scale truss towers using cascade optimization.
- Author
-
Kaveh, A. and Ilchi Ghazaan, M.
- Subjects
- *
OPTIMAL designs (Statistics) , *ALGORITHMS , *STOCHASTIC convergence , *MATHEMATICAL optimization , *TRUSSES - Abstract
A high number of design variables, a large search space, and a great number of design constraints are the major factors preventing the optimum design of real-world structures within a reasonable time. The present study intends to examine the computational performance of cascading in design optimization of truss towers with large numbers of design variables. The cascade optimization procedure utilized in this paper reduces the objective function value over a number of optimization stages by initially operating on a small number of design variables, which is gradually increased stage after stage. In fact, the early stages of optimization in the cascade procedure make use of the coarsest configurations with small numbers of design variables and the later stages exploit finer configurations with larger numbers of design variables. The optimization algorithm utilized in each stage of a cascade process is enhanced colliding bodies optimization. High solution accuracy and convergence speed of the proposed method are shown through three test examples. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
16. Injection moulding optimisation of multi-class design variables using a PSO algorithm.
- Author
-
Deng, Y.-M., Zheng, D., and Lu, X.-J.
- Subjects
- *
MOLDING (Founding) , *ALGORITHMS , *MATHEMATICAL optimization , *STOCHASTIC convergence , *COMPUTER software - Abstract
Injection moulding optimisation seeks to achieve the highest possible moulding quality under the specified constraints. To this end, the factors (design variables) affecting the moulding quality should be adjusted, including process parameters, mould design, part geometry, etc. Past work in this area has primarily focused on tuning the process parameters and mould design (e.g., gate location, runner and cooling channel layout), with less attention on the part geometry, and none on all of them together. To address this problem, this paper presents a PSO (particle swarm optimisation) algorithm for the optimisation of multi-class design variables, such as the part thickness, process parameters (melt temperature, mould temperature, injection time) and gate location. The optimisation is targeted at different aspects of moulding quality, including part warpage, weld lines, air traps, and so on. In applying the PSO algorithm, the paper proposes a modified elite archiving method, which can expedite convergence and hence improve the efficiency of the algorithm. A computer program was developed that automates steps such as adjusting the part thickness, the injection moulding process parameters and the gate location, activating the CAE software to simulate the injection moulding process, retrieving the simulation results, and evaluating the objective functions. The whole procedure iterates over a number of generations following the search process of the algorithm. A case study is also presented to illustrate and test the proposed methodology, which is demonstrated to be both effective and efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
17. Global Convergence of a Class of Discrete-Time Interconnected Pendulum-Like Systems.
- Author
-
Yang, Y., Duan, Z. S., and Huang, L.
- Subjects
- *
STOCHASTIC convergence , *DISCRETE-time systems , *DIGITAL control systems , *SYSTEM analysis , *MATRIX inequalities , *LARGE scale systems , *ALGORITHMS , *MATHEMATICAL optimization , *MATHEMATICAL analysis - Abstract
This paper focuses on a class of discrete-time interconnected pendulum-like systems. Sufficient conditions for the global convergence of systems with both structured and unstructured uncertainties in its linear part are established in terms of linear matrix inequalities (LMIs). In the traditional decentralized control of large scale systems, the effects of interconnections were seldom studied. In this paper, a square matrix specifying the interconnecting structure is introduced in order to discuss the effects of nonlinear interconnections. It is shown that the global convergence of the whole interconnected system can be achieved by designing an appropriate interconnection matrix. In order to solve the nonlinear matrix inequalities (NMIs) arising in the synthesis problem, a global optimization algorithm is presented which can be used to handle a class of NMI problems. An illustrative example is given to demonstrate the applicability and validity of the main results and the presented algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
18. On the Method of Shortest Residuals for Unconstrained Optimization.
- Author
-
Pytlak, R. and Tarnawski, T.
- Subjects
- *
CONJUGATE gradient methods , *APPROXIMATION theory , *NUMERICAL solutions to equations , *ITERATIVE methods (Mathematics) , *MATHEMATICAL optimization , *ALGORITHMS , *STOCHASTIC convergence , *NUMERICAL analysis , *CALCULUS of variations - Abstract
The paper discusses several versions of the method of shortest residuals, a specific variant of the conjugate gradient algorithm, first introduced by Lemaréchal and Wolfe and discussed by Hestenes in a quadratic case. In the paper we analyze the global convergence of the versions considered. Numerical comparison of these versions of the method of shortest residuals and an implementation of a standard Polak–Ribière conjugate gradient algorithm is also provided. It supports the claim that the method of shortest residuals is a viable technique, competitive to other conjugate gradient algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
19. Globally Convergent Optimization Algorithms on Riemannian Manifolds: Uniform Framework for Unconstrained and Constrained Optimization.
- Author
-
Yang, Y.
- Subjects
- *
RIEMANNIAN manifolds , *MANIFOLDS (Mathematics) , *DIFFERENTIAL geometry , *MATHEMATICAL optimization , *MATHEMATICAL analysis , *ACCELERATION of convergence in numerical analysis , *STOCHASTIC convergence , *ALGORITHMS , *MATHEMATICS - Abstract
This paper proposes several globally convergent geometric optimization algorithms on Riemannian manifolds, which extend some existing geometric optimization techniques. Since any set of smooth constraints in the Euclidean space Rn (corresponding to constrained optimization) and the Rn space itself (corresponding to unconstrained optimization) are both special Riemannian manifolds, and since these algorithms are developed on general Riemannian manifolds, the techniques discussed in this paper provide a uniform framework for constrained and unconstrained optimization problems. Unlike some earlier works, the new algorithms have fewer restrictions in both convergence results and in practice. For example, global minimization in the one-dimensional search is not required. All the algorithms addressed in this paper are globally convergent. For some special Riemannian manifolds other than Rn, the new algorithms are very efficient. Convergence rates are obtained. Applications are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
20. Denoising of Smooth Images Using L1-Fitting.
- Author
-
Kärkkäinen, T., Kunisch, K., and Majava, K.
- Subjects
- *
ALGORITHMS , *NUMERICAL analysis , *STOCHASTIC convergence , *MATHEMATICAL optimization , *PROBLEM solving - Abstract
In this paper, denoising of smooth (H^1_0-regular) images is considered. The purpose of the paper is basically twofold: first, to compare the denoising methods based on L1- and L2-fitting; second, to analyze and realize an active-set method for solving the non-smooth optimization problem arising from the former approach. More precisely, we formulate the algorithm, prove its convergence, and give an efficient numerical realization. Several numerical experiments are presented, in which the convergence of the proposed active-set algorithm is studied and the denoising properties of the methods based on L1- and L2-fitting are compared. A heuristic method for determining the regularization parameter is also presented and tested. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
21. Global optimization of mixed-integer nonlinear programs: A theoretical and computational study.
- Author
-
Tawarmalani, Mohit and Sahinidis, Nikolaos V.
- Subjects
- *
INTEGER programming , *LINEAR programming , *ALGORITHMS , *STOCHASTIC convergence , *MATHEMATICAL optimization - Abstract
This work addresses the development of an efficient solution strategy for obtaining global optima of continuous, integer, and mixed-integer nonlinear programs. Towards this end, we develop novel relaxation schemes, range reduction tests, and branching strategies which we incorporate into the prototypical branch-and-bound algorithm. In the theoretical/algorithmic part of the paper, we begin by developing novel strategies for constructing linear relaxations of mixed-integer nonlinear programs and prove that these relaxations enjoy quadratic convergence properties. We then use Lagrangian/linear programming duality to develop a unifying theory of domain reduction strategies as a consequence of which we derive many range reduction strategies currently used in nonlinear programming and integer linear programming. This theory leads to new range reduction schemes, including a learning heuristic that improves initial branching decisions by relaying data across siblings in a branch-and-bound tree. Finally, we incorporate these relaxation and reduction strategies in a branch-and-bound algorithm that incorporates branching strategies that guarantee finiteness for certain classes of continuous global optimization problems. In the computational part of the paper, we describe our implementation discussing, wherever appropriate, the use of suitable data structures and associated algorithms. We present computational experience with benchmark separable concave quadratic programs, fractional 0-1 programs, and mixed-integer nonlinear programs from applications in synthesis of chemical processes, engineering design, just-in-time manufacturing, and molecular design. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
22. Sequential Penalty Algorithm for Nonlinear Constrained Optimization.
- Author
-
Zhang, J.L. and Zhang, X.S.
- Subjects
- *
ALGORITHMS , *MATHEMATICAL optimization , *NONLINEAR theories , *STOCHASTIC convergence , *MATHEMATICS , *MATHEMATICAL analysis - Abstract
In this paper, a new sequential penalty algorithm, based on the L∞ exact penalty function, is proposed for a general nonlinear constrained optimization problem. The algorithm has the following characteristics: it can start from an arbitrary initial point; the feasibility of the subproblem is guaranteed; the penalty parameter is adjusted automatically; global convergence without any regularity assumption is proved. The update formula of the penalty parameter is new. It is proved that the algorithm proposed in this paper behaves equivalently to the standard SQP method after sufficiently many iterations. Hence, the local convergence results of the standard SQP method can be applied to this algorithm. Preliminary numerical experiments show the efficiency and stability of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
23. A Globally Convergent Penalty-Free Method for Optimization with Equality Constraints and Simple Bounds.
- Author
-
Qiu, Songqiang and Chen, Zhongwen
- Subjects
- *
STOCHASTIC convergence , *MATHEMATICAL optimization , *MATHEMATICAL bounds , *ALGORITHMS , *NONLINEAR programming , *ITERATIVE methods (Mathematics) - Abstract
In this paper, we propose an algorithm for the solution of nonlinear constrained programming. This algorithm does not use any penalty function or a filter. Instead, it uses the idea of maximal constraint violation to guarantee global convergence. The infeasibility of each iterate is controlled by a progressively decreasing limit. At each iteration, a normal step, obtained by solving a tightened normal subproblem, and a tangential step, which is a solution of a tangential subproblem, are computed successively. The algorithm reduces the value of the objective function or improves feasibility according to the relation between the tangential predicted reduction and the predicted reduction of constraint violation achieved by the trial step. The framework of the algorithm ensures global convergence without assuming regularity or boundedness of the iterate sequence. Moreover, the algorithm does not need any restoration phase. We also include some preliminary yet promising numerical results on some standard test problems in constrained optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
24. Coordinate descent algorithms.
- Author
-
Wright, Stephen
- Subjects
- *
ALGORITHMS , *MATHEMATICAL optimization , *PROBLEM solving , *APPROXIMATION theory , *STOCHASTIC convergence , *NUMERICAL analysis - Abstract
Coordinate descent algorithms solve optimization problems by successively performing approximate minimization along coordinate directions or coordinate hyperplanes. They have been used in applications for many years, and their popularity continues to grow because of their usefulness in data analysis, machine learning, and other areas of current interest. This paper describes the fundamentals of the coordinate descent approach, together with variants and extensions and their convergence properties, mostly with reference to convex objectives. We pay particular attention to a certain problem structure that arises frequently in machine learning applications, showing that efficient implementations of accelerated coordinate descent algorithms are possible for problems of this type. We also present some parallel variants and discuss their convergence properties under several models of parallel execution. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
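In its simplest form, coordinate descent as surveyed above cycles through the coordinates and exactly minimizes the objective along each one. A minimal sketch for a strongly convex quadratic, where the one-dimensional minimization has a closed form (the test matrix and vector are illustrative):

```python
import numpy as np

def coordinate_descent(Q, c, iters=100):
    """Cyclic coordinate descent for the convex quadratic
    f(x) = 0.5 x^T Q x + c^T x (Q symmetric positive definite):
    each inner step exactly minimizes f along one coordinate,
    holding the others fixed."""
    n = len(c)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # partial derivative in coordinate i is Q[i] @ x + c[i];
            # exact minimization along e_i divides it by Q[i, i]
            x[i] -= (Q[i] @ x + c[i]) / Q[i, i]
    return x

# Strongly convex test problem with known minimizer x* = -Q^{-1} c.
Q = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
c = np.array([1.0, -2.0, 3.0])
x_cd = coordinate_descent(Q, c)
```

On a quadratic, this cyclic scheme coincides with Gauss-Seidel applied to Qx = -c, which is one way to see its linear convergence.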
25. Recent advances in trust region algorithms.
- Author
-
Yuan, Ya-xiang
- Subjects
- *
MATHEMATICAL optimization , *NUMERICAL analysis , *ITERATIVE methods (Mathematics) , *ALGORITHMS , *STOCHASTIC convergence , *MATHEMATICAL regularization - Abstract
Trust region methods are a class of numerical methods for optimization. Unlike line search type methods where a line search is carried out in each iteration, trust region methods compute a trial step by solving a trust region subproblem where a model function is minimized within a trust region. Due to the trust region constraint, nonconvex models can be used in trust region subproblems, and trust region algorithms can be applied to nonconvex and ill-conditioned problems. Normally it is easier to establish the global convergence of a trust region algorithm than that of its line search counterpart. In the paper, we review recent results on trust region methods for unconstrained optimization, constrained optimization, nonlinear equations and nonlinear least squares, nonsmooth optimization and optimization without derivatives. Results on trust region subproblems and regularization methods are also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
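The basic trust-region mechanics described above can be sketched with the Cauchy point (minimizing the model along the steepest-descent direction, truncated at the trust-region boundary) and the standard accept/shrink/expand logic. The 0.25/0.75 thresholds and the quadratic test problem below are illustrative defaults, not from the survey:

```python
import numpy as np

def trust_region_step(g, B, delta):
    """Cauchy point for min_p g^T p + 0.5 p^T B p, s.t. ||p|| <= delta."""
    gBg = g @ B @ g
    if gBg <= 0:
        tau = 1.0                               # model decreases all the way to the boundary
    else:
        tau = min(1.0, np.linalg.norm(g) ** 3 / (delta * gBg))
    return -tau * delta / np.linalg.norm(g) * g

def trust_region_minimize(f, grad, hess, x0, delta=1.0, eta=0.1, iters=100):
    """Basic trust-region loop: accept the trial step when the ratio of
    actual to predicted reduction exceeds eta; adjust delta accordingly."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < 1e-10:
            break
        p = trust_region_step(g, B, delta)
        pred = -(g @ p + 0.5 * p @ B @ p)       # predicted model reduction
        rho = (f(x) - f(x + p)) / pred
        if rho > eta:
            x = x + p                           # accept the trial step
        delta = 2 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x

# Sanity check on a strongly convex quadratic with minimizer a.
a = np.array([1.0, -2.0])
x_min = trust_region_minimize(
    lambda x: 0.5 * np.sum((x - a) ** 2),
    lambda x: x - a,
    lambda x: np.eye(2),
    x0=[5.0, 5.0])
```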
26. Block SOR methods for the solution of indefinite least squares problems.
- Author
-
Liu, Qiaohua and Liu, Aijing
- Subjects
- *
LEAST squares , *STOCHASTIC convergence , *JACOBI method , *QUADRATIC forms , *ALGORITHMS , *MATHEMATICAL optimization , *FACTORIZATION - Abstract
This paper describes a technique for constructing block SOR methods for the solution of the large and sparse indefinite least squares problem which involves minimizing a certain type of indefinite quadratic form. Two block SOR-based algorithms and convergence results are presented. The optimum parameters for the methods are also given. It has been shown both theoretically and numerically that the optimum block SOR methods have a faster convergence than block Jacobi and Gauss-Seidel methods. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
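For readers unfamiliar with the SOR family the abstract above builds on, here is plain point SOR for a linear system (a sketch of our own; the paper applies the same relaxation idea blockwise to the augmented system of the indefinite least squares problem):

```python
# Point SOR for Ax = b: a Gauss-Seidel sweep relaxed by a factor omega.
# omega = 1.25 is an illustrative choice; for symmetric positive definite A,
# SOR converges for any 0 < omega < 2, and omega = 1 recovers Gauss-Seidel.

def sor(A, b, omega=1.25, sweeps=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = sor(A, b)   # iterates approach the solution of Ax = b
```

The paper's contribution is the block splitting for the indefinite problem and the derivation of the optimum relaxation parameter, which this point sketch does not attempt.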
27. Adaptive bare-bones particle swarm optimization algorithm and its convergence analysis.
- Author
-
Zhang, Yong, Gong, Dun-wei, Sun, Xiao-yan, and Geng, Na
- Subjects
- *
PARTICLE swarm optimization , *ALGORITHMS , *STOCHASTIC convergence , *STOCHASTIC processes , *MATHEMATICAL optimization - Abstract
Bare-bones particle swarm optimization (BBPSO) was first proposed in 2003. Compared to traditional particle swarm optimization, it is simpler and has only a few control parameters to be tuned by users. In this paper, an improved BBPSO algorithm with adaptive disturbance (ABPSO) is studied. In the proposed approach, each particle has its own disturbance value, which is decided adaptively based on its convergence degree and the diversity of the swarm, and an adaptive mutation operator is introduced to improve the global exploration of ABPSO. Moreover, the convergence of ABPSO is analyzed using stochastic process theory by regarding each particle's position as a stochastic vector. A series of experimental trials confirms that the proposed algorithm is highly competitive with other BBPSO-based algorithms, and that its performance can be improved still further with the use of mutation. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
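The plain bare-bones PSO that the abstract above improves upon can be sketched in a few lines (this is the basic 2003 scheme, not the paper's ABPSO; the population size, iteration budget, and bounds are illustrative choices of ours):

```python
import random

# Bare-bones PSO sketch: there are no velocities. Each particle samples its
# next position from a Gaussian whose mean is midway between its personal
# best p and the global best g, with per-dimension std |p_d - g_d|.

def bbpso(f, dim, n=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    pbest = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    pval = [f(p) for p in pbest]
    gi = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[gi][:], pval[gi]
    for _ in range(iters):
        for i in range(n):
            x = [rng.gauss((pbest[i][d] + gbest[d]) / 2.0,
                           abs(pbest[i][d] - gbest[d])) for d in range(dim)]
            v = f(x)
            if v < pval[i]:                  # update personal best
                pbest[i], pval[i] = x, v
                if v < gval:                 # update global best
                    gbest, gval = x[:], v
    return gbest, gval

sphere = lambda x: sum(c * c for c in x)
best, best_f = bbpso(sphere, dim=3)
```

Note that a particle whose personal best coincides with the global best samples with zero variance and stalls; the adaptive disturbance in ABPSO is aimed precisely at this kind of premature convergence.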
28. MM algorithms for geometric and signomial programming.
- Author
-
Lange, Kenneth and Zhou, Hua
- Subjects
- *
GEOMETRIC programming , *ALGORITHMS , *GENERALIZATION , *MATHEMATICAL optimization , *ARITHMETIC , *MATHEMATICAL inequalities , *STOCHASTIC convergence - Abstract
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
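The surrogate-separation idea in the MM abstract above can be shown on a toy problem of our own choosing (this illustrates the generic AM-GM majorization principle, not the paper's signomial algorithm itself):

```python
# MM (majorize-minimize) on f(x, y) = x*y + 1/x + 1/y for x, y > 0, whose
# minimum is 3 at x = y = 1. The AM-GM style bound
#   x*y <= (y0/(2*x0))*x**2 + (x0/(2*y0))*y**2   (equality at (x0, y0))
# gives a surrogate with the variables separated, so each MM step reduces
# to independent one-dimensional minimizations.

def mm_step(x0, y0):
    # argmin_x (y0/(2*x0))*x^2 + 1/x  =>  x^3 = x0/y0, and symmetrically for y
    return (x0 / y0) ** (1.0 / 3.0), (y0 / x0) ** (1.0 / 3.0)

f = lambda x, y: x * y + 1.0 / x + 1.0 / y
x, y = 2.0, 0.5
for _ in range(60):
    x, y = mm_step(x, y)   # each step drives the surrogate, hence f, downhill
```

Because the surrogate touches f at the current iterate and majorizes it everywhere, each step is guaranteed not to increase f; here the iterates converge to (1, 1).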
29. A nonmonotone trust region method with new inexact line search for unconstrained optimization.
- Author
-
Liu, Jinghui and Ma, Changfeng
- Subjects
- *
MATHEMATICAL optimization , *STOCHASTIC convergence , *ALGORITHMS , *APPROXIMATION theory , *NUMERICAL analysis - Abstract
In this paper, a new nonmonotone inexact line search rule is proposed and applied to the trust region method for unconstrained optimization problems. In our line search rule, the nonmonotone term is a convex combination of the previous nonmonotone term and the current objective function value, rather than the current objective function value alone. We can obtain a larger stepsize in each line search procedure and preserve nonmonotonicity when incorporating the nonmonotone term into the trust region method. Unlike the traditional trust region method, the algorithm avoids re-solving the subproblem if a trial step is not accepted. Under suitable conditions, global convergence is established. Numerical results show that the new method is effective for solving unconstrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
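The convex-combination reference value described in the abstract above can be sketched in a Zhang-Hager style backtracking search (a sketch with steepest-descent directions and illustrative parameters eta and delta, not the paper's trust-region algorithm):

```python
# Nonmonotone backtracking line search: the acceptance test compares the
# trial value against a reference C_k, updated as a convex combination of
# the previous C and the newest objective value, so mild increases in f
# are tolerated and larger stepsizes tend to be accepted.

def nonmonotone_descent(f, grad, x, eta=0.85, delta=1e-4, iters=100):
    C = f(x)                               # nonmonotone reference value
    for _ in range(iters):
        g = grad(x)
        d = [-v for v in g]                # steepest-descent direction
        gd = sum(a * b for a, b in zip(g, d))
        alpha = 1.0
        while f([xi + alpha * di for xi, di in zip(x, d)]) > C + delta * alpha * gd:
            alpha *= 0.5                   # backtrack against C, not f(x)
        x = [xi + alpha * di for xi, di in zip(x, d)]
        C = eta * C + (1 - eta) * f(x)     # convex-combination update
    return x

f = lambda x: x[0] ** 2 + 2 * x[1] ** 2
grad = lambda x: [2 * x[0], 4 * x[1]]
x = nonmonotone_descent(f, grad, [1.0, 1.0])
```

Since f(x_k) <= C_k always holds, the backtracking loop terminates, and the lagging reference lets the method accept steps a strictly monotone Armijo rule would reject.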
30. An Improved Spectral Conjugate Gradient Algorithm for Nonconvex Unconstrained Optimization Problems.
- Author
-
Deng, Songhai, Wan, Zhong, and Chen, Xiaohong
- Subjects
- *
CONJUGATE gradient methods , *MATHEMATICAL optimization , *QUASI-Newton methods , *STOCHASTIC convergence , *ALGORITHMS - Abstract
In this paper, an improved spectral conjugate gradient algorithm is developed for solving nonconvex unconstrained optimization problems. Different from existing methods, the spectral and conjugate parameters are chosen such that the obtained search direction is always sufficiently descent as well as being close to the quasi-Newton direction. With these suitable choices, the additional assumption in the method proposed by Andrei on the boundedness of the spectral parameter is removed. Under some mild conditions, global convergence is established. Numerical experiments are employed to demonstrate the efficiency of the algorithm for solving large-scale benchmark test problems, particularly in comparison with existing state-of-the-art algorithms available in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
31. Superlinear Convergence of a General Algorithm for the Generalized Foley-Sammon Discriminant Analysis.
- Author
-
Zhang, Lei-Hong, Liao, Li-Zhi, and Ng, Michael
- Subjects
- *
STOCHASTIC convergence , *ALGORITHMS , *DISCRIMINANT analysis , *DIMENSION reduction (Statistics) , *HYPOTHESIS , *DATA mining , *MATHEMATICAL optimization - Abstract
Linear Discriminant Analysis (LDA) is one of the most efficient statistical approaches for feature extraction and dimension reduction. The generalized Foley-Sammon transform and the trace ratio model are very important in LDA and have received increasing interest. An efficient iterative method has been proposed for the resulting trace ratio optimization problem, which, under a mild assumption, is proved to enjoy both local quadratic convergence and global convergence to the global optimal solution (Zhang, L.-H., Liao, L.-Z., Ng, M.K.: SIAM J. Matrix Anal. Appl. 31:1584, ). The present paper further investigates the convergence behavior of this iterative method without that assumption. In particular, we prove that the iteration converges superlinearly when the mild assumption is removed. All possible limit points are characterized as a special subset of the global optimal solutions. An illustrative numerical example is also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
32. An improved feasible QP-free algorithm for inequality constrained optimization.
- Author
-
Zhu, Zhi and Jian, Jin
- Subjects
- *
ALGORITHMS , *MATHEMATICAL inequalities , *MATHEMATICAL optimization , *LINEAR systems , *STOCHASTIC convergence , *ITERATIVE methods (Mathematics) , *PARAMETER estimation - Abstract
In this paper, an improved feasible QP-free method is proposed to solve nonlinear inequality constrained optimization problems. Here, a new modified method is presented to obtain the revised feasible descent direction. In view of the computational cost, the most attractive feature of the new algorithm is that only one system of linear equations is required to obtain the revised feasible descent direction. Thereby, only three systems of linear equations with the same coefficient matrix need to be solved per iteration. In particular, without the positive definiteness assumption on the Hessian estimate, the proposed algorithm is still globally convergent. Under some suitable conditions, the superlinear convergence rate is obtained. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
33. A note on the implementation of an interior-point algorithm for nonlinear optimization with inexact step computations.
- Author
-
Curtis, Frank, Huber, Johannes, Schenk, Olaf, and Wächter, Andreas
- Subjects
- *
INTERIOR-point methods , *ALGORITHMS , *NONLINEAR theories , *MATHEMATICAL optimization , *STOCHASTIC convergence , *NONCONVEX programming , *LINEAR systems , *LARGE scale systems - Abstract
This paper describes an implementation of an interior-point algorithm for large-scale nonlinear optimization. It is based on the algorithm proposed by Curtis et al. (SIAM J Sci Comput 32:3447-3475, ), a method that possesses global convergence guarantees to first-order stationary points with the novel feature that inexact search direction calculations are allowed in order to save computational expense. The implementation follows the proposed algorithm, but includes many practical enhancements, such as functionality to avoid the computation of a normal step during every iteration. The implementation is included in the IPOPT software package paired with an iterative linear system solver and preconditioner provided in PARDISO. Numerical results on a large nonlinear optimization test set and two PDE-constrained optimization problems with control and state constraints are presented to illustrate that the implementation is robust and efficient for large-scale applications. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
34. Coupling the Gradient Method with a General Exterior Penalization Scheme for Convex Minimization.
- Author
-
Peypouquet, Juan
- Subjects
- *
ALGORITHMS , *CONVEX functions , *CONJUGATE gradient methods , *APPROXIMATION theory , *STOCHASTIC convergence , *MATHEMATICAL optimization - Abstract
In this paper, we propose and analyze an algorithm that couples the gradient method with a general exterior penalization scheme for constrained or hierarchical minimization of convex functions in Hilbert spaces. We prove that a proper but simple choice of the step sizes and penalization parameters guarantees the convergence of the algorithm to solutions for the optimization problem. We also establish robustness and stability results that account for numerical approximation errors, discuss implementation issues and provide examples in finite and infinite dimension. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
35. On the Solution of Generalized Multiplicative Extremum Problems.
- Author
-
Ashtiani, Alireza M. and Ferreira, Paulo A. V.
- Subjects
- *
MATHEMATICAL optimization , *ALGORITHMS , *EIGENVALUES , *MATHEMATICAL inequalities , *STOCHASTIC convergence , *QUADRATIC programming - Abstract
The paper addresses the problem of maximizing a sum of products of positive and concave real-valued functions over a convex feasible set. A reformulation based on the image of the feasible set under the vector-valued function which describes the problem, combined with an adequate application of convex analysis results, leads to an equivalent indefinite quadratic extremum problem with infinitely many linear constraints. Special properties of this latter problem allow it to be solved by an efficient relaxation algorithm. Some numerical tests illustrate the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
36. A non-monotone line search multidimensional filter-SQP method for general nonlinear programming.
- Author
-
Gu, Chao and Zhu, Detong
- Subjects
- *
NONLINEAR programming , *STOCHASTIC convergence , *ALGORITHMS , *FILTERING software , *MATHEMATICAL optimization - Abstract
In this paper, we propose a non-monotone line search multidimensional filter-SQP method for general nonlinear programming based on the Wächter-Biegler methods for nonlinear equality constrained programming. Under mild conditions, the global convergence of the new method is proved. Furthermore, with the non-monotone technique and second order correction step, it is shown that the proposed method does not suffer from the Maratos effect, so that fast local convergence to second order sufficient local solutions is achieved. Numerical results show that the new approach is efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
37. Hybrid differential evolution and Nelder-Mead algorithm with re-optimization.
- Author
-
Zhenxiao Gao, Tianyuan Xiao, and Wenhui Fan
- Subjects
- *
ALGORITHMS , *MATHEMATICAL optimization , *STOCHASTIC convergence , *ANALYTIC functions , *MATHEMATICS - Abstract
Nonlinear optimization algorithms can be divided into local exploitation methods, such as the Nelder-Mead (NM) algorithm, and global exploration methods, such as differential evolution (DE). The former searches quickly yet is easily trapped by local optima, whereas the latter possesses better convergence quality. This paper proposes a hybrid differential evolution and NM algorithm with re-optimization, called DE-NMR. First, a modified NM, called NMR, is presented: it re-optimizes from the first optimum found and is thus able to escape local optima, exhibiting better properties than NM. NMR is then combined with DE. To handle equality constraints, an adaptive penalty function method is adopted in DE-NMR, which relaxes equality constraints into inequality constraints with an adaptive relaxation parameter that varies with the iteration. Benchmark optimization problems as well as engineering design problems are used to test the performance of DE-NMR, with the number of function evaluations employed as the main index of convergence speed and objective function values as the main index of solution quality. Non-parametric tests are employed to compare the results with other global optimization algorithms. The results illustrate the fast convergence speed of DE-NMR. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
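The global exploration component in the hybrid above can be sketched as a plain DE/rand/1/bin loop (the Nelder-Mead re-optimization stage and the adaptive penalty handling of constraints are omitted; F, CR, and the population size are standard illustrative settings, not the paper's):

```python
import random

# DE/rand/1/bin: mutate with the scaled difference of two random members
# added to a third, binomially cross over with the target, and keep the
# trial only if it is no worse (greedy one-to-one selection).

def de(f, dim, lo=-5.0, hi=5.0, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)       # force at least one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            f_trial = f(trial)
            if f_trial <= fit[i]:            # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = de(sphere, dim=3)
```

In the hybrid scheme, a loop like this supplies promising starting points that the NMR local search then refines.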
38. Log-Linear Convergence and Divergence of the Scale-Invariant (1+1)-ES in Noisy Environments.
- Author
-
Jebalia, Mohamed, Auger, Anne, and Hansen, Nikolaus
- Subjects
- *
MATHEMATICAL optimization , *MATHEMATICAL analysis , *ALGORITHMS , *STOCHASTIC convergence , *LOG-linear models - Abstract
Noise is present in many real-world continuous optimization problems. Stochastic search algorithms such as Evolution Strategies (ESs) have been proposed as effective search methods in such contexts. In this paper, we provide a mathematical analysis of the convergence of a (1+1)-ES on unimodal spherical objective functions in the presence of noise. For a multiplicative noise model with a positive expected value of the noisy objective function, we prove that convergence or divergence happens depending on the infimum of the support of the noise. Moreover, we investigate convergence rates and show that log-linear convergence is preserved in the presence of noise. This result provides a strong theoretical foundation for the robustness of ESs with respect to noise. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
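The scale-invariant (1+1)-ES under multiplicative noise studied above can be sketched as follows (the dimension, step-size constant, noise level, and iteration budget are illustrative choices of ours, not the paper's settings):

```python
import math
import random

# Scale-invariant (1+1)-ES on the sphere with multiplicative noise: the
# mutation strength is sigma * ||x||, and every evaluation is perturbed by
# a factor 1 + U(-0.2, 0.2), whose support has infimum 0.8 > 0 (the regime
# in which convergence is expected for this noise model).

def noisy_one_plus_one_es(dim=3, sigma=0.1, iters=1000, seed=0):
    rng = random.Random(seed)
    noisy_f = lambda x: sum(v * v for v in x) * (1 + rng.uniform(-0.2, 0.2))
    x = [1.0] * dim
    for _ in range(iters):
        norm = math.sqrt(sum(v * v for v in x))
        y = [v + sigma * norm * rng.gauss(0.0, 1.0) for v in x]  # offspring
        # (1+1) selection on freshly drawn noisy values of parent and child
        if noisy_f(y) <= noisy_f(x):
            x = y
    return x

x = noisy_one_plus_one_es()
```

Plotting log ||x_k|| against k for a run like this exhibits the roughly linear decrease (log-linear convergence) that the paper establishes for this regime.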
39. Pseudotransient Continuation for Solving Systems of Nonsmooth Equations with Inequality Constraints.
- Author
-
Chen, J. and Qi, L.
- Subjects
- *
NUMERICAL solutions to nonlinear evolution equations , *NONSMOOTH optimization , *MATHEMATICAL inequalities , *ALGORITHMS , *STOCHASTIC convergence , *CONTINUATION methods , *MATHEMATICAL optimization , *CONSTRAINT satisfaction - Abstract
This paper investigates a pseudotransient continuation algorithm for solving a system of nonsmooth equations with inequality constraints. We first transform the inequality constrained system of nonlinear equations to an augmented nonsmooth system, and then employ the pseudotransient continuation algorithm for solving the corresponding augmented nonsmooth system. The method gets its global convergence properties from the dynamics, and inherits its local convergence properties from the semismooth Newton method. Finally, we illustrate the behavior of our approach by some numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
40. A fast algorithm for the total variation model of image denoising.
- Author
-
Rong-Qing Jia and Hanqing Zhao
- Subjects
- *
ALGORITHMS , *PARTIAL differential equations , *STOCHASTIC convergence , *MATHEMATICAL optimization , *MATHEMATICAL models , *EQUATIONS - Abstract
The total variation model of Rudin, Osher, and Fatemi for image denoising is considered to be one of the best denoising models. In the past, its solutions were based on nonlinear partial differential equations and the resulting algorithms were very complicated. In this paper, we propose a fast algorithm for the solution of the total variation model. Our algorithm is very simple and does not involve partial differential equations. We also provide a rigorous proof for the convergence of our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
41. Uniform tree approximation by global optimization techniques.
- Author
-
Bernardo Llanas and Francisco Sáinz
- Subjects
- *
TREE graphs , *MATHEMATICAL optimization , *ALGORITHMS , *CONVEX functions , *APPROXIMATION theory , *STOCHASTIC convergence , *ERROR analysis in mathematics - Abstract
In this paper we present adaptive algorithms for solving the uniform continuous piecewise affine approximation (UCPA) problem in the case of Lipschitz or convex functions. The algorithms are based on the tree approximation (adaptive splitting) procedure. Uniform convergence is achieved by means of global optimization techniques for obtaining tight upper bounds of the local error estimate (splitting criterion). We give numerical results for the distance function to 2D and 3D geometric bodies. The resulting trees can retrieve the values of the target function quickly. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
42. Convergence analysis of hybrid cellular automata for topology optimization.
- Author
-
C. Penninger, L. Watson, A. Tovar, and J. Renaud
- Subjects
- *
STOCHASTIC convergence , *CELLULAR automata , *HYBRID systems , *TOPOLOGY , *MATHEMATICAL optimization , *ALGORITHMS , *BONE mechanics - Abstract
The hybrid cellular automaton (HCA) algorithm was inspired by the structural adaptation of bones to their ever-changing mechanical environment. This methodology has been shown to be an effective topology synthesis tool. In previous work, it has been observed that the convergence of the HCA methodology is affected by parameters of the algorithm. As a result, questions have been raised regarding the conditions under which HCA converges to an optimal design. The objective of this investigation is to examine the conditions that guarantee convergence to a Karush-Kuhn-Tucker (KKT) point. In this paper, it is shown that the HCA algorithm is a fixed point iterative scheme, and the previously reported KKT optimality conditions are corrected. To demonstrate the convergence properties of the HCA algorithm, a simple cantilevered beam example is utilized. Plots of the spectral radius for projections of the design space are used to show regions of guaranteed convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
43. On Generalization Performance and Non-Convex Optimization of Extended υ-Support Vector Machine.
- Author
-
Takeda, Akiko and Sugiyama, Masashi
- Subjects
- *
ALGORITHMS , *STOCHASTIC convergence , *FRACTIONS , *GENERALIZATION , *MATHEMATICAL optimization - Abstract
The υ-support vector classification (υ-SVC) algorithm was shown to work well and to provide intuitive interpretations, e.g., the parameter υ roughly specifies the fraction of support vectors. Although υ corresponds to a fraction, it cannot take the entire range between 0 and 1 in its original form. This problem was settled by a non-convex extension of υ-SVC, and the extended method was experimentally shown to generalize better than the original υ-SVC. However, its good generalization performance and the convergence properties of the optimization algorithm have not been studied yet. In this paper, we provide new theoretical insights into these issues and propose a novel υ-SVC algorithm that has guaranteed generalization performance and convergence properties. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
44. Hybrid Conjugate Gradient Algorithm for Unconstrained Optimization.
- Author
-
Andrei, N.
- Subjects
- *
MATHEMATICAL optimization , *ALGORITHMS , *STOCHASTIC convergence , *CONVEX functions , *MATHEMATICAL analysis , *MATHEMATICS - Abstract
In this paper a new hybrid conjugate gradient algorithm is proposed and analyzed. The parameter β_k is computed as a convex combination of the Polak-Ribière-Polyak and Dai-Yuan conjugate gradient parameters, i.e. β_k = (1 − θ_k)β_k^PRP + θ_k β_k^DY. The parameter θ_k in the convex combination is computed in such a way that the conjugacy condition is satisfied, independently of the line search. The line search uses the standard Wolfe conditions. The algorithm generates descent directions, and when the iterates jam the directions satisfy the sufficient descent condition. Numerical comparisons with conjugate gradient algorithms on a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that this hybrid computational scheme outperforms the known hybrid conjugate gradient algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
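The convex-combination idea in the hybrid CG abstract above can be illustrated on a convex quadratic with exact line search (θ is fixed at 0.5 here purely for illustration; in the paper θ_k is computed from the conjugacy condition, and Wolfe line searches replace the exact quadratic step below):

```python
# Hybrid conjugate gradient sketch for f(x) = 0.5*x'Ax - b'x: beta_k is a
# convex combination of the PRP and DY formulas, and the step length along
# each direction is the exact one-dimensional minimizer.

def hybrid_cg(A, b, theta=0.5, iters=30):
    n = len(b)
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    g = [gi - bi for gi, bi in zip(mv(x), b)]   # gradient A x - b
    d = [-v for v in g]
    for _ in range(iters):
        Ad = mv(d)
        dAd = dot(d, Ad)
        if dAd <= 0:
            break
        alpha = -dot(g, d) / dAd                 # exact minimizer along d
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi + alpha * v for gi, v in zip(g, Ad)]
        if dot(g_new, g_new) < 1e-24:            # converged
            break
        y = [a - c for a, c in zip(g_new, g)]
        beta_prp = dot(g_new, y) / dot(g, g)     # Polak-Ribiere-Polyak
        beta_dy = dot(g_new, g_new) / dot(d, y)  # Dai-Yuan
        beta = (1 - theta) * beta_prp + theta * beta_dy
        d = [-a + beta * di for a, di in zip(g_new, d)]
        g = g_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = hybrid_cg(A, b)   # iterates approach the solution of Ax = b
```

With exact line search on a quadratic, both the PRP and DY formulas coincide with the linear-CG parameter, so any convex combination reproduces linear CG; the hybrid choice matters on general nonlinear objectives.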
45. Variable-Number Sample-Path Optimization.
- Author
-
Deng, Geng and Ferris, Michael
- Subjects
- *
SIMULATION methods & models , *MATHEMATICAL optimization , *MATHEMATICAL sequences , *ALGORITHMS , *NUMERICAL analysis , *STOCHASTIC convergence - Abstract
The sample-path method is one of the most important tools in simulation-based optimization. The basic idea of the method is to approximate the expected simulation output by the average of sample observations with a common random number sequence. In this paper, we describe a new variant of Powell's unconstrained optimization by quadratic approximation (UOBYQA) method, which integrates a Bayesian variable-number sample-path (VNSP) scheme to choose an appropriate number of samples at each iteration. The statistically accurate scheme determines the number of simulation runs and guarantees the global convergence of the algorithm. The VNSP scheme saves a significant amount of simulation operations compared to general-purpose 'fixed-number' sample-path methods. We present numerical results based on the new algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
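The basic sample-path idea from the abstract above, before any UOBYQA or variable-number machinery, can be sketched as follows (the scenario model F(x, ξ) = (x − ξ)² with ξ ~ N(1, 1) and the fixed sample size are our own illustrative choices):

```python
import random

# Sample-path (sample-average) approximation: fix one set of random
# scenarios, reuse it for every candidate x (common random numbers), and
# minimize the resulting deterministic average, here by ternary search.
# The true minimizer of E[(x - xi)^2] is x = 1.

rng = random.Random(0)
scenarios = [rng.gauss(1.0, 1.0) for _ in range(1000)]

def g(x):                       # deterministic sample average
    return sum((x - xi) ** 2 for xi in scenarios) / len(scenarios)

lo, hi = -5.0, 5.0
for _ in range(100):            # ternary search on the convex average
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if g(m1) < g(m2):
        hi = m2
    else:
        lo = m1
x_star = (lo + hi) / 2          # close to the sample mean, hence close to 1
```

Because the same scenarios are reused for every candidate x, the average g is a genuinely deterministic function and any deterministic solver applies; the paper's VNSP scheme additionally adapts the number of scenarios from iteration to iteration.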
46. Optimization methods and stability of inclusions in Banach spaces.
- Author
-
Klatte, Diethard and Kummer, Bernd
- Subjects
- *
MATHEMATICAL optimization , *LIPSCHITZ spaces , *STOCHASTIC convergence , *ALGORITHMS , *MATHEMATICAL mappings , *APPROXIMATION theory - Abstract
Our paper deals with the interrelation of optimization methods and Lipschitz stability of multifunctions in arbitrary Banach spaces. Roughly speaking, we show that linear convergence of several first order methods and Lipschitz stability mean the same. Particularly, we characterize calmness and the Aubin property by uniformly (with respect to certain starting points) linear convergence of descent methods and approximate projection methods. So we obtain, e.g., solution methods (for solving equations or variational problems) which require calmness only. The relations of these methods to several known basic algorithms are discussed, and errors in the subroutines as well as deformations of the given mappings are permitted. We also recall how such deformations are related to standard algorithms like barrier, penalty or regularization methods in optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
47. A quadratically approximate framework for constrained optimization, global and local convergence.
- Author
-
Jian, Jin Bao
- Subjects
- *
ALGORITHMS , *STOCHASTIC convergence , *MATHEMATICAL optimization , *FUNCTIONAL analysis , *MATHEMATICAL mappings , *CONTINUOUS functions - Abstract
This paper presents a quadratically approximate algorithm framework (QAAF) for solving general constrained optimization problems, which solves, at each iteration, a subproblem with quadratic objective function and quadratic equality together with inequality constraints. The global convergence of the algorithm framework is presented under the Mangasarian-Fromovitz constraint qualification (MFCQ), and the conditions for superlinear and quadratic convergence of the algorithm framework are given under the MFCQ, the constant rank constraint qualification (CRCQ) as well as the strong second-order sufficiency conditions (SSOSC). As an incidental result, the definition of an approximate KKT point is brought forward, and the global convergence of a sequence of approximate KKT points is analysed. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
48. A Smoothing-Type Algorithm for Solving Linear Complementarity Problems with Strong Convergence Properties.
- Author
-
Huang, Zheng-Hai and Gu, Wei-Zhe
- Subjects
- *
MATHEMATICAL research , *LINEAR complementarity problem , *ALGORITHMS , *STOCHASTIC convergence , *MATHEMATICAL programming , *MATHEMATICAL optimization - Abstract
In this paper, we construct an augmented system of the standard monotone linear complementarity problem (LCP), and establish the relations between the augmented system and the LCP. We present a smoothing-type algorithm for solving the augmented system. The algorithm is shown to be globally convergent without assuming any prior knowledge of feasibility/infeasibility of the problem. In particular, if the LCP has a solution, then the algorithm either generates a maximal complementary solution of the LCP or correctly detects the solvability of the LCP; in the latter case, an existing smoothing-type algorithm can be directly applied to solve the LCP without any additional assumption, and it generates a maximal complementary solution of the LCP. If the LCP is infeasible, then the algorithm correctly detects the infeasibility of the LCP. To the best of our knowledge, such properties have not appeared in the existing literature for smoothing-type algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
49. Equivalence of Two Nondegeneracy Conditions for Semidefinite Programs.
- Author
-
Flegel, M. L. and Kanzow, C.
- Subjects
- *
STOCHASTIC convergence , *ALGORITHMS , *SENSITIVITY theory (Mathematics) , *MATRICES (Mathematics) , *EIGENVALUES , *VECTOR analysis , *MATHEMATICAL optimization - Abstract
Nondegeneracy assumptions are often needed in order to prove the local fast convergence of suitable algorithms as well as in the sensitivity analysis for semidefinite programs. One of the more standard nondegeneracy conditions is the geometric condition used by Alizadeh et al. (Math. Program. 77:111-128, 1997). On the other hand, Kanzow and Nagel (SIAM J. Optim. 15:654-672, 2005) recently introduced an algebraic condition that was used in order to prove, for the first time, the local quadratic convergence of a suitable algorithm for the solution of semidefinite programs without using the strict complementarity assumption. The aim of this paper is to show that these two nondegeneracy conditions are equivalent. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
50. A Sequential Quadratically Constrained Quadratic Programming Method of Feasible Directions.
- Author
-
Jian, Jin-bao, Hu, Qing-jie, Tang, Chun-ming, and Zheng, Hai-yan
- Subjects
- *
QUADRATIC programming , *CONSTRAINED optimization , *MATHEMATICAL optimization , *STOCHASTIC convergence , *QUADRATIC equations , *ALGORITHMS - Abstract
In this paper, a sequential quadratically constrained quadratic programming method of feasible directions is proposed for optimization problems with nonlinear inequality constraints. At each iteration of the proposed algorithm, a feasible direction of descent is obtained by solving only one subproblem, which consists of a convex quadratic objective function and simple quadratic inequality constraints without the second derivatives of the functions of the discussed problem; such a subproblem can be formulated as a second-order cone program solvable by interior point methods. To overcome the Maratos effect, an efficient higher-order correction direction is obtained by only one explicit computation formula. The algorithm is proved to be globally convergent and superlinearly convergent under some mild conditions without strict complementarity. Finally, some preliminary numerical results are reported. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF