169 results
Search Results
2. Dai-Kou type conjugate gradient methods with a line search only using gradient.
- Author
-
Huang, Yuanyuan and Liu, Changhe
- Subjects
CONJUGATE gradient methods, MATHEMATICAL optimization, STOCHASTIC convergence, NUMERICAL analysis, OPERATOR theory
- Abstract
In this paper, Dai-Kou type conjugate gradient methods are developed to solve the optimality condition of an unconstrained optimization problem; they use only gradient information and therefore have a broader application scope. Under suitable conditions, the developed methods are globally convergent. Numerical tests and comparisons with the gradient-only PRP+ conjugate gradient method show that the methods are efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
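As a concrete aside on the conjugate gradient iteration underlying such methods, here is a minimal sketch on a 2-by-2 quadratic; the Fletcher-Reeves β and the exact quadratic step size are illustrative stand-ins, not the Dai-Kou-type choices developed in the paper:

```python
import numpy as np

# Minimal nonlinear CG sketch on a convex quadratic f(x) = 0.5 x^T A x - b^T x.
# The Fletcher-Reeves beta and the exact minimizing step are illustrative
# stand-ins, not the Dai-Kou-type choices developed in the paper.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

x = np.zeros(2)
g = A @ x - b              # gradient of f at x
d = -g                     # first direction: steepest descent
for _ in range(10):
    if np.linalg.norm(g) < 1e-12:
        break
    alpha = (g @ g) / (d @ A @ d)     # exact minimizing step along d
    x = x + alpha * d
    g_new = A @ x - b
    beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves update
    d = -g_new + beta * d
    g = g_new

x_star = np.linalg.solve(A, b)        # true minimizer for comparison
```

For a quadratic in two variables, exact-step CG terminates in at most two iterations.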
3. A New Proof for Global Convergence of a Smoothing Homotopy Method for the Nonlinear Complementarity Problem.
- Author
-
Fan, Xiaona and Yan, Qinglun
- Subjects
HOMOTOPY equivalences, STOCHASTIC convergence, NONLINEAR equations, PARTIAL differential equations, NUMERICAL analysis
- Abstract
In this paper, we propose a new proof for a smoothing homotopy method, based on the Fischer–Burmeister function, for solving the nonlinear complementarity problem under a nonmonotone solution condition. Under this condition imposed on the mapping F, global convergence of a smooth curve determined by the homotopy equation is established for almost all initial points in R^n_+, so the method can actually be regarded as an interior point method. Moreover, if the set of admissible initial points is expanded to R^n, the global convergence of the homotopy method is ensured under a similar condition. The numerical results reported illustrate that the method is efficient for some nonlinear complementarity problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
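The Fischer–Burmeister function mentioned in this abstract has a compact closed form; a minimal sketch of its defining property (zeros exactly at complementary pairs):

```python
import math

# The Fischer-Burmeister NCP function: phi(a, b) = sqrt(a^2 + b^2) - a - b.
# phi(a, b) = 0 exactly when a >= 0, b >= 0 and a*b = 0, so the
# complementarity conditions become a square system of equations phi = 0.
def fischer_burmeister(a, b):
    return math.sqrt(a * a + b * b) - a - b

on_boundary_1 = fischer_burmeister(0.0, 3.0)   # complementary pair -> 0
on_boundary_2 = fischer_burmeister(2.0, 0.0)   # complementary pair -> 0
violating = fischer_burmeister(1.0, 1.0)       # a*b != 0 -> nonzero
```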
4. An approach based on dwindling filter method for positive definite generalized eigenvalue problem.
- Author
-
Arzani, F. and Peyghami, M. Reza
- Subjects
EIGENVALUES, STOCHASTIC convergence, MATRICES (Mathematics), NUMERICAL analysis, MATHEMATICAL optimization
- Abstract
In this paper, we present a new spectral residual method for solving large-scale positive definite generalized eigenvalue problems. The proposed algorithm is equipped with a dwindling multidimensional nonmonotone filter, in which the envelope dwindles as the step length decreases. We have also employed a relaxed nonmonotone line search technique in the structure of the algorithm which allows it to enjoy the nonmonotonicity from scratch. Under some mild and standard assumptions, the global convergence property of the proposed algorithm is established. An implementation of the new algorithm on some test problems shows the efficiency and effectiveness of the proposed algorithm in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
5. A non-monotone inexact regularized smoothing Newton method for solving nonlinear complementarity problems.
- Author
-
Zhu, Jianguang and Hao, Binbin
- Subjects
MONOTONE operators, SMOOTHING (Numerical analysis), NEWTON-Raphson method, NONLINEAR programming, STOCHASTIC convergence, ALGORITHMS, NUMERICAL analysis
- Abstract
In the paper [S.P. Rui and C.X. Xu, A smoothing inexact Newton method for nonlinear complementarity problems, J. Comput. Appl. Math. 233 (2010), pp. 2332–2338], the authors proposed an inexact smoothing Newton method for nonlinear complementarity problems (NCP) under the assumption that F is a uniform P function. In this paper, we present a non-monotone inexact regularized smoothing Newton method for solving the NCP based on the Fischer–Burmeister smoothing function. We show that the proposed algorithm is globally convergent and has a locally superlinear convergence rate under the weaker condition that F is a P0 function and the solution set of the NCP is non-empty and bounded. Numerical results are also reported for the test problems, which show the effectiveness of the proposed algorithm. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
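The smoothing of the Fischer–Burmeister function that methods of this kind rely on can be illustrated with one common variant (an assumed form for illustration; the paper's regularized version may differ):

```python
import math

# A standard smoothing of the Fischer-Burmeister function (an illustrative
# choice, not necessarily the paper's regularized variant):
#   phi_mu(a, b) = sqrt(a^2 + b^2 + 2*mu^2) - a - b,
# differentiable everywhere for mu > 0, recovering the nonsmooth
# phi_FB as mu -> 0.
def fb(a, b):
    return math.sqrt(a * a + b * b) - a - b

def fb_smoothed(a, b, mu):
    return math.sqrt(a * a + b * b + 2.0 * mu * mu) - a - b

gap = abs(fb_smoothed(1.0, 2.0, 1e-6) - fb(1.0, 2.0))  # tiny for small mu
at_origin = fb_smoothed(0.0, 0.0, 0.1)  # sqrt(2)*mu: smooth where fb has a kink
```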
6. Globally Convergent Broyden-Like Methods for Semismooth Equations and Applications to VIP, NCP and MCP.
- Author
-
Dong-Hui Li and Fukushima, Masao
- Subjects
SMOOTHING (Numerical analysis), NUMERICAL analysis, NEWTON-Raphson method, EQUATIONS, STOCHASTIC convergence, EQUALITY, MATHEMATICAL inequalities
- Abstract
In this paper, we propose a general smoothing Broyden-like quasi-Newton method for solving a class of nonsmooth equations. Under appropriate conditions, the proposed method converges to a solution of the equation globally and superlinearly. In particular, the proposed method provides the possibility of developing a quasi-Newton method that enjoys superlinear convergence even if strict complementarity fails to hold. We pay particular attention to semismooth equations arising from nonlinear complementarity problems, mixed complementarity problems and variational inequality problems. We show that under certain conditions, the related methods based on the perturbed Fischer-Burmeister function, Chen-Harker-Kanzow-Smale smoothing function and the Gabriel-Moré class of smoothing functions converge globally and superlinearly. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
7. A new nonmonotone filter Barzilai–Borwein method for solving unconstrained optimization problems.
- Author
-
Arzani, F. and Peyghami, M. Reza
- Subjects
MONOTONE operators, MATHEMATICAL optimization, ALGORITHMS, STOCHASTIC convergence, NUMERICAL analysis, MATHEMATICAL analysis
- Abstract
In this paper, a finite filter is used in the structure of the Barzilai–Borwein (BB) gradient method in order to propose a new modified BB algorithm for solving large-scale unconstrained optimization problems. Our algorithm is equipped with a relaxed nonmonotone line search technique which allows it to enjoy nonmonotonicity from scratch. Under some suitable conditions, the global convergence property of the new proposed algorithm is established. Numerical results on some test problems from the CUTEr library show the efficiency and effectiveness of the new algorithm in practice too. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
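The Barzilai–Borwein step size at the core of this entry has a two-line recipe; the sketch below applies plain BB1 to a diagonal quadratic and omits the paper's finite filter and relaxed nonmonotone line search:

```python
import numpy as np

# The Barzilai-Borwein step on a convex quadratic f(x) = 0.5 x^T A x - b^T x.
# Only the BB1 step-size recipe is shown; the finite filter and relaxed
# nonmonotone line search of the paper are omitted from this sketch.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])

x = np.array([5.0, 5.0])
g = A @ x - b
alpha = 1e-3                          # bootstrap step for the first iteration
for _ in range(100):
    if np.linalg.norm(g) < 1e-10:
        break
    x_new = x - alpha * g
    g_new = A @ x_new - b
    s, y = x_new - x, g_new - g
    alpha = (s @ s) / (s @ y)         # BB1 step size: s^T s / s^T y
    x, g = x_new, g_new

x_star = np.linalg.solve(A, b)
```

For strictly convex quadratics the BB iteration is known to converge without any line search, which is what makes nonmonotone safeguards attractive in the general case.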
8. A New Nonmonotone Adaptive Retrospective Trust Region Method for Unconstrained Optimization Problems.
- Author
-
Tarzanagh, D., Peyghami, M., and Bastin, F.
- Subjects
MATHEMATICAL optimization, STOCHASTIC convergence, NUMERICAL analysis, MONOTONE operators, CONVEX functions
- Abstract
In this paper, we propose a new nonmonotone adaptive retrospective Trust Region (TR) method for solving unconstrained optimization problems. Inspired by the retrospective ratio proposed in Bastin et al. (Math Program Ser A 123:395-418), a new nonmonotone TR ratio is introduced based on a convex combination of the nonmonotone classical and retrospective ratios. Depending on the value of the new ratio, the TR radius is updated adaptively by a variant of the rule proposed in Shi and Guo (J Comput Appl Math 213:509-520). Global convergence of the new algorithm, as well as its superlinear convergence rate, is established under some standard assumptions. Numerical results on some test problems show the efficiency and effectiveness of the new method in practice, too. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
9. Solving Unconstrained Optimization with a New Type of Conjugate Gradient Method.
- Author
-
Shoid, Syazni, Rivaie, Mohd, Mamat, Mustafa, and Mohd, Ismail
- Subjects
CONSTRAINT satisfaction, MATHEMATICAL optimization, CONJUGATE gradient methods, PROBLEM solving, STOCHASTIC convergence, NUMERICAL analysis
- Abstract
Conjugate gradient (CG) methods have been widely used as schemes to solve large-scale unconstrained optimization problems. Numerous studies and modifications have been made recently to improve this method. In this paper, we propose a new type of CG coefficient (β_k) obtained by modifying the Polak and Ribière (PR) method. The new β_k is shown to possess global convergence properties under exact line searches. Performance comparisons are made with the four most common choices of β_k proposed by earlier researchers. Numerical results also show that the new β_k performs better. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
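The Polak–Ribière coefficient that this entry's new β_k modifies is a one-line formula; a minimal sketch of it and of its automatic-restart property:

```python
import numpy as np

# The Polak-Ribiere (PR) coefficient that the proposed beta_k modifies:
#   beta_PRP = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2.
def beta_prp(g_new, g_old):
    return g_new @ (g_new - g_old) / (g_old @ g_old)

g_old = np.array([1.0, 2.0])
g_new = np.array([0.5, -0.4])
some_beta = beta_prp(g_new, g_old)

# When consecutive gradients barely change, beta_PRP ~ 0 and the next
# direction collapses toward steepest descent -- the automatic restart
# property that makes PR-type methods attractive to modify.
restart_beta = beta_prp(g_old, g_old)
```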
10. Two spectral gradient projection methods for constrained equations and their linear convergence rate.
- Author
-
Liu, Jing and Duan, Yongrui
- Subjects
LINEAR equations, STOCHASTIC convergence, MATHEMATICAL optimization, CONJUGATE gradient methods, MATHEMATICAL mappings, LIPSCHITZ spaces, NUMERICAL analysis
- Abstract
Due to its simplicity and numerical efficiency for unconstrained optimization problems, the spectral gradient method has received more and more attention in recent years. In this paper, two spectral gradient projection methods for constrained equations are proposed; they combine the well-known spectral gradient method with the hyperplane projection method. The new methods are not only derivative-free but also completely matrix-free, and consequently they can be applied to solve large-scale constrained equations. Under the condition that the underlying mapping of the constrained equations is Lipschitz continuous or strongly monotone, we establish the global convergence of the new methods. Compared with the existing gradient methods for solving such problems, the new methods possess a linear convergence rate under some error bound conditions. Furthermore, a relaxation factor γ is included in the update step to accelerate convergence. Preliminary numerical results show that they are efficient and promising in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
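The hyperplane projection step (due to Solodov and Svaiter) that these methods combine with the spectral gradient step can be sketched directly; the mapping F below is an arbitrary illustrative choice:

```python
import numpy as np

# The hyperplane projection step used by methods of this type: from a trial
# point z = x - alpha * F(x), project x onto the hyperplane
# H = { v : F(z)^T (v - z) = 0 } separating x from the solution set.
def project_onto_hyperplane(x, z, Fz):
    return x - (Fz @ (x - z)) / (Fz @ Fz) * Fz

F = lambda v: v + np.sin(v)     # an illustrative monotone mapping
x = np.array([2.0, -1.0])
z = x - 0.5 * F(x)              # trial point from a gradient-type step
x_next = project_onto_hyperplane(x, z, F(z))
residual = F(z) @ (x_next - z)  # 0: the projected point lies on H
```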
11. Three modified Polak-Ribière-Polyak conjugate gradient methods with sufficient descent property.
- Author
-
Sun, Min and Liu, Jing
- Subjects
CONJUGATE gradient methods, STOCHASTIC convergence, MATHEMATICAL optimization, NUMERICAL analysis, MATHEMATICAL models
- Abstract
In this paper, three modified Polak-Ribière-Polyak (PRP) conjugate gradient methods for unconstrained optimization are proposed. They are based on the two-term PRP method proposed by Cheng (Numer. Funct. Anal. Optim. 28:1217-1230, 2007), the three-term PRP method proposed by Zhang et al. (IMA J. Numer. Anal. 26:629-640, 2006), and the descent PRP method proposed by Yu et al. (Optim. Methods Softw. 23:275-293, 2008). These modified methods possess the sufficient descent property without any line searches. Moreover, if the exact line search is used, they reduce to the classical PRP method. Under standard assumptions, we show that these three methods converge globally with a Wolfe line search. We also report some numerical results to show the efficiency of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
12. A globally convergent damped Gauss-Newton method for solving the extended linear complementarity problem.
- Author
-
Na Huang and Changfeng Ma
- Subjects
GAUSS-Newton method, STOCHASTIC convergence, LINEAR complementarity problem, NUMERICAL analysis, PARAMETER estimation
- Abstract
The extended linear complementarity problem (denoted by XLCP), of which the linear and horizontal linear complementarity problems are two special cases, can be reformulated as the solution of a nonsmooth system of equations. By means of the symmetrically perturbed smoothing Fischer-Burmeister function, the XLCP is approximated by a family of parameterized smooth optimization problems. A smoothing damped Gauss-Newton method is designed for solving the XLCP. The proposed algorithm is proved to be globally convergent under suitable assumptions. Some numerical results are reported in the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
13. Fast convergence of an inexact interior point method for horizontal complementarity problems.
- Author
-
Arias, C. A. and Martínez, J. M.
- Subjects
STOCHASTIC convergence, ITERATIVE methods (Mathematics), NONLINEAR analysis, LINEAR equations, NUMERICAL analysis
- Abstract
In Andreani et al. (Numer. Algorithms 57:457-485, 2011), an interior point method for the horizontal nonlinear complementarity problem was introduced. This method was based on inexact Newton directions and safeguarding projected gradient iterations. Global convergence, in the sense that every cluster point is stationary, was proved in Andreani et al. (Numer. Algorithms 57:457-485, 2011). In Andreani et al. (Eur. J. Oper. Res. 249:41-54, 2016), local fast convergence was proved for the underdetermined problem in the case that the Newtonian directions are computed exactly. In the present paper, it will be proved that the method introduced in Andreani et al. (Numer. Algorithms 57:457-485, 2011) enjoys fast (linear, superlinear, or quadratic) convergence in the case of truly inexact Newton computations. Some numerical experiments will illustrate the accuracy of the convergence theory. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
14. A derivative-free three-term projection algorithm involving spectral quotient for solving nonlinear monotone equations.
- Author
-
Peiting, Gao and Chuanjiang, He
- Subjects
ITERATIVE methods (Mathematics), NONLINEAR systems, STOCHASTIC convergence, NONLINEAR analysis, NUMERICAL analysis
- Abstract
In this paper, we develop a three-term conjugate gradient method involving a spectral quotient, which always satisfies the famous Dai-Liao conjugacy condition and the quasi-Newton secant equation, independently of any line search. This new three-term conjugate gradient method can be regarded as a variant of the memoryless Broyden-Fletcher-Goldfarb-Shanno quasi-Newton method with regard to the spectral quotient. By combining this method with the projection technique proposed by Solodov and Svaiter in 1998, we establish a derivative-free three-term projection algorithm for dealing with large-scale nonlinear monotone systems of equations. We prove the global convergence of the algorithm and obtain the R-linear convergence rate under some mild conditions. Numerical results show that our projection algorithm is effective and robust, and compares favourably with the TTDFP algorithm proposed by Liu and Li [A three-term derivative-free projection method for nonlinear monotone system of equations. Calcolo. 2016;53:427-450]. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
15. Globally convergent Jacobi methods for positive definite matrix pairs.
- Author
-
Hari, Vjeran
- Subjects
STOCHASTIC convergence, JACOBI method, MATRICES (Mathematics), MATHEMATICAL symmetry, NUMERICAL analysis
- Abstract
The paper derives and investigates the Jacobi methods for the generalized eigenvalue problem Ax = λBx, where A is a symmetric and B is a symmetric positive definite matrix. The methods first “normalize” B to have the unit diagonal and then maintain that property during the iterative process. The global convergence is proved for all such methods. That result is obtained for the large class of generalized serial strategies from Hari and Begović Kovač (Trans. Numer. Anal. (ETNA) 47, 107-147, 2017). Preliminary numerical tests confirm a high relative accuracy of some of those methods, provided that both matrices are positive definite and the spectral condition numbers of Δ_A A Δ_A and Δ_B B Δ_B are small, for some nonsingular diagonal matrices Δ_A and Δ_B. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
16. A subspace conjugate gradient algorithm for large-scale unconstrained optimization.
- Author
-
Yang, Yueting, Chen, Yuting, and Lu, Yunlong
- Subjects
SUBSPACES (Mathematics), CONJUGATE gradient methods, MATHEMATICAL optimization, STOCHASTIC convergence, NUMERICAL analysis
- Abstract
In this paper, a subspace three-term conjugate gradient method is proposed. The search directions are generated by minimizing a quadratic approximation of the objective function on a subspace, and they satisfy the descent condition and the Dai-Liao conjugacy condition. At each iteration, the subspace is spanned by the current negative gradient and the latest two search directions, so its dimension is 2 or 3. Under some appropriate assumptions, the global convergence result of the proposed method is established. Numerical experiments show the proposed method is competitive on a set of 80 unconstrained optimization test problems. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
17. An improved trust region method for unconstrained optimization.
- Author
-
Liu, Jun
- Subjects
RADIUS (Geometry), HESSIAN matrices, NUMERICAL analysis, ITERATIVE methods (Mathematics), ALGORITHMS, STOCHASTIC convergence
- Abstract
In this paper, a new trust region method for unconstrained optimization is proposed in which the trust region radius is adjusted adaptively: a convex combination of the Hessian matrices at the previous and current iterations is used to define a suitable trust region radius at each iteration. The global, superlinear and quadratic convergence results of the algorithm are established under reasonable assumptions. Finally, some numerical results are given. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
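For orientation, the conventional trust-region ratio and radius update that adaptive-radius variants like this one modify look as follows; the 0.25/0.75 thresholds and the update factors are textbook defaults, not this paper's rule:

```python
# Conventional trust-region bookkeeping behind adaptive-radius variants:
# rho compares the actual reduction of f with the reduction predicted by the
# quadratic model, and the radius grows or shrinks accordingly. The 0.25/0.75
# thresholds and the factors are textbook defaults, not the paper's rule.
def tr_ratio(f_old, f_new, pred_reduction):
    return (f_old - f_new) / pred_reduction

def update_radius(rho, radius, step_norm, radius_max=10.0):
    if rho < 0.25:
        return 0.25 * step_norm               # poor model: shrink
    if rho > 0.75 and step_norm == radius:
        return min(2.0 * radius, radius_max)  # good model, full step: expand
    return radius                             # otherwise keep the radius
```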
18. A NONLINEAR CONJUGATE GRADIENT ALGORITHM WITH AN OPTIMAL PROPERTY AND AN IMPROVED WOLFE LINE SEARCH.
- Author
-
YU-HONG DAI and CAI-XIA KOU
- Subjects
CONJUGATE gradient methods, NUMERICAL analysis, STOCHASTIC convergence, ITERATIVE methods (Mathematics), QUADRATIC programming
- Abstract
In this paper, we seek the conjugate gradient direction closest to the direction of the scaled memoryless BFGS method and propose a family of conjugate gradient methods for unconstrained optimization. An improved Wolfe line search is also proposed, which can avoid a numerical drawback of the original Wolfe line search and guarantee the global convergence of the conjugate gradient method under mild conditions. To accelerate the algorithm, we introduce adaptive restarts along negative gradients based on the extent to which the function approximates some quadratic function during previous iterations. Numerical experiments with the CUTEr collection show that the proposed algorithm is promising. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
19. An Active Set Modified Polak-Ribière-Polyak Method for Large-Scale Nonlinear Bound Constrained Optimization.
- Author
-
Cheng, Wanyou and Li, Donghui
- Subjects
CONSTRAINED optimization, MATHEMATICAL optimization, STOCHASTIC convergence, ITERATIVE methods (Mathematics), NUMERICAL analysis, NONLINEAR equations
- Abstract
In this paper, we propose an active set modified Polak-Ribière-Polyak method for solving large-scale optimization with simple bounds on the variables. The active set is estimated by an identification technique at each iteration, and the recently developed modified Polak-Ribière-Polyak method is used to update the variables with indices outside the active set. Under appropriate conditions, we show that the proposed method is globally convergent. Numerical experiments are presented using box constrained problems from the CUTEr test problem library. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
20. A hybrid ODE-based method for unconstrained optimization problems.
- Author
-
Ou, Yi-gui and Wang, Guan-shu
- Subjects
MATHEMATICAL optimization, SUBSPACES (Mathematics), NUMERICAL analysis, ALGORITHMS, STOCHASTIC convergence
- Abstract
This paper presents a hybrid ODE-based method for unconstrained optimization problems, which combines the idea of IMPBOT with the subspace technique and a fixed step length. One characteristic of this method is that at each iteration a lower-dimensional system of linear equations is solved only once to obtain a trial step. Another is that, when a trial step is not accepted, the method minimizes a convex overestimation instead of performing a line search to compute a step length. Under some reasonable assumptions, the method is proven to be globally convergent. Numerical results show the efficiency of the proposed method in practical computations, especially for solving small-scale unconstrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
21. A new trust region method with adaptive radius for unconstrained optimization.
- Author
-
Cui, Zhaocheng and Wu, Boying
- Subjects
MATHEMATICAL optimization, STOCHASTIC convergence, ITERATIVE methods (Mathematics), GLOBAL analysis (Mathematics), PROBLEM solving, NUMERICAL analysis, MATHEMATICAL analysis
- Abstract
In this paper, we propose a new adaptive trust region method for unconstrained optimization problems. In the new method, we use the previous and current iterative information together with a new update rule to define the trust region radius at each iteration. The global and superlinear convergence properties of the method are established under reasonable assumptions. Preliminary numerical results show that the new method is efficient and attractive for unconstrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
22. The global convergence of a descent PRP conjugate gradient method.
- Author
-
Min Li, Heying Feng, and Jianguo Liu
- Subjects
CONJUGATE gradient methods, STOCHASTIC convergence, NUMERICAL functions, THEORY of descent (Mathematics), MATHEMATICAL bounds, NUMERICAL analysis, ALGORITHMS
- Abstract
Recently, Yu and Guan proposed a modified PRP method (called the DPRP method) which can generate sufficient descent directions for the objective function. They established the global convergence of the DPRP method under the assumption that the stepsize is bounded away from zero. In this paper, without requiring a positive lower bound on the stepsize, we prove that the DPRP method is globally convergent with a modified strong Wolfe line search. Moreover, we establish the global convergence of the DPRP method with an Armijo-type line search. The numerical results show that the proposed algorithms are efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2012
23. A new semismooth Newton method for NCPs based on the penalized KK function.
- Author
-
Zhu, Jianguang, Liu, Hongwei, and Hao, Binbin
- Subjects
SMOOTHNESS of functions, NEWTON-Raphson method, MATHEMATICAL programming, STOCHASTIC convergence, QUADRATIC equations, GLOBAL analysis (Mathematics), NUMERICAL analysis
- Abstract
In this paper, based on the Kanzow-Kleinmichel (KK) function, we introduce a new nonlinear complementarity problem (NCP) function: the penalized KK function. We show that this function possesses desirable properties similar to those of the KK function. Based on this new NCP function, we reformulate the NCP as a system of semismooth equations. We also propose a new semismooth Levenberg–Marquardt method, employing both trust region techniques and line searches, to solve the system of semismooth equations. The global and quadratic convergence properties can be established under very mild assumptions. Numerical results show the effectiveness of the proposed algorithm and also indicate the superior behaviour of the proposed new NCP function. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
24. A globally convergent non-interior point homotopy method for solving variational inequalities.
- Author
-
Shang, Yufeng, Xu, Qing, and Yu, Bo
- Subjects
STOCHASTIC convergence, INTERIOR-point methods, HOMOTOPY theory, VARIATIONAL inequalities (Mathematics), SET theory, GLOBAL analysis (Mathematics), NUMERICAL analysis
- Abstract
In this paper, to solve the equivalent Karush–Kuhn–Tucker system of a variational inequality problem (VIP) on an unbounded set, a new homotopy is constructed. Existence and global convergence of the homotopy path, starting at almost any point, which need not be an interior point, are proved. Numerically tracing the homotopy path gives a non-interior point homotopy method for solving the VIP. Numerical tests are given to show the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
25. Solving generalized Nash equilibrium problem with equality and inequality constraints.
- Author
-
Xu, Qing, Dai, Xi, and Yu, Bo
- Subjects
NASH equilibrium, HOMOTOPY theory, MATHEMATICAL mappings, STOCHASTIC convergence, NUMERICAL analysis
- Abstract
In this paper, a homotopy mapping for a generalized Nash equilibrium problem (GNEP) with equality and inequality constraints is constructed. By proving the existence of a homotopy path, a globally convergent homotopy method is developed to find a solution of the GNEP. Several numerical examples are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
26. Superlinearly Convergent Trust-Region Method without the Assumption of Positive-Definite Hessian.
- Author
-
ZHANG, J. L., WANG, Y., and ZHANG, X. S.
- Subjects
NONLINEAR programming, STOCHASTIC convergence, MATHEMATICAL functions, ALGORITHMS, NUMERICAL analysis, MATHEMATICAL analysis, MATHEMATICAL programming, LINEAR programming, NEWTON-Raphson method
- Abstract
In this paper, we reinvestigate the trust-region method by reformulating its subproblem: the trust-region radius is guided by gradient information at the current iteration and is self-adaptively adjusted. A trust-region algorithm based on the proposed subproblem is proved to be globally convergent. Moreover, the superlinear convergence of the new algorithm is shown without the condition that the Hessian of the objective function at the solution be positive definite. Preliminary numerical results indicate that the performance of the new method is notable. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
27. GLOBAL CONVERGENCE OF SHORTEST-RESIDUAL FAMILY OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH.
- Author
-
Xia Li and Xiongda Chen
- Subjects
CONJUGATE gradient methods, ITERATIVE methods (Mathematics), STOCHASTIC convergence, MATHEMATICAL functions, NUMERICAL analysis, OPERATIONS research
- Abstract
The shortest-residual family of conjugate gradient methods was first proposed by Hestenes and was studied by Pytlak, and Dai and Yuan. Recently, a no-line-search scheme in conjugate gradient methods was given by Sun and Zhang, and Chen and Sun. In this paper, we show the global convergence of two shortest-residual conjugate gradient methods (FRSR and PRPSR) without line search. In addition, computational results are presented to show that the methods with line search have similar numerical behavior to the methods without line search. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
28. ON THE FINITE CONVERGENCE OF THE DOUGLAS-RACHFORD ALGORITHM FOR SOLVING (NOT NECESSARILY CONVEX) FEASIBILITY PROBLEMS IN EUCLIDEAN SPACES.
- Author
-
BAUSCHKE, HEINZ H. and DAO, MINH N.
- Subjects
FEASIBILITY problem (Mathematical optimization), STOCHASTIC convergence, ALGORITHMS, PROBLEM solving, EUCLIDEAN metric, NUMERICAL analysis
- Abstract
Solving feasibility problems is a central task in mathematics and the applied sciences. One particularly successful method is the Douglas-Rachford algorithm. In this paper, we provide many new conditions sufficient for finite convergence. Numerous examples illustrate our results. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
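The Douglas–Rachford iteration itself is one line per step; below is a toy two-line feasibility problem in R^2 (an illustrative instance, not one from the paper):

```python
import numpy as np

# Douglas-Rachford iteration x+ = x + P_B(2 P_A(x) - x) - P_A(x) for a toy
# feasibility problem in R^2: A is the x-axis, B the diagonal {y = x};
# the two sets intersect only at the origin.
def P_A(v):                       # projection onto the x-axis
    return np.array([v[0], 0.0])

def P_B(v):                       # projection onto the line y = x
    m = (v[0] + v[1]) / 2.0
    return np.array([m, m])

def dr_step(x):
    pa = P_A(x)
    return x + P_B(2.0 * pa - x) - pa

x = np.array([3.0, 4.0])
for _ in range(200):
    x = dr_step(x)
shadow = P_A(x)                   # the shadow sequence approaches the intersection
```

For two lines through the origin the convergence is linear (this is the convex case; the paper's interest is in finite convergence and nonconvex sets).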
29. Algebraic rules for computing the regularization parameter of the Levenberg-Marquardt method.
- Author
-
Karas, Elizabeth, Santos, Sandra, and Svaiter, Benar
- Subjects
LEAST squares, NONLINEAR analysis, STOCHASTIC convergence, NUMERICAL analysis, MATHEMATICAL statistics
- Abstract
This paper presents a class of Levenberg-Marquardt methods for solving the nonlinear least-squares problem. Explicit algebraic rules for computing the regularization parameter are devised. In addition, convergence properties of this class of methods are analyzed. We prove that all accumulation points of the generated sequence are stationary. Moreover, q-quadratic convergence for the zero-residual problem is obtained under an error bound condition. Illustrative numerical experiments with encouraging results are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
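A generic Levenberg–Marquardt step with an algebraic rule for the regularization parameter can be sketched as follows; the rule μ = ‖r‖² and the toy residual function are illustrative assumptions, not the rules devised in the paper:

```python
import numpy as np

# A Levenberg-Marquardt iteration with an algebraic rule for the
# regularization parameter (mu = ||r||^2, a common zero-residual choice
# used here for illustration): solve (J^T J + mu I) step = -J^T r.
def r(x):                                  # toy residual vector, zero at (1, 1)
    return np.array([x[0] - 1.0, x[0] * x[1] - 1.0])

def jac(x):                                # Jacobian of r
    return np.array([[1.0, 0.0], [x[1], x[0]]])

x = np.array([2.0, 2.0])
for _ in range(30):
    rx, Jx = r(x), jac(x)
    mu = rx @ rx                           # regularization shrinks with the residual
    step = np.linalg.solve(Jx.T @ Jx + mu * np.eye(2), -Jx.T @ rx)
    x = x + step
```

As the residual vanishes, μ → 0 and the step approaches a Gauss-Newton step, which is what yields fast local convergence on zero-residual problems.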
30. A globally convergent gradient-like method based on the Armijo line search.
- Author
-
Kamandi, Ahmad and Amini, Keyvan
- Subjects
GRADIENT-index devices, MATHEMATICAL optimization, STOCHASTIC convergence, ALGORITHMS, NUMERICAL analysis
- Abstract
In this paper, a new conjugate gradient-like algorithm is proposed to solve unconstrained optimization problems. The step directions generated by the new algorithm satisfy a sufficient descent condition independent of the line search. The global convergence of the new algorithm, with the Armijo backtracking line search, is proved. Numerical experiments indicate the efficiency and robustness of the new algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
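The Armijo backtracking line search named in this entry follows a short loop; the parameter values below are conventional defaults:

```python
import numpy as np

# Armijo backtracking: halve the trial step until the sufficient-decrease
# condition f(x + t d) <= f(x) + c t g^T d holds, where c in (0, 1) and
# d is a descent direction. Parameter values are conventional defaults.
def armijo_step(f, grad, x, d, c=1e-4, shrink=0.5, t0=1.0):
    t = t0
    fx, slope = f(x), grad(x) @ d
    while f(x + t * d) > fx + c * t * slope:
        t *= shrink
    return t

f = lambda v: float(v @ v)      # f(x) = ||x||^2
grad = lambda v: 2.0 * v
x = np.array([3.0, 4.0])
d = -grad(x)                    # steepest-descent direction
t = armijo_step(f, grad, x, d)
f_new = f(x + t * d)
```

On this example the full step t = 1 overshoots the minimizer, so one halving is needed before the sufficient-decrease test passes.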
31. A dai-liao hybrid conjugate gradient method for unconstrained optimization.
- Author
-
Salihu, Nasiru, Odekunle, Mathew Remilekun, Waziri, Mohammed Yusuf, Halilu, Abubakar Sani, and Salihu, Suraj
- Subjects
CONJUGATE gradient methods, MATHEMATICAL optimization, NUMERICAL analysis, STOCHASTIC convergence, WIRELESS sensor networks
- Abstract
One of today's best performing CG methods is the Dai-Liao (DL) method, which depends on a non-negative parameter t and on conjugacy conditions for its computation. Although numerous optimal selections of the parameter have been suggested, the best choice of t remains a subject of investigation. The pure conjugacy condition assumes an exact line search, whereas practical computation relies on an inexact line search to find the step size. To avoid this drawback, Dai and Liao replaced the earlier conjugacy condition with an extended conjugacy condition. This paper therefore suggests a new hybrid CG method that combines the strengths of the Liu-Storey and Conjugate Descent CG methods while retaining an optimal choice of the Dai-Liao parameter t. The theoretical analysis indicates that the search direction of the new CG scheme is descent and satisfies the sufficient descent condition when the iterates jam under the strong Wolfe line search. The algorithm is shown to converge globally under standard assumptions, and numerical experimentation demonstrates that the proposed method is more robust and promising than some known methods, using the performance profile of Dolan and Moré on 250 unconstrained problems. Numerical assessment of the tested CG algorithms on sparse signal reconstruction and image restoration in compressive sensing, file restoration, image and video coding, and other applications shows that these CG schemes are comparable and can be applied in different fields, such as temperature, fire, seismic and humidity sensing in forests, using wireless sensor network techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
32. A sufficient descent Liu–Storey conjugate gradient method and its global convergence.
- Author
-
Li, Min and Qu, Aiping
- Subjects
CONJUGATE gradient methods, STOCHASTIC convergence, NONLINEAR theories, MATHEMATICAL optimization, MATHEMATICAL proofs, NUMERICAL analysis
- Abstract
In this paper, a modified Liu–Storey conjugate gradient method is proposed. The method can generate sufficient descent directions for non-linear unconstrained optimization problems. A global convergence result is established when the line search fulfils the strong Wolfe conditions. Moreover, the linear convergence rate of the method is proved. Extensive numerical results show that the proposed method is efficient for the unconstrained optimization problems in the CUTEr library. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
33. A Smoothing Newton Algorithm for a Class of Non-monotonic Symmetric Cone Linear Complementarity Problems.
- Author
-
Lu, Nan and Huang, Zheng-Hai
- Subjects
ALGORITHMS ,COMPLEMENTARITY constraints (Mathematics) ,SMOOTHING (Numerical analysis) ,STOCHASTIC convergence ,NUMERICAL analysis - Abstract
Recently, the study of symmetric cone complementarity problems has been a hot topic in the literature. Many numerical methods have been proposed for solving such a class of problems, but the problems they address are generally monotone. In this paper, we consider symmetric cone linear complementarity problems with a class of non-monotonic transformations. A smoothing Newton algorithm is extended to solve this class of non-monotonic symmetric cone linear complementarity problems, and the algorithm is proved to be well-defined. In particular, we show that the algorithm is globally and locally quadratically convergent under mild assumptions. The preliminary numerical results are also reported. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
34. A nonmonotone trust region method with new inexact line search for unconstrained optimization.
- Author
-
Liu, Jinghui and Ma, Changfeng
- Subjects
MATHEMATICAL optimization ,STOCHASTIC convergence ,ALGORITHMS ,APPROXIMATION theory ,NUMERICAL analysis - Abstract
In this paper, a new nonmonotone inexact line search rule is proposed and applied to the trust region method for unconstrained optimization problems. In our line search rule, the nonmonotone term is a convex combination of the previous nonmonotone term and the current objective function value, instead of just the current objective function value. We can obtain a larger stepsize in each line search procedure and retain nonmonotonicity when incorporating the nonmonotone term into the trust region method. Unlike the traditional trust region method, the algorithm avoids resolving the subproblem if a trial step is not accepted. Under suitable conditions, global convergence is established. Numerical results show that the new method is effective for solving unconstrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
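The convex-combination nonmonotone term described in the abstract above can be sketched as a simple recursion. The weight eta and the rule C_k = eta*C_{k-1} + (1-eta)*f_k are assumptions in the spirit of the abstract, not the authors' exact formula.

```python
# Nonmonotone reference values built as a convex combination of the
# previous reference and the current objective value -- a sketch in the
# spirit of the abstract; the weight eta is a hypothetical parameter.
def nonmonotone_refs(f_values, eta=0.85):
    refs = [f_values[0]]                                  # C_0 = f(x_0)
    for fk in f_values[1:]:
        refs.append(eta * refs[-1] + (1.0 - eta) * fk)    # C_k
    return refs

# f dips to 4 then rises to 6; the reference value 6.5 would still accept
# the step -- exactly the extra slack a nonmonotone rule provides.
refs = nonmonotone_refs([10.0, 4.0, 6.0], eta=0.5)
```

A line search then compares trial values against refs[k] rather than f(x_k), which permits occasional increases in the objective while keeping the reference sequence under control.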
35. Carleman estimates for global uniqueness, stability and numerical methods for coefficient inverse problems.
- Author
-
Klibanov, Michael V.
- Subjects
CARLEMAN theorem ,INVERSE problems ,MATHEMATICS ,QUASIANALYTIC functions ,STOCHASTIC convergence ,NUMERICAL analysis ,ALGORITHMS - Abstract
This is a review paper of the role of Carleman estimates in the theory of multidimensional coefficient inverse problems since the first inception of this idea in 1981. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
36. Smoothing SQP algorithm for semismooth equations with box constraints.
- Author
-
Wang, Changyu, Liu, Qian, and Ma, Cheng
- Subjects
SEMISMOOTH Newton methods ,ITERATIVE methods (Mathematics) ,STOCHASTIC convergence ,MATHEMATICS theorems ,NUMERICAL analysis - Abstract
In this paper, in order to solve semismooth equations with box constraints, we present a class of smoothing SQP algorithms using regularized smoothing techniques. The main difference between our algorithm and the related literature is that the objective function arising from the equation system is not required to be continuously differentiable. Under appropriate conditions, we prove a global convergence theorem: any accumulation point of the iteration sequence generated by the proposed algorithm is a KKT point of the corresponding optimization problem with box constraints. In particular, if an accumulation point of the iteration sequence is a vertex of the box constraints and its corresponding KKT multipliers satisfy strict complementarity conditions, the gradient projection of the iteration sequence terminates finitely at this vertex. Furthermore, under local error bound conditions, which are weaker than BD-regularity conditions, we show that the proposed algorithm converges superlinearly. Finally, promising numerical results demonstrate that the proposed smoothing SQP algorithm is an effective method. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
37. A smoothing Levenberg-Marquardt method for generalized semi-infinite programming.
- Author
-
Liu, Weiai and Wang, Changyu
- Subjects
COMPUTER programming ,SMOOTHNESS of functions ,GENERALIZATION ,STOCHASTIC convergence ,MATHEMATICAL bounds ,NUMERICAL analysis - Abstract
This paper is concerned with a numerical method for the generalized semi-infinite programming problem. We first reformulate the generalized semi-infinite programming problem into a KKT system of fixed dimension. Then, using the perturbed Fischer-Burmeister function, we present a smoothing Levenberg-Marquardt method for solving this system of semismooth equations and show its global convergence under common conditions. Furthermore, the local superlinear convergence of the method is proved under a local error bound condition. Finally, numerical results are given. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
38. An ADM-based splitting method for separable convex programming.
- Author
-
Han, Deren, Yuan, Xiaoming, Zhang, Wenxing, and Cai, Xingju
- Subjects
SPLITTING extrapolation method ,CONVEX programming ,STOCHASTIC convergence ,NUMERICAL analysis ,IMAGE processing ,MATHEMATICAL models - Abstract
We consider the convex minimization problem with linear constraints and a block-separable objective function represented as the sum of three functions without coupled variables. To solve this model, it is empirically effective to extend the alternating direction method of multipliers (ADM for short) straightforwardly. However, the convergence of this straightforward extension of ADM has not been proved theoretically. Based on ADM's straightforward extension, this paper presents a new splitting method for the model under consideration which is empirically competitive with the straightforward extension of ADM, while its global convergence can be proved under standard assumptions. At each iteration, the new method corrects the output of the straightforward extension of ADM by a slight correction computation to generate a new iterate. Thus, the implementation of the new method is almost as easy as that of ADM's straightforward extension. We show the numerical efficiency of the new method by some applications in the areas of image processing and statistics. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
39. Theoretical and numerical comparison of relaxation methods for mathematical programs with complementarity constraints.
- Author
-
Hoheisel, Tim, Kanzow, Christian, and Schwartz, Alexandra
- Subjects
NUMERICAL analysis ,RELAXATION methods (Mathematics) ,COMPARATIVE studies ,MATHEMATICAL programming ,MATHEMATICAL optimization ,PROBLEM solving ,STOCHASTIC convergence - Abstract
Mathematical programs with equilibrium constraints (MPECs) are difficult optimization problems whose feasible sets do not satisfy most of the standard constraint qualifications. Hence MPECs cause difficulties both from a theoretical and a numerical point of view. As a consequence, a number of MPEC-tailored solution methods have been suggested during the last decade which are known to converge under suitable assumptions. Among these MPEC-tailored solution schemes, the relaxation methods are certainly one of the most prominent class of solution methods. Several different relaxation schemes are available in the meantime, and the aim of this paper is to provide a theoretical and numerical comparison of these schemes. More precisely, in the theoretical part, we improve the convergence theorems of several existing relaxation methods. There, we also take a closer look at the properties of the feasible sets of the relaxed problems and show which standard constraint qualifications are satisfied for these relaxed problems. Finally, the numerical comparison is based on the MacMPEC test problem collection. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
40. A primal-dual interior point method for nonlinear semidefinite programming.
- Author
-
Yamashita, Hiroshi, Yabe, Hiroshi, and Harada, Kouhei
- Subjects
INTERIOR-point methods ,NONLINEAR systems ,SEMIDEFINITE programming ,ITERATIVE methods (Mathematics) ,APPROXIMATION theory ,PROOF theory ,STOCHASTIC convergence ,NUMERICAL analysis - Abstract
This paper is concerned with a primal-dual interior point method for solving nonlinear semidefinite programming problems. The method consists of the outer iteration (SDPIP) that finds a KKT point and the inner iteration (SDPLS) that calculates an approximate barrier KKT point. Algorithm SDPLS uses a commutative class of Newton-like directions for the generation of line search directions. By combining the primal barrier penalty function and the primal-dual barrier function, a new primal-dual merit function is proposed. We prove the global convergence property of our method. Finally some numerical experiments are given. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
41. The numerical study of a regularized smoothing Newton method for solving P 0-NCP based on the generalized smoothing Fischer–Burmeister function
- Author
-
Huang, Na and Ma, Changfeng
- Subjects
- *
SMOOTHING (Numerical analysis) , *NEWTON-Raphson method , *ALGEBRAIC functions , *NONSMOOTH optimization , *NUMERICAL solutions to equations , *MATHEMATICAL programming , *STOCHASTIC convergence , *NUMERICAL analysis - Abstract
Abstract: Nonlinear complementarity problems (NCPs) are usually reformulated as the solution of a nonsmooth system of equations. In this paper, we present a regularized smoothing Newton method for solving nonlinear complementarity problems with a P0-function (P0-NCPs) based on the generalized smoothing Fischer–Burmeister NCP-function ϕ_p(μ, a, b) with p > 1, where μ is the smoothing parameter. Without requiring a strict complementarity assumption at the P0-NCP solution, the proposed algorithm is proved to be globally and superlinearly convergent under suitable assumptions. Furthermore, the algorithm is locally quadratically convergent under mild conditions. Numerical experiments indicate that the proposed method is quite effective. In addition, the regularization parameter ε in our algorithm is viewed as an independent variable; hence, our algorithm seems simpler and more easily implemented than many in the existing literature. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
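The smoothing Fischer–Burmeister idea in the abstract above is concrete enough to sketch. The form below is one common smoothed p-norm variant from the smoothing literature; the paper's exact ϕ_p(μ, a, b) may differ in detail.

```python
# One common smoothed p-norm Fischer-Burmeister function (a sketch; the
# paper's phi_p may differ in detail):
#   phi_p(mu, a, b) = (|a|^p + |b|^p + |mu|^p)**(1/p) - (a + b),  p > 1.
# At mu = 0 it vanishes exactly when a >= 0, b >= 0 and a*b = 0, i.e. at
# points satisfying the complementarity conditions; for mu != 0 it is
# smooth in (a, b), which is what a smoothing Newton method exploits.
def phi_p(mu, a, b, p=3.0):
    return (abs(a)**p + abs(b)**p + abs(mu)**p) ** (1.0 / p) - (a + b)
```

Driving μ toward 0 inside the Newton iteration recovers the nonsmooth residual while keeping each subproblem differentiable.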
42. A new conjugate gradient method based on Quasi-Newton equation for unconstrained optimization.
- Author
-
Li, Xiangli, Shi, Juanjuan, Dong, Xiaoliang, and Yu, Jianglan
- Subjects
- *
CONJUGATE gradient methods , *MATHEMATICAL optimization , *QUASI-Newton methods , *STOCHASTIC convergence , *NUMERICAL analysis , *SPECTRAL geometry - Abstract
Abstract The spectral conjugate gradient method is an effective method for large-scale unconstrained optimization problems. In this paper, based on the Quasi-Newton direction and the Quasi-Newton equation, a new spectral conjugate gradient method is proposed. This method is motivated by the three-term modified Polak-Ribière-Polyak (PRP) method and spectral parameters. The global convergence of the algorithm is proved for general functions under a strong Wolfe line search. Numerical results show that the new algorithm is superior to the three-term modified PRP method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
43. A nonmonotone derivative-free algorithm for nonlinear complementarity problems based on the new generalized penalized Fischer-Burmeister merit function.
- Author
-
Zhu, Jianguang, Liu, Hongwei, Liu, Changhe, and Cong, Weijie
- Subjects
LINEAR complementarity problem ,STOCHASTIC convergence ,NUMERICAL analysis ,ALGORITHMS ,MATHEMATICAL sequences - Abstract
In this paper, we propose a new generalized penalized Fischer-Burmeister merit function, and show that the function possesses several favorable properties. Moreover, for the merit function, we establish the boundedness of the level set under a weaker condition. We also propose a derivative-free algorithm with a nonmonotone line search for nonlinear complementarity problems. More specifically, we show that the proposed algorithm is globally convergent and has a locally linear convergence rate. Numerical comparisons are also made with the merit function used by Chen (J Comput Appl Math 232:455-471, ), which confirm the superior behaviour of the new merit function. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
44. A feasible direction method for the semidefinite program with box constraints
- Author
-
Xu, Yi, Sun, Wenyu, and Qi, Liqun
- Subjects
- *
FEASIBILITY studies , *SEMIDEFINITE programming , *LINEAR statistical models , *NONLINEAR theories , *STOCHASTIC convergence , *ALGORITHMS , *NUMERICAL analysis , *MATHEMATICAL optimization - Abstract
Abstract: In this paper, we solve the semidefinite program with box constraints. Since the traditional projection method for constrained optimization with box constraints is not suitable for semidefinite constraints, we present a new algorithm based on the feasible direction method. We discuss two cases: the case where the objective function of the semidefinite program is linear, and the case where it is nonlinear. We establish the convergence of our algorithm, and report numerical experiments which show the effectiveness of the algorithm. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
45. Sufficient descent conjugate gradient methods for large-scale optimization problems.
- Author
-
Zheng, Xiuyun, Liu, Hongwei, and Lu, Aiguo
- Subjects
CONJUGATE gradient methods ,MATHEMATICAL optimization ,STOCHASTIC convergence ,NUMERICAL analysis ,CONSTRAINED optimization ,SEARCH algorithms ,GLOBAL analysis (Mathematics) - Abstract
The sufficient descent condition is crucial in establishing the global convergence of nonlinear conjugate gradient methods. In this paper, we modify two conjugate gradient methods so that both satisfy this property. Under suitable conditions, we prove the global convergence of the proposed methods. Numerical results show that the proposed methods are efficient for the given test problems. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
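The sufficient descent condition referred to in the abstract above, g_k' d_k <= -c ||g_k||^2 for some constant c > 0, is straightforward to check numerically. The helper below is a hypothetical illustration, not code from the paper.

```python
# Checking the sufficient descent condition  g_k' d_k <= -c * ||g_k||^2
# (c > 0 a fixed constant); an illustrative helper, not from the paper.
def sufficient_descent(g, d, c=1e-2):
    g_dot_d = sum(gi * di for gi, di in zip(g, d))
    g_norm2 = sum(gi * gi for gi in g)
    return g_dot_d <= -c * g_norm2

# The steepest descent direction d = -g satisfies the condition with c = 1;
# convergence proofs require every generated d_k to satisfy it uniformly.
```
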
46. COMBINATION ADAPTIVE TRUST REGION METHOD BY NON-MONOTONE STRATEGY FOR UNCONSTRAINED NONLINEAR PROGRAMMING.
- Author
-
AMINI, KEYVAN and AHOOKHOSH, MASOUD
- Subjects
NONLINEAR programming ,MATHEMATICAL optimization ,STOCHASTIC convergence ,ALGORITHMS ,NUMERICAL analysis ,ITERATIVE methods (Mathematics) ,MATHEMATICAL functions - Abstract
In this paper, we present a new trust region method for unconstrained nonlinear programming in which we blend an adaptive trust region algorithm with a non-monotone strategy, yielding a new non-monotone trust region algorithm with automatically adjusted radius. Both the non-monotone strategy and the adaptive technique help reduce the number of iterations and function evaluations. The new algorithm preserves global convergence and has local superlinear and quadratic convergence under suitable conditions. Numerical experiments show that the new trust region algorithm is very efficient and robust for unconstrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
47. A BFGS trust-region method for nonlinear equations.
- Author
-
Yuan, Gonglin, Wei, Zengxin, and Lu, Xiwen
- Subjects
MATHEMATICAL optimization ,NONLINEAR statistical models ,STOCHASTIC convergence ,QUADRATIC programming ,NUMERICAL analysis ,MATHEMATICAL analysis - Abstract
In this paper, a new trust-region subproblem combined with the BFGS update is proposed for solving nonlinear equations, where the trust region radius is defined in a new way. The global convergence without the nondegeneracy assumption and the quadratic convergence are obtained under suitable conditions. Numerical results show that this method is more effective than the norm method. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
48. A Convergence Proof for the Particle Swarm Optimiser.
- Author
-
van den Bergh, Frans and Engelbrecht, Andries Petrus
- Subjects
STOCHASTIC convergence ,PARTICLE swarm optimization ,STOCHASTIC analysis ,MATHEMATICAL functions ,COMPUTER algorithms ,NUMERICAL solutions to equations ,NUMERICAL analysis - Abstract
The Particle Swarm Optimiser (PSO) is a population based stochastic optimisation algorithm, empirically shown to be efficient and robust. This paper provides a proof to show that the original PSO does not have guaranteed convergence to a local optimum. A flaw in the original PSO is identified which causes stagnation of the swarm. Correction of this flaw results in a PSO algorithm with guaranteed convergence to a local minimum. Further extensions with provable global convergence are also described. Experimental results are provided to elucidate the behavior of the modified PSO as well as PSO variations with global convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2011
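The stagnation flaw described in the abstract above is already visible in the classic PSO update: when a particle coincides with both its personal best and the swarm best, the two attraction terms vanish and its velocity merely decays. The sketch below uses the standard inertia-weight update; the parameter values are illustrative, and the guaranteed-convergence fix (keeping the best particle sampling around the global best) is only described in the comment, not implemented.

```python
import random

# Classic inertia-weight PSO update for one particle (illustrative values).
# If x == pbest == gbest, both attraction terms are zero and the velocity
# decays by w < 1 each step -- the stagnation the paper identifies.  The
# guaranteed-convergence variant fixes this by forcing the best particle
# to keep sampling a shrinking box around gbest.
def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random.random):
    v_new = [w * vi + c1 * rng() * (pb - xi) + c2 * rng() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```

With x = pbest = gbest the update is deterministic regardless of the random draws, which makes the stagnation easy to demonstrate.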
49. Discrepancy curves for multi-parameter regularization.
- Author
-
Lu, Shuai, Pereverzev, Sergei V., Shao, Yuanyuan, and Tautenhahn, Ulrich
- Subjects
NP-complete problems ,ANALYTIC functions ,PARAMETER estimation ,IRREGULARITIES of distribution (Number theory) ,MATHEMATICAL transformations ,STOCHASTIC convergence ,MONOTONIC functions ,NUMERICAL analysis ,APPROXIMATION theory - Abstract
For solving linear ill-posed problems, regularization methods are required when the right-hand side is contaminated with noise. In the present paper regularized solutions are obtained by multi-parameter regularization, and the regularization parameters are chosen by a multi-parameter discrepancy principle. Under certain smoothness assumptions we provide order-optimal error bounds that characterize the accuracy of the regularized solutions. For the computation of the regularization parameters, fast algorithms of Newton type are applied, based on special transformations. These algorithms are globally and monotonically convergent. Some of our theoretical results are illustrated by numerical experiments. We also show how the proposed approach may be employed for multi-task approximation. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
50. Globally convergent Jacobian-free nonlinear equation solvers based on non-monotone norm descent conditions and a modified line search technique.
- Author
-
Hanba, Shigeru
- Subjects
STRESS (Computer program language) ,STOCHASTIC convergence ,NUMERICAL analysis ,NONLINEAR differential equations ,MATHEMATICAL optimization - Abstract
A Jacobian-free nonlinear equation solver based on a search technique called a 'spiral search', which is a modification of the line search, is proposed in this paper. Under mild conditions, the solver is proved to be globally convergent. The method is then extended to an overdetermined system of nonlinear equations. Numerical results show that for some problems, the solver outperforms existing solvers based on the line search or the trust-region method. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF