Search Results (9 results)
2. A class of accelerated Uzawa algorithms for saddle point problems.
- Author
- Zheng, Qingqing and Ma, Changfeng
- Subjects
- SADDLEPOINT approximations, ALGORITHMS, PROBLEM solving, EXTRAPOLATION, PARAMETER estimation, ITERATIVE methods (Mathematics), STOCHASTIC convergence
- Abstract
In this paper, we establish a class of accelerated Uzawa (AU) algorithms for solving large sparse nonsingular saddle point problems by means of an extrapolation technique based on the eigenvalues of the iteration matrix. These AU algorithms involve two iteration parameters whose special choices recover the classical Uzawa method as well as yield new methods. First, the accelerated model for the Uzawa algorithm is established and a detailed description of the AU method is presented. Then the convergence analysis of the AU method is given. Moreover, theoretical analysis shows that the AU algorithm converges faster than some Uzawa-type methods (including the Uzawa method itself) when the eigenvalues of the iteration matrix and the parameter τ satisfy certain conditions. Numerical experiments on a few model problems are presented to illustrate the theoretical results and examine the numerical effectiveness of the AU method. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
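For orientation, the classical Uzawa iteration that the AU algorithms above accelerate can be sketched as follows. This is a minimal illustration only: the extrapolation-based acceleration and the two iteration parameters are described in the paper itself, and the function name and tiny test problem here are the editor's assumptions.

```python
import numpy as np

def uzawa(A, B, f, g, tau=1.0, tol=1e-10, max_iter=500):
    """Classical Uzawa iteration for the saddle point system
        [A  B ] [x]   [f]
        [B' 0 ] [y] = [g].
    Each sweep solves A x = f - B y, then updates y += tau*(B'x - g)."""
    y = np.zeros(B.shape[1])
    x = np.zeros(B.shape[0])
    for _ in range(max_iter):
        x = np.linalg.solve(A, f - B @ y)   # primal solve
        r = B.T @ x - g                     # constraint residual
        y = y + tau * r                     # dual (multiplier) update
        if np.linalg.norm(r) < tol:
            break
    return x, y

# Tiny illustrative problem: exact solution is x = (0, 0), y = 1.
A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [1.0]])
f = np.array([1.0, 1.0])
g = np.array([0.0])
x, y = uzawa(A, B, f, g, tau=1.0)
```

Convergence of the plain iteration requires 0 < τ < 2/λmax(BᵀA⁻¹B); the paper's contribution is choosing the two parameters via the eigenvalues of the iteration matrix to accelerate this scheme.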
3. A subgradient extragradient algorithm for solving multi-valued variational inequality.
- Author
- Fang, Changjie and Chen, Shenglan
- Subjects
- VARIATIONAL inequalities (Mathematics), ALGORITHMS, PROBLEM solving, SUBGRADIENT methods, CONVEX sets, STOCHASTIC convergence
- Abstract
Abstract: In this paper, we propose a subgradient extragradient method for solving multi-valued variational inequalities. It is shown that the method converges globally to a solution of the multi-valued variational inequality, provided the multi-valued mapping is pseudomonotone with nonempty compact convex values. The convergence rate is analyzed. Preliminary computational experience is also reported. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
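The extragradient template underlying methods of this family can be sketched as below for the simpler single-valued case (Korpelevich's predictor-corrector scheme). The paper's multi-valued, subgradient-extragradient variant replaces the second projection with a projection onto a half-space; the operator, box, and step size here are illustrative assumptions.

```python
import numpy as np

def extragradient(F, proj, x0, tau=0.5, max_iter=200):
    """Korpelevich extragradient iteration for VI(F, C):
    predictor y = P_C(x - tau*F(x)), then corrector x = P_C(x - tau*F(y))."""
    x = x0.astype(float)
    for _ in range(max_iter):
        y = proj(x - tau * F(x))   # predictor step
        x = proj(x - tau * F(y))   # corrector step using F at the predictor
    return x

# Monotone (skew-symmetric) operator with unique solution x* = 0 in C = [-1,1]^2.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda z: A @ z
proj = lambda z: np.clip(z, -1.0, 1.0)   # projection onto the box
x = extragradient(F, proj, np.array([1.0, 1.0]))
```

Note that plain projected gradient diverges on this rotational operator; evaluating F at the predictor point is what restores convergence.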
4. A superlinearly convergent norm-relaxed method of quasi-strongly sub-feasible direction for inequality constrained minimax problems.
- Author
- Jian, Jin-bao, Li, Jie, Zheng, Hai-yan, and Li, Jian-ling
- Subjects
- STOCHASTIC convergence, MATHEMATICAL inequalities, CHEBYSHEV approximation, PROBLEM solving, ALGORITHMS, ITERATIVE methods (Mathematics), LINEAR equations
- Abstract
Abstract: In this paper, nonlinear minimax problems with inequality constraints are discussed. Combining the norm-relaxed SQP method with the idea of the strongly sub-feasible directions method, a new method of quasi-strongly sub-feasible directions (MQSSFD) with an arbitrary initial point is presented for the discussed problems. At each iteration of the proposed algorithm, an improved search direction is obtained by solving a quadratic program (QP) which always has a solution, and a high-order correction direction is obtained via a system of linear equations (SLE) to avoid the Maratos effect. By introducing a new non-monotone curve search, the iterates enter the feasible set after finitely many iterations. Under some mild conditions, including the weak Mangasarian–Fromovitz constraint qualification (MFCQ), the proposed algorithm possesses global convergence, and superlinear convergence is obtained without strict complementarity. Finally, some elementary numerical experiments are implemented and reported. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
5. On the convergence rate of Ye–Yuan’s modified alternating direction method of multipliers.
- Author
- Shen, Yuan and Xu, Minghua
- Subjects
- STOCHASTIC convergence, MULTIPLIERS (Mathematical analysis), ITERATIVE methods (Mathematics), CONSTRAINED optimization, PROBLEM solving, MATHEMATICAL variables, ALGORITHMS
- Abstract
Abstract: The alternating direction method of multipliers (ADMM) is a classic and efficient method for constrained optimization problems with two blocks of variables, and its empirical efficiency has been well illustrated in various fields. To improve its speed, Ye and Yuan suggested performing an additional extension step, with an optimal step size, on the variables after each iteration of the primary ADMM. Indeed, numerical experiments indicate that this modified ADMM improves the speed of the ADMM by around 40% without changing the algorithmic framework much. Recently, a convergence rate for the primary ADMM was established. Inspired by that idea, in this paper we show that the improved ADMM also admits a convergence rate. The reason why a larger extension step size γ yields better speed is also investigated and explained. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
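The structure of ADMM plus an extension step can be sketched on a toy consensus problem. This is the editor's minimal illustration with a fixed extension factor gamma; Ye and Yuan's method computes an optimal step size per iteration, and the test problem and names here are assumptions.

```python
def relaxed_admm(a, b, rho=1.0, gamma=1.2, n_iter=100):
    """ADMM for min 0.5*(x-a)^2 + 0.5*(z-b)^2 s.t. x = z,
    followed by an extension step of length gamma on (z, u).
    gamma = 1 recovers the primary ADMM; the solution is x = z = (a+b)/2."""
    z = u = 0.0
    for _ in range(n_iter):
        x = (a + rho * (z - u)) / (1.0 + rho)       # x-minimization
        z_new = (b + rho * (x + u)) / (1.0 + rho)   # z-minimization
        u_new = u + x - z_new                       # scaled dual update
        # extension: move further along the direction of the ADMM step
        z = z + gamma * (z_new - z)
        u = u + gamma * (u_new - u)
    return x, z

x, z = relaxed_admm(0.0, 4.0)   # solution: x = z = 2
```

With gamma = 1.2 the linear iteration map on (z, u) for this problem has spectral radius 0.4 versus 0.5 for plain ADMM, which mirrors the abstract's observation that a larger extension step size speeds convergence.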
6. The steepest descent algorithm without line search for p-Laplacian.
- Author
- Zhou, Guangming and Feng, Chunsheng
- Subjects
- METHOD of steepest descent (Numerical analysis), ALGORITHMS, LAPLACIAN matrices, MATHEMATICAL formulas, PROBLEM solving, STOCHASTIC convergence
- Abstract
Abstract: In this paper, a steepest descent algorithm without line search is proposed for the p-Laplacian. Its search direction is the weighted preconditioned steepest descent direction, and the step length is estimated by a formula at every iteration except the first. A continuation method is applied for solving the p-Laplacian with very large p. Extensive numerical experiments are carried out on these algorithms. All numerical results show that the algorithm without line search can reduce computational time. The fast convergence of these new algorithms is displayed by their step length figures, which show that if the search direction is the steepest descent one, exact step lengths can properly be replaced by step lengths obtained from the formula. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
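The general pattern of "steepest descent with a step-length formula instead of a line search" can be illustrated with a Barzilai–Borwein quotient on a quadratic. The paper's formula for the p-Laplacian is its own and is not reproduced here; this sketch, including the names and the test problem, is the editor's assumption.

```python
import numpy as np

def descent_no_linesearch(grad, x0, alpha0=0.1, n_iter=100):
    """Steepest descent where, after the first iteration, the step length
    comes from a closed-form formula (here the BB1 quotient s's / s'y)
    rather than from a line search."""
    x = x0.astype(float)
    g = grad(x)
    alpha = alpha0                     # only the first step needs a guess
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, ydiff = x_new - x, g_new - g
        denom = s @ ydiff
        if abs(denom) > 1e-12:
            alpha = (s @ s) / denom    # step-length formula replaces line search
        x, g = x_new, g_new
    return x

# Convex quadratic f(x) = 0.5 x'Qx - b'x with minimizer Q^{-1} b = (1, 2).
Q = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([3.0, 2.0])
x = descent_no_linesearch(lambda v: Q @ v - b, np.array([0.0, 0.0]))
```

The appeal, as in the abstract, is that each iteration costs one gradient evaluation and no function evaluations for a line search.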
7. A restoration-free filter SQP algorithm for equality constrained optimization
- Author
- Zhu, Xiaojing and Pu, Dingguo
- Subjects
- QUADRATIC programming, ALGORITHMS, CONSTRAINED optimization, PROBLEM solving, STOCHASTIC convergence, NUMERICAL analysis
- Abstract
Abstract: In this paper, a trust-region sequential quadratic programming algorithm with a modified filter acceptance mechanism is proposed for nonlinear equality constrained optimization. The most important advantage of the proposed algorithm is that it avoids any feasibility restoration phase, a necessity in traditional filter methods. We solve quadratic programming subproblems based on the well-known Byrd–Omojokun trust-region method; inexact solutions to these subproblems are allowed. Under some standard assumptions, global convergence of the proposed algorithm is established. Numerical results show that our approach is potentially useful. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
8. A new partial splitting augmented Lagrangian method for minimizing the sum of three convex functions
- Author
- Cao, Cuixia, Han, Deren, and Xu, Lingling
- Subjects
- CONVEX functions, CONVEX programming, PROBLEM solving, ALGORITHMS, OPERATOR theory, STOCHASTIC convergence, LAGRANGE multiplier
- Abstract
Abstract: In this paper, we propose a new partial splitting augmented Lagrangian method for solving the separable constrained convex programming problem in which the objective function is the sum of three separable convex functions and the constraint set is also separable into three parts. The proposed algorithm combines the alternating direction method (ADM) and the parallel splitting augmented Lagrangian method (PSALM): two operators are handled by a parallel method, while the third operator and the former two are dealt with in an alternating manner. Under mild conditions, we prove the global convergence of the new method. We also report some preliminary numerical results on a constrained matrix optimization problem, illustrating the advantage of the new algorithm over the recently proposed PADALM of Peng and Wu (2010) [12]. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
9. Nonmonotone algorithm for minimax optimization problems
- Author
- Wang, Fusheng and Wang, Yanping
- Subjects
- MATHEMATICAL optimization, NONMONOTONIC logic, FINANCE, MANAGEMENT, ENGINEERING, ALGORITHMS, CHEBYSHEV approximation, PROBLEM solving, STOCHASTIC convergence, NUMERICAL analysis
- Abstract
Abstract: Many real-life problems can be stated as minimax optimization problems, for example in economics, finance, management, engineering and other fields. In this paper, we present an algorithm with a nonmonotone strategy and a second-order correction technique for minimax optimization problems. Using this scheme, the new algorithm can overcome the Maratos effect that occurs in nonsmooth optimization, and global and superlinear convergence of the algorithm can be achieved accordingly. Numerical experiments indicate some advantages of this scheme. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
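Why minimax problems are nonsmooth, and why a basic method is slow on them, can be seen from a plain subgradient sketch: the objective max_i f_i(x) is not differentiable where the maximizing piece switches. The paper's nonmonotone, second-order-corrected algorithm is far more sophisticated; this baseline, its names, and its test functions are the editor's assumptions.

```python
import numpy as np

def minimax_subgradient(fs, grads, x0, c=0.5, n_iter=2000):
    """Subgradient method for min_x max_i f_i(x): at each step take the
    gradient of one active (maximizing) piece, which is a subgradient of
    the max, and use a diminishing step length c/sqrt(k+1).
    The best point found is tracked, since the iteration is nonmonotone."""
    x = float(x0)
    best_x, best_val = x, max(f(x) for f in fs)
    for k in range(n_iter):
        i = max(range(len(fs)), key=lambda j: fs[j](x))  # active index
        x = x - (c / np.sqrt(k + 1)) * grads[i](x)
        val = max(f(x) for f in fs)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# max((x-1)^2, (x+1)^2) has its minimum value 1 at the kink x = 0.
fs = [lambda x: (x - 1.0) ** 2, lambda x: (x + 1.0) ** 2]
grads = [lambda x: 2.0 * (x - 1.0), lambda x: 2.0 * (x + 1.0)]
x_best, v_best = minimax_subgradient(fs, grads, 2.0)
```

The iterates oscillate across the kink and converge only at a sublinear rate, which is exactly the behavior that second-order correction techniques such as the one in this paper are designed to overcome.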
Discovery Service for Jio Institute Digital Library