21 results
Search Results
2. New proximal bundle algorithm based on the gradient sampling method for nonsmooth nonconvex optimization with exact and inexact information.
- Author
- Hoseini Monjezi, N. and Nobakhtian, S.
- Subjects
- NONSMOOTH optimization, SAMPLING methods, ALGORITHMS, CONVEX functions
- Abstract
In this paper, we focus on a descent algorithm for solving nonsmooth nonconvex optimization problems. The proposed method is based on the proximal bundle algorithm and the gradient sampling method and uses the advantages of both. In addition, this algorithm has the ability to handle inexact information, which creates additional challenges. Global convergence is proved with probability one. More precisely, every accumulation point of the sequence of serious iterates is either a stationary point, if exact gradient values are provided, or an approximate stationary point, if only inexact function and gradient values are available. The performance of the proposed algorithm is demonstrated on some academic test problems. We further compare the new method with a general nonlinear solver and two other methods specifically designed for nonconvex nonsmooth optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
3. Riemannian stochastic fixed point optimization algorithm.
- Author
- Iiduka, Hideaki and Sakai, Hiroyuki
- Subjects
- MATHEMATICAL optimization, RIEMANNIAN manifolds, CONVEX sets, ALGORITHMS, MACHINE learning, CONVEX functions, GEODESICS
- Abstract
This paper considers a stochastic optimization problem over the fixed point sets of quasinonexpansive mappings on Riemannian manifolds. The problem enables us to consider Riemannian hierarchical optimization problems over complicated sets, such as the intersection of many closed convex sets, the set of all minimizers of a nonsmooth convex function, and the intersection of sublevel sets of nonsmooth convex functions. We focus on adaptive learning rate optimization algorithms, which adapt step-sizes (referred to as learning rates in the machine learning field) to find optimal solutions quickly. We then propose a Riemannian stochastic fixed point optimization algorithm, which combines fixed point approximation methods on Riemannian manifolds with the adaptive learning rate optimization algorithms. We also give convergence analyses of the proposed algorithm for nonsmooth convex and smooth nonconvex optimization. The analysis results indicate that, with small constant step-sizes, the proposed algorithm approximates a solution to the problem. Consideration of the case in which step-size sequences are diminishing demonstrates that the proposed algorithm solves the problem with a guaranteed convergence rate. This paper also provides numerical comparisons that demonstrate the effectiveness of the proposed algorithms with formulas based on the adaptive learning rate optimization algorithms, such as Adam and AMSGrad. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. Self-adaptive algorithms for solving split feasibility problem with multiple output sets.
- Author
- Taddele, Guash Haile, Kumam, Poom, Sunthrayuth, Pongsakorn, and Gebrie, Anteneh Getachew
- Subjects
- HILBERT space, CONVEX sets, ALGORITHMS, PROBLEM solving, FEASIBILITY studies
- Abstract
In this paper, we study the split feasibility problem with multiple output sets in Hilbert spaces. For solving this problem, we propose two new self-adaptive relaxed CQ algorithms that compute projections onto half-spaces instead of onto the closed convex sets and do not require calculating the operator norm. We establish weak and strong convergence theorems for the proposed algorithms. We apply the new results to solve some other problems. Finally, we present some numerical examples to show the efficiency and accuracy of our algorithms compared to some existing results. Our results extend and improve some existing methods in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
5. Inertial relaxed CQ algorithms for solving a split feasibility problem in Hilbert spaces.
- Author
- Sahu, D.R., Cho, Y.J., Dong, Q.L., Kashyap, M.R., and Li, X.H.
- Subjects
- HILBERT space, INVERSE problems, ALGORITHMS, RADIOTHERAPY
- Abstract
The split feasibility problem is to find a point x* with the property that x* ∈ C and Ax* ∈ Q, where C and Q are nonempty closed convex subsets of real Hilbert spaces X and Y, respectively, and A is a bounded linear operator from X to Y. The split feasibility problem models inverse problems arising from phase retrieval problems and intensity-modulated radiation therapy. In this paper, we introduce a new inertial relaxed CQ algorithm for solving the split feasibility problem in real Hilbert spaces and establish weak convergence of the proposed CQ algorithm under certain mild conditions. Our result is a significant improvement of the recent results related to the split feasibility problem. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
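The CQ iteration that relaxed and inertial variants such as the one above build on is the classical fixed-step scheme x_{k+1} = P_C(x_k − γAᵀ(Ax_k − P_Q(Ax_k))) with γ ∈ (0, 2/‖A‖²). Below is a minimal sketch of that base iteration, not the paper's inertial relaxed variant; the box sets, random operator, and iteration count are illustrative assumptions chosen so the projections are trivial:

```python
import numpy as np

def proj_box(v, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n, a simple closed convex set.
    return np.clip(v, lo, hi)

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, iters=2000):
    # Classical CQ iteration: x <- P_C(x - gamma * A^T (A x - P_Q(A x))),
    # with step-size gamma in (0, 2 / ||A||^2).
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))                  # bounded linear operator
gamma = 1.0 / np.linalg.norm(A, 2) ** 2          # safely inside (0, 2/||A||^2)
x = cq_algorithm(A,
                 lambda v: proj_box(v, -1.0, 1.0),   # C = [-1, 1]^6
                 lambda v: proj_box(v, -0.5, 0.5),   # Q = [-0.5, 0.5]^4
                 rng.standard_normal(6), gamma)
```

The relaxed variants in results 4 and 5 replace P_C and P_Q with projections onto half-spaces containing C and Q, and the self-adaptive versions choose γ without estimating ‖A‖.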
6. Scaled three-term derivative-free methods for solving large-scale nonlinear monotone equations.
- Author
- Li, Qun and Zheng, Bing
- Subjects
- NONLINEAR equations, CONJUGATE gradient methods, HYPERPLANES, ALGORITHMS
- Abstract
In this paper, two effective derivative-free methods are proposed for solving large-scale nonlinear monotone equations, in which the search directions are sufficiently descent and independent of the line search. The methods extend the conjugate gradient methods proposed by Bojari and Eslahchi (Numer. Algorithms 83, pp. 901–933, 2020), combined with the hyperplane projection technique. Our approaches require low storage and are derivative-free, which makes them suitable for large-scale nonsmooth monotone nonlinear equations. Under proper assumptions, we analyze the global convergence property of the proposed methods. Finally, numerical experiments show that the proposed methods outperform some existing ones. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Nonmonotone inexact restoration approach for minimization with orthogonality constraints.
- Author
- Francisco, Juliano B., Gonçalves, Douglas S., Bazán, Fermín S. V., and Paredes, Lila L. T.
- Subjects
- PROBLEM solving, SET functions, TANGENT function, POINT set theory, ALGORITHMS
- Abstract
Minimizing a differentiable function restricted to the set of matrices with orthonormal columns finds applications in several fields of science and engineering. In this paper, we propose to solve this problem through a nonmonotone variation of the inexact restoration method consisting of two phases: a restoration phase, aimed at improving feasibility, and a minimization phase, aimed at decreasing the function value in the tangent set of the constraints. For this, we give a suitable characterization of the tangent set of the orthogonality constraints, allowing us to (i) deal with the minimization phase efficiently and (ii) employ the Cayley transform to bring a point in the tangent set back to feasibility, leading to an SVD-free restoration phase. Based on previous global convergence results for the nonmonotone inexact restoration algorithm, it follows that any limit point of the sequence generated by the new algorithm is stationary. Moreover, numerical experiments on three different classes of the problem indicate that the proposed method is reliable and competitive with other recently developed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. A regularized limited memory BFGS method for nonconvex unconstrained minimization.
- Author
- Liu, Tao-Wen
- Subjects
- ALGORITHMS, NONCONVEX programming, MATHEMATICAL optimization, MATHEMATICAL regularization, APPROXIMATION theory, NUMERICAL analysis
- Abstract
The limited memory BFGS method (L-BFGS) is an adaptation of the BFGS method for large-scale unconstrained optimization. However, the L-BFGS method need not converge for nonconvex objective functions and is inefficient on highly ill-conditioned problems. In this paper, we propose a regularization strategy for the L-BFGS method, where the regularization parameter may play a compensating role, in some sense, when the Hessian approximation tends to become ill-conditioned. We then propose a regularized L-BFGS method and establish its global convergence even when the objective function is nonconvex. Numerical results show that the proposed method is efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
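The core of any L-BFGS variant, including the regularized one above, is the two-loop recursion that applies the implicit inverse-Hessian approximation built from stored curvature pairs. A minimal unregularized sketch (the quadratic test function, memory size, and line-search constants are illustrative, not the paper's):

```python
import numpy as np

def two_loop(g, s_list, y_list):
    # L-BFGS two-loop recursion: computes H_k @ g, where H_k is the implicit
    # inverse-Hessian approximation built from the stored (s, y) pairs.
    q = g.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                                  # initial scaling H0 = gamma*I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        q += (a - rho * (y @ q)) * s
    return q

def lbfgs(f, grad_f, x0, m=5, iters=50):
    # Plain L-BFGS with Armijo backtracking; pairs violating the curvature
    # condition y^T s > 0 are skipped to keep H_k positive definite.
    x = x0.copy()
    g = grad_f(x)
    s_list, y_list = [], []
    for _ in range(iters):
        d = -two_loop(g, s_list, y_list)
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        if y @ s > 1e-10:
            s_list.append(s); y_list.append(y)
            if len(s_list) > m:
                s_list.pop(0); y_list.pop(0)
        x, g = x_new, g_new
    return x

D = np.diag(np.arange(1.0, 6.0))       # illustrative convex quadratic
x = lbfgs(lambda v: 0.5 * v @ D @ v, lambda v: D @ v, np.ones(5))
```

The paper's regularization intervenes in how the Hessian approximation is formed when it becomes ill-conditioned; the recursion itself is unchanged.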
9. A modified ODE-based algorithm for unconstrained optimization problems.
- Author
- Ou, Yi-gui and Ma, Wei
- Subjects
- MATHEMATICAL optimization, ORDINARY differential equations, ALGORITHMS, ITERATIVE methods (Mathematics), SYSTEM analysis, NUMERICAL analysis
- Abstract
This paper presents a modified ODE-based algorithm for unconstrained optimization problems. It combines the idea of the IMPBOT algorithm with nonmonotone and subspace techniques. The main feature of this method is that at each iteration a lower-dimensional system of linear equations is solved to obtain a trial step. Under some standard assumptions, the method is proven to be globally convergent. Numerical results show the efficiency of the proposed method in practical computation. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
10. A unified kernel function approach to primal-dual interior-point algorithms for convex quadratic SDO.
- Author
- Guoqiang Wang and Detong Zhu
- Subjects
- KERNEL functions, ALGORITHMS, SEMIDEFINITE programming, ITERATIVE methods (Mathematics), MATHEMATICAL optimization
- Abstract
Kernel functions play an important role in the design and analysis of primal-dual interior-point algorithms. They are not only used for determining the search directions but also for measuring the distance between the given iterate and the μ-center for the algorithms. In this paper we present a unified kernel function approach to primal-dual interior-point algorithms for convex quadratic semidefinite optimization based on the Nesterov and Todd symmetrization scheme. The iteration bounds for large- and small-update methods obtained are analogous to the linear optimization case. Moreover, this unifies the analysis for linear, convex quadratic and semidefinite optimizations. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
11. A non-monotone line search multidimensional filter-SQP method for general nonlinear programming.
- Author
- Gu, Chao and Zhu, Detong
- Subjects
- NONLINEAR programming, STOCHASTIC convergence, ALGORITHMS, FILTERING software, MATHEMATICAL optimization
- Abstract
In this paper, we propose a non-monotone line search multidimensional filter-SQP method for general nonlinear programming based on the Wächter-Biegler methods for nonlinear equality constrained programming. Under mild conditions, the global convergence of the new method is proved. Furthermore, with the non-monotone technique and second order correction step, it is shown that the proposed method does not suffer from the Maratos effect, so that fast local convergence to second order sufficient local solutions is achieved. Numerical results show that the new approach is efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
12. A class of new three-term descent conjugate gradient algorithms for large-scale unconstrained optimization and applications to image restoration problems.
- Author
- Wang, Xiaoliang, Yuan, Gonglin, and Pang, Liping
- Subjects
- CONJUGATE gradient methods, IMAGE reconstruction, ALGORITHMS, NONLINEAR functions, LIPSCHITZ continuity
- Abstract
Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems since they have attractive practical features such as simple computation, low memory requirements and strong global convergence properties. Based on the sufficient descent property and the Dai-Liao conjugate condition, a class of new three-term descent conjugate gradient algorithms is proposed. The proposed algorithms automatically have the sufficient descent property and satisfy the conjugate condition. Under the standard Wolfe line search technique and some common conditions, the global convergence of the proposed algorithms is established for uniformly convex functions and general nonlinear functions. Numerical results also indicate that the proposed algorithms are more efficient and reliable than the other methods on the test problems. Finally, the proposed methods are applied to some image restoration problems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Globally convergent diagonal Polak–Ribière–Polyak like algorithm for nonlinear equations.
- Author
- Mohammad, Hassan and Awwal, Aliyu Muhammed
- Subjects
- NONLINEAR equations, ALGORITHMS, SEARCH algorithms
- Abstract
We present a derivative-free diagonal Polak–Ribière–Polyak like algorithm for solving large-scale systems of nonlinear equations. The search direction of the algorithm is obtained by incorporating a positive definite diagonal matrix with the positive Polak–Ribière–Polyak (PRP+) parameter. By employing a derivative-free line search technique, the global convergence and convergence rate of the algorithm are achieved. Numerical experiments performed on some large-scale systems of nonlinear equations demonstrated the good performance of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
14. Derivative-free superiorization: principle and algorithm.
- Author
- Censor, Yair, Garduño, Edgar, Helou, Elias S., and Herman, Gabor T.
- Subjects
- ALGORITHMS, IMAGE reconstruction algorithms, PROBLEM solving, IMAGE reconstruction, SET functions, CONSTRAINT algorithms
- Abstract
The superiorization methodology is intended to work with input data of constrained minimization problems, that is, a target function and a set of constraints. However, it is based on an antipodal way of thinking to what leads to constrained minimization methods. Instead of adapting unconstrained minimization algorithms to handling constraints, it adapts feasibility-seeking algorithms to reduce (not necessarily minimize) target function values. This is done by inserting target-function-reducing perturbations into a feasibility-seeking algorithm while retaining its feasibility-seeking ability and without paying a high computational price. A superiorized algorithm that employs component-wise target function reduction steps is presented. This enables derivative-free superiorization (DFS), meaning that superiorization can be applied to target functions that have no calculable partial derivatives or subgradients. The numerical behavior of our derivative-free superiorization algorithm is illustrated on a data set generated by simulating a problem of image reconstruction from projections. We present a tool (we call it a proximity-target curve) for deciding which of two iterative methods is "better" for solving a particular problem. The plots of proximity-target curves of our experiments demonstrate the advantage of the proposed derivative-free superiorization algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
15. Two adaptive scaled gradient projection methods for Stiefel manifold constrained optimization.
- Author
- Oviedo, Harry, Dalmau, Oscar, and Lara, Hugo
- Subjects
- CONSTRAINED optimization, SMOOTHNESS of functions, ALGORITHMS, NONLINEAR programming, FACTORIZATION
- Abstract
This article is concerned with the problem of minimizing a smooth function over the Stiefel manifold. To address this problem, we introduce two adaptive scaled gradient projection methods that incorporate scaling matrices depending on the step-size and a parameter that controls the search direction. These iterative algorithms use a projection operator based on the QR factorization to preserve feasibility in each iteration. However, for some particular cases, the proposals do not require the use of any projection operator. In addition, we consider a Barzilai–Borwein-like step-size combined with the Zhang–Hager nonmonotone line-search technique in order to accelerate the convergence of the proposed procedures. We prove global convergence for these schemes, and we evaluate their effectiveness and efficiency through an extensive computational study, comparing our approaches with other state-of-the-art gradient-type algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
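A bare-bones version of gradient descent on the Stiefel manifold with a QR-based retraction illustrates the feasibility-preserving step the abstract above refers to. This sketch omits the paper's accelerations (scaling matrices, BB step-size, nonmonotone line search) and uses a fixed step; the symmetric test matrix, step-size, and iteration count are illustrative assumptions:

```python
import numpy as np

def qf(Y):
    # QR-based retraction onto the Stiefel manifold: the orthonormal Q factor,
    # with column signs fixed so the factorization is unique.
    Q, R = np.linalg.qr(Y)
    return Q * np.where(np.diag(R) < 0.0, -1.0, 1.0)

def stiefel_gradient_descent(A, n, p, tau=0.05, iters=5000, seed=0):
    # Minimize f(X) = tr(X^T A X) over n-by-p matrices with orthonormal
    # columns; the minimum is attained at eigenvectors of the p smallest
    # eigenvalues of the symmetric matrix A.
    rng = np.random.default_rng(seed)
    X = qf(rng.standard_normal((n, p)))
    for _ in range(iters):
        G = 2.0 * A @ X                      # Euclidean gradient of f
        sym = 0.5 * (X.T @ G + G.T @ X)
        grad = G - X @ sym                   # project onto the tangent space
        X = qf(X - tau * grad)               # step, then retract via QR
    return X

n, p = 8, 2
rng = np.random.default_rng(1)
M = rng.standard_normal((n, n))
A = 0.5 * (M + M.T)                          # symmetric test matrix
X = stiefel_gradient_descent(A, n, p)
```

Every iterate stays exactly feasible (XᵀX = I) because the QR retraction re-orthonormalizes after each step, which is the role the projection operator plays in the paper.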
16. Fixing and extending some recent results on the ADMM algorithm.
- Author
- Banert, Sebastian, Boţ, Radu Ioan, and Csetnek, Ernö Robert
- Subjects
- COMPOSITION operators, SMOOTHNESS of functions, LINEAR operators, ALGORITHMS, NONSMOOTH optimization, HILBERT space
- Abstract
We investigate the techniques and ideas used in Shefi and Teboulle (SIAM J Optim 24(1), 269–297, 2014) in the convergence analysis of two proximal ADMM algorithms for solving convex optimization problems involving compositions with linear operators. Besides this, we formulate a variant of the ADMM algorithm that is able to handle convex optimization problems involving an additional smooth function in its objective, and which is evaluated through its gradient. Moreover, in each iteration, we allow the use of variable metrics, while the investigations are carried out in the setting of infinite-dimensional Hilbert spaces. This algorithmic scheme is investigated from the point of view of its convergence properties. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
17. A descent Dai-Liao conjugate gradient method for nonlinear equations.
- Author
- Abubakar, Auwal Bala and Kumam, Poom
- Subjects
- CONJUGATE gradient methods, NONLINEAR equations, ALGORITHMS, NUMERICAL solutions to equations
- Abstract
In this work, we propose an algorithm for solving systems of nonlinear equations. The idea is a combination of the descent Dai-Liao method of Babaie-Kafaki and Ghanbari (Optim. Meth. Soft. 29(3), 583–591, 2014) and the hyperplane projection method. Using the monotonicity and Lipschitz continuity assumptions, we prove that the proposed method is globally convergent. Numerical experiments show that the method is promising and efficient compared to the method proposed by Sun et al. (Journal of Inequalities and Applications 236, 1–8, 2017). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
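The hyperplane projection step referred to in the abstract above (and in results 6 and 13) follows the Solodov–Svaiter pattern: a derivative-free line search produces a trial point z, and the next iterate is the projection of x onto the hyperplane separating x from the solution set. The sketch below uses the plain direction d = −F(x) rather than the paper's descent Dai-Liao direction, and the affine test map and parameter values are illustrative assumptions:

```python
import numpy as np

def hyperplane_projection_solve(F, x0, sigma=0.01, rho=0.5,
                                tol=1e-8, max_iter=5000):
    # Hyperplane projection method for monotone F(x) = 0.
    x = x0.copy()
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx
        # Derivative-free line search: find alpha with
        #   -F(x + alpha d)^T d >= sigma * alpha * ||d||^2
        alpha = 1.0
        while -(F(x + alpha * d) @ d) < sigma * alpha * (d @ d):
            alpha *= rho
        z = x + alpha * d
        Fz = F(z)
        # Project x onto the hyperplane {w : F(z)^T (w - z) = 0}, which
        # separates x from the solution set by monotonicity of F.
        x = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
A = M.T @ M + np.eye(5)          # positive definite, so F is strongly monotone
b = rng.standard_normal(5)
x = hyperplane_projection_solve(lambda v: A @ v - b, np.zeros(5))
```

The conjugate-gradient-based papers above keep exactly this projection step but replace −F(x) with sufficiently descent directions that need no Jacobian, which is what makes them suitable at large scale.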
18. Hybridization of accelerated gradient descent method.
- Author
- Petrović, Milena, Rakočević, Vladimir, Kontrec, Nataša, Panić, Stefan, and Ilić, Dejan
- Subjects
- MATHEMATICAL optimization, ALGORITHMS, ITERATIVE methods (Mathematics), PROBLEM solving, LINEAR systems, STOCHASTIC convergence
- Abstract
We present a gradient descent algorithm with a line search procedure for solving unconstrained optimization problems, defined as the result of applying the Picard-Mann hybrid iterative process to the accelerated gradient descent SM method described in Stanimirović and Miladinović (Numer. Algor. 54, 503-520, 2010). Using the merged features of both analyzed models, we show that the new accelerated gradient descent model converges linearly and faster than the starting SM method, which is confirmed through the displayed numerical test results. Three main properties are tested: the number of iterations, CPU time and the number of function evaluations. The efficiency of the proposed iteration is examined for several values of the correction parameter introduced in Khan (2013). [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
19. Two accelerated nonmonotone adaptive trust region line search methods.
- Author
- Babaie-Kafaki, Saman and Rezaee, Saeed
- Subjects
- MONOTONIC functions, RADIUS (Geometry), ALGORITHMS, MATHEMATICAL optimization, NUMERICAL analysis
- Abstract
Hybridizing monotone and nonmonotone approaches, we employ a modified trust region ratio in which more information is provided about the agreement between the exact and the approximate models. Also, we use an adaptive trust region radius as well as two accelerated Armijo-type line search strategies to avoid resolving the trust region subproblem whenever a trial step is rejected. We show that the proposed algorithm is globally and locally superlinearly convergent. Comparative numerical experiments show practical efficiency of the proposed accelerated adaptive trust region algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
20. An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems.
- Author
- Boţ, Radu and Csetnek, Ernö
- Subjects
- STOCHASTIC convergence, ALGORITHMS, ALGEBRA, INTEGRAL operators, OPERATOR theory
- Abstract
We introduce and investigate the convergence properties of an inertial forward-backward-forward splitting algorithm for approaching the set of zeros of the sum of a maximally monotone operator and a single-valued monotone and Lipschitzian operator. By making use of the product space approach, we extend it to solving inclusion problems involving mixtures of linearly composed and parallel-sum type monotone operators. We obtain in this way an inertial forward-backward-forward primal-dual splitting algorithm whose main characteristic is that, in the iterative scheme, all operators are accessed separately, either via forward or via backward evaluations. We also present the variational case, in which one is interested in solving a primal-dual pair of convex optimization problems with complexly structured objectives, which we illustrate by numerical experiments in image processing. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
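Tseng's forward-backward-forward splitting, which the inertial scheme above extends, takes one backward (resolvent) step and two forward evaluations per iteration: p = J_{γB}(x − γAx), then x⁺ = p + γ(Ax − Ap). A minimal non-inertial sketch for 0 ∈ N_C(x) + Sx with C a box (so the resolvent of the normal cone is a projection) and S a skew-symmetric, hence monotone and Lipschitzian, linear operator; the operators, starting point, and step-size are illustrative assumptions:

```python
import numpy as np

def fbf(A_op, resolvent, x0, gamma, iters=300):
    # Tseng's forward-backward-forward iteration for 0 in B(x) + A(x):
    #   p = J_{gamma B}(x - gamma * A x)   (forward then backward step)
    #   x <- p + gamma * (A x - A p)       (forward correction)
    # Requires gamma < 1 / L, where L is the Lipschitz constant of A.
    x = x0.copy()
    for _ in range(iters):
        Ax = A_op(x)
        p = resolvent(x - gamma * Ax)
        x = p + gamma * (Ax - A_op(p))
    return x

S = np.array([[0.0, 1.0], [-1.0, 0.0]])      # skew-symmetric: monotone, ||S|| = 1
proj_box = lambda v: np.clip(v, -1.0, 1.0)   # J_{gamma N_C} = P_C for the box C
x = fbf(lambda v: S @ v, proj_box, np.array([1.0, 1.0]), gamma=0.5)
```

For this toy inclusion the unique zero is the origin, and the iterates contract toward it; each operator is only ever accessed separately, which is the structural point the abstract emphasizes.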
21. A note on the smoothing quadratic regularization method for non-Lipschitz optimization.
- Author
- Huang, Yakui, Liu, Hongwei, and Cong, Weijie
- Subjects
- MATHEMATICAL bounds, SMOOTHING (Numerical analysis), ALGORITHMS, MATHEMATICAL optimization, APPROXIMATION theory
- Abstract
We present a new smaller upper bound for all the elements in the associate generalized Hessian used in the smoothing quadratic regularization (SQR) algorithm proposed by Bian and Chen (SIAM J. Optim. 23: 1718-1741, ). We modify the SQR algorithm by making use of the new upper bound. Numerical results show that our new upper bound improves the performance of the SQR algorithm significantly. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF