41 results
Search Results
2. Another modified version of RMIL conjugate gradient method.
- Author
-
Yousif, Osman Omer Osman and Saleh, Mohammed A.
- Subjects
- CONJUGATE gradient methods, TECHNOLOGY convergence, SIMPLICITY - Abstract
Due to their simplicity and global convergence properties, conjugate gradient (CG) methods are widely used for solving unconstrained optimization problems, especially large-scale ones. To establish global convergence and obtain better numerical performance in practice, much effort has been devoted to developing new CG methods or modifying well-known ones. In 2012, Rivaie et al. proposed a new CG method, called RMIL, which has good numerical results and is globally convergent under the exact line search. However, in 2016, Dai pointed out a mistake in the proof of the global convergence of RMIL and, to guarantee global convergence, suggested a modified version called RMIL+. In this paper, we present another modified version of RMIL, which is globally convergent via the exact line search. Furthermore, to support the theoretical proof of global convergence of the modified version in practical computation, a numerical experiment comparing it with RMIL, RMIL+, and CG-DESCENT was conducted. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
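The RMIL update the abstract describes can be sketched in a few lines. This is a hedged illustration, not the authors' code: it specializes to a strictly convex quadratic so the exact line search has a closed form, and the nonnegativity truncation is only in the spirit of RMIL+.

```python
import numpy as np

def rmil_cg(A, b, x0, tol=1e-10, max_iter=200):
    """CG sketch with an RMIL-style conjugate parameter, specialized to the
    quadratic f(x) = 0.5 x^T A x - b^T x so the exact line search is exact:
        beta_k = g_k^T (g_k - g_{k-1}) / ||d_{k-1}||^2."""
    x = x0.astype(float)
    g = A @ x - b                       # gradient of the quadratic
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ A @ d)  # exact line search on the quadratic
        x = x + alpha * d
        g_new = A @ x - b
        beta = g_new @ (g_new - g) / (d @ d)  # RMIL-style parameter
        beta = max(beta, 0.0)                 # truncation in the spirit of RMIL+
        d = -g_new + beta * d
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = rmil_cg(A, b, np.zeros(2))
```

With the exact line search, the new gradient is orthogonal to the previous direction, which is the structural fact the global-convergence proofs above lean on.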
3. The L2 convergence of stream data mining algorithms based on probabilistic neural networks.
- Author
-
Rutkowska, Danuta, Duda, Piotr, Cao, Jinde, Rutkowski, Leszek, Byrski, Aleksander, Jaworski, Maciej, and Tao, Dacheng
- Subjects
- ARTIFICIAL neural networks, DATA mining, MATHEMATICAL proofs, ONLINE algorithms, ALGORITHMS, TRACKING algorithms - Abstract
This paper concerns a new incremental approach to mining data streams. It is known that patterns in a data stream may evolve over time. In many cases, we need to track and analyze the nature of these changes. In the paper, probabilistic neural networks are considered as basic models for tracking changes in data streams. We present globally convergent stream data mining algorithms applied to problems of regression, classification, and density estimation in a time-varying (drifting) environment. The algorithms are derived from the Parzen kernel-based probabilistic neural networks working in the online mode. For each problem, a theorem is presented ensuring the L2 convergence of the algorithm designed for tracking drifting regression, density, or discriminant functions. Illustrative examples explain in detail how to choose the bandwidth of the Parzen kernel and the learning rate of the online algorithm. The performance of all algorithms is shown in exemplary simulations. It should be noted that this paper is one of the very few in the existing literature presenting mathematically justified stream data mining algorithms. • An incremental version of the Generalized Regression Neural Network (IGRNN) able to track drifting regression functions. • An incremental version of the Probabilistic Neural Network (IPNN) working in non-stationary environments. • Application of IPNN for tracking drifting discriminant functions. • Mathematical proofs of the L2 convergence of all the proposed estimators. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
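The online Parzen-kernel update underlying these algorithms can be illustrated with a recursive density estimator; the learning-rate and bandwidth schedules below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def recursive_kde(stream, grid, learning_rate, bandwidth):
    """Recursive Parzen-kernel density estimate on a fixed grid.  Each
    arriving sample X_n updates the estimate in the online mode:
        f_n = (1 - a_n) f_{n-1} + a_n * K_{h_n}(grid - X_n),
    the stochastic-approximation form whose L2 convergence depends on the
    learning-rate a_n and bandwidth h_n schedules."""
    f = np.zeros_like(grid, dtype=float)
    for n, x_n in enumerate(stream, start=1):
        a_n = learning_rate(n)
        h_n = bandwidth(n)
        f = (1 - a_n) * f + a_n * gaussian_kernel((grid - x_n) / h_n) / h_n
    return f

rng = np.random.default_rng(0)
grid = np.linspace(-4, 4, 201)
stream = rng.normal(0.0, 1.0, size=5000)
f_hat = recursive_kde(stream, grid,
                      learning_rate=lambda n: 1.0 / n,  # classical averaging
                      bandwidth=lambda n: n ** -0.2)    # shrinking bandwidth
```

For drifting environments, a learning rate that decays more slowly than 1/n (or stays constant) lets the estimate forget old data, which is the trade-off the paper's convergence theorems quantify.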
4. New DY-HS hybrid conjugate gradient algorithm for solving optimization problem of unsteady partial differential equations with convection term.
- Author
-
Yu, Yang, Wang, Yu, Deng, Rui, and Yin, Yu
- Subjects
- PARTIAL differential equations, TRANSPORT equation, LIPSCHITZ continuity, COST functions, CONTINUOUS processing - Abstract
This paper studies an optimization problem for unsteady partial differential equations (PDEs) with a convection term, widely used in the continuous casting process. Considering the change of casting speed, a dynamic optimization method based on a new DY-HS hybrid conjugate gradient algorithm (DY-HSHCGA) is proposed. In the DY-HSHCGA, the Dai–Yuan and Hestenes–Stiefel conjugate gradient algorithms are convexly combined, and a new conjugate parameter θk is obtained through the condition of the quasi-Newton direction. Moreover, Lipschitz continuity of the gradient of the cost function, an important condition for convergence, is analyzed in this paper. On the basis of this condition, the global convergence of DY-HSHCGA is proved. Finally, the effectiveness of DY-HSHCGA is verified by some instances from the steel plant. Compared with other algorithms, DY-HSHCGA obviously accelerates the convergence rate and reduces the number of iterations. The optimizer based on the DY-HSHCGA shows more stable results. • The optimization problem of unsteady PDEs is investigated. • A new DY-HS hybrid conjugate gradient algorithm (DY-HSHCGA) is presented. • The global convergence of the DY-HSHCGA is analyzed. • The Lipschitz continuity of the gradient of the cost function is proved. • The effectiveness of DY-HSHCGA is verified by experimental simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
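The convex combination of the Dai–Yuan and Hestenes–Stiefel parameters described above can be sketched directly; here the combination weight `theta` is passed in as an assumption, whereas the paper derives θk from a quasi-Newton-direction condition.

```python
import numpy as np

def hybrid_dy_hs_beta(g_new, g_old, d_old, theta):
    """Convex combination beta = theta*beta_DY + (1-theta)*beta_HS,
    theta in [0, 1].  DY and HS share the denominator d_{k-1}^T y_{k-1}
    with y_{k-1} = g_k - g_{k-1}."""
    y = g_new - g_old            # gradient difference
    denom = d_old @ y            # shared DY/HS denominator
    beta_dy = (g_new @ g_new) / denom
    beta_hs = (g_new @ y) / denom
    return theta * beta_dy + (1 - theta) * beta_hs

# illustrative vectors (assumed, with d_old @ y > 0)
beta = hybrid_dy_hs_beta(np.array([1.0, 2.0]), np.array([0.0, 1.0]),
                         np.array([2.0, 1.0]), theta=0.5)
```

The search direction is then `d_new = -g_new + beta * d_old`, exactly as in a plain CG step; only the β computation changes.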
5. Solve the relaxed OPF problem by generalized gradient smooth Newton methods.
- Author
-
Feng, Jian, Hu, Xu, and Song, Weizhao
- Subjects
- GLOBAL optimization, ELECTRICAL load, PARAMETER estimation, NEWTON-Raphson method, NONLINEAR equations - Abstract
The optimal power flow (OPF) problem is a nonlinear nonconvex optimization problem. Algorithms based on traditional optimization methods face three challenges when solving it, namely global convergence, solution efficiency, and global optimization. For complex and large-scale power systems, global optimization is more difficult to achieve than global convergence and solution efficiency. Therefore, this paper proposes a method for finding the global optimal solution to the OPF problem. This method first transforms the OPF problem into a relaxed OPF problem. Then, by using the transformation function max{⋅}, the relaxed OPF problem is decomposed into two subproblems. Finally, the global optimal value and solution of the OPF problem are obtained by solving the two subproblems using the generalized gradient Newton method and the smooth Newton method, respectively. In the numerical experiments, we compare the proposed method with the interior-point method, the commercial solver SNOPT, NLP(KNITRO), and ESLP2. The results show that, at the same order of magnitude of computational efficiency, the method proposed in this paper has more advantages. • The OPF problem is relaxed into an optimization problem with only inequalities. • Estimation parameters are introduced to estimate the globally optimal objective value of the OPF problem. • The OPF problem is decomposed into two subproblems: single-variable root finding and box constraints only. • Sufficient and necessary conditions for the global optimal value of the OPF problem are provided. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
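Decomposition via the max{⋅} transformation relies on smoothing the non-differentiable max so that a smooth Newton method applies; one standard choice (an assumption here, not necessarily the paper's) is the CHKS smoothing function.

```python
import numpy as np

def smooth_max(a, b, mu):
    """CHKS-type smoothing of max{a, b}: differentiable for mu > 0 and
    converging to max{a, b} as mu -> 0.  Smoothing the non-smooth max{.}
    is what makes a smooth Newton method applicable."""
    return 0.5 * (a + b + np.sqrt((a - b) ** 2 + 4.0 * mu ** 2))
```

A smooth Newton method then drives mu toward zero while applying Newton steps to the smoothed equations, recovering the original nonsmooth system in the limit.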
6. A projection-based hybrid PRP-DY type conjugate gradient algorithm for constrained nonlinear equations with applications.
- Author
-
Li, Dandan, Wang, Songhua, Li, Yong, and Wu, Jiaqi
- Subjects
- CONJUGATE gradient methods, BURST noise, IMAGE reconstruction, ALGORITHMS, BENCHMARK problems (Computer science), NONLINEAR equations - Abstract
Based on the convex combination technique, we propose a projection-based hybrid conjugate gradient algorithm for solving nonlinear equations with convex constraints in this paper. The conjugate parameter of the proposed algorithm is a convex combination of the modified Polak-Ribière-Polyak and Dai-Yuan type conjugate parameters, and the search direction has the sufficient descent property without the use of a line search strategy. The proposed hybrid algorithm's global convergence is established under appropriate assumptions. Numerical experiments demonstrate that the proposed algorithm is more efficient and competitive than existing methods on some benchmark test problems. Furthermore, it is also extended to solve the sparse signal and impulse noise image restoration problems that arise in compressive sensing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
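The projection step such methods share can be sketched with a Solodov–Svaiter-type hyperplane projection; the plain residual direction and the box example below are illustrative assumptions, standing in for the paper's hybrid PRP-DY direction.

```python
import numpy as np

def projection_method(F, project, x0, max_iter=500, tol=1e-8,
                      beta=0.5, sigma=1e-4):
    """Hyperplane-projection scheme for F(x) = 0 over a convex set:
    backtrack along a direction to a trial point z, then project the
    iterate onto the hyperplane separating it from the solution set,
    and finally back onto the constraint set via `project`."""
    x = x0.astype(float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx                                 # simple residual direction
        t = 1.0                                 # backtracking line search
        while -F(x + t * d) @ d < sigma * t * (d @ d):
            t *= beta
        z = x + t * d                           # trial point
        Fz = F(z)
        # project onto the separating hyperplane, then onto the set
        x = project(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz)
    return x

# hypothetical monotone example: F(x) = x - c over the box [0, 2]^2
c = np.array([0.5, 1.5])
sol = projection_method(lambda x: x - c,
                        lambda x: np.clip(x, 0.0, 2.0),
                        np.array([2.0, 0.0]))
```

The hyperplane projection is what delivers global convergence without Lipschitz-type assumptions; the conjugate-gradient machinery only changes the direction `d`.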
7. A truncated three-term conjugate gradient method with complexity guarantees with applications to nonconvex regression problem.
- Author
-
Hu, Qingjie, Zhu, Liping, Chang, Cuili, and Zhang, Wenqi
- Subjects
- CONJUGATE gradient methods - Abstract
In this paper, a truncated three-term conjugate gradient method is proposed for nonconvex unconstrained optimization. At each iteration, the search direction generated by this method satisfies the sufficient descent condition independent of any line search. We investigate its global convergence with the Wolfe line search under standard assumptions. Moreover, we present a complexity analysis for objective functions with Lipschitz continuous gradients under the Armijo line search, which implies that the proposed method is also globally convergent with the Armijo line search. A remarkable point about its complexity is that the proposed method possesses the same complexity order as the classical gradient descent method, without any restart strategy. Finally, the proposed method is applied to solve nonconvex robust regression problems. Numerical results show that the proposed method is efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
8. Nonconvex Rician noise removal via convergent plug-and-play framework.
- Author
-
Wei, Deliang, Weng, Shiyang, and Li, Fang
- Subjects
- IMAGE processing - Abstract
• We propose a new plug-and-play deep neural network method to remove Rician noise. • We prove some mathematical properties and the convergence of the proposed method. • Experimental results show that the proposed method outperforms existing methods. Restoring images corrupted by Rician noise is a challenging issue in medical image processing. Among existing methods, model-driven methods cannot recover the images well, and learning-based methods lack good interpretability. In this paper, we propose a plug-and-play (PnP) method to remove Rician noise. Due to the statistical properties of the Rician distribution and the implicit deep image priors, the problem is non-convex. We present a convergent PnP method that addresses these issues via an adaptively relaxed alternating direction method of multipliers. Theoretically, we establish some useful mathematical properties and the global linear convergence of the proposed method under an adaptive relaxation strategy. Experimental results show that the proposed method outperforms existing state-of-the-art traditional and learning-based methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
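The plug-and-play skeleton the paper builds on can be sketched as a generic PnP-ADMM loop; the quadratic data term and shrinkage "denoiser" below are toy assumptions standing in for the Rician likelihood and the deep denoiser.

```python
import numpy as np

def pnp_admm(y, data_prox, denoiser, rho=1.0, n_iter=50):
    """Generic plug-and-play ADMM: alternate a data-fidelity proximal
    step, a denoiser standing in for the prior's proximal operator, and
    a (scaled) dual update.  The paper's method is an adaptively relaxed
    variant of this skeleton."""
    x = y.copy()
    v = y.copy()
    u = np.zeros_like(y)           # scaled dual variable
    for _ in range(n_iter):
        x = data_prox(v - u, rho)  # proximal step on the data term
        v = denoiser(x + u)        # plugged-in denoiser replaces the prior prox
        u = u + x - v              # dual ascent
    return x

# toy instance: data term 0.5*||x - y||^2 with a soft-shrinkage "denoiser"
y = np.array([3.0, -2.0, 0.1])
x_hat = pnp_admm(
    y,
    data_prox=lambda z, rho: (y + rho * z) / (1.0 + rho),
    denoiser=lambda z: np.sign(z) * np.maximum(np.abs(z) - 0.1, 0.0),
)
```

With this toy pair the loop solves an ℓ1-regularized least-squares problem, so the fixed point is simply the soft-thresholded data — a convenient sanity check for the skeleton.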
9. An adaptive modified three-term conjugate gradient method with global convergence.
- Author
-
Amini, Keyvan and Faramarzi, Parvaneh
- Subjects
- CONJUGATE gradient methods, EIGENVALUES, ALGORITHMS - Abstract
In this paper, an adaptive three-term conjugate gradient method is proposed based on the Perry family by using a modified secant condition. The new method depends on a positive parameter that plays a critical role in the efficiency of the algorithm. We propose an adaptive choice for this parameter to control the condition number of the direction matrix by minimizing the distance between its smallest and largest eigenvalues. Global convergence of the new algorithm is proved for general functions under standard assumptions. Experimental results verify the expected numerical behavior of our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. Multi-step inertial strictly contractive PRSM algorithms for convex programming problems with applications.
- Author
-
Deng, Zhao and Han, Deren
- Subjects
- CONVEX programming, ALGORITHMS, COMPUTED tomography - Abstract
The Peaceman–Rachford splitting method (PRSM) has been widely studied in recent years because of its excellent numerical performance, especially with inertial acceleration. In this paper, we propose a multi-step inertial strictly contractive PRSM (MISCPRSM for short) for solving convex optimization problems, which has not been studied before. This paper aims to give an exhaustive analysis of the value relationship among the multiple inertial parameters. The second subproblem utilizes an additional customized indefinite proximal term in order to obtain a closed-form solution. The global convergence and complexity results of the introduced method are analyzed by using variational inequalities and basic inequalities. Finally, through simulation and computed tomography experiments, the effectiveness and robustness of the MISCPRSM are demonstrated, and detailed values of the inertial parameters are given. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
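The strictly contractive PRSM base scheme — two multiplier updates per sweep, with a relaxation factor α ∈ (0, 1) — can be shown on a one-dimensional toy problem; the multi-step inertial terms and the indefinite proximal term of MISCPRSM are omitted here.

```python
def scprsm(beta=1.0, alpha=0.8, n_iter=200):
    """Strictly contractive PRSM on the toy problem
        min 0.5*(x-1)^2 + 0.5*(z-3)^2  s.t.  x = z,
    whose solution is x = z = 2.  Unlike ADMM, the multiplier is updated
    twice per sweep, each time damped by alpha in (0, 1), which is what
    makes the iteration strictly contractive."""
    x = z = lam = 0.0
    for _ in range(n_iter):
        x = (1.0 + lam + beta * z) / (1.0 + beta)   # x-subproblem (closed form)
        lam -= alpha * beta * (x - z)               # first multiplier update
        z = (3.0 - lam + beta * x) / (1.0 + beta)   # z-subproblem (closed form)
        lam -= alpha * beta * (x - z)               # second multiplier update
    return x, z

x_hat, z_hat = scprsm()
```

Setting α = 1 recovers the original (not necessarily convergent) PRSM; the α < 1 damping is the "strictly contractive" ingredient the inertial extensions build on.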
11. From ℓ1 subgradient to projection: A compact neural network for ℓ1-regularized logistic regression.
- Author
-
Mohammadi, Majid, Atashin, Amir Ahooye, and Tamburri, Damian A.
- Subjects
- LOGISTIC regression analysis, FEATURE selection, MATHEMATICAL optimization - Abstract
ℓ1 regularization has been used for logistic regression to circumvent overfitting and to use the estimated sparse coefficients for feature selection. However, the challenge of such regularization is that the ℓ1 term is not differentiable, making standard convex optimization algorithms inapplicable to this problem. This paper presents a simple projection neural network for ℓ1-regularized logistic regression. In contrast to many available solvers in the literature, the proposed neural network does not require any extra auxiliary variable or smooth approximation, and its complexity is almost identical to that of gradient descent for logistic regression without ℓ1 regularization, thanks to the projection operator. We also investigate the convergence of the proposed neural network by using Lyapunov theory and show that it converges to a solution of the problem from any arbitrary initial value. The proposed neural solution significantly outperforms state-of-the-art methods with respect to execution time and is competitive in terms of accuracy and AUROC. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
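The projection/shrinkage idea can be illustrated in discrete time with a proximal-gradient (ISTA) baseline; this is an assumption-laden stand-in, since the paper's method is a continuous-time projection neural network, and the data here are synthetic.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def l1_logreg_ista(X, y, lam, step, n_iter=2000):
    """Proximal-gradient (ISTA) baseline for l1-regularized logistic
    regression: a gradient step on the mean logistic loss followed by
    soft-thresholding, the proximal operator of lam*||.||_1."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n            # gradient of mean logistic loss
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # shrinkage
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])    # sparse ground truth
y = (sigmoid(X @ w_true) > rng.uniform(size=200)).astype(float)
w_hat = l1_logreg_ista(X, y, lam=0.05, step=0.5)
```

The soft-thresholding step is exactly the projection-like nonsmooth operation the network implements in continuous time, which is why no smooth approximation of the ℓ1 term is needed.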
12. A nonconvex function activated noise-tolerant neurodynamic model aided with Fischer-Burmeister function for time-varying quadratic programming in the presence of noises.
- Author
-
Sun, Yingyi, Cao, Jianmin, Sun, Zhongbo, and Tang, Shijun
- Subjects
- RECURRENT neural networks, QUADRATIC programming, NOISE, NONLINEAR functions - Abstract
• Two ZNN-type models are proposed for TVQPPs with equality and inequality constraints, which removes the convexity condition on the activation function. • A universal GNTZNNM-NAF framework is reconstructed from a control-based perspective for TVQPPs with equality and inequality constraints under different noises. • Global convergence and strong robustness of the ZNN-type models are analyzed in detail. • Theoretical results demonstrate that the residual errors of the ZNN-type models globally converge to zero even under different measurement noises. Time-varying quadratic programming problems (TVQPPs) with equality and inequality constraints often arise in the fields of scientific computation and engineering application. The zeroing neural network (ZNN), a special kind of recurrent neural network, has shown powerful capabilities for computing a variety of zero-finding problems with a monotonically increasing odd activation function. However, projection sets of nonconvex activation functions are obviously excluded, which means a general conclusion remains unexplored. In addition, noises are ubiquitous in actual applications. Nevertheless, most existing ZNN-based models assume that the solving process is free of noise before the calculation. In this paper, a general zeroing neural network model with nonconvex activation function (GZNNM-NAF) and a general noise-tolerant zeroing neural network model with nonconvex activation function (GNTZNNM-NAF), both viewed as ZNN-type models, are developed by the inspiration of the traditional ZNN model from a control-based perspective. The ZNN-type models break the limitation of the traditional ZNN models on the activation function, allowing nonconvex sets for projection operations and combining a nonlinear complementarity function to deal with the inequality constraints arising in TVQPPs. Moreover, theoretical results indicate that the ZNN-type models globally converge to the time-varying optimal solution of TVQPPs with equality and inequality constraints under noisy circumstances. According to the different cases of nonconvex activation functions, robustness analyses are demonstrated in detail for TVQPPs. This may enlarge the scope of the proposed method, especially in the field of practical application. Finally, a numerical example and an application example on manipulator motion generation are analyzed to verify the superiority and robustness of the developed ZNN-type models for TVQPPs with different measurement noises. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. A partition-based convergence framework for population-based optimization algorithms.
- Author
-
Li, Xinxin, Hua, Shuai, Liu, Qunfeng, and Li, Yun
- Subjects
- MEMETICS, MATHEMATICAL optimization, PARTICLE swarm optimization, GLOBAL optimization, GENETIC algorithms, DIFFERENTIAL evolution - Abstract
• A framework is proposed for the convergence of population-based optimization algorithms. • DIRECT's partition and population evolutions are repeated in this framework. • The global convergence of the framework is proved. • The framework is applied successfully to three population-based optimization algorithms. Population-based optimization algorithms, such as the genetic algorithm and particle swarm optimization, have become a class of important algorithms for solving global optimization problems. However, global convergence is often absent for most of them. This paper proposes a partition-based convergence framework for population-based optimization algorithms to address this problem. In this framework, regular partitions and evolutions of populations are implemented alternately. Specifically, the initial population is generated from a regular partition of the search space; after several generations of evolution of the population, the evolution result is returned to join the regular partition again, and a new population is generated. This process is repeated until some stopping condition is satisfied. Global convergence is guaranteed for the framework. This convergence framework is then applied to particle swarm optimization, differential evolution, and the genetic algorithm. The modified algorithms are globally convergent and perform better than the original versions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. A Riemannian gradient ascent algorithm with applications to orthogonal approximation problems of symmetric tensors.
- Author
-
Sheng, Zhou, Yang, Weiwei, and Wen, Jie
- Subjects
- PROBLEM solving - Abstract
In this paper, we propose an ascent algorithm that exploits the Riemannian gradient to solve optimization problems with orthogonality constraints. It applies to orthogonal approximation problems of symmetric tensors. The iteration possesses some nice properties and has the same expression as the polar decomposition. We establish the global convergence of our algorithm. Preliminary results show the computational advantage of our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
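The remark that the iterate has the same expression as the polar decomposition can be illustrated: after an ambient ascent step, the polar factor from a thin SVD retracts back to the set of matrices with orthonormal columns. The matrices below are illustrative assumptions.

```python
import numpy as np

def polar_retraction(X, G, step):
    """One gradient-ascent-style step followed by the polar retraction:
    the polar factor of Y = X + step*G is U @ Vt from the thin SVD
    Y = U S Vt, i.e. the nearest matrix with orthonormal columns."""
    Y = X + step * G
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

X = np.eye(3)[:, :2]                     # 3x2 matrix with orthonormal columns
G = np.array([[0.0, 0.1], [0.2, 0.0], [0.3, -0.1]])   # assumed ascent direction
X_next = polar_retraction(X, G, step=1.0)
```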
15. Neurodynamic approaches for sparse recovery problem with linear inequality constraints.
- Author
-
Yang, Jiao, He, Xing, and Huang, Tingwen
- Subjects
- DISTRIBUTED algorithms, SCALABILITY - Abstract
This paper develops two neurodynamic approaches for solving the L1-minimization problem with linear inequality constraints. First, a centralized neurodynamic approach is proposed based on a projection operator and the nonnegative quadrant. The stability and global convergence of the centralized neurodynamic approach are analyzed in detail by the Lyapunov method. Considering that distributed optimization has the advantages of information protection and scalability, the L1-minimization problem with linear inequality constraints is transformed into a distributed sparse optimization problem under mild conditions. Then, using the centralized neurodynamic approach and multi-agent consensus theory, a distributed neurodynamic approach is proposed for the distributed optimization problem. Furthermore, theoretical results show that each agent globally converges to an optimal solution of the distributed optimization problem. Finally, the presented centralized neurodynamic approach is applied to the sparse recovery problem with L∞-norm noise constraints, and the effectiveness of the distributed approach is shown by several experiments on sparse signal recovery. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
16. Splitting augmented Lagrangian-type algorithms with partial quadratic approximation to solve sparse signal recovery problems.
- Author
-
Jian, Jinbao, Huang, Qiongxuan, Yin, Jianghua, and Zhang, Wei
- Subjects
- SPARSE approximations, NONSMOOTH optimization, SMOOTHNESS of functions, ALGORITHMS, MACHINE learning - Abstract
In this paper, we focus on two-block nonconvex and nonsmooth optimization problems with linear constraints, where the objective function is the sum of a convex but nonsmooth function and a smooth but nonconvex function. This problem encompasses many important applications in engineering and machine learning. Based on the splitting algorithm, making use of a quadratic approximation of the smooth part, and with the help of the Armijo line search technique, we first propose a splitting augmented Lagrangian method with partial quadratic approximation. Under some mild conditions, the global convergence of the proposed method is proved. Moreover, when the gradient of the smooth part of the objective function is Lipschitz continuous, we propose and analyze a variant of the aforementioned method that does not perform a line search. We report some preliminary numerical results on solving nonconvex quadratic regularization problems to show the feasibility and effectiveness of the two proposed methods. Finally, we show the applicability and encouraging efficiency of our methods by applying them to sparse signal recovery problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. An efficient trust region algorithm with bounded iteration sequence for unconstrained optimization and its application in support vector machine.
- Author
-
Yu, Zhensheng, Yuan, Yue, and Tian, Panjie
- Subjects
- SUPPORT vector machines, ALGORITHMS - Abstract
In this paper, we propose an efficient trust region algorithm for unconstrained optimization problems. Different from the traditional trust region algorithm, our algorithm obtains the next iteration point by projection after computing the trust region trial step. The main feature of this algorithm is that it generates a bounded iteration sequence automatically while retaining the excellent convergence properties. Numerical tests on support vector machines (SVM) and some unconstrained optimization test-set problems show the efficiency of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Computational issues in biaxial bending capacity assessment of RC and composite cross-sections exposed to fire.
- Author
-
Chiorean, Cosmin G.
- Subjects
- NEWTON-Raphson method, ULTIMATE strength, REINFORCED concrete, CAPACITY (Law), BENDING strength, ECCENTRIC loads - Abstract
• Biaxial bending strength capacity assessment of composite sections exposed to fire. • A globally convergent strain-driven iterative search methodology has been developed. • A computationally effective adaptive plastic centroid (APC) has been introduced. • Non-convexity of nominal moment-capacity contours associated with strain-softening. • New 3D αN-isogonic vertical interaction diagrams are identified. This paper introduces an advanced computational method for assessing the biaxial bending capacities of arbitrarily shaped reinforced concrete and steel–concrete composite cross-sections under fire conditions. The proposed approach involves a strain-driven iterative method coupled with an adaptive plastic centroid, providing a "fail-safe" methodology by combining the bisection and damped Newton methods to improve its global convergence properties. Several key computational issues are addressed: (1) strength assessment criteria and their impact on computational outcomes, (2) the pathological behavior of locally convergent iterative methods causing divergence or spurious solutions in stress-resultant space, (3) the softening behavior of concrete in compression affecting solution uniqueness and interaction-diagram convexity, (4) non-planar vertical interaction diagrams induced by a mobile centroid, and (5) computational challenges related to solution non-uniqueness or non-existence in M-M stress-resultant space when the axial force falls outside the iso-load contour. An additional notable feature and novelty of the proposed method lies in its unique capability to assess true plane vertical interaction diagrams, enabling both ultimate and nominal strength assessment. Validation includes comparisons with other numerical results and experimental data from the international literature, extending the benchmark results for the strength capacity assessment of composite cross-sections exposed to high temperatures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
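The "fail-safe" combination of bisection with a damped Newton method can be sketched for a scalar equation; this generic safeguarded root finder illustrates the idea, not the paper's strain-driven implementation.

```python
def safeguarded_newton(f, df, a, b, tol=1e-12, max_iter=100):
    """Fail-safe scalar root finder: a Newton iterate is accepted only
    when it stays inside the current bracket [a, b]; otherwise the step
    falls back to bisection, so the bracket always shrinks and global
    convergence cannot be lost."""
    fa, fb = f(a), f(b)
    assert fa * fb <= 0, "root must be bracketed"
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # maintain the bracket around the sign change
        if fa * fx <= 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        dfx = df(x)
        x_newton = x - fx / dfx if dfx != 0 else None
        # safeguarded step: bisection whenever Newton would leave [a, b]
        if x_newton is None or not (a < x_newton < b):
            x = 0.5 * (a + b)
        else:
            x = x_newton
    return x

root = safeguarded_newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 0.0, 2.0)
```

The combination keeps bisection's guaranteed convergence while recovering Newton's fast local rate once the iterate is near the root — the same trade-off the paper exploits for the strain-driven search.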
19. Two types of anti-noise integral enhanced recurrent neural networks for solving time-varying complex quadratic programming.
- Author
-
Song, Yating, Ren, Xiaohui, Zheng, Lunan, and Zhang, Zhijun
- Subjects
- QUADRATIC programming, COMPLEX numbers, REAL numbers, ERROR functions, PROBLEM solving, RECURRENT neural networks - Abstract
The time-varying complex quadratic programming problem plays an important role in scientific research and engineering applications, and has received extensive attention. In this paper, by solving the problem in different number domains, two types of novel anti-noise integral enhanced recurrent neural networks (IERNNs) are proposed and analyzed. Specifically, the IERNN-A model solves the problem in the complex number domain, while the IERNN-B model solves it by converting the problem from the complex number domain into the real number domain. Both the IERNN-A and IERNN-B follow the same design paradigm: an integral-type error function is first defined to obtain good anti-noise capability, and then substituted into a differential-driven design formula to ensure convergence. Theoretical analysis and simulation results verify the global convergence and robustness of the proposed IERNNs. Furthermore, if a linear activation is used, an exponential convergence rate is obtained. Meanwhile, when time-varying bounded noise exists, the IERNNs are input-to-state stable. In addition, compared with other state-of-the-art methods, the proposed methods have faster convergence speed, stronger noise-suppression ability, and no overshoot, which is verified by the simulation and application of manipulator trajectory tracking. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. An accelerated relaxed-inertial strategy based CGP algorithm with restart technique for constrained nonlinear pseudo-monotone equations to image de-blurring problems.
- Author
-
Jiang, Xianzhen and Huang, Zefeng
- Subjects
- CONJUGATE gradient methods, LIPSCHITZ continuity, ALGORITHMS, LOGISTIC regression analysis, NONLINEAR equations - Abstract
In this paper, an accelerated spectral conjugate gradient projection algorithm is proposed for solving constrained nonlinear pseudo-monotone equations. We set a restart procedure related to the conjugate parameter in the search direction of the spectral conjugate gradient method. We use an accelerated gradient descent scheme to generate the spectral parameter. Finally, we adopt the relaxed-inertial strategy to accelerate the algorithm's iteration process. Global convergence is proved under mild conditions without the Lipschitz continuity assumption. Numerical comparisons with other methods show the performance of our algorithm for solving large-scale equations. Our applications are to regularized decentralized logistic regression and image de-blurring problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. A hybrid BFGS-Like method for monotone operator equations with applications.
- Author
-
Abubakar, A.B., Kumam, P., Mohammad, H., Ibrahim, A.H., Seangwattana, T., and Hassan, B.A.
- Subjects
- OPERATOR equations, NONLINEAR operators, MAP projection, NONLINEAR equations, CONJUGATE gradient methods - Abstract
In this paper, a hybrid three-term conjugate gradient (CG) method is proposed to solve constrained nonlinear monotone operator equations. The search direction is computed such that it is close to the direction obtained by the memoryless Broyden–Fletcher–Goldfarb–Shanno (BFGS) method. Without any condition, the search direction satisfies the sufficient descent property and is bounded. Moreover, under some conditions, the search direction satisfies the conjugacy condition without using any line search. The global convergence of the method is established under mild assumptions. Comparisons with existing methods are made to test the efficiency of the proposed method through some numerical experiments. Lastly, the applicability of the proposed method is shown. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
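The memoryless BFGS direction that the hybrid CG method stays close to can be computed with inner products only, never an explicit matrix; the vectors in the demo are arbitrary assumptions with sᵀy > 0.

```python
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    """Search direction d = -H g, where H is the memoryless BFGS matrix
    built from the last step s = x_k - x_{k-1} and the gradient
    difference y = g_k - g_{k-1}:
      H = I - (s y^T + y s^T)/(s^T y) + (1 + y^T y/(s^T y)) s s^T/(s^T y)."""
    sy = s @ y
    return (-g
            + ((g @ y) * s + (g @ s) * y) / sy
            - (1.0 + (y @ y) / sy) * (g @ s) / sy * s)

g = np.array([1.0, 2.0, 3.0])   # current gradient (assumed)
s = np.array([1.0, 0.0, 1.0])   # last step (assumed)
y = np.array([0.5, 0.2, 1.0])   # gradient difference (assumed, s @ y > 0)
d = memoryless_bfgs_direction(g, s, y)
```

Since H satisfies the secant equation H y = s, feeding g = y must return −s, and positive definiteness of H (when sᵀy > 0) makes d a descent direction — both are checked below.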
22. Global convergence of a descent PRP type conjugate gradient method for nonconvex optimization.
- Author
-
Hu, Qingjie, Zhang, Hongrun, and Chen, Yu
- Subjects
- CONJUGATE gradient methods, MEMORY - Abstract
Nonlinear conjugate gradient methods are often used to solve large-scale unconstrained optimization problems owing to their low memory requirements and computational cost. In this paper, we propose a new Polak-Ribière-Polyak type conjugate gradient method, which satisfies the sufficient descent condition independent of any line search. A remarkable property of this method is its strong global convergence with the standard Wolfe line search conditions as well as the standard Armijo line search strategy, without a convexity assumption on the objective function. Numerical results demonstrating the efficiency of the proposed method are reported. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. Inertial extragradient type method for mixed variational inequalities without monotonicity.
- Author
-
Jolaoso, Lateef O., Shehu, Yekini, and Yao, Jen-Chih
- Subjects
- VARIATIONAL inequalities (Mathematics), LITERATURE, COST - Abstract
In this paper, we propose two inertial projection methods to solve mixed variational inequalities without assuming monotonicity of the cost operator. Consequently, we show that the sequence of iterates generated by our proposed methods converges globally to a solution of the mixed variational inequality under the condition that the set of solutions of the dual mixed variational inequality is nonempty. Our convergence results are obtained without any of the more stringent conditions imposed in other related results in the literature. Moreover, our results improve on many important related results in the literature. Some numerical illustrations are given to show the efficiency of our proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
24. Linearized generalized ADMM-based algorithm for multi-block linearly constrained separable convex programming in real-world applications.
- Author
-
He, Jian, Li, Jinlin, Lu, Zhenrong, and Zhang, Bangzhong
- Subjects
- *
CONVEX programming , *ALGORITHMS - Abstract
We study a multi-block separable convex optimization problem with linear constraints, where the objective function is the sum of m individual convex functions without overlapping variables. The linearized version of the generalized alternating direction method of multipliers (L-GADMM) is particularly efficient for the two-block separable convex programming problem, and its convergence was proved when the two blocks of variables are updated alternately. However, the convergence analysis and practical applications of the extended L-GADMM (m ≥ 3) are in their early stages. In this paper, we extend this algorithm to the general case where the objective function consists of the sum of m-block convex functions. Theoretically, we prove global convergence of the new method and establish worst-case convergence rates in the ergodic and nonergodic senses. The efficiency of the new method is further demonstrated through numerical results on the calibration of correlation matrices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
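For context, the basic two-block ADMM that L-GADMM generalizes can be illustrated on a tiny separable problem, min 0.5‖x − a‖² + τ‖z‖₁ subject to x − z = 0, whose closed-form solution is the soft-thresholding of a. This is a hedged sketch with penalty ρ = 1; the toy problem and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_denoise(a, tau, rho=1.0, n_iter=300):
    """Two-block ADMM for  min 0.5||x-a||^2 + tau*||z||_1  s.t.  x - z = 0.
    The known solution soft(a, tau) lets the iterate be checked."""
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)                         # scaled dual variable
    for _ in range(n_iter):
        x = (a + rho * (z - u)) / (1.0 + rho)    # x-update: quadratic prox
        z = soft(x + u, tau / rho)               # z-update: l1 prox
        u = u + x - z                            # dual ascent step
    return x
```

Each block update is a cheap proximal step; the multi-block and linearized variants discussed above keep this per-block structure while changing how the subproblems are approximated.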
25. An efficient modified residual-based algorithm for large scale symmetric nonlinear equations by approximating successive iterated gradients.
- Author
-
Guo, Jie and Wan, Zhong
- Subjects
- *
CONJUGATE gradient methods , *NONLINEAR equations , *LARGE scale systems , *JACOBIAN matrices , *ALGORITHMS , *APPROXIMATION error - Abstract
In this paper, a new descent approximate modified residual algorithm is developed to solve large-scale systems of nonlinear symmetric equations, where the basic strategy for improving numerical performance is to approximately compute the gradients and the differences of gradients. Error bounds for this approximation are presented, and by virtue of it, a conjugate gradient algorithm for large-scale optimization problems in the literature is extended to solve systems of nonlinear symmetric equations without the need to compute or store the Jacobian matrices or their approximations. It is proved that the search directions obtained in the developed algorithm are sufficient descent directions with respect to the so-called approximate modified residuals. Under mild assumptions, global and local convergence results for the developed algorithm are proved. Numerical tests indicate that the developed algorithm outperforms similar ones available in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Convergence properties of Levenberg–Marquardt methods with generalized regularization terms.
- Author
-
Ariizumi, Shumpei, Yamakawa, Yuya, and Yamashita, Nobuo
- Subjects
- *
NONLINEAR equations , *LINEAR equations , *DIFFERENTIABLE functions , *MATHEMATICAL regularization , *GENERALIZATION , *EQUATIONS - Abstract
Levenberg–Marquardt methods (LMMs) are the most typical algorithms for solving nonlinear equations F(x) = 0, where F : R^n → R^m is a continuously differentiable function. They sequentially solve subproblems, formed as the squared residual of the Newton equations with L2 regularization, to determine the search direction. However, since the subproblems of the LMMs are usually reduced to linear equations in n variables, solving them is expensive when m ≪ n. In this paper, we propose a new LMM that generalizes the L2 regularization in the subproblems of the ordinary LMMs. By virtue of this generalization, a suitable regularization term can be chosen for each given problem. Moreover, we show that a sequence generated by the proposed method converges globally and quadratically under some reasonable assumptions. Finally, we conduct numerical experiments to confirm that the proposed method performs better than existing LMMs on some problems satisfying m ≪ n. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
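The ordinary L2-regularized subproblem that the paper above generalizes can be sketched concretely: at each iterate, solve (JᵀJ + λI)d = −JᵀF with an adaptive λ. A minimal sketch follows, using the common choice λ_k = μ‖F(x_k)‖² and no line search; the test system is hypothetical, and a robust implementation would add globalization safeguards.

```python
import numpy as np

def levenberg_marquardt(F, J, x0, mu=1e-2, tol=1e-10, max_iter=100):
    """Classical LM iteration for F(x) = 0 with the standard L2-regularized
    subproblem (J^T J + lam*I) d = -J^T F,  lam = mu * ||F(x)||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = J(x)
        lam = mu * (Fx @ Fx)                    # regularization vanishes at a root
        n = x.size
        # Solve the L2-regularized normal equations for the search direction
        d = np.linalg.solve(Jx.T @ Jx + lam * np.eye(n), -Jx.T @ Fx)
        x = x + d
    return x
```

Near a solution λ → 0 and the step reduces to Gauss–Newton, which is where the local quadratic convergence discussed above comes from.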
27. A distributed neurodynamic algorithm for sparse signal reconstruction via [formula omitted]-minimization.
- Author
-
Han, Xin, He, Xing, and Ju, Xingxing
- Subjects
- *
SIGNAL reconstruction , *DISTRIBUTED algorithms , *MATRIX decomposition , *IMAGE reconstruction , *ALGORITHMS - Abstract
This paper proposes a distributed neurodynamic algorithm for sparse signal reconstruction via ℓ1-minimization. First, the ℓ1-minimization problem is transformed into a distributed model with the support of multi-agent consensus theory. Second, to address this distributed model, a novel distributed neurodynamic algorithm is proposed that employs derivative feedback and a projection operator. The proposed algorithm is proved to be globally convergent by utilizing properties of the projection operator and the set-valued system. Moreover, compared with existing distributed neurodynamic algorithms, the proposed one is inverse-free and does not involve any matrix decomposition. Finally, experimental results on sparse signal reconstruction indicate the effectiveness of the proposed neurodynamic algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. A modified PRP-type conjugate gradient algorithm with complexity analysis and its application to image restoration problems.
- Author
-
Chen, Yu, Kuang, Kai, and Yan, Xueling
- Subjects
- *
IMAGE reconstruction , *CONJUGATE gradient methods , *ALGORITHMS - Abstract
In this paper, a modified conjugate gradient method is proposed for nonconvex optimization. The method possesses the sufficient descent property independently of any line search. Its global convergence is established under either the Wolfe line search strategy or the Armijo line search condition. Additionally, the complexity of the proposed algorithm is analyzed: to reach a point with gradient norm below ɛ, the worst-case complexity bound matches that of the gradient method. Moreover, the numerical results demonstrate that the modified method is effective for large-scale optimization problems and image restoration problems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. A fast inertial self-adaptive projection based algorithm for solving large-scale nonlinear monotone equations.
- Author
-
Zhang, N., Liu, J.K., Zhang, L.Q., and Lu, Z.L.
- Subjects
- *
NONLINEAR equations , *CONJUGATE gradient methods , *ALGORITHMS , *JACOBIAN matrices , *PROBLEM solving - Abstract
In this paper, we propose a fast inertial self-adaptive projection-based algorithm with a relaxation factor for solving large-scale monotone equations. The modified hyperplane projection technique accelerates the computational performance of the proposed algorithm, and the self-adaptive parameter enhances its stability. Under some appropriate assumptions, the global convergence of the proposed algorithm is established. Moreover, the proposed algorithm is well suited to large-scale problems because it needs neither gradient information nor the Jacobian matrix. The numerical results indicate that the new algorithm is robust and efficient in comparison with other popular algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
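The hyperplane projection technique behind derivative-free methods of this kind goes back to Solodov and Svaiter: take a trial step along the residual direction, then project the current iterate onto the hyperplane separating it from the solution set. A bare-bones sketch follows (without the paper's inertial and self-adaptive ingredients; the test equation is illustrative):

```python
import numpy as np

def projection_method(F, x0, sigma=1e-4, beta=0.5, tol=1e-8, max_iter=1000):
    """Solodov-Svaiter-type hyperplane projection method for a monotone
    system F(x) = 0, derivative-free: only evaluations of F are used."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx                                   # simplest residual direction
        t = 1.0
        # line search: accept t with  -F(x + t d)^T d >= sigma * t * ||d||^2
        while -(F(x + t * d) @ d) < sigma * t * (d @ d):
            t *= beta
        z = x + t * d
        Fz = F(z)
        # project x onto the hyperplane {y : F(z)^T (y - z) = 0}
        x = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
    return x
```

Monotonicity guarantees the hyperplane strictly separates x from every solution, which is what drives the global convergence results cited in these entries.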
30. Efficient low-rank regularization-based algorithms combining advanced techniques for solving tensor completion problems with application to color image recovering.
- Author
-
Ma, Xueshuang, Hu, Shenglong, and Wang, Jie
- Subjects
- *
COSINE function , *MATHEMATICAL regularization , *DISCRETE cosine transforms , *ALGORITHMS - Abstract
In this paper, a method combining several recent advances is proposed for solving the tensor completion problem. The proposed method carefully integrates the ɛ-sparsity index of a matrix, balanced matricization schemes, the ket augmentation technique, and the discrete cosine transformation. An easily implementable algorithm is presented to solve the proposed model, in which each subproblem can be solved either exactly or approximately by a closed-form formula. The global convergence of the algorithm is established without any assumptions. Experiments with real data show the advantage of the proposed method over several state-of-the-art methods, especially at low sampling ratios. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. An efficient conjugate gradient-based algorithm for unconstrained optimization and its projection extension to large-scale constrained nonlinear equations with applications in signal recovery and image denoising problems.
- Author
-
Wu, Xiaoyu, Shao, Hu, Liu, Pengjie, Zhang, Yan, and Zhuo, Yue
- Subjects
- *
CONJUGATE gradient methods , *IMAGE denoising , *NONLINEAR equations , *LIPSCHITZ continuity , *MATHEMATICAL optimization - Abstract
In this paper, a hybrid version of the Rivaie–Mustafa–Ismail–Leong (RMIL for short) conjugate gradient method is proposed for unconstrained optimization problems. The search direction fulfills the sufficient descent condition and the trust region property at each iteration, independently of any line search. We establish the global convergence of the proposed algorithm under standard assumptions and the Wolfe line search. Furthermore, by incorporating a derivative-free projection technique, the proposed conjugate parameter is extended to nonlinear monotone equations, and global convergence is proved under mild conditions without the Lipschitz continuity assumption. The numerical results, for both unconstrained optimization instances and large-scale nonlinear equations, demonstrate the efficiency of the proposed methods at lower computational cost. In particular, we also explore practical applications to signal recovery and image denoising problems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. An inertial proximal partially symmetric ADMM-based algorithm for linearly constrained multi-block nonconvex optimization problems with applications.
- Author
-
Wang, Xiaoquan, Shao, Hu, Liu, Pengjie, and Wu, Ting
- Subjects
- *
IMAGE processing , *ALGORITHMS , *LAGRANGE multiplier - Abstract
The alternating direction method of multipliers (ADMM) is an efficient splitting method for solving separable optimization with linear constraints. In this paper, an inertial proximal partially symmetric ADMM is proposed for solving linearly constrained multi-block nonconvex separable optimization; it improves computational efficiency by combining the inertial proximal point idea with a linearization technique. Meanwhile, the proposed method updates the Lagrange multiplier twice and uses different relaxation factors in each iteration. Under the assumptions that the generated sequence is bounded and the auxiliary function satisfies the Kurdyka-Łojasiewicz property, the global convergence of the proposed method is analyzed over a more relaxed parameter range. Moreover, numerical results on SCAD (smoothly clipped absolute deviation), image processing, and robust PCA nonconvex problems are reported to demonstrate the efficiency and superiority of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. Global convergence of a modified spectral three-term CG algorithm for nonconvex unconstrained optimization problems.
- Author
-
Amini, Keyvan and Faramarzi, Parvaneh
- Subjects
- *
CONJUGATE gradient methods , *NONLINEAR functions , *ALGORITHMS - Abstract
Spectral conjugate gradient methods are an efficient family of methods for solving unconstrained optimization problems that has been widely studied in recent decades. In this regard, Li et al. (2019) proposed a spectral three-term conjugate gradient method and proved the global convergence of their algorithm for uniformly convex functions. Our main motivation in this paper is to strengthen the convergence properties of this method so that it converges suitably for general nonlinear functions. To this end, we introduce a modified spectral conjugate gradient method based on the CG method of Li et al. (2019). We show that the new method fulfills the sufficient descent property without any line search, and that it is globally convergent for general nonlinear functions without any convexity assumption on the objective function. The numerical results, on unconstrained optimization problems from the CUTEst library, indicate that the new algorithm is not only effective but also promising compared with other conjugate gradient methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Proximal ADMM for nonconvex and nonsmooth optimization.
- Author
-
Yang, Yu, Jia, Qing-Shan, Xu, Zhanbo, Guan, Xiaohong, and Spanos, Costas J.
- Subjects
- *
NONSMOOTH optimization , *DISTRIBUTED algorithms , *LYAPUNOV functions , *INTELLIGENT buildings , *AIR conditioning - Abstract
By enabling the nodes or agents to solve small-sized subproblems to achieve coordination, distributed algorithms are favored by many networked systems for efficient and scalable computation. While substantial distributed algorithms are available for convex problems, results for the broader nonconvex counterparts are scarce. This paper develops a distributed algorithm for a class of nonconvex and nonsmooth problems featured by (i) a nonconvex objective formed by both separate and composite components regarding the decision variables of interconnected agents, (ii) local bounded convex constraints, and (iii) coupled linear constraints. This problem originates directly from smart buildings and is also broad in other domains. To provide a distributed algorithm with a convergence guarantee, we revise the powerful alternating direction method of multipliers (ADMM) and propose a proximal ADMM. Specifically, noting that the main difficulty in establishing the convergence of ADMM for nonconvex and nonsmooth optimization is ensuring the boundedness of the dual updates, we propose to update the dual variables in a discounted manner. This leads to a so-called sufficiently decreasing and lower bounded Lyapunov function, which is critical to establishing convergence. We prove that the method converges to some approximate stationary points. We further showcase the efficacy and performance of the method by a numerical example and a concrete application to multi-zone heating, ventilation, and air-conditioning (HVAC) control in smart buildings. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. A criterion for the global convergence of conjugate gradient methods under strong Wolfe line search.
- Author
-
Yousif, Osman O.O., Mohammed, Mogtaba A.Y., Saleh, Mohammed A., and Elbashir, Murtada K.
- Abstract
From 1952 until now, the sufficient descent property and the global convergence of conjugate gradient (CG) methods have been studied extensively. However, the sufficient descent property and the global convergence of some CG methods, such as the method of Polak, Ribière, and Polyak (PRP) and the method of Hestenes and Stiefel (HS), have not yet been established under the strong Wolfe line search. In this paper, based on the work of Yousif (2020), we present a criterion that guarantees the generation of descent search directions and the global convergence of CG methods when they are applied under the strong Wolfe line search. Moreover, the PRP and HS methods are restricted so as to satisfy the presented criterion, and thus new modified versions of PRP and HS are proposed. Finally, to support the theoretical proofs, a numerical experiment is carried out. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
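The strong Wolfe line search that this entry (and several others above) relies on can be stated concretely: a step size α must give sufficient decrease and keep the directional derivative small in absolute value. A small checker, with the parameter values c1 = 1e-4 and c2 = 0.1 being typical choices for CG methods rather than anything prescribed by this paper:

```python
import numpy as np

def strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    """Check the strong Wolfe conditions at step size alpha along a
    descent direction d:
      f(x + a d) <= f(x) + c1 * a * g^T d     (sufficient decrease)
      |grad(x + a d)^T d| <= c2 * |g^T d|     (strong curvature)"""
    g0d = grad(x) @ d            # must be negative for a descent direction
    sufficient = f(x + alpha * d) <= f(x) + c1 * alpha * g0d
    curvature = abs(grad(x + alpha * d) @ d) <= -c2 * g0d
    return bool(sufficient and curvature)
```

For a one-dimensional quadratic 0.5 x² with d = −1 from x = 1, the full step α = 1 lands on the minimizer and satisfies both conditions, while α = 2 overshoots and fails the sufficient decrease test.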
36. A Liu-Storey-type conjugate gradient method for unconstrained minimization problem with application in motion control.
- Author
-
Abubakar, Auwal Bala, Malik, Maulana, Kumam, Poom, Mohammad, Hassan, Sun, Min, Ibrahim, Abdulkarim Hassan, and Kiri, Aliyu Ibrahim
- Abstract
Conjugate gradient methods have played a vital role in finding the minimizers of large-scale unconstrained optimization problems due to the simplicity of their iteration, convergence properties and their low memory requirements. Based on the Liu-Storey conjugate gradient method, in this paper, we present a Liu-Storey type method for finding the minimizers of large-scale unconstrained optimization problems. The direction of the proposed method is constructed in such a way that the sufficient descent condition is satisfied. Furthermore, we establish the global convergence result of the method under the standard Wolfe and Armijo-like line searches. Numerical findings indicate that our presented approach is efficient and robust in solving large-scale test problems. In addition, an application of the method is explored. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
37. Noise-suppressing zeroing neural network for online solving time-varying matrix square roots problems: A control-theoretic approach.
- Author
-
Sun, Zhongbo, Wang, Gang, Jin, Long, Cheng, Chao, Zhang, Bangcheng, and Yu, Junzhi
- Subjects
- *
SQUARE root , *ARTIFICIAL neural networks , *MATRICES (Mathematics) - Abstract
In this paper, noise-suppressing zeroing neural network models (NSZNNMs) for the online solution of time-varying matrix square root problems (TVMSRPs) are revisited and redesigned within a control-theoretic framework. Specifically, to solve TVMSRPs under different noises in real time, several noise-suppressing zeroing neural network functions are proposed. Moreover, a novel general noise-suppressing zeroing neural network model (GNSZNNM) with a general noise-suppressing time-varying error-monitoring function is developed for the online solution of TVMSRPs with different measurement noises. In particular, the developed NSZNNMs are shown theoretically to converge globally to the time-varying theoretical solution of the TVMSRPs in the absence of noise, and to converge exponentially to the theoretical solution in the presence of noise. Compared with the classical zeroing neural network model (ZNNM), numerical results substantiate the efficiency and superiority of the developed NSZNNMs, with inherent tolerance to noise. In addition, a time-varying tensor square root problem is presented to illustrate potential practical applications of the proposed NSZNNM. The obtained results indicate that different activation functions can be utilized to accelerate the convergence speed of the GNSZNNM, demonstrating its high efficiency and robustness. • Noise-tolerant neural networks are proposed for time-varying matrix square roots. • The superiorities of the noise-tolerant zeroing neural networks are demonstrated. • Different activation functions may accelerate the convergence speed. • The MATLAB Simulink modeling is directly beneficial to hardware implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. A cosh-based smoothing Newton algorithm for the real-time pricing problem in smart grid.
- Author
-
Li, Yuanyuan, Li, Junxiang, Yu, Zhensheng, Dong, Jingxin, and Zhou, Tingting
- Subjects
- *
ALGORITHMS , *SMOOTHNESS of functions , *PROBLEM solving , *ELECTRICITY pricing , *GRIDS (Cartography) , *NEWTON-Raphson method , *CONSTRAINED optimization - Abstract
• A cosh-based smoothing function is constructed to approximate the complementarity condition. • The smoothing approximation function is independent of the number of users, suiting large-scale systems. • Multiple classes of users are considered to optimize electricity prices in the smart grid. • Nonsingularity of the Jacobian and global and local quadratic convergence are proved. • The effectiveness of the smoothing Newton pricing algorithm is shown by numerical results. In a smart grid system, different types of electricity users, such as residential, commercial and industrial users, have different utility functions. This paper proposes a nonlinear constrained optimization model for real-time pricing that maximizes the total utility over all the user groups. A Karush–Kuhn–Tucker (KKT) equation system is employed to solve the model. However, solving this system is challenging due to the nonlinear complementarity condition. To tackle the challenge, a cosh-based smoothing approximation function is proposed to replace the nonlinear complementarity condition, and a smoothing Newton algorithm is then developed to solve the resulting equation system. The global convergence and local quadratic convergence of the algorithm are proved. Numerical experiments are performed to test the smoothing method and to compare the real-time, fixed, and time-of-use pricing strategies applied in the smart grid system. The results show that the real-time pricing mechanism is the most effective in saving energy and reducing peaks and troughs in energy consumption, that the smoothing Newton algorithm is effective for the real-time electricity pricing problem in the smart grid, and that the smoothing approximation function handles the supply–demand balance constraints well. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
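The smoothing idea in this entry can be illustrated on the absolute value, which connects to complementarity through the identity min(a, b) = (a + b − |a − b|)/2: the cosh-based function μ ln(2 cosh(t/μ)) is a smooth upper bound on |t| whose error is at most μ ln 2. This is an illustration of the smoothing principle only, not the paper's exact function, and it is written in a numerically stable form.

```python
import numpy as np

def smooth_abs(t, mu):
    """Cosh-based smoothing of |t|:  mu * ln(2 * cosh(t / mu)),
    rewritten as |t| + mu * log1p(exp(-2|t|/mu)) to avoid overflow.
    The approximation error is mu * ln(2) at t = 0 and decays to 0
    exponentially fast as |t| grows."""
    a = np.abs(np.asarray(t, dtype=float))
    return a + mu * np.log1p(np.exp(-2.0 * a / mu))
```

Because the smoothed function is C-infinity, a Newton method can be applied to the smoothed KKT system, which is the structure the abstract describes.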
39. A new CG algorithm based on a scaled memoryless BFGS update with adaptive search strategy, and its application to large-scale unconstrained optimization problems.
- Author
-
Li, Xiangli, Zhao, Wenjuan, and Dong, Xiaoliang
- Subjects
- *
CONJUGATE gradient methods , *ALGORITHMS - Abstract
The conjugate gradient method is an effective method for solving large-scale unconstrained optimization problems. This paper proposes a new conjugate gradient algorithm based on the self-scaling memoryless BFGS update, which uses the Wolfe line search. The descent property and global convergence of the method are established under mild conditions. Numerical experiments show that the new conjugate gradient method is effective. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
40. A parameterized Douglas–Rachford splitting algorithm for nonconvex optimization.
- Author
-
Bian, Fengmiao and Zhang, Xiaoqun
- Subjects
- *
LOW-rank matrices , *MATHEMATICAL optimization , *ALGORITHMS , *LEAST squares , *DATA science , *CONVEX functions - Abstract
• The parameterized Douglas–Rachford (PDR) splitting algorithm provides great flexibility for practical applications. We establish the global convergence of the PDR splitting method in a nonconvex setting and, as a byproduct, provide a convergence analysis for an adaptive DR algorithm in nonconvex settings. • Numerical experiments on nonconvex optimization problems, such as sparse signal recovery, nonconvex set feasibility, and low-rank matrix completion, show its superiority over other methods. In this paper, we study the parameterized Douglas–Rachford splitting method of Wang-Wang (2019)[5] for a class of nonconvex optimization problems. A new merit function is constructed to establish the convergence of the whole sequence generated by the parameterized Douglas–Rachford splitting method. As a by-product, this also provides convergence results, in nonconvex settings, for a special case of the adaptive Douglas–Rachford algorithm proposed by Dao and Phan (2019)[22]. We then apply the parameterized Douglas–Rachford splitting method to three important classes of nonconvex optimization problems arising in data science: sparsity-constrained least squares, feasibility problems, and low-rank matrix completion. Numerical results validate the effectiveness of the parameterized Douglas–Rachford splitting method compared with some other classical methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
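The classical (unparameterized, convex) Douglas–Rachford iteration underlying this entry alternates two proximal steps on a governing sequence. A minimal sketch on a convex toy instance, min 0.5‖x − a‖² + τ‖x‖₁, whose exact solution is soft-thresholding of a; the paper's parameterized variant and its nonconvex applications go well beyond this, and the step parameters here are illustrative.

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def douglas_rachford(a, tau, gamma=1.0, lam=1.0, n_iter=200):
    """Classical DR splitting for  min f(x) + g(x)  with
    f = 0.5||x - a||^2 and g = tau*||x||_1:
        x = prox_{gamma f}(s);  z = prox_{gamma g}(2x - s);  s += lam*(z - x)."""
    s = np.zeros_like(a)                        # governing sequence
    for _ in range(n_iter):
        x = (s + gamma * a) / (1.0 + gamma)     # prox of the quadratic term
        z = soft(2.0 * x - s, gamma * tau)      # prox of the l1 term
        s = s + lam * (z - x)                   # relaxed update
    return x
```

For convex f and g with λ ∈ (0, 2) the iterates converge to a minimizer; the nonconvex analysis in this entry is precisely about what survives of this guarantee when convexity is dropped.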
41. A modified multivariate spectral gradient algorithm for solving absolute value equations.
- Author
-
Yu, Zhensheng, Li, Lin, and Yuan, Yue
- Subjects
- *
ABSOLUTE value , *ALGORITHMS , *VALUATION of real property , *EQUATIONS - Abstract
In this paper, a modified multivariate spectral gradient algorithm is proposed for solving the absolute value equations Ax − |x| = b, where A ∈ R^{n×n} and b, x ∈ R^n. Some properties of the absolute value equations are analyzed, and the global convergence of the proposed algorithm is established under some appropriate assumptions. Several examples are given to illustrate the effectiveness and competitiveness of our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
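As a point of comparison for the problem class in this entry, the simplest classical solver for Ax − |x| = b is the Picard iteration x ← A⁻¹(|x| + b), which contracts whenever ‖A⁻¹‖ < 1 (equivalently, when the smallest singular value of A exceeds 1, which also guarantees a unique solution). This baseline sketch is not the paper's multivariate spectral gradient algorithm; the test data are illustrative.

```python
import numpy as np

def ave_picard(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Picard iteration x_{k+1} = A^{-1}(|x_k| + b) for the absolute
    value equation Ax - |x| = b; a contraction when ||A^{-1}|| < 1."""
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.linalg.solve(A, np.abs(x) + b)   # one Picard step
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Spectral-gradient approaches like the one above avoid the linear solve per iteration, which matters as n grows.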