4,595 results for "global convergence"
Search Results
2. A modified PRP conjugate gradient method with inertial extrapolation for sparse signal reconstruction.
- Author
Zhang, Yuanshou, Sun, Min, and Liu, Jing
- Subjects
SIGNAL reconstruction, EXTRAPOLATION, NOISE, CONJUGATE gradient methods
- Abstract
It is widely known that the inertial technique of the heavy-ball method can accelerate its convergence speed. In this paper, by embedding the inertial technique in the famous PRP conjugate gradient method, we propose a modified PRP conjugate gradient method with inertial extrapolation (PRPCG-IE) for sparse signal reconstruction. Its direction satisfies the sufficient descent property, which is independent of any line search. Global convergence of PRPCG-IE is established under some standard conditions. PRPCG-IE is applied to two sparse signal reconstruction problems with noise, and preliminary experimental results demonstrate the effectiveness of PRPCG-IE. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
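The inertial-extrapolation pattern described in the abstract above (extrapolate along the previous displacement, then take a PRP conjugate gradient step) can be sketched generically. The following is a minimal Python illustration, not the authors' PRPCG-IE: the fixed step size `alpha` and inertial weight `theta` are hypothetical stand-ins for the rules derived in the paper.

```python
import numpy as np

def prp_cg_inertial(grad, x0, alpha=1e-3, theta=0.1, tol=1e-6, max_iter=1000):
    """Generic PRP conjugate gradient iteration with inertial extrapolation.

    Sketch only: alpha (step size) and theta (inertial weight) are fixed
    hypothetical parameters; the paper derives its own rules.
    """
    x_prev = x0.copy()
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        # Inertial extrapolation: push the iterate along the last displacement.
        y = x + theta * (x - x_prev)
        g_y = grad(y)
        if np.linalg.norm(g_y) < tol:
            return y
        # PRP parameter, with the gradient at the extrapolated point acting
        # as the "current" gradient: beta = g_y^T (g_y - g) / ||g||^2.
        beta = g_y @ (g_y - g) / (g @ g)
        d = -g_y + beta * d           # conjugate gradient direction
        x_prev, x = x, y + alpha * d  # step from the extrapolated point
        g = g_y
    return x
```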
3. A modified PRP conjugate gradient method for unconstrained optimization and nonlinear equations.
- Author
Cui, Haijuan
- Subjects
NONLINEAR equations, CONJUGATE gradient methods
- Abstract
A modified Polak–Ribière–Polyak (PRP) conjugate gradient (CG) method is proposed for solving unconstrained optimization problems. The search direction generated by this method satisfies the sufficient descent condition at each iteration, and the method inherits one remarkable property of the standard PRP method. Under the standard Armijo line search, the global convergence and linear convergence rate of the presented method are established. Some numerical results are given to show the effectiveness of the proposed method by comparison with some existing CG methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
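The "standard Armijo line search" invoked in the abstract above is the classical backtracking rule; a textbook sketch (generic parameter choices, not necessarily the paper's) is:

```python
def armijo_step(f, x, d, g, s=1.0, rho=0.5, c=1e-4):
    """Armijo backtracking: shrink the trial step t until the sufficient
    decrease condition f(x + t d) <= f(x) + c t g^T d holds.
    f is the objective, d a descent direction, g the gradient at x."""
    t = s
    fx = f(x)
    slope = float(g @ d)  # negative for a descent direction
    while f(x + t * d) > fx + c * t * slope:
        t *= rho
    return t
```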
4. Zero-One Composite Optimization: Lyapunov Exact Penalty and a Globally Convergent Inexact Augmented Lagrangian Method.
- Author
Zhang, Penghe, Xiu, Naihua, and Luo, Ziyan
- Subjects
SMOOTHNESS of functions, LINEAR operators, SUPPORT vector machines, LYAPUNOV functions, OPERATOR functions
- Abstract
We consider the problem of minimizing the sum of a smooth function and a composition of a zero-one loss function with a linear operator, namely the zero-one composite optimization problem (0/1-COP). It has a vast body of applications, including the support vector machine (SVM), calcium dynamics fitting (CDF), one-bit compressive sensing (1-bCS), and so on. However, it remains challenging to design a globally convergent algorithm for the original model of 0/1-COP because of the nonconvex and discontinuous zero-one loss function. This paper aims to develop an inexact augmented Lagrangian method (IALM), in which the whole generated sequence converges to a local minimizer of 0/1-COP under reasonable assumptions. In the iteration process, IALM performs minimization on a Lyapunov function with an adaptively adjusted multiplier. The involved Lyapunov penalty subproblem is shown to admit the exact penalty theorem for 0/1-COP, provided that the multiplier is optimal in the sense of the proximal-type stationarity. An efficient zero-one Bregman alternating linearized minimization algorithm is also designed to achieve an approximate solution of the underlying subproblem in finite steps. Numerical experiments for handling SVM, CDF, and 1-bCS demonstrate the satisfactory performance of the proposed method in terms of solution accuracy and time efficiency. Funding: This work was supported by the Fundamental Research Funds for the Central Universities [Grant 2022YJS099] and the National Natural Science Foundation of China [Grants 12131004 and 12271022]. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. An efficient projection algorithm for large-scale system of monotone nonlinear equations with applications in signal recovery.
- Author
Abbass, Ghulam, Chen, Haibo, Abdullahi, Muhammad, and Muhammad, Abba Baba
- Subjects
NONLINEAR equations, COMPRESSED sensing, NUMERICAL analysis, COMPARATIVE studies, EQUATIONS
- Abstract
The hybrid conjugate gradient (CG) method recently proposed in [11] for solving unconstrained optimization problems has gained attention. This approach shares similarities with the memoryless Broyden–Fletcher–Goldfarb–Shanno (BFGS) quasi-Newton method. In this paper, we modify the search direction proposed in [11] and extend it to solve nonlinear monotone systems of equations. Our modified search direction is bounded and satisfies the descent condition regardless of any line search. With the aid of some valid assumptions, the global convergence of the algorithm is presented. To assess the effectiveness and reliability of our approach, we conduct numerical experiments using six test problems with extremely large dimensions, ranging from 1000 to 800,000. Comparative numerical analysis with other methods reveals that our proposed approach excels in both theoretical and numerical aspects. In addition, we apply the proposed method to sparse signal recovery in a compressed sensing setting. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
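Projection algorithms of the kind described above typically combine a derivative-free direction with a Solodov–Svaiter-style hyperplane projection. A minimal sketch of that projection step follows (a generic scheme with hypothetical box constraints, not the paper's exact algorithm):

```python
import numpy as np

def projection_step(F, x, z, lower=None, upper=None):
    """One hyperplane-projection update for monotone equations F(x) = 0.

    z is a trial point with F(z) != 0; by monotonicity the solution set lies
    on one side of the hyperplane {y : F(z)^T (z - y) = 0}, so we project x
    onto it, then clip back onto optional box constraints.
    """
    Fz = F(z)
    x_new = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
    if lower is not None:
        x_new = np.maximum(x_new, lower)
    if upper is not None:
        x_new = np.minimum(x_new, upper)
    return x_new
```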
6. Convergence of the complex block Jacobi methods under the generalized serial pivot strategies.
- Author
Begović Kovač, Erna and Hari, Vjeran
- Subjects
JACOBI operators, MATRICES (Mathematics), EIGENVALUES
- Abstract
The paper considers the convergence of the complex block Jacobi diagonalization methods under a large set of generalized serial pivot strategies. The global convergence of the block methods for Hermitian, normal and J-Hermitian matrices is proven. In order to obtain the convergence results for the block methods that solve other eigenvalue problems, such as the generalized eigenvalue problem, we consider the convergence of a general block iterative process which uses the complex block Jacobi annihilators and operators. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. On the Extension of Dai-Liao Conjugate Gradient Method for Vector Optimization.
- Author
Hu, Qingjie, Li, Ruyun, Zhang, Yanyan, and Zhu, Zhibin
- Subjects
VECTOR valued functions, CONVEX functions, CONJUGATE gradient methods
- Abstract
In this paper, we extend the Dai-Liao conjugate gradient method to vector optimization. Firstly, we analyze the global convergence of the direct extension version of the Dai-Liao conjugate gradient method for K-strongly convex vector functions. Secondly, we investigate the global convergence of the vector version of restricted non-negative Dai-Liao conjugate gradient method for general vector functions. Additionally, we discuss the global convergence of the vector version of modified Dai-Liao conjugate gradient method for general vector functions. Finally, numerical experiments demonstrate that the proposed conjugate gradient methods are effective for solving vector optimization problems. In particular, these methods can effectively generate the Pareto frontiers for the test problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Convergence-Accelerated Fixed-Time Dynamical Methods for Absolute Value Equations.
- Author
Zhang, Xu, Li, Cailian, Zhang, Longcheng, Hu, Yaling, and Peng, Zheng
- Subjects
ABSOLUTE value, DYNAMICAL systems, EQUATIONS, EQUILIBRIUM
- Abstract
Two new accelerated fixed-time stable dynamic systems are proposed for solving absolute value equations (AVEs): Ax - |x| - b = 0. Under some mild conditions, the equilibrium point of the proposed dynamic systems is completely equivalent to the solution of the AVEs under consideration. Meanwhile, we introduce a new, relatively tighter global error bound for the AVEs. Leveraging this finding, we separately establish the globally fixed-time stability of the proposed methods, along with a conservative settling time for each method. Compared with some existing state-of-the-art dynamical methods, preliminary numerical experiments show the effectiveness of our methods in solving the AVEs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
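For the AVE Ax - |x| - b = 0 defined in the abstract above, a classical baseline is the Picard iteration x_{k+1} = A^{-1}(|x_k| + b). The sketch below shows this simple fixed-point alternative, not the authors' fixed-time dynamical systems:

```python
import numpy as np

def ave_picard(A, b, x0, tol=1e-10, max_iter=500):
    """Picard iteration for the absolute value equation A x - |x| - b = 0.

    Rearranging gives the fixed-point map x <- A^{-1}(|x| + b), which
    converges, e.g., when the smallest singular value of A exceeds 1.
    """
    A_inv = np.linalg.inv(A)  # factorize once; acceptable for a small sketch
    x = x0.astype(float)
    for _ in range(max_iter):
        x_new = A_inv @ (np.abs(x) + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```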
9. A New Hybrid Descent Algorithm for Large-Scale Nonconvex Optimization and Application to Some Image Restoration Problems.
- Author
Wang, Shuai, Wang, Xiaoliang, Tian, Yuzhu, and Pang, Liping
- Subjects
IMAGE reconstruction, CONJUGATE gradient methods, CURVATURE, ALGORITHMS
- Abstract
Conjugate gradient methods are widely used and attractive for large-scale unconstrained smooth optimization problems, with simple computation, low memory requirements, and interesting theoretical information on the features of curvature. Based on the strong convergence property of the Dai–Yuan method and the attractive numerical performance of the Hestenes–Stiefel method, a new hybrid descent conjugate gradient method is proposed in this paper. The proposed method satisfies the sufficient descent property independent of the accuracy of the line search strategies. Under standard conditions, the trust region property and the global convergence are established, respectively. Numerical results on 61 problems with 9 large-scale dimensions and on 46 ill-conditioned matrix problems reveal that the proposed method is more effective, robust, and reliable than the other methods. Additionally, the hybrid method also demonstrates reliable results for some image restoration problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. A new structured spectral conjugate gradient method for nonlinear least squares problems.
- Author
Nosrati, Mahsa and Amini, Keyvan
- Subjects
ARTIFICIAL intelligence, SIGNAL processing, CONJUGATE gradient methods, NONLINEAR equations, LEAST squares, EQUATIONS
- Abstract
Least squares models appear frequently in many fields, such as data fitting, signal processing, machine learning, and especially artificial intelligence. Nowadays, the least squares model is a popular and sophisticated way to make predictions about real-world problems. Meanwhile, conjugate gradient methods are traditionally known as efficient tools to solve unconstrained optimization problems, especially in high-dimensional cases. This paper presents a new structured spectral conjugate gradient method based on a modification of the modified structured secant equation of Zhang, Xue, and Zhang. The proposed method uses a novel appropriate spectral parameter. It is proved that the new direction satisfies the sufficient descent condition regardless of the line search. The global convergence of the proposed method is demonstrated under some standard assumptions. Numerical experiments show that our proposed method is efficient and can compete with other existing algorithms in this area. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. An efficient hybrid conjugate gradient method with an adaptive strategy and applications in image restoration problems.
- Author
Chen, Zibo, Shao, Hu, Liu, Pengjie, Li, Guoxin, and Rong, Xianglin
- Subjects
CONJUGATE gradient methods, ADAPTIVE optics, IMAGE reconstruction, CONVEX functions
- Abstract
In this study, we introduce a novel hybrid conjugate gradient method with an adaptive strategy, called the asHCG method. The asHCG method exhibits the following characteristics. (i) Its search direction guarantees the sufficient descent property without dependence on any line search. (ii) It possesses strong convergence for uniformly convex functions using a weak Wolfe line search, and under the same line search, it achieves global convergence for general functions. (iii) Employing the Armijo line search, it provides an approximate worst-case complexity guarantee for uniformly convex functions. The numerical results demonstrate promising and encouraging performance in both unconstrained optimization problems and image restoration problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. A New Two-Parameter Family of Nonlinear Conjugate Gradient Method Without Line Search for Unconstrained Optimization Problem.
- Author
ZHU Tiefeng
- Published
- 2024
- Full Text
- View/download PDF
13. Optimality Conditions and Gradient Descent Newton Pursuit for 0/1-Loss and Sparsity Constrained Optimization.
- Author
Wang, Dongrui, Zhang, Hui, Zhang, Penghe, and Xiu, Naihua
- Subjects
COMPRESSED sensing, GLOBAL optimization, ALGORITHMS
- Abstract
In this paper, we consider optimization problems with 0/1-loss and sparsity constraints (0/1-LSCO) that involve two blocks of variables. First, we define a τ-stationary point of 0/1-LSCO, according to which we analyze the first-order necessary and sufficient optimality conditions. Based on these results, we then develop a gradient descent Newton pursuit algorithm (GDNP) and analyze its global and locally quadratic convergence under standard assumptions. Finally, numerical experiments on 1-bit compressed sensing demonstrate its superior performance in terms of accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Optimization in complex spaces with the mixed Newton method.
- Author
Bakhurin, Sergei, Hildebrand, Roland, Alkousa, Mohammad, Titov, Alexander, and Yudin, Nikita
- Subjects
NEWTON-Raphson method, HOLOMORPHIC functions, ABSOLUTE value, WIRELESS communications, ARGUMENT
- Abstract
We propose a second-order method for unconstrained minimization of functions f(z) of complex arguments. We call it the mixed Newton method due to the use of the mixed Wirtinger derivative $\partial^2 f / \partial\bar{z}\,\partial z$ for computation of the search direction, as opposed to the full Hessian $\partial^2 f / \partial(z, \bar{z})^2$ in the classical Newton method. The method has been developed for specific applications in wireless network communications, but its global convergence properties are shown to be superior on a more general class of functions f, namely sums of squares of absolute values of holomorphic functions. In particular, for such objective functions minima are surrounded by attraction basins, while the iterates are repelled from other types of critical points. We provide formulas for the asymptotic convergence rate and show that in the scalar case the method reduces to the well-known complex Newton method for the search of zeros of holomorphic functions. In this case, it exhibits generically fractal global convergence patterns. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
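The abstract states that in the scalar case the mixed Newton method reduces to the complex Newton iteration for zeros of a holomorphic f: for $F(z) = |f(z)|^2$ one has $\partial^2 F/\partial\bar{z}\,\partial z = |f'(z)|^2$ and $\partial F/\partial\bar{z} = f(z)\overline{f'(z)}$, so the step is $f(z)/f'(z)$. A small sketch of that scalar specialization:

```python
def complex_newton(f, fprime, z0, tol=1e-12, max_iter=100):
    """Complex Newton iteration z <- z - f(z)/f'(z) for a holomorphic f:
    the scalar specialization of the mixed Newton method applied to |f|^2."""
    z = z0
    for _ in range(max_iter):
        step = f(z) / fprime(z)
        z -= step
        if abs(step) < tol:
            break
    return z

# Example: a cube root of unity; the basins of attraction of the three
# roots form the classic fractal pattern the abstract alludes to.
root = complex_newton(lambda z: z**3 - 1, lambda z: 3 * z**2, 0.5 + 0.7j)
```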
15. Two efficient nonlinear conjugate gradient methods for Riemannian manifolds.
- Author
Nasiru Salihu, Poom Kumam, and Sani Salisu
- Abstract
In this paper, we address some of the computational challenges associated with the RMIL+ conjugate gradient parameter by proposing an efficient conjugate gradient (CG) parameter along with its generalization to the Riemannian manifold. This parameter ensures the good convergence properties of the CG method in Riemannian optimization and it is formed by combining the structures of two classical CG methods. The extension utilizes the concepts of retraction and vector transport to establish sufficient descent property for the method via strong Wolfe line search conditions. Additionally, the scheme achieves global convergence using the scaled version of the Ring-Wirth nonexpansive condition. Finally, numerical experiments are conducted to validate the scheme's effectiveness. We consider both unconstrained Euclidean optimization test problems and Riemannian optimization problems. The results reveal that the performance of the proposed method is significantly influenced by the choice of line search in both Euclidean and Riemannian optimizations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. An outcome space algorithm for solving general linear multiplicative programming.
- Author
Zhang, Yanzhen and Shen, Peiping
- Subjects
LINEAR programming, COMPUTATIONAL complexity, ALGORITHMS
- Abstract
The following article presents and corroborates an outcome space branch-and-bound algorithm for solving the general linear multiplicative programming problem (GLMPP). In this new algorithm, GLMPP is transformed into its equivalent problem by introducing auxiliary variables. To compute the tight upper bound for the optimal value of GLMPP, the linear relaxation programming problem is assembled by using bound and hull linear approximations. Furthermore, the global convergence and computational complexity of the algorithm are demonstrated. Finally, the soundness and advantage of the proposed algorithm are validated by solving a demonstrative linear multiplicative problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Two modified conjugate gradient methods for unconstrained optimization.
- Author
Abd Elhamid, Mehamdia and Yacine, Chaib
- Abstract
Conjugate gradient (CG) methods are a popular class of iterative methods for solving linear systems of equations and nonlinear optimization problems. Motivated by the construction of some modern CG methods, we propose two modified CG methods, named DHS* and DPRP*. Under the strong Wolfe line search (SWLS), the two presented methods are proven to be sufficient descent and globally convergent. Preliminary numerical results show that the DHS* and DPRP* methods are effective for the given test problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Globalization of convergence of the constrained piecewise Levenberg–Marquardt method.
- Author
Izmailov, Alexey F., Uskov, Evgeniy I., and Zhibai, Yan
- Subjects
COMPLEMENTARITY constraints (Mathematics), NONLINEAR equations, EQUATIONS, GLOBALIZATION, ALGORITHMS
- Abstract
We develop linesearch algorithms intended for globalization of convergence of the piecewise Levenberg–Marquardt method for constrained piecewise smooth equations. Conditions ensuring global convergence properties and an asymptotic superlinear convergence rate are proposed. The peculiarities of the global convergence results in the piecewise smooth case are discussed and illustrated by examples. We also provide numerical results for unconstrained and constrained reformulations of nonlinear complementarity problems, comparing the performance of the globalized piecewise Levenberg–Marquardt algorithm with some relevant alternatives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
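The core Levenberg–Marquardt step that the globalized piecewise variant above builds on solves a regularized normal-equations system; a generic smooth-case sketch (mu is the LM regularization parameter; this is not the paper's piecewise construction):

```python
import numpy as np

def lm_step(J, F, mu):
    """One Levenberg-Marquardt direction for F(x) = 0 with Jacobian J at x:
    solve (J^T J + mu I) d = -J^T F(x). A linesearch globalization would
    then test steps along d, as in the algorithms the abstract describes."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + mu * np.eye(n), -(J.T @ F))
```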
19. an-QNA: An Adaptive Nesterov Quasi-Newton Acceleration-Optimized CMOS LNA for 65 nm Automotive Radar Applications.
- Author
Aras, Unal, Woo, Lee Sun, Delwar, Tahesin Samira, Siddique, Abrar, Jana, Anindya, Lee, Yangwon, and Ryu, Jee-Youl
- Subjects
LOW noise amplifiers, ROAD vehicle radar, SPEED, NOISE, ALGORITHMS
- Abstract
An adaptive Nesterov quasi-Newton acceleration (an-QNA)-optimized low-noise amplifier (LNA) is proposed in this paper. An optimized single-ended-to-differential two-stage LNA circuit is presented. It includes an improved post-linearization (IPL) technique to enhance the linearity. Traditional methods like conventional quasi-Newton (c-QN) often suffer from slow convergence and a tendency to get trapped in local minima. However, the proposed an-QNA method significantly accelerates the convergence speed. Furthermore, in this paper, modifications have been made to the an-QNA algorithm using a quadratic estimation to guarantee global convergence. The optimized an-QNA-based LNA, using standard 65 nm CMOS technology, achieves a simulated gain of 17.5 dB, a noise figure (NF) of 3.7 dB, and a 1 dB input compression point (IP1dB) of −13.1 dBm. It is also noted that the optimized LNA achieves a measured gain of 12.9 dB and an NF of 4.98 dB, and the IP1dB is −17.8 dBm. The optimized LNA has a chip area of 0.67 mm². [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. A Family of Inertial Three‐Term CGPMs for Large‐Scale Nonlinear Pseudo‐Monotone Equations With Convex Constraints.
- Author
Jian, Jinbao, Huang, Qiongxuan, Yin, Jianghua, and Ma, Guodong
- Subjects
CONJUGATE gradient methods, LIPSCHITZ continuity, IMAGE reconstruction, NONLINEAR equations, PROBLEM solving
- Abstract
This article presents and analyzes a family of three-term conjugate gradient projection methods with the inertial technique for solving large-scale nonlinear pseudo-monotone equations with convex constraints. The generated search direction exhibits good properties independent of line searches. The global convergence of the family is proved without the Lipschitz continuity of the underlying mapping. Furthermore, under the locally Lipschitz continuity assumption, we conduct a thorough analysis of the asymptotic and non-asymptotic global convergence rates in terms of iteration complexity. To our knowledge, this is the first iteration-complexity analysis in the literature for inertial gradient-type projection methods under such an assumption. Numerical experiments demonstrate the computational efficiency of the family, showing its superiority over three existing inertial methods. Finally, we apply the proposed family to practical problems such as ℓ2-regularized logistic regression, sparse signal restoration and image restoration problems, highlighting its effectiveness and potential for real-world applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. New Formula on the Conjugate Gradient Method for Unconstrained Optimization and its Application.
- Author
Sulaiman, Ranen M., Abdullah, Zeyad M., and Hassan, Basim A.
- Subjects
CONJUGATE gradient methods, MATHEMATICAL formulas, MATHEMATICAL optimization, CONJUGACY classes, TAYLOR'S series
- Abstract
In this paper, we present a new formula obtained by applying the conjugacy condition to the second-order Taylor expansion. In comparison with classic conjugate gradient methods, the new formula uses both the available gradient and function value information. Global convergence results for the formula are established. Numerical results demonstrate that this strategy is effective, and an application is presented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Proximal Point Method for Quasiconvex Functions in Riemannian Manifolds.
- Author
Quiroz, Erik Alex Papa
- Subjects
RIEMANNIAN manifolds, EIGENFUNCTIONS, CURVATURE, ALGORITHMS
- Abstract
This paper studies the convergence of the proximal point method for quasiconvex functions on finite-dimensional complete Riemannian manifolds. We prove initially that, in the general case, when the objective function is proper and lower semicontinuous, each accumulation point of the sequence generated by the method, if it exists, is a limiting critical point of the function. Then, under the assumptions that the sectional curvature of the manifold is bounded above by some nonnegative constant and the objective function is quasiconvex, we analyze two cases. When the constant is zero, the global convergence of the algorithm to a limiting critical point is assured, and if it is positive, we prove the local convergence for a class of quasiconvex functions, which includes Lipschitz functions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Another hybrid conjugate gradient method as a convex combination of WYL and CD methods.
- Author
Guefassa, Imane, Chaib, Yacine, and Bechouat, Tahar
- Subjects
NUMERICAL functions, LINEAR equations, LINEAR systems, PROBLEM solving, ALGORITHMS, NONLINEAR equations
- Abstract
Conjugate gradient (CG) methods are a popular class of iterative methods for solving linear systems of equations and nonlinear optimization problems. In this paper, a new hybrid conjugate gradient (CG) method is presented and analyzed for solving unconstrained optimization problems, where the parameter $\beta_k$ is a convex combination of $\beta_k^{WYL}$ and $\beta_k^{CD}$. Under the strong Wolfe line search, the new method possesses the sufficient descent condition and the global convergence properties. The preliminary numerical results show the efficiency of our method in comparison with other CG methods. Furthermore, the proposed algorithm HWYLCD was extended to solve the problem of a mode function. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
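The convex-combination hybridization described above follows a standard template, $\beta_k = (1-\lambda)\,\beta_k^{WYL} + \lambda\,\beta_k^{CD}$. A minimal sketch using the textbook Wei–Yao–Liu and conjugate descent formulas; the mixing weight `lam` is left as a free parameter here, whereas the paper determines it from its own analysis:

```python
import numpy as np

def beta_hybrid(g_new, g_old, d_old, lam):
    """Hybrid parameter beta_k = (1 - lam) * beta_WYL + lam * beta_CD.

    beta_WYL = g_k^T (g_k - (||g_k||/||g_{k-1}||) g_{k-1}) / ||g_{k-1}||^2
    beta_CD  = ||g_k||^2 / (-d_{k-1}^T g_{k-1})
    lam in [0, 1] is a hypothetical free mixing weight."""
    ng_new, ng_old = np.linalg.norm(g_new), np.linalg.norm(g_old)
    beta_wyl = g_new @ (g_new - (ng_new / ng_old) * g_old) / ng_old**2
    beta_cd = ng_new**2 / (-(d_old @ g_old))
    return (1.0 - lam) * beta_wyl + lam * beta_cd
```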
24. A Descent Conjugate Gradient Method for Optimization Problems.
- Author
Semiu, Ayinde, Idowu, Osinuga, Adesina, Adio, Sunday, Agboola, Joseph, Adelodun, Uchenna, Uka, and Olufisayo, Awe
- Subjects
NONLINEAR equations, CONJUGATE gradient methods, TEST methods, ALGORITHMS
- Abstract
Over the years, a considerable number of conjugate gradient methods have been proposed based on modifications of the well-known classical conjugate gradient methods. These methods were shown to satisfy the descent condition under the strong Wolfe line search and other line search schemes, and convergence of the objective functions was also guaranteed. In this study, a descent conjugate gradient method for solving unconstrained nonlinear optimization problems is developed. The algorithm of the proposed method was developed by constructing its update parameter. Descent properties of the method, based on some assumptions on the objective function, were established. The convergence analysis of the method showed that it converges globally under the strong Wolfe conditions. The Dolan–Moré performance profile was used to compare the numerical strength of this method with that of other methods, showing clear evidence of better performance of the new method in the profiles tested. [ABSTRACT FROM AUTHOR]
- Published
- 2024
25. A Weighted Flow related to a Trudinger-Moser Functional on Closed Riemann Surface.
- Author
Yu, Peng Xiu
- Subjects
RIEMANN surfaces
- Abstract
In this paper, with (Σ, g) being a closed Riemann surface, we analyze the possible concentration behavior of a heat flow related to the Trudinger-Moser energy. We obtain long-time existence for the flow, and along some sequence of times $t_k \to +\infty$ we deduce the convergence of the flow in $H^2(\Sigma)$. Furthermore, the limit function is a critical point of the Trudinger-Moser functional under a certain constraint. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Iterative Chebyshev approximation method for optimal control problems.
- Author
Wu, Di, Yu, Changjun, Wang, Hailing, Bai, Yanqin, Teo, Kok-Lay, and Toh, Kim-Chuan
- Subjects
CHEBYSHEV polynomials, CHEBYSHEV systems, CHEBYSHEV approximation, DYNAMICAL systems, COLLOCATION methods
- Abstract
We present a novel numerical approach for solving nonlinear constrained optimal control problems (NCOCPs). Instead of directly solving the NCOCPs, we start by linearizing the constraints and dynamic system, which results in a sequence of sub-problems. For each sub-problem, we use a finite number of Chebyshev polynomials to estimate the control and state vectors. To eliminate the errors at non-collocation points caused by conventional collocation methods, we additionally estimate the coefficient functions involved in the linear constraints and dynamic system by Chebyshev polynomials. By leveraging the characteristics of Chebyshev polynomials, the approximate sub-problem is changed into an equivalent nonlinear optimization problem with linear equality constraints. Consequently, any feasible point of the approximate sub-problem will satisfy the constraints and dynamic system throughout the entire time scale. To validate the efficacy of the new method, we solve three examples and assess the accuracy of the method through the computation of its approximation error. Numerical results obtained show that our approach achieves lower approximation error when compared to the Chebyshev pseudo-spectral method. The proposed method is particularly suitable for scenarios that require high-precision approximation, such as aerospace and precision instrument production.
• A sequence of sub-problems is constructed by linearizing the dynamic system and constraints.
• Each sub-problem can be transformed into an equivalent nonlinear optimization problem with linear equality constraints through Chebyshev polynomials.
• Any feasible solution of the approximate sub-problem will satisfy its dynamic system and constraints over the entire time horizon.
• Global convergence of the proposed method is established.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. An efficient augmented memoryless quasi-Newton method for solving large-scale unconstrained optimization problems.
- Author
Yulin Cheng and Jing Gao
- Subjects
QUASI-Newton methods, OPTIMIZATION algorithms, EQUATIONS, ALGORITHMS
- Abstract
In this paper, an augmented memoryless BFGS quasi-Newton method was proposed for solving unconstrained optimization problems. Based on a new modified secant equation, an augmented memoryless BFGS update formula and an efficient optimization algorithm were established. To improve the stability of the numerical experiment, we obtained the scaling parameter by minimizing the upper bound of the condition number. The global convergence of the algorithm was proved, and numerical experiments showed that the algorithm was efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. On the application of some fixed-point techniques to Fredholm integral equations of the second kind.
- Author
Ezquerro, J. A. and Hernández-Verón, M. A.
- Abstract
It is known that the global convergence of the method of successive approximations is obtained by means of the Banach contraction principle. In this paper, we study the global convergence of the method by means of a technique that uses auxiliary points and, as a consequence of this study, we obtain fixed-point type results on closed balls. We apply the study to nonlinear Fredholm integral equations of the second kind. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
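The method of successive approximations referenced above is plain fixed-point iteration, globally convergent on a complete space when the operator is a contraction (Banach's principle). A minimal sketch:

```python
import numpy as np

def successive_approximations(T, x0, tol=1e-10, max_iter=1000):
    """Fixed-point iteration x <- T(x). If T is a q-contraction with q < 1,
    the iterates converge globally to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if np.linalg.norm(np.asarray(x_new) - np.asarray(x)) < tol:
            return x_new
        x = x_new
    return x
```

For a discretized Fredholm equation of the second kind, T would take the form x ↦ g + λ K x, where K is a quadrature approximation of the integral operator (a contraction when |λ|·||K|| < 1).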
29. On Mean-Optimal Robust Linear Discriminant Analysis.
- Author
Li, Xiangyu and Wang, Hua
- Subjects
FISHER discriminant analysis, OPTIMIZATION algorithms, SUPERVISED learning, ALGORITHMS, SERVER farms (Computer network management)
- Abstract
Linear discriminant analysis (LDA) is widely used for dimensionality reduction under supervised learning settings. The traditional LDA objective aims to minimize the ratio of squared Euclidean distances, which may not perform optimally on noisy datasets. Multiple robust LDA objectives have been proposed to address this problem, but their implementations have two major limitations. One is that their mean calculations use the squared ℓ2-norm distance to center the data, which is not valid when the objective depends on other distance functions. The second is that there is no generalized optimization algorithm to solve different robust LDA objectives. In addition, most existing algorithms can only guarantee that the solution is locally optimal rather than globally optimal. In this article, we review multiple robust loss functions and propose a new and generalized robust objective for LDA. Besides, to better remove the mean value within the data, our objective uses an optimal way to center the data through learning. As one important algorithmic contribution, we derive an efficient iterative algorithm to optimize the resulting non-smooth and non-convex objective function. We theoretically prove that our solution algorithm guarantees that both the objective and the solution sequences converge to globally optimal solutions at a sub-linear convergence rate. The results of comprehensive experimental evaluations demonstrate the effectiveness of our new method, achieving significant improvements compared to the other competing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Subspace Newton method for sparse group ℓ0 optimization problem.
- Author
Liao, Shichen, Han, Congying, Guo, Tiande, and Li, Bonan
- Subjects
NEWTON-Raphson method, FEATURE selection, PARAMETER estimation, ALGORITHMS
- Abstract
This paper investigates sparse optimization problems characterized by a sparse group structure, where element- and group-level sparsity are jointly taken into account. This particular optimization model has exhibited notable efficacy in tasks such as feature selection, parameter estimation, and the advancement of model interpretability. Central to our study is the scrutiny of the ℓ0 and ℓ2,0 norm regularization model, which, in comparison to alternative surrogate formulations, presents formidable computational challenges. We embark on our study by analyzing the optimality conditions of the sparse group optimization problem, leveraging the notion of a γ-stationary point, whose linkage to local and global minimizers is established. In a subsequent facet of our study, we develop a novel subspace Newton algorithm for the sparse group ℓ0 optimization problem and prove its global convergence property as well as its local second-order convergence rate. Experimental results reveal the superlative performance of our algorithm in terms of both precision and computational expediency, thereby outperforming several state-of-the-art solvers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. A new approximate descent derivative-free algorithm for large-scale nonlinear symmetric equations.
- Author
Wang, Xiaoliang
- Abstract
In this paper, an approximate descent three-term derivative-free algorithm is developed for a large-scale system of nonlinear symmetric equations where the gradients and the difference of the gradients are computed approximately in order to avoid computing and storing the corresponding Jacobian matrices or their approximate matrices. The new method enjoys the sufficient descent property independent of the accuracy of line search strategies and the error bounds of these approximations are established. Under some mild conditions and a nonmonotone line search technique, the global and local convergence properties are established respectively. Numerical results indicate that the proposed algorithm outperforms the other similar ones available in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. A sufficient descent hybrid conjugate gradient method without line search consideration and application
- Author
Salihu, Nasiru, Kumam, Poom, Ibrahim, Sulaiman Mohammed, and Babando, Huzaifa Aliyu
- Published
- 2024
- Full Text
- View/download PDF
34. Hybrid CG-Like Algorithm for Nonlinear Equations and Image Restoration.
- Author
KANIKAR MUANGCHOO and SUPAK PHIANGSUNGNOEN
- Subjects
NONLINEAR operators, OPERATOR equations, IMAGE reconstruction, NONLINEAR equations, MAP projection
- Abstract
This paper introduces a hybrid spectral-conjugate gradient (SCG) method to solve nonlinear monotone operator equations efficiently. The proposed method incorporates a hybrid parameter that encompasses the Polak–Ribière–Polyak (PRP), Liu–Storey (LS), Fletcher–Reeves (FR), and conjugate descent (CD) methods as particular instances. Additionally, we derive the spectral parameter to ensure that the search direction adheres to the sufficient descent condition. The search direction is also designed to be bounded, and under specific conditions, we demonstrate that the sequence produced by our hybrid SCG algorithm converges toward a solution. Furthermore, to underscore the effectiveness of our proposed method, we conducted extensive numerical experiments comparing its performance against that of existing algorithms. These experiments were based on a selection of benchmark nonlinear monotone operator equations, highlighting our proposed algorithm's superior efficiency and potential in practical applications. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
35. On the existence and uniqueness of fixed points in Banach spaces using the Krasnoselskij iterative method.
- Author
HERNÁNDEZ-VERÓN, M. A. and ROMERO, N.
- Subjects
FREDHOLM equations, INTEGRAL equations, BANACH spaces
- Abstract
We analyze the global convergence of the Krasnoselskij method. More specifically, we obtain domains of global convergence in which we locate and separate fixed points of a given operator. To do so, we use auxiliary points instead of imposing conditions on the solution, which is generally unknown. Then, we apply our study to obtain fixed-point-type results. We finish our study by applying the results to Fredholm integral equations. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
36. Two improved nonlinear conjugate gradient methods with application in conditional model regression function.
- Author
Elhamid, Mehamdia Abd, Yacine, Chaib, and Tahar, Bechouat
- Abstract
The conjugate gradient (CG) method is one of the most important ideas in scientific computing; it is applied to solve linear systems of equations and nonlinear optimization problems. In this paper, based on a variant of the Dai-Yuan (DY) method and the Fletcher-Reeves (FR) method, two modified CG methods (named IDY and IFR) are presented and analyzed. The search direction of the presented methods fulfills the sufficient descent condition at each iteration. We establish the global convergence of the proposed algorithms under normal assumptions and the strong Wolfe line search. Preliminary numerical results are presented, demonstrating the promise and effectiveness of the proposed methods. Finally, the proposed methods are further extended to solve the problem of a conditional model regression function. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
37. A family of conjugate gradient methods with guaranteed positiveness and descent for vector optimization.
- Author
He, Qing-Rui, Li, Sheng-Jie, Zhang, Bo-Ya, and Chen, Chun-Rong
- Subjects
CONJUGATE gradient methods, SEARCH algorithms
- Abstract
In this paper, we seek a new modification strategy to ensure the positiveness of the conjugate parameter and, based on the Dai-Yuan (DY) method in the vector setting, propose an associated family of conjugate gradient (CG) methods with guaranteed descent for solving unconstrained vector optimization problems. Several special members of the family are analyzed and the (sufficient) descent condition is established for them (in the vector sense). Under mild conditions, a general convergence result for the CG methods with specific parameters is presented, which, in particular, covers the global convergence of the aforementioned members. Furthermore, for the purpose of comparison, we then consider the direct extension versions of some Dai-Yuan type methods which are obtained by modifying the DY method of the scalar case. These vector extensions can retrieve the classical parameters in the scalar minimization case, and their descent property and global convergence are also studied under mild assumptions. Finally, numerical experiments are given to illustrate the practical behavior of all proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Scaled-PAKKT sequential optimality condition for multiobjective problems and its application to an Augmented Lagrangian method.
- Author
Carrizo, G. A., Fazzio, N. S., Sánchez, M. D., and Schuverdt, M. L.
- Subjects
NONLINEAR programming, ALGORITHMS
- Abstract
Based on the recently introduced Scaled Positive Approximate Karush–Kuhn–Tucker condition for single objective problems, we derive a sequential necessary optimality condition for multiobjective problems with equality and inequality constraints as well as additional abstract set constraints. These necessary sequential optimality conditions for multiobjective problems are subject to the same requirements as ordinary (pointwise) optimization conditions: we show that the updated Scaled Positive Approximate Karush–Kuhn–Tucker condition is necessary for a local weak Pareto point to the problem. Furthermore, we propose a variant of the classical Augmented Lagrangian method for multiobjective problems. Our theoretical framework does not require any scalarization. We also discuss the convergence properties of our algorithm with regard to feasibility and global optimality without any convexity assumption. Finally, some numerical results are given to illustrate the practical viability of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. A VMiPG Method for Composite Optimization with Nonsmooth Term Having No Closed-form Proximal Mapping.
- Author
Zhang, Taiwei, Pan, Shaohua, and Liu, Ruyu
- Abstract
This paper concerns the minimization of the sum of a twice continuously differentiable function f and a nonsmooth convex function g without a closed-form proximal mapping. For this class of nonconvex and nonsmooth problems, we propose a line-search based variable metric inexact proximal gradient (VMiPG) method with uniformly bounded positive definite variable metric linear operators. This method computes in each step an inexact minimizer of a strongly convex model such that the difference between its objective value and the optimal value is controlled by its squared distance from the current iterate, and then seeks an appropriate step-size along the obtained direction with an Armijo line-search criterion. We prove that the iterate sequence converges to a stationary point when f and g are definable in the same o-minimal structure over the real field (R, +, ·), and if in addition the objective function f + g is a KL function of exponent 1/2, the convergence has a local R-linear rate. The proposed VMiPG method with the variable metric linear operator constructed from the Hessian of the function f is applied to the scenario where f and g have a common composite structure, and numerical comparison with a state-of-the-art variable metric line-search algorithm indicates that the Hessian-based VMiPG method has a remarkable advantage in terms of the quality of objective values and the running time for difficult problems such as high-dimensional fused weighted-lasso regressions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. A Descent Generalized RMIL Spectral Gradient Algorithm for Optimization Problems
- Author
Sulaiman Ibrahim M., Kaelo P., Khalid Ruzelan, and Nawawi Mohd Kamal M.
- Subjects
optimization models, spectral cg algorithm, global convergence, line search strategy, Mathematics, Electronic computers. Computer science
- Abstract
This study develops a new conjugate gradient (CG) search direction that incorporates a well-defined spectral parameter, while the step size is required to satisfy the famous strong Wolfe line search (SWP) strategy. The proposed spectral direction is derived based on a recent method available in the literature and satisfies the sufficient descent condition irrespective of the line search strategy and without imposing any restrictions or conditions. The global convergence results of the new formula are established using the assumption that the gradient of the defined smooth function is Lipschitz continuous. To illustrate the computational efficiency of the new direction, the study presents two sets of experiments on a number of benchmark functions. The first experiment is performed by setting uniform SWP parameter values for all the algorithms considered for comparison. For the second experiment, the study evaluates the performance of all the algorithms by considering the exact SWP parameter values used for the numerical experiments as reported in each work. The idea of these experiments is to study the influence of parameters on the computational efficiency of various CG algorithms. The results obtained demonstrate the effect of the parameter value on the robustness of the algorithms.
- Published
- 2024
- Full Text
- View/download PDF
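Spectral CG directions of the kind discussed above take the form d_k = -θ_k g_k + β_k d_{k-1}, where θ_k is a spectral (Barzilai–Borwein-type) scaling. The paper derives its own well-defined parameter; the classical BB choice sketched below is shown only to illustrate the idea:

```python
import numpy as np

def bb_spectral_parameter(x, x_prev, g, g_prev):
    """Classical Barzilai-Borwein scaling theta = s^T s / s^T y with
    s = x - x_prev and y = g - g_prev (assumes s^T y > 0; safeguards
    omitted). Used as the -theta*g part of a spectral CG direction."""
    s = x - x_prev
    y = g - g_prev
    return (s @ s) / (s @ y)
```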
41. An improved Dai‐Liao‐style hybrid conjugate gradient‐based method for solving unconstrained nonconvex optimization and extension to constrained nonlinear monotone equations.
- Author
Yuan, Zihang, Shao, Hu, Zeng, Xiaping, Liu, Pengjie, Rong, Xianglin, and Zhou, Jianhao
- Subjects
LIPSCHITZ continuity, CONJUGATE gradient methods, NONLINEAR equations, CONSTRAINED optimization
- Abstract
In this work, for unconstrained optimization, we introduce an improved Dai‐Liao‐style hybrid conjugate gradient method based on the hybridization‐based self‐adaptive technique, and the search direction generated fulfills the sufficient descent and trust region properties regardless of any line search. The global convergence is established under standard Wolfe line search and common assumptions. Then, combining the hyperplane projection technique and a new self‐adaptive line search, we extend the proposed conjugate gradient method and obtain an improved Dai‐Liao‐style hybrid conjugate gradient projection method to solve constrained nonlinear monotone equations. Under mild conditions, we obtain its global convergence without Lipschitz continuity. In addition, the convergence rates for the two proposed methods are analyzed, respectively. Finally, numerical experiments are conducted to demonstrate the effectiveness of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. A sufficient descent LS-PRP-BFGS-like method for solving nonlinear monotone equations with application to image restoration.
- Author
Abubakar, A. B., Ibrahim, A. H., Abdullahi, M., Aphane, M., and Chen, Jiawei
- Subjects
IMAGE reconstruction, NONLINEAR equations, OPERATOR equations, NONLINEAR operators, MAP projection
- Abstract
In this paper, we propose a method for efficiently obtaining an approximate solution of constrained nonlinear monotone operator equations. The search direction of the proposed method closely aligns with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) direction, known for its low storage requirement. Notably, the search direction is shown to satisfy the sufficient descent condition and to be bounded without using any line search condition. Furthermore, under some standard assumptions, the proposed method converges globally. As an application, the proposed method is applied to solve image restoration problems. The efficiency and robustness of the method in comparison with other methods are tested by numerical experiments using some test problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. A Globally Convergent Inertial First-Order Optimization Method for Multidimensional Scaling.
- Author
Ram, Noga and Sabach, Shoham
- Subjects
MULTIDIMENSIONAL scaling, NONSMOOTH optimization, DATA reduction, DATA visualization, ALGORITHMS
- Abstract
Multidimensional scaling (MDS) is a popular tool for dimensionality reduction and data visualization. Given distances between data points and a target low dimension, the MDS problem seeks to find a configuration of these points in the low-dimensional space, such that the inter-point distances are preserved as well as possible. We focus on the most common approach to formulating the MDS problem, known as stress minimization, which results in a challenging non-smooth and non-convex optimization problem. In this paper, we propose an inertial version of the well-known SMACOF algorithm, which we call AI-SMACOF. This algorithm is proven to be globally convergent, and to the best of our knowledge this is the first result of this kind for algorithms aiming at solving the MDS stress minimization problem. In addition to the theoretical findings, numerical experiments provide further evidence for the superiority of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Another modified version of RMIL conjugate gradient method.
- Author
Yousif, Osman Omer Osman and Saleh, Mohammed A.
- Subjects
CONJUGATE gradient methods, TECHNOLOGY convergence, SIMPLICITY
- Abstract
Due to their simplicity and global convergence properties, conjugate gradient (CG) methods are widely used for solving unconstrained optimization problems, especially those of large scale. To establish global convergence and obtain better numerical performance in practice, much effort has been devoted to developing new CG methods or modifying well-known ones. In 2012, Rivaie et al. proposed a new CG method, called RMIL, which has good numerical results and is globally convergent under the exact line search. However, in 2016, Dai pointed out a mistake in the proof of global convergence of RMIL and, to guarantee global convergence, suggested a modified version of RMIL, called RMIL+. In this paper, we present another modified version of RMIL, which is globally convergent under the exact line search. Furthermore, to support the theoretical proof of global convergence of the modified version in practical computation, a numerical experiment comparing it with RMIL, RMIL+, and CG-DESCENT was conducted. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
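The RMIL parameter around which the abstract above revolves is β_k = g_k^T(g_k - g_{k-1}) / ||d_{k-1}||²; modified versions such as RMIL+ restrict its value to secure the convergence proof. A one-line sketch of the base formula (the specific restriction used by each variant is left to the papers):

```python
import numpy as np

def beta_rmil(g_new, g_old, d_old):
    """RMIL conjugate parameter beta = g_k^T (g_k - g_{k-1}) / ||d_{k-1}||^2
    (Rivaie et al., 2012). Modified versions restrict this value; it is
    shown here unmodified."""
    return g_new @ (g_new - g_old) / (d_old @ d_old)
```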
45. An inertial proximal splitting method with applications.
- Author
Wang, Xiaoquan, Shao, Hu, Liu, Pengjie, and Yang, Wenli
- Subjects
GLOBAL optimization, IMAGE processing
- Abstract
In this paper, we propose an inertial proximal splitting method for solving non-convex optimization problems; the new method employs the idea of the inertial proximal point to improve computational efficiency. Under the assumptions that the sequence generated by the new method is bounded and the auxiliary function satisfies the Kurdyka–Łojasiewicz property, global convergence with a more relaxed parameter range is proved for the proposed method. Moreover, numerical results on SCAD, image processing, and robust PCA non-convex problems are reported to demonstrate the effectiveness and superiority of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. An efficient iterative method for matrix sign function with application in stability analysis of control systems using spectrum splitting.
- Author
Sharma, Pallvi and Kansal, Munish
- Subjects
MATRIX functions, SYSTEMS theory, DYNAMICAL systems, EIGENVALUES, PHOTOCATHODES, MATRICES (Mathematics)
- Abstract
The goal of this study is to construct a novel iterative method to compute the matrix sign function using a different approach. The new method is shown to be globally convergent and asymptotically stable. It achieves the sixth order of convergence and only requires five matrix–matrix multiplications. The obtained results are extended to compute the number of eigenvalues of a matrix in a specified region of the complex plane. This is done by performing an appropriate sequence of matrix sign computations. An application of this technique is discussed for stability analysis of linear time-invariant dynamic systems in control theory. Numerical results are given to justify the effective performance and superiority of the proposed method. Matrices of various sizes have been considered for this purpose. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. A modified Fletcher-Reeves conjugate gradient method for unconstrained optimization with applications in image restoration.
- Author
Ahmed, Zainab Hassan, Hbaib, Mohamed, and Abbo, Khalil K.
- Subjects
CONJUGATE gradient methods, ALGORITHMS
- Abstract
The Fletcher-Reeves (FR) method is widely recognized for its drawbacks, such as generating unfavorable directions and taking small steps, which can lead to subsequent poor directions and steps. To address this issue, we propose a modification to the FR method and then develop it into a three-term conjugate gradient method in this paper. The suggested methods, named "HZF" and "THZF", preserve the descent property of the FR method while mitigating the drawbacks. The algorithms incorporate strong Wolfe line search conditions to ensure effective convergence. Through numerical comparisons with other conjugate gradient algorithms, our modified approach demonstrates superior performance. The results highlight the improved efficacy of the HZF algorithm compared to the FR and three-term FR conjugate gradient methods. The new algorithm was also applied to the image restoration problem and proved to be highly effective compared to other algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. A new diagonal quasi-Newton algorithm for unconstrained optimization problems.
- Author
Nosrati, Mahsa and Amini, Keyvan
- Subjects
QUASI-Newton methods, EQUATIONS, COLLECTIONS
- Abstract
We present a new diagonal quasi-Newton method for solving unconstrained optimization problems based on the weak secant equation. To control the diagonal elements, the new method uses new criteria to generate the Hessian approximation. We establish the global convergence of the proposed method with the Armijo line search. Numerical results on a collection of standard test problems demonstrate the superiority of the proposed method over several existing diagonal methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
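The weak secant equation underlying the diagonal quasi-Newton method above requires only the scalar condition s_k^T B_{k+1} s_k = s_k^T y_k, rather than the full secant equation B_{k+1} s_k = y_k. One simple, hypothetical way to enforce it with a diagonal matrix, shown purely to make the condition concrete (the paper's update uses more refined criteria on the diagonal elements):

```python
import numpy as np

def diagonal_update_weak_secant(B_diag, s, y):
    """Rescale a diagonal Hessian approximation B = diag(B_diag) so that
    the weak secant equation s^T B s = s^T y holds exactly.
    Illustrative only: a uniform rescaling, not the paper's update."""
    denom = s @ (B_diag * s)
    if denom <= 0:
        return B_diag  # keep the previous approximation if scaling fails
    return B_diag * ((s @ y) / denom)
```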
49. A High-Order Numerical Scheme for Efficiently Solving Nonlinear Vectorial Problems in Engineering Applications.
- Author
Shams, Mudassir and Carpentieri, Bruno
- Subjects
NONLINEAR equations, HEAT equation, NONLINEAR systems, ANALYTICAL solutions, PROBLEM solving
- Abstract
In scientific and engineering disciplines, vectorial problems involving systems of equations or functions with multiple variables frequently arise, often defying analytical solutions and necessitating numerical techniques. This research introduces an efficient numerical scheme capable of simultaneously approximating all roots of nonlinear equations with a convergence order of ten, specifically designed for vectorial problems. Random initial vectors are employed to assess the global convergence behavior of the proposed scheme. The newly developed method surpasses methods in the existing literature in terms of accuracy, consistency, computational CPU time, residual error, and stability. This superiority is demonstrated through numerical experiments tackling engineering problems and solving heat equations under various diffusibility parameters and boundary conditions. The findings underscore the efficacy of the proposed approach in addressing complex nonlinear systems encountered in diverse applied scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
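"Simultaneously approximating all roots", as in the abstract above, is the hallmark of Weierstrass/Durand–Kerner-type iterations; the paper's scheme is a tenth-order member of this family. The classical second-order Durand–Kerner iteration below illustrates the idea (it is not the paper's method):

```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """Durand-Kerner method: approximates all roots of a polynomial at once.
    coeffs lists coefficients from the highest degree down."""
    p = np.poly1d(coeffs)
    n = len(coeffs) - 1
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)  # standard distinct start points
    for _ in range(max_iter):
        delta = np.empty(n, dtype=complex)
        for i in range(n):
            others = np.delete(z, i)
            # Weierstrass correction: p(z_i) / (a_n * prod_{j!=i}(z_i - z_j))
            delta[i] = p(z[i]) / (coeffs[0] * np.prod(z[i] - others))
        z = z - delta
        if np.max(np.abs(delta)) < tol:
            break
    return z

# Example: the three cube roots of unity, from z^3 - 1 = 0.
roots = durand_kerner([1, 0, 0, -1])
```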
50. Convexification Numerical Method for the Retrospective Problem of Mean Field Games.
- Author
Klibanov, Michael V., Li, Jingzhi, and Yang, Zhipeng
- Abstract
The convexification numerical method with the rigorously established global convergence property is constructed for a problem for the Mean Field Games System of the second order. This is the problem of the retrospective analysis of a game of infinitely many rational players. In addition to traditional initial and terminal conditions, one extra terminal condition is assumed to be known. Carleman estimates and a Carleman Weight Function play the key role. Numerical experiments demonstrate a good performance for complicated functions. Various versions of the convexification have been actively used by this research team for a number of years to numerically solve coefficient inverse problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF