131 results for "global convergence"
Search Results
2. About the existence and uniqueness of solutions for some second-order nonlinear BVPs
- Author
- Universitat Politècnica de València. Escuela Técnica Superior de Ingenieros de Telecomunicación - Escola Tècnica Superior d'Enginyers de Telecomunicació, AGENCIA ESTATAL DE INVESTIGACION, Universitat Politècnica de València, Science and Engineering Research Board, India, Yadav, Sonia, Singh, Sukhjit, Hernández-Verón, M. A., Martínez Molada, Eulalia, Kumar, Ajay, and Badoni, R.P.
- Abstract
[EN] The significance of our work is to solve some second-order nonlinear boundary value problems. To do this, we exploit the equivalence of the problems considered with certain integral equations and obtain a fixed-point-type result for these integral equations. This result provides the existence and uniqueness of solutions for the second-order nonlinear boundary value problems considered. As a novelty, in this fixed-point-type result we use a family of third-order iterative processes to approximate the solution, instead of the usually considered method of Successive Approximations, which is only linearly convergent. © 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
- Published
- 2023
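To make the baseline concrete, here is a minimal sketch of the linearly convergent method of Successive Approximations applied to the integral-equation form of a model problem. The specifics are our assumptions, not the paper's: the model BVP -u'' = f(t, u) with u(0) = u(1) = 0, its standard Green's function, and a simple rectangle-rule quadrature; the paper's third-order iterative family is not reproduced here.

import numpy as np

def successive_approximations(f, n=201, tol=1e-10, max_iter=500):
    # Picard iteration u <- integral of G(t, s) f(s, u(s)) ds, the linearly
    # convergent baseline the abstract contrasts with; G is the Green's
    # function of -u'' with homogeneous Dirichlet boundary conditions.
    t = np.linspace(0.0, 1.0, n)
    h = t[1] - t[0]
    T, S = np.meshgrid(t, t, indexing="ij")            # T[i,j] = t_i, S[i,j] = s_j
    G = np.where(S <= T, S * (1.0 - T), T * (1.0 - S))
    u = np.zeros(n)
    for k in range(max_iter):
        u_next = h * (G @ f(t, u))                     # rectangle-rule quadrature
        if np.max(np.abs(u_next - u)) < tol:
            return u_next, k
        u = u_next
    return u, max_iter

# model problem: -u'' = sin(u) + t, u(0) = u(1) = 0
u, iters = successive_approximations(lambda t, u: np.sin(u) + t)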
4. Minimizing L1 over L2 norms on the gradient.
- Author
- Wang, Chao, Tao, Min, Chuah, Chen-Nee, Nagy, James, and Lou, Yifei
- Abstract
In this paper, we study the L1/L2 minimization on the gradient for imaging applications. Several recent works have demonstrated that L1/L2 is better than the L1 norm when approximating the L0 norm to promote sparsity. Consequently, we postulate that applying L1/L2 on the gradient is better than the classic total variation (the L1 norm on the gradient) to enforce the sparsity of the image gradient. Numerically, we design a specific splitting scheme, under which we can prove subsequential and global convergence for the alternating direction method of multipliers (ADMM) under certain conditions. Experimentally, we demonstrate visible improvements of L1/L2 over L1 and other nonconvex regularizations for image recovery from low-frequency measurements and two medical applications of MRI and CT reconstruction. Finally, we reveal some empirical evidence on the superiority of L1/L2 over L1 when recovering piecewise constant signals from low-frequency measurements to shed light on future works.
- Published
- 2022
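A toy comparison of the two gradient regularizers on 1-D signals (our own illustration, not the paper's ADMM solver); it shows why the ratio favors sparse gradients where plain total variation does not.

import numpy as np

def tv(x):                        # L1 norm of the discrete gradient
    return np.abs(np.diff(x)).sum()

def l1_over_l2(x, eps=1e-12):     # the ratio regularizer on the gradient
    g = np.diff(x)
    return np.abs(g).sum() / (np.linalg.norm(g) + eps)

piecewise = np.repeat([0.0, 1.0, 0.0], 50)    # two jumps: sparse gradient
ramp = np.linspace(0.0, 1.0, 150)             # dense, small gradient entries

print(tv(piecewise), l1_over_l2(piecewise))   # 2.0, about 1.41
print(tv(ramp), l1_over_l2(ramp))             # 1.0, about 12.2
# TV scores the ramp lower, while L1/L2 clearly prefers the
# piecewise-constant signal, matching the sparsity argument above.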
5. A novel update rule of HALS algorithm for nonnegative matrix factorization and Zangwill’s global convergence
- Author
- Sano, Takehiro, Migita, Tsuyoshi, and Takahashi, Norikazu
- Abstract
Nonnegative Matrix Factorization (NMF) has attracted a great deal of attention as an effective technique for dimensionality reduction of large-scale nonnegative data. Given a nonnegative matrix, NMF aims to obtain two low-rank nonnegative factor matrices by solving a constrained optimization problem. The Hierarchical Alternating Least Squares (HALS) algorithm is a well-known and widely-used iterative method for solving such optimization problems. However, the original update rule used in the HALS algorithm is not well defined. In this paper, we propose a novel well-defined update rule of the HALS algorithm, and prove its global convergence in the sense of Zangwill. Unlike conventional globally-convergent update rules, the proposed one allows variables to take the value of zero and hence can obtain sparse factor matrices. We also present two stopping conditions that guarantee the finite termination of the HALS algorithm. The practical usefulness of the proposed update rule is shown through experiments using real-world datasets.
- Published
- 2022
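For orientation, a sketch of the conventional HALS sweep, with the common epsilon lower bound that keeps the update well defined; this is the baseline, under our assumptions. The paper's contribution is a modified rule that instead allows exact zeros (hence sparser factors) while retaining global convergence in the sense of Zangwill.

import numpy as np

def hals_nmf(X, r, n_iter=300, eps=1e-12):
    # Conventional HALS: cycle over rank-one factors, updating one column
    # of W (resp. row of H) at a time in closed form. The eps floor is the
    # usual fix that keeps denominators and factors positive.
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        XHt = X @ H.T
        HHt = H @ H.T
        for k in range(r):
            W[:, k] = np.maximum(eps, W[:, k] + (XHt[:, k] - W @ HHt[:, k]) / max(HHt[k, k], eps))
        WtX = W.T @ X
        WtW = W.T @ W
        for k in range(r):
            H[k, :] = np.maximum(eps, H[k, :] + (WtX[k, :] - WtW[k, :] @ H) / max(WtW[k, k], eps))
    return W, H

rng = np.random.default_rng(1)
X = rng.random((30, 5)) @ rng.random((5, 40))       # rank-5 nonnegative data
W, H = hals_nmf(X, r=5)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))  # relative error should be small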
6. Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
- Author
- Wang, Xiaoyu and Yuan, Yaxiang
- Abstract
We present a stochastic trust-region model-based framework in which the trust-region radius is related to the probabilistic models. In particular, we propose a specific algorithm, termed STRME, in which the trust-region radius depends linearly on the norm of the gradient used to define the latest model. Complexity results for the STRME method in the nonconvex, convex and strongly convex settings are presented, and they match those of existing algorithms based on probabilistic properties. In addition, several numerical experiments are carried out to reveal the benefits of the proposed method compared to existing stochastic trust-region methods and other relevant stochastic gradient methods. © 2022 Global Science Press. All rights reserved.
- Published
- 2022
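A deterministic caricature of the radius rule, under our simplifying assumptions: exact gradients and a first-order Cauchy-type step stand in for the probabilistic models, so this is only the coupling idea, not the stochastic algorithm.

import numpy as np

def tr_radius_linear_in_grad(f, grad, x0, mu=1.0, eta=0.1, iters=100, tol=1e-10):
    # The trust-region radius is tied linearly to the norm of the gradient
    # that defines the latest model, in the spirit of STRME.
    x = np.asarray(x0, float)
    for _ in range(iters):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        delta = mu * gnorm                         # radius ~ ||gradient||
        s = -(delta / gnorm) * g                   # Cauchy-type step
        rho = (f(x) - f(x + s)) / (delta * gnorm)  # actual vs predicted decrease
        if rho > eta:
            x = x + s
            mu = min(2.0 * mu, 10.0)               # expand on success
        else:
            mu *= 0.5                              # shrink on failure
    return x

print(tr_radius_linear_in_grad(lambda x: 0.5 * x @ x, lambda x: x, np.array([3.0, -4.0])))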
7. Maneuvering Angle Rigid Formations With Global Convergence Guarantees
- Author
- Chen, Liangming, Lin, Zhiyun, De Marina, Hector Garcia, Sun, Zhiyong, and Feroskhan, Mir
- Abstract
Angle rigid multi-agent formations can simultaneously undergo translational, rotational, and scaling maneuvering, therefore combining the maneuvering capabilities of both distance and bearing rigid formations. However, maneuvering angle rigid formations in 2D or 3D with global convergence guarantees is shown to be a challenging problem in the existing literature even when relative position measurements are available. Motivated by angle-induced linear equations in 2D triangles and 3D tetrahedra, this paper aims to solve this challenging problem in both 2D and 3D under a leader-follower framework. For the 2D case where the leaders have constant velocities, by using local relative position and velocity measurements, a formation maneuvering law is designed for the followers governed by double-integrator dynamics. When the leaders have time-varying velocities, a sliding mode formation maneuvering law is proposed by using the same measurements. For the 3D case, to establish an angle-induced linear equation for each tetrahedron, we assume that all the followers' coordinate frames share a common Z direction. Then, a formation maneuvering law is proposed for the followers to globally maneuver Z-weakly angle rigid formations in 3D. The extension to Lagrangian agent dynamics and the construction of the desired rigid formations by using the minimum number of angle constraints are also discussed. Simulation examples are provided to validate the effectiveness of the proposed algorithms.
- Published
- 2022
8. The Boosted DC Algorithm for Linearly Constrained DC Programming
- Author
- Universidad de Alicante. Departamento de Matemáticas, Aragón Artacho, Francisco Javier, Campoy, Rubén, and Vuong, Phan T.
- Abstract
The Boosted Difference of Convex functions Algorithm (BDCA) has been recently introduced to accelerate the performance of the classical Difference of Convex functions Algorithm (DCA). This acceleration is achieved thanks to an extrapolation step from the point computed by DCA via a line search procedure. In this work, we propose an extension of BDCA that can be applied to difference of convex functions programs with linear constraints, and prove that every cluster point of the sequence generated by this algorithm is a Karush–Kuhn–Tucker point of the problem if the feasible set has a Slater point. When the objective function is quadratic, we prove that any sequence generated by the algorithm is bounded and R-linearly (geometrically) convergent. Finally, we present some numerical experiments where we compare the performance of DCA and BDCA on some challenging problems: to test the copositivity of a given matrix, to solve one-norm and infinity-norm trust-region subproblems, and to solve piecewise quadratic problems with box constraints. Our numerical results demonstrate that this new extension of BDCA outperforms DCA.
- Published
- 2022
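A minimal unconstrained sketch of the boosting step. We assume the simple DC decomposition f = g - h with g(x) = ||x||^2/2, so the DCA subproblem is solved in closed form by y = grad h(x); the paper's linearly constrained extension and its KKT analysis are not reproduced.

import numpy as np

def bdca(f, grad_h, x0, lam0=2.0, alpha=0.1, beta=0.5, iters=100, tol=1e-10):
    # DCA step followed by the BDCA extrapolation: backtrack along
    # d = y - x from the DCA point y until f decreases sufficiently.
    x = np.asarray(x0, float)
    for _ in range(iters):
        y = grad_h(x)                  # DCA point for g(x) = 0.5*||x||^2
        d = y - x
        if np.linalg.norm(d) < tol:
            break
        lam = lam0                     # boosting line search
        while f(y + lam * d) > f(y) - alpha * lam**2 * (d @ d) and lam > 1e-8:
            lam *= beta
        x = y + lam * d
    return x

# toy DC function: f(x) = 0.5*||x||^2 - sum(sqrt(1 + x_i^2)), minimized at 0
f = lambda x: 0.5 * (x @ x) - np.sum(np.sqrt(1.0 + x**2))
grad_h = lambda x: x / np.sqrt(1.0 + x**2)
print(bdca(f, grad_h, np.array([5.0, -3.0])))   # -> near [0, 0]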
9. A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Author
- Yang, Minghan, Milzarek, Andre, Wen, Zaiwen, and Zhang, Tong
- Abstract
In this paper, a novel stochastic extra-step quasi-Newton method is developed to solve a class of nonsmooth nonconvex composite optimization problems. We assume that the gradient of the smooth part of the objective function can only be approximated by stochastic oracles. The proposed method combines general stochastic higher-order steps derived from an underlying proximal-type fixed-point equation with additional stochastic proximal gradient steps to guarantee convergence. Based on suitable bounds on the step sizes, we establish global convergence to stationary points in expectation, and an extension of the approach using variance reduction techniques is discussed. Motivated by large-scale and big data applications, we investigate a stochastic coordinate-type quasi-Newton scheme that allows us to generate cheap and tractable stochastic higher-order directions. Finally, numerical results on large-scale logistic regression and deep learning problems show that our proposed algorithm compares favorably with other state-of-the-art methods. © 2021, Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society.
- Published
- 2022
10. On ADMM in Deep Learning: Convergence and Saturation-Avoidance
- Author
- Zeng, Jinshan, Lin, Shao-Bo, Yao, Yuan, and Zhou, Ding-Xuan
- Abstract
In this paper, we develop an alternating direction method of multipliers (ADMM) for training deep neural networks with sigmoid-type activation functions (called the sigmoid-ADMM pair), mainly motivated by the gradient-free nature of ADMM, which avoids the saturation of sigmoid-type activations, and by the advantages of deep neural networks with sigmoid-type activations (deep sigmoid nets) over their rectified linear unit (ReLU) counterparts (deep ReLU nets) in terms of approximation. In particular, we prove that the approximation capability of deep sigmoid nets is not worse than that of deep ReLU nets by showing that the ReLU activation function can be well approximated by deep sigmoid nets with two hidden layers and finitely many free parameters, but not vice versa. We also establish the global convergence of the proposed ADMM for the nonlinearly constrained formulation of deep sigmoid net training, from arbitrary initial points to a Karush-Kuhn-Tucker (KKT) point, at a rate of order O(1/k). Besides the sigmoid activation, this convergence theorem holds for a general class of smooth activations. Compared with the widely used stochastic gradient descent (SGD) algorithm for training deep ReLU nets (the ReLU-SGD pair), the proposed sigmoid-ADMM pair is practically stable with respect to the algorithmic hyperparameters, including the learning rate, initialization schemes and the pre-processing of the input data. Moreover, we find that, in approximating and learning simple but important functions, the proposed sigmoid-ADMM pair numerically outperforms the ReLU-SGD pair. © 2021 Jinshan Zeng, Shao-Bo Lin, Yuan Yao, Ding-Xuan Zhou.
- Published
- 2021
13. A Compact Neural Network for Fused Lasso Signal Approximator
- Author
- Mohammadi, Majid
- Abstract
The fused lasso signal approximator (FLSA) is a vital optimization problem with extensive applications in signal processing and biomedical engineering. However, the optimization problem is difficult to solve since it is both nonsmooth and nonseparable. The existing numerical solutions involve the use of several auxiliary variables in order to deal with the nondifferentiable penalty, and the resulting algorithms are therefore both time- and memory-inefficient. This paper proposes a compact neural network to solve the FLSA. The neural network has a one-layer structure with the number of neurons proportional to the dimension of the given signal, thanks to the use of consecutive projections. The proposed neural network is stable in the Lyapunov sense and is guaranteed to converge globally to the optimal solution of the FLSA. Experiments on several applications from signal processing and biomedical engineering confirm the reasonable performance of the proposed neural network.
- Published
- 2021
- Full Text
- View/download PDF
16. A primal-dual interior-point method capable of rapidly detecting infeasibility for nonlinear programs
- Author
- Dai, Y.H., Liu, X.W., and Sun, Jie
- Abstract
With the help of a logarithmic-barrier augmented Lagrangian function, we can obtain closed-form solutions for the slack variables of the logarithmic-barrier problems of nonlinear programs. As a result, a two-parameter primal-dual nonlinear system is proposed, which corresponds, as one of the two parameters vanishes, to either the Karush-Kuhn-Tucker point or the infeasible stationary point of the nonlinear program. Based on this distinctive system, we present a primal-dual interior-point method capable of rapidly detecting infeasibility of nonlinear programs. The method generates interior-point iterates without truncation of the step. It is proved that our method converges to a Karush-Kuhn-Tucker point of the original problem as the barrier parameter tends to zero. Otherwise, the scaling parameter tends to zero, and the method converges to either an infeasible stationary point or a singular stationary point of the original problem. Under suitable conditions, the method can be superlinearly or quadratically convergent to the Karush-Kuhn-Tucker point if the original problem is feasible, and superlinearly or quadratically convergent to the infeasible stationary point when the problem is infeasible. Preliminary numerical results show that the method is efficient in solving some simple but hard problems, and the superlinear convergence to an infeasible stationary point is demonstrated on two infeasible problems from the literature.
- Published
- 2020
17. On a variational method for stiff differential equations arising from chemistry kinetics
- Author
- Universidad Politécnica de Cartagena, Universidad de Cádiz, Amat Plata, Sergio, Legaz Almansa, María José, and Ruiz Álvarez, Juan
- Abstract
For the approximation of stiff systems of ODEs arising in chemical kinetics, implicit integrators emerge as good candidates. This paper proposes a variational approach for this type of system. In addition to introducing the technique, we present its most basic properties and test its numerical performance through some experiments. The main advantage with respect to other implicit methods is that our approach is globally convergent, whereas the other approaches need to ensure convergence of the iterative scheme used to approximate the nonlinear equations that arise from the implicitness; those iterative methods have only bounded basins of attraction.
- Published
- 2019
19. Globalizing a nonsmooth Newton method via path search
- Author
- Klatte, Diethard and Bütikofer, Stephan
- Abstract
We give a framework for the globalization of a nonsmooth Newton method introduced by B. Kummer. We start by recalling Kummer's approach to the convergence analysis of this method and state his results for local convergence. In a second part we give a globalized version of this method, in which we use a path-search idea to control the descent. After elaborating the single steps, we analyze and discuss the proofs of global convergence and of local superlinear or quadratic convergence of the algorithm. We end by illustrating our ideas on some examples.
- Published
- 2019
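A minimal sketch of the simplest instance of the idea, under our assumptions: the "path" is the straight segment to the Newton point, so the path search reduces to a backtracking line search on ||F||, and G(x) supplies one element of a Newton map. Kummer's framework is far more general than this.

import numpy as np

def damped_nonsmooth_newton(F, G, x0, beta=0.5, sigma=1e-4, iters=100, tol=1e-12):
    # Backtrack along the Newton direction until the residual norm shows
    # sufficient decrease; this is the line-search special case of the
    # path-search globalization described above.
    x = np.asarray(x0, float)
    for _ in range(iters):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = np.linalg.solve(G(x), -Fx)
        t = 1.0
        while np.linalg.norm(F(x + t * d)) > (1 - sigma * t) * np.linalg.norm(Fx) and t > 1e-12:
            t *= beta
        x = x + t * d
    return x

# nonsmooth test equation F(x) = 2x + |x| - 1 = 0, solution x = 1/3
F = lambda x: 2.0 * x + np.abs(x) - 1.0
G = lambda x: np.atleast_2d(2.0 + np.sign(x[0]))   # element of the Clarke Jacobian
print(damped_nonsmooth_newton(F, G, np.array([-5.0])))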
20. Globalizing a nonsmooth Newton method via path search, invited presentation at the University of Graz: SFB Colloquium, Graz, 9 April 2008
- Author
- Bütikofer, Stephan
- Abstract
We give a framework for the globalization of a nonsmooth Newton method, introduced by B. Kummer, for solving Lipschitz equations. We start by recalling Kummer's approach to the convergence analysis of this method and state his results for local convergence. In a second part we give a globalized version of this method, in which we first use a monotone path-search idea to control the descent. After elaborating the single steps, we analyze and discuss the proofs of global convergence and of local superlinear or quadratic convergence of the algorithm. We also sketch a nonmonotone version of the algorithm. In the last part we discuss and illustrate the details of the general algorithm (e.g., the computation of a Newton step and the construction of a path) for our applications and present results from numerical tests for (generalized) semi-infinite optimization and complementarity problems.
- Published
- 2019
21. A globalized nonsmooth Newton Method, applications and numerical results
- Author
- Bütikofer, Stephan and Klatte, Diethard
- Abstract
In a recent article, a globalization scheme for a local nonsmooth Newton method due to Kummer was proposed. This globalized method is particularly suited for nonlinear optimization problems whose objective and constraints are only once continuously differentiable. We show interesting applications from (generalized) semi-infinite optimization which lead to such problems. We work out the necessary technical details for applying the abstract globalized algorithm to nonlinear optimization problems and show numerical results.
- Published
- 2019
22. Globalizing a nonsmooth Newton method via path search
- Author
- Bütikofer, Stephan
- Abstract
We give a framework for the globalization of a nonsmooth Newton method, introduced by B. Kummer, for solving Lipschitz equations. We start by recalling Kummer's approach to the convergence analysis of this method and state his results for local convergence. In a second part we give a globalized version of this method, in which we first use a monotone path-search idea to control the descent. After elaborating the single steps, we analyze and discuss the proofs of global convergence and of local superlinear or quadratic convergence of the algorithm. We also sketch a nonmonotone version of the algorithm. In the last part we discuss and illustrate the details of the general algorithm (e.g., the computation of a path) for some interesting examples and present results from numerical tests.
- Published
- 2019
23. A global Newton method for nonsmooth Lipschitz equations
- Author
- Bütikofer, Stephan
- Abstract
We give a framework for the globalization of a nonsmooth Newton method introduced by B. Kummer. We start by recalling Kummer's approach to the convergence analysis of this method and state his results for local convergence. In a second part we give a globalized version of this method, in which we use a path-search idea to control the descent. After elaborating the single steps, we analyze and discuss the proofs of global convergence and of local superlinear or quadratic convergence of the algorithm. We end by illustrating our ideas on some examples.
- Published
- 2019
24. MM Algorithms For Variance Components Models.
- Author
- Zhou, Hua, Hu, Liuyi, Zhou, Jin, and Lange, Kenneth
- Abstract
Variance components estimation and mixed model analysis are central themes in statistics with applications in numerous scientific disciplines. Despite the best efforts of generations of statisticians and numerical analysts, maximum likelihood estimation and restricted maximum likelihood estimation of variance component models remain numerically challenging. Building on the minorization-maximization (MM) principle, this paper presents a novel iterative algorithm for variance components estimation. Our MM algorithm is trivial to implement and competitive on large data problems. The algorithm readily extends to more complicated problems such as linear mixed models, multivariate response models possibly with missing data, maximum a posteriori estimation, and penalized estimation. We establish the global convergence of the MM algorithm to a Karush-Kuhn-Tucker (KKT) point and demonstrate, both numerically and theoretically, that it converges faster than the classical EM algorithm when the number of variance components is greater than two and all covariance matrices are positive definite.
- Published
- 2019
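The MM principle in miniature, on a one-dimensional lasso-type example of our own choosing (the descent form of the principle, not the paper's variance-component updates): majorize the nonsmooth term by a quadratic at the current iterate and minimize the surrogate in closed form, so each step drives the objective monotonically downhill.

import numpy as np

def mm_soft_threshold(y, lam, iters=100, eps=1e-10):
    # Minimize 0.5*(x - y)^2 + lam*|x| via the majorizer
    # |x| <= x^2/(2|x_k|) + |x_k|/2, which gives a closed-form update.
    # The monotone descent of each step mirrors the monotone ascent that
    # drives the paper's variance-component algorithm.
    x = y if abs(y) > eps else eps
    for _ in range(iters):
        x = y / (1.0 + lam / (abs(x) + eps))
    return x

print(mm_soft_threshold(2.0, 1.0))    # converges to 1.0, the soft-threshold value
print(mm_soft_threshold(-0.4, 1.0))   # shrinks toward 0 when |y| <= lam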
31. Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations
- Author
- Burdakov, Oleg and Kamandi, Ahmad
- Abstract
Multipoint secant and interpolation methods are effective tools for solving systems of nonlinear equations. They use quasi-Newton updates for approximating the Jacobian matrix. Owing to their ability to more completely utilize the information about the Jacobian matrix gathered at the previous iterations, these methods are especially efficient in the case of expensive functions. They are known to be local and superlinearly convergent. We combine these methods with the nonmonotone line search proposed by Li and Fukushima (2000), and study global and superlinear convergence of this combination. Results of numerical experiments are presented. They indicate that the multipoint secant and interpolation methods tend to be more robust and efficient than Broyden’s method globalized in the same way.
- Published
- 2018
- Full Text
- View/download PDF
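A sketch of the globalization in its simplest setting, under our assumptions: plain Broyden updates stand in for the multipoint secant/interpolation updates of the paper, and acceptance uses the Li-Fukushima derivative-free nonmonotone condition with a summable forcing sequence.

import numpy as np

def broyden_nonmonotone(F, x0, sigma=1e-4, beta=0.5, iters=100, tol=1e-10):
    # Broyden's method globalized with the Li-Fukushima (2000) nonmonotone
    # line search: eta_k is summable, which permits occasional increases
    # in ||F|| while still forcing global convergence.
    x = np.asarray(x0, float)
    B = np.eye(x.size)                        # Jacobian approximation
    Fx = F(x)
    for k in range(iters):
        if np.linalg.norm(Fx) < tol:
            break
        d = np.linalg.solve(B, -Fx)
        eta = 1.0 / (k + 1) ** 2              # summable forcing sequence
        lam = 1.0
        while (np.linalg.norm(F(x + lam * d))
               > (1 + eta) * np.linalg.norm(Fx) - sigma * lam**2 * (d @ d)
               and lam > 1e-10):
            lam *= beta
        s = lam * d
        x_new = x + s
        F_new = F(x_new)
        B += np.outer(F_new - Fx - B @ s, s) / (s @ s)   # Broyden rank-one update
        x, Fx = x_new, F_new
    return x

F = lambda x: np.array([np.tanh(x[0]) + x[0] - 1.0, x[1]**3 + x[1] - 2.0])
print(broyden_nonmonotone(F, np.array([0.0, 0.0])))   # -> approx (0.52, 1.0)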
37. The environmental risks of incomplete globalization
- Author
- Karlsson, Rasmus
- Abstract
As the liberal optimism of the long 1990s has faded into a world of growing inequality and resurging nationalism, there is less certainty about the prospects of economic convergence and global integration. Beyond the formidable human cost of maintaining a divided world, the possibility of incomplete globalisation also gives rise to a number of environmental risks. While environmental political theory generally sees strength in localism, history rather shows that a robust world trade system is crucial to offset local resource scarcities and that cosmopolitan norms of solidarity are essential for helping communities to rebuild after environmental catastrophe. In relation to climate change, statist thinking has led to a focus on non-scalable technologies and a silent acceptance of chronic poverty abroad as a way of avoiding a climate emergency. Contrary to such views, this paper argues that accelerating the transition to a fully integrated high-energy planet may more effectively mitigate Anthropocene risks.
- Published
- 2017
- Full Text
- View/download PDF
39. Globalization technique for projected Newton–Krylov methods
- Author
- Chen, Jinhai and Vuik, Cornelis
- Abstract
Large-scale systems of nonlinear equations appear in many applications. In various applications, the solution of the nonlinear equations should also lie in a certain interval. A typical application is a discretized system of reaction diffusion equations: it is well known that the chemical species should be positive, otherwise the solution is not physical and in general blow-up occurs. Recently, a projected Newton method has been developed which can be used to solve this type of problem. A drawback is that the projected Newton method is not globally convergent. This motivates us to develop a new feasible projected Newton–Krylov algorithm for solving a constrained system of nonlinear equations. Combined with a projected gradient direction, our feasible projected Newton–Krylov algorithm circumvents the non-descent drawback of the search directions that appear in classical projected Newton methods. Global and local superlinear convergence of our approach is established under some standard assumptions. Numerical experiments illustrate that the new projected Newton method is globally convergent and is a significant complement to the Newton–Krylov algorithms known in the literature.
- Published
- 2017
- Full Text
- View/download PDF
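A bare-bones sketch of the feasibility mechanism only, under our simplifications: a dense linear solve stands in for the inexact Krylov solver, and the projected-gradient safeguard and line search that make the actual algorithm globally convergent are omitted.

import numpy as np

def projected_newton_box(F, J, x0, lo, hi, iters=50, tol=1e-10):
    # Take a Newton step for F(x) = 0, then project back onto the box
    # [lo, hi] so that every iterate stays feasible.
    x = np.clip(np.asarray(x0, float), lo, hi)
    for _ in range(iters):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = np.linalg.solve(J(x), -Fx)      # dense stand-in for a Krylov solve
        x = np.clip(x + d, lo, hi)          # projection keeps iterates feasible
    return x

# componentwise F(x) = x^2 - 2 with the constraint x in [0, 10]
F = lambda x: x**2 - 2.0
J = lambda x: np.diag(2.0 * x)
print(projected_newton_box(F, J, np.array([5.0, 0.5]), 0.0, 10.0))   # -> sqrt(2)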
40. Global behavior of the Douglas–Rachford method for a nonconvex feasibility problem
- Author
- Universidad de Alicante. Departamento de Matemáticas, Aragón Artacho, Francisco Javier, Borwein, Jonathan M., and Tam, Matthew K.
- Abstract
In recent times the Douglas–Rachford algorithm has been observed empirically to solve a variety of nonconvex feasibility problems including those of a combinatorial nature. For many of these problems current theory is not sufficient to explain this observed success and is mainly concerned with questions of local convergence. In this paper we analyze global behavior of the method for finding a point in the intersection of a half-space and a potentially non-convex set which is assumed to satisfy a well-quasi-ordering property or a property weaker than compactness. In particular, the special case in which the second set is finite is covered by our framework and provides a prototypical setting for combinatorial optimization problems.
- Published
- 2016
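For concreteness, the iteration itself on the paper's prototypical setting, a half-space intersected with a finite set; the particular sets below are our own toy example, and the convergence theory under the well-quasi-ordering assumption is in the paper.

import numpy as np

def douglas_rachford(x0, proj_A, proj_B, iters=100):
    # Plain Douglas-Rachford iteration: x <- x + P_B(2*P_A(x) - x) - P_A(x).
    # Here A is convex (a half-space) and B is finite, hence nonconvex.
    x = np.asarray(x0, float)
    for _ in range(iters):
        a = proj_A(x)
        b = proj_B(2.0 * a - x)
        x = x + b - a
    return x

# A: half-space {z : <c, z> >= 1};  B: a finite set of two points
c = np.array([1.0, 1.0])
proj_A = lambda z: z + max(0.0, (1.0 - c @ z) / (c @ c)) * c
points = np.array([[2.0, -0.5], [-3.0, -3.0]])
proj_B = lambda z: points[np.argmin(np.linalg.norm(points - z, axis=1))]

x = douglas_rachford(np.array([0.0, 0.0]), proj_A, proj_B)
print(proj_A(x))   # shadow iterate lands on [2, -0.5], the point in both sets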
42. Studies on Optimization Methods for Nonlinear Semidefinite Programming Problems
- Author
- Yamakawa, Yuya
- Published
- 2015
43. Studies on block coordinate gradient methods for nonlinear optimization problems with separable structure
- Author
- Hua, Xiaoqin
- Published
- 2015
44. Globally convergent algorithms for finding zeros of duplomonotone mappings
- Author
- Universidad de Alicante. Departamento de Estadística e Investigación Operativa, Aragón Artacho, Francisco Javier, and Fleming, Ronan M.T.
- Abstract
We introduce a new class of mappings, called duplomonotone, which is strictly broader than the class of monotone mappings. We study some of the main properties of duplomonotone functions and provide various examples, including nonlinear duplomonotone functions arising from the study of systems of biochemical reactions. Finally, we present three variations of a derivative-free line search algorithm for finding zeros of systems of duplomonotone equations, and we prove their linear convergence to a zero of the function.
- Published
- 2015
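A generic sketch of the last ingredient, a derivative-free line search on the residual norm along d = -f(x). The acceptance tests tailored to duplomonotone mappings, and the linear-convergence analysis, are in the paper; the toy mapping below is linear and monotone (hence duplomonotone) and is our own choice.

import numpy as np

def residual_descent(f, x0, beta=0.5, sigma=1e-4, iters=200, tol=1e-10):
    # Step along -f(x), halving the step until the residual norm shows
    # sufficient decrease; no derivatives of f are ever required.
    x = np.asarray(x0, float)
    for _ in range(iters):
        fx = f(x)
        nfx = np.linalg.norm(fx)
        if nfx < tol:
            break
        lam = 1.0
        while np.linalg.norm(f(x - lam * fx)) > (1.0 - sigma * lam) * nfx and lam > 1e-12:
            lam *= beta
        x = x - lam * fx
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])               # symmetric positive definite
f = lambda x: A @ x
print(residual_descent(f, np.array([1.0, -2.0])))    # -> near [0, 0]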
48. New Negentropy Optimization Schemes for Blind Signal Extraction of Complex Valued Sources
- Author
- Xu, Peng-cheng, Shen, Yue-hong, and Li, Hui
- Abstract
Blind signal extraction, a hot issue in the field of communication signal processing, aims to retrieve the sources through the optimization of contrast functions. Many contrasts based on higher-order statistics, such as kurtosis, are sensitive to outliers. Thus, to achieve robust results, nonlinear functions are utilized as contrasts to approximate the negentropy criterion, which is also a classical metric for non-Gaussianity. However, existing methods generally have a high computational cost, which leads us to address the problem of efficient optimization of the contrast function. More precisely, we design a novel "reference-based" contrast function based on negentropy approximations, and then propose a new family of algorithms (Alg.1 and Alg.2) to maximize it. Simulations confirm the convergence of our method to a separating solution, which is also analyzed in theory. We also validate the theoretical complexity analysis showing that Alg.2 has a much lower computational cost than Alg.1 and than existing optimization methods based on the negentropy criterion. Finally, experiments on the separation of single-sideband signals illustrate that our method has good prospects in real-world applications.
- Published
- 2015
50. Towards a global model of accounting education
- Author
- Watty, K., Sugahara, S., Abayadeera, N., Perera, L., and McKay, J.
- Abstract
Purpose - The purpose of this paper is to examine the accounting education systems in three countries - Australia, Japan and Sri Lanka - to inform the development and testing (by application) of a Global Model of Accounting Education.
- Published
- 2014