692 results
Search Results
2. A Note on the Paper 'Regularization Proximal Point Algorithm for Common Fixed Points of Nonexpansive Mappings in Banach Spaces'.
- Author
-
Hang, Nguyen and Tuyen, Truong
- Subjects
FIXED point theory, NONEXPANSIVE mappings, BANACH spaces, MATHEMATICAL regularization, STOCHASTIC convergence, ALGORITHMS
- Abstract
In this note, a small gap in the assumptions of the main theorem of T.M. Tuyen (Theorem 3.1, Regularization proximal point algorithm for common fixed points of nonexpansive mappings in Banach spaces, J. Optim. Theory Appl., 152:351-365) is corrected. We give another assumption, which allows us to obtain the strong convergence of the regularization algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
3. Tight Ergodic Sublinear Convergence Rate of the Relaxed Proximal Point Algorithm for Monotone Variational Inequalities.
- Author
-
Gu, Guoyong and Yang, Junfeng
- Subjects
ALGORITHMS, LOGICAL prediction, CONCRETE, MOTIVATION (Psychology)
- Abstract
This paper considers the relaxed proximal point algorithm for solving monotone variational inequality problems; our main contribution is the establishment of a tight ergodic sublinear convergence rate. First, the tight (exact worst-case) convergence rate is computed using the performance estimation framework. This numerical bound asymptotically coincides with the best-known existing rate, whose tightness was unclear. This implies that, without further assumptions, a sublinear convergence rate is likely the best achievable for the relaxed proximal point algorithm. Motivated by the numerical result, a concrete example is constructed, which provides a lower bound on the exact worst-case convergence rate. This lower bound coincides with the numerical bound computed via the performance estimation framework, leading us to conjecture that it is exactly the tight worst-case rate, which is then verified theoretically. We thus establish an ergodic sublinear convergence rate that is tight in terms of both the sublinear order and the constants involved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
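As a rough illustration of the relaxed proximal point iteration discussed in the abstract above, the following sketch applies it to an unconstrained monotone affine operator F(x) = Ax + b, whose resolvent has a closed form. The function name, parameter values, and the affine test operator are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def relaxed_ppa(A, b, x0, lam=1.0, rho=1.5, iters=200):
    """Relaxed proximal point iteration x+ = (1 - rho)*x + rho*J(x),
    where J is the resolvent step for the monotone affine operator
    F(x) = A @ x + b; the relaxation parameter rho lies in (0, 2)."""
    n = len(x0)
    # Resolvent of F: y solves y + lam*(A @ y + b) = x,
    # i.e. y = (I + lam*A)^{-1} @ (x - lam*b).
    J = np.linalg.inv(np.eye(n) + lam * A)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = (1 - rho) * x + rho * (J @ (x - lam * b))
    return x
```

For a monotone A the iterates converge to a zero of F, i.e. to -A⁻¹b when A is invertible; the over-relaxation rho > 1 is what the "relaxed" variant adds to the plain proximal point method.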
4. Unified Approach of Interior-Point Algorithms for P∗(κ)-LCPs Using a New Class of Algebraically Equivalent Transformations.
- Author
-
Illés, Tibor, Rigó, Petra Renáta, and Török, Roland
- Subjects
COMPLEMENTARITY constraints (Mathematics), ALGORITHMS, KERNEL functions, CONCAVE functions
- Abstract
We propose new short-step interior-point algorithms (IPAs) for solving P∗(κ)-linear complementarity problems (LCPs). In order to define the search directions, we use the algebraic equivalent transformation (AET) technique applied to the system describing the central path. A novelty of the paper is that we introduce a whole new class of AET functions for which a unified complexity analysis of the IPAs is presented. This class differs from the ones used in the literature for determining search directions, such as the class of concave functions determined by Haddou, Migot and Omer, self-regular functions, eligible kernel functions and self-concordant functions. We prove that the IPAs using any member φ of the new class of AET functions have polynomial iteration complexity in the size of the problem, the starting point's duality gap, the accuracy parameter and the parameter κ. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. On the Rate of Convergence of the Difference-of-Convex Algorithm (DCA).
- Author
-
Abbaszadehpeivasti, Hadi, de Klerk, Etienne, and Zamani, Moslem
- Subjects
SEMIDEFINITE programming, ALGORITHMS, SOCIAL norms, NONSMOOTH optimization
- Abstract
In this paper, we study the non-asymptotic convergence rate of the DCA (difference-of-convex algorithm), also known as the convex–concave procedure, with two different termination criteria that are suitable for smooth and non-smooth decompositions, respectively. The DCA is a popular algorithm for difference-of-convex (DC) problems and is known to converge to a stationary point of the objective under some assumptions. For certain classes of DC problems, without assuming strong convexity in the DC decomposition, we derive a worst-case rate of O(1/N) on the objective gradient norm after N iterations and give an example showing that this rate is exact. We also provide a new convergence rate of O(1/N) for the DCA with the second termination criterion. Moreover, we derive a new linear convergence rate result for the DCA under the assumption of the Polyak–Łojasiewicz inequality. The novel aspect of our analysis is that it employs semidefinite programming performance estimation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
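The DCA iteration described in the abstract above (linearize the concave part, minimize the remaining convex part) can be sketched on a one-dimensional quadratic DC decomposition. The decomposition g, h and all constants below are illustrative assumptions with a closed-form subproblem, not the paper's setting.

```python
def dca(x0, a=2.0, b=1.0, c=1.0, iters=60):
    """DCA for f(x) = g(x) - h(x) with g(x) = a/2 * x**2 and
    h(x) = b/2 * x**2 + c*x, where a > b > 0.  Each step linearizes
    h at the current iterate x_k and minimizes g minus that
    linearization:  x+ = argmin_x a/2 * x**2 - h'(x_k)*x = h'(x_k)/a."""
    x = x0
    for _ in range(iters):
        x = (b * x + c) / a   # closed-form solution of the convex subproblem
    return x
```

Here the iterates converge linearly to the stationary point x* = c/(a - b), where f'(x*) = (a - b)x* - c = 0, matching the general DCA guarantee of convergence to a stationary point.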
6. A New Ai–Zhang Type Interior Point Algorithm for Sufficient Linear Complementarity Problems.
- Author
-
E.-Nagy, Marianna and Varga, Anita
- Subjects
LINEAR complementarity problem, GREEDY algorithms, INTERIOR-point methods, ALGORITHMS, MATHEMATICAL programming
- Abstract
In this paper, we propose a new long-step interior point method for solving sufficient linear complementarity problems. The new algorithm combines two important approaches from the literature: the main ideas of the long-step interior point algorithm introduced by Ai and Zhang, and the algebraic equivalent transformation technique proposed by Darvay. Similar to the method of Ai and Zhang, our algorithm works in a wide neighborhood of the central path and has the best known iteration complexity of short-step variants. However, due to the properties of the transforming function applied in Darvay's technique, the wide neighborhood definition in the analysis depends on the value of the handicap. We implemented not only the theoretical algorithm but also a greedy variant of the new method (working in a neighborhood independent of the handicap) in MATLAB and tested its efficiency on both sufficient and non-sufficient problem instances. In addition to presenting our numerical results, we also make some interesting observations regarding the analysis of Ai–Zhang type methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Forward–Reflected–Backward Splitting Algorithms with Momentum: Weak, Linear and Strong Convergence Results.
- Author
-
Yao, Yonghong, Adamu, Abubakar, and Shehu, Yekini
- Subjects
MONOTONE operators, HILBERT space, ALGORITHMS
- Abstract
This paper studies the forward–reflected–backward splitting algorithm with momentum terms for the monotone inclusion problem for the sum of a maximal monotone operator and a Lipschitz continuous monotone operator in Hilbert spaces. The forward–reflected–backward splitting algorithm is attractive for such inclusion problems because it requires only one forward evaluation and one backward evaluation per iteration. The results in this paper further explore the convergence behavior of the forward–reflected–backward splitting algorithm with momentum terms. We obtain weak, linear, and strong convergence results while retaining one forward evaluation and one backward evaluation at each iteration. Numerical results show that forward–reflected–backward splitting algorithms with momentum terms are efficient and compare favorably with related splitting algorithms in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
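A minimal sketch of a forward-reflected-backward step with an inertial (momentum) term follows, for A(x) = μx (whose resolvent is explicit) and a skew-symmetric linear B. This is an illustrative scheme in the spirit of the abstract above, not the authors' exact algorithm; the operators, stepsize, and momentum weight are all assumptions.

```python
import numpy as np

def frb_momentum(M, mu, x0, lam=0.3, theta=0.1, iters=500):
    """Forward-reflected-backward iteration with a heavy-ball style
    momentum term (illustrative sketch):
        y_k     = x_k + theta*(x_k - x_{k-1})                 # momentum
        x_{k+1} = J_{lam*A}( y_k - lam*(2*B(x_k) - B(x_{k-1})) )
    with A(x) = mu*x, whose resolvent is y / (1 + lam*mu),
    and B(x) = M @ x Lipschitz continuous and monotone."""
    x_prev = np.array(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(iters):
        y = x + theta * (x - x_prev)                 # inertial extrapolation
        z = y - lam * (2 * M @ x - M @ x_prev)       # reflected forward step
        x_prev, x = x, z / (1 + lam * mu)            # backward (resolvent) step
    return x
```

For a skew-symmetric M and μ > 0, the unique zero of A + B is the origin, so the iterates should shrink to zero; note the single evaluation of M per iteration (the previous value is reused), which is the method's signature feature.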
8. Accelerated Doubly Stochastic Gradient Descent for Tensor CP Decomposition.
- Author
-
Wang, Qingsong, Cui, Chunfeng, and Han, Deren
- Subjects
ALGORITHMS
- Abstract
In this paper, we focus on accelerating the doubly stochastic gradient descent method for computing the CANDECOMP/PARAFAC (CP) decomposition of tensors. This optimization problem has N blocks, where N is the order of the tensor. Under the doubly stochastic framework, each block subproblem is solved by the vanilla stochastic gradient method. However, the convergence analysis requires that the variance converge to zero, which is hard to check in practice and may not hold in some implementations. In this paper, we propose accelerating the stochastic gradient method by momentum acceleration and the variance reduction technique, denoted DS-MVR. Theoretically, the convergence of DS-MVR only requires the variance to be bounded. Under mild conditions, we show that DS-MVR converges to a stochastic ε-stationary solution in Õ(N^(3/2) ε^(-3)) iterations with varying stepsizes and in O(N^(3/2) ε^(-3)) iterations with constant stepsizes, respectively. Numerical experiments on four real-world datasets show that our proposed algorithm obtains better results than the baselines. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Convergence of a Weighted Barrier Algorithm for Stochastic Convex Quadratic Semidefinite Optimization.
- Author
-
Alzalg, Baha and Gafour, Asma
- Subjects
SEMIDEFINITE programming, ALGORITHMS, LOGARITHMIC functions, INTERIOR-point methods, QUADRATIC programming, STOCHASTIC programming
- Abstract
Mehrotra and Özevin (SIAM J Optim 19:1846–1880, 2009) computationally found that a weighted barrier decomposition algorithm for two-stage stochastic conic programs achieves significantly superior performance compared to the standard barrier decomposition algorithms in the literature. Motivated by this, Mehrotra and Özevin (SIAM J Optim 20:2474–2486, 2010) theoretically analyzed the iteration complexity of a decomposition algorithm based on the weighted logarithmic barrier function for two-stage stochastic linear optimization with discrete support. In this paper, we extend the latter paper and its self-concordance analysis from the polyhedral case to the semidefinite case, and analyze the iteration complexity of a weighted logarithmic barrier decomposition algorithm for two-stage stochastic convex quadratic SDP with discrete support. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. Efficient Use of Quantum Linear System Algorithms in Inexact Infeasible IPMs for Linear Optimization.
- Author
-
Mohammadisiahroudi, Mohammadhossein, Fakhimi, Ramin, and Terlaky, Tamás
- Subjects
INTERIOR-point methods, CONJUGATE gradient methods, QUANTUM computing, ALGORITHMS, RESEARCH personnel, LINEAR systems
- Abstract
Quantum computing has attracted significant interest in the optimization community because it can potentially solve classes of optimization problems faster than conventional supercomputers. Several researchers have proposed quantum computing methods, especially quantum interior point methods (QIPMs), to solve convex conic optimization problems. Most of them apply a quantum linear system algorithm (QLSA) at each iteration to compute a Newton step. However, using quantum linear solvers in QIPMs comes with many challenges, such as ill-conditioned systems and the considerable error of quantum solvers. This paper investigates in detail the use of quantum linear solvers in QIPMs. Accordingly, an Inexact Infeasible Quantum Interior Point Method (II-QIPM) is developed to solve linear optimization problems. We also discuss how to obtain an exact solution by iterative refinement (IR) without excessive quantum-solver running time. The proposed IR-II-QIPM shows exponential speed-up with respect to precision compared to previous II-QIPMs. Additionally, we present a quantum-inspired classical variant of the proposed IR-II-QIPM in which QLSAs are replaced by conjugate gradient methods. This classical IR-II-IPM has some advantages compared to its quantum counterpart, as well as to previous classical inexact infeasible IPMs. Finally, computational results with a QISKIT implementation of our QIPM using quantum simulators are presented and analyzed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
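The iterative refinement idea mentioned in the abstract above — wrap an inexact solver in a residual-correction loop to recover high accuracy — can be sketched generically. Here `solve_approx` stands in for any rough solver (in the paper it would be the quantum linear solver); the perturbed-matrix solver in the usage is purely an illustrative assumption.

```python
import numpy as np

def iterative_refinement(solve_approx, A, b, rounds=10):
    """Iterative refinement for A x = b: repeatedly solve for the
    current residual with an inexact solver and add the correction.
    As long as the inexact solver reduces the error by a constant
    factor, the refined solution gains accuracy every round."""
    x = np.zeros_like(b)
    for _ in range(rounds):
        r = b - A @ x               # residual of the current iterate
        x = x + solve_approx(r)     # inexact correction step
    return x
```

The contraction factor per round is governed by how far `solve_approx` deviates from the true inverse, which is why a low-precision (e.g. quantum) solver can still drive the final error down geometrically.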
11. Kernel-Based Full-Newton Step Feasible Interior-Point Algorithm for P∗(κ)-Weighted Linear Complementarity Problem.
- Author
-
Chi, Xiaoni, Wang, Guoqiang, and Lesaja, Goran
- Subjects
LINEAR complementarity problem, INTERIOR-point methods, ALGORITHMS, KERNEL functions
- Abstract
In this paper, we consider a kernel-based full-Newton step feasible interior-point method (IPM) for the P∗(κ)-Weighted Linear Complementarity Problem (WLCP). A specific eligible kernel function is used to define an equivalent form of the central path, the proximity measure, and the search directions. Full-Newton steps are adopted to avoid line searches at each iteration. It is shown that, with appropriate choices of the parameters and a certain condition on the starting point, the iterates always lie in the defined neighborhood of the central path. Assuming strict feasibility of the P∗(κ)-WLCP, it is shown that the IPM converges to an ε-approximate solution of the P∗(κ)-WLCP in a polynomial number of iterations. A few numerical results are provided to indicate the computational performance of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. An Efficient GIPM Algorithm for Computing the Smallest V-Singular Value of the Partially Symmetric Tensor.
- Author
-
Du, Zhuolin, Wang, Chunyan, Chen, Haibin, and Yan, Hong
- Subjects
SOLID mechanics, QUANTUM theory, EIGENVECTORS, ALGORITHMS, QUANTUM entanglement
- Abstract
Real partially symmetric tensors arise from the strong ellipticity condition problem in solid mechanics and the entanglement problem in quantum physics. In this paper, we compute the smallest V-singular value of partially symmetric tensors with orders (p, q). This is a unified notion in the sense that, when (p, q) = (2, 2), the V-singular value coincides with the notion of M-eigenvalue. To this end, we propose a generalized inverse power method with a shift variable to compute the smallest V-singular value and the associated eigenvectors. Global convergence of the algorithm is established; furthermore, it is proven that the proposed algorithm always converges to the smallest V-singular value and the associated eigenvectors. Several numerical experiments show the efficiency of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
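The generalized inverse power method with a shift, as described in the abstract above, specializes in the matrix case to classical shifted inverse iteration. The following sketch shows that matrix analog only (the tensor V-singular-value version requires the tensor contraction machinery of the paper); names and test values are illustrative.

```python
import numpy as np

def inverse_power_smallest(A, shift, x0, iters=100):
    """Shifted inverse power iteration on a symmetric matrix A:
    iterates converge to the eigenvector of the eigenvalue of A
    closest to `shift`, so shift = 0 (with A positive definite)
    targets the smallest eigenvalue."""
    B = np.linalg.inv(A - shift * np.eye(A.shape[0]))
    x = np.array(x0, dtype=float)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = B @ x                    # amplify the targeted eigendirection
        x /= np.linalg.norm(x)       # renormalize
    lam = x @ A @ x                  # Rayleigh quotient estimate
    return lam, x
```

In practice one solves the linear system (A - shift·I) y = x each step instead of forming the inverse; the explicit inverse is kept here only for brevity.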
13. Multilevel Uzawa and Arrow–Hurwicz Algorithms for General Saddle Point Problems.
- Author
-
Badea, Lori
- Subjects
ALGORITHMS, CONVEX sets, MULTIGRID methods (Numerical analysis), HILBERT space, DOMAIN decomposition methods, FINITE element method
- Abstract
In this paper, we introduce and analyze multilevel inexact Uzawa and Arrow–Hurwicz algorithms for solving saddle point problems. For the definition of the problem and of the Uzawa and Arrow–Hurwicz algorithms, we adopt the framework introduced in I. Ekeland and R. Temam, Convex Analysis and Variational Problems, North-Holland Publishing Company, Amsterdam, Oxford, 1976, where the saddle point is defined by optimizations over convex sets. The results are obtained in Hilbert spaces, and therefore they can be applied to obtain convergence results for multigrid methods in finite element spaces. We prove the convergence of the two algorithms, the multilevel Uzawa and the multilevel Arrow–Hurwicz algorithms. We also give new convergence proofs for the Uzawa and Arrow–Hurwicz algorithms themselves to better characterize their convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
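The classical single-level Uzawa iteration that the multilevel methods above build on can be sketched for an equality-constrained quadratic saddle point problem. This is the textbook base method, not the paper's multilevel scheme; the quadratic test data in the usage are illustrative assumptions.

```python
import numpy as np

def uzawa(Q, c, B, d, alpha=0.5, iters=300):
    """Classical Uzawa iteration for the saddle point of
        L(x, lam) = 0.5 * x @ Q @ x - c @ x + lam @ (B @ x - d):
    minimize exactly over x with lam fixed, then take a gradient
    ascent step in lam on the constraint residual B @ x - d."""
    lam = np.zeros(B.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(Q, c - B.T @ lam)   # primal minimization
        lam = lam + alpha * (B @ x - d)         # dual ascent step
    return x, lam
```

The multilevel variants in the paper replace the exact primal solve with inexact (coarse-level) corrections; the dual ascent structure stays the same.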
14. New Outer Proximal Methods for Solving Variational Inequality Problems.
- Author
-
Anh, Pham Ngoc
- Subjects
VARIATIONAL inequalities (Mathematics), ALGORITHMS
- Abstract
In this paper, we propose a new outer proximal approach for solving variational inequality problems in real Euclidean space, where the feasible set is replaced by its polyhedral outer approximation. First, we prove the quasicontractiveness of the outer proximal operator. Second, we apply this property to present two new algorithms and their convergence under strong monotonicity and Lipschitz continuity of the cost mapping. Finally, we give some numerical results for the proposed algorithms and a comparison with other known methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Convergence Rates of the Stochastic Alternating Algorithm for Bi-Objective Optimization.
- Author
-
Liu, Suyun and Vicente, Luis Nunes
- Subjects
STOCHASTIC convergence, ALGORITHMS
- Abstract
Stochastic alternating algorithms for bi-objective optimization are considered when optimizing two conflicting functions for which optimization steps have to be applied separately for each function. Such algorithms consist of applying a certain number of gradient or subgradient descent steps to each single objective at each iteration. In this paper, we show that stochastic alternating algorithms achieve a sublinear convergence rate of O(1/T), under strong convexity, for the determination of a minimizer of a weighted sum of the two functions, parameterized by the number of steps applied to each of them. An extension to the convex case is presented, for which the rate weakens to O(1/√T). These rates also hold in the non-smooth case. Importantly, by varying the proportion of steps applied to each function, one can determine an approximation to the Pareto front. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
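A deterministic toy version of the alternating scheme above makes the Pareto-tracing behavior concrete: alternate gradient steps on two conflicting strongly convex objectives, and steer the limit along the Pareto front by changing the step proportion. The objectives f(x) = (x-1)² and g(x) = (x+1)² and all constants are illustrative assumptions (the paper's algorithm is stochastic).

```python
def alternating_descent(x0, steps_f=1, steps_g=1, lr=0.05, iters=2000):
    """Alternating gradient descent on f(x) = (x - 1)**2 and
    g(x) = (x + 1)**2: apply steps_f gradient steps on f, then
    steps_g steps on g, and repeat.  The ratio steps_f : steps_g
    selects which weighted-sum minimizer (Pareto point) is reached."""
    x = x0
    for _ in range(iters):
        for _ in range(steps_f):
            x -= lr * 2 * (x - 1)   # gradient step on f
        for _ in range(steps_g):
            x -= lr * 2 * (x + 1)   # gradient step on g
    return x
```

With equal step counts the limit sits near the middle of the Pareto front [-1, 1]; increasing steps_f pulls it toward f's minimizer at 1, which is exactly the front-tracing mechanism the abstract describes.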
16. A Minimal Cardinality Solution to Fitting Sawtooth Piecewise-Linear Functions.
- Author
-
Allen, Cody and de Oliveira, Mauricio
- Subjects
MONOTONIC functions, LEAST squares, ALGORITHMS
- Abstract
In this paper, we explore a method to parameterize a linear function with jump discontinuities, which we refer to as a "sawtooth" function, and then develop theory and algorithms for estimating the function parameters from noisy data in a least-squares framework. Because there will always exist a sawtooth function that exactly fits a given data set, one is led to bounding the maximum number of jumps the sawtooth function can have in order to obtain reasonable practical estimates. The main contribution of the paper is a proof that, in a relaxation of the associated least-squares problem in which the constraint on the cardinality of the solutions is replaced by a 1-norm constraint on the vector of jumps, the cardinality of the optimal solutions is a monotonic function of the relaxation parameter. This property allows one to avoid dealing with the combinatorial cardinality constraint and to quickly explore solutions using the proposed convex relaxation. A fitting algorithm based on the proposed results is developed and illustrated with a simple numerical example. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
17. The Algorithm to Locate the Optimal Solution for Production System Subject to Random Machine Breakdown and Failure in Rework for Supply Chain Management.
- Author
-
Chung, Kun-Jen
- Subjects
ALGORITHMS, MATHEMATICAL optimization, MANUFACTURING processes, RANDOM data (Statistics), STOCHASTIC processes, MACHINERY maintenance & repair, SUPPLY chain management, CONVEX domains
- Abstract
Many researchers use conditional convexity to develop solution procedures for locating the optimal solution. However, this paper indicates that solution procedures based on conditional convexity have logical shortcomings from a mathematical point of view. The main purpose of this paper is therefore to provide an alternative approach: a recursive algorithm for determining the optimal solution without using conditional convexity. In doing so, this paper improves upon many existing articles. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
18. Extended Sufficient Conditions for Strong Minimality in the Bolza Problem: Applications to Space Trajectory Optimization.
- Author
-
Mazzini, Leonardo
- Subjects
TRAJECTORY optimization, FEEDBACK control systems, SPACE trajectories, DISCONTINUOUS functions, ALGORITHMS
- Abstract
This paper expands the existing sufficiency results for the strong minimality of an extremal of the Bolza problem. We cover the case where the strict Legendre–Clebsch conditions are not satisfied. An efficient, easy-to-use algorithm for proving minimality is provided. It can be used on solutions with bang-bang control and does not require any local controllability property. The interval on which sufficiency is established is maximized over a class of discontinuous verification functions determined by solving a sequence of Riccati problems. This class of verification functions can be adapted to all kinds of boundary conditions. The algorithms developed in this paper are applied to space trajectory extremals that exhibit bang-bang control. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
19. Cardinality-Constrained Multi-objective Optimization: Novel Optimality Conditions and Algorithms.
- Author
-
Lapucci, Matteo and Mansueto, Pierluigi
- Subjects
ALGORITHMS, THRESHOLDING algorithms
- Abstract
In this paper, we consider multi-objective optimization problems with a sparsity constraint on the vector of variables. For this class of problems, inspired by the homonymous necessary optimality condition for sparse single-objective optimization, we define the concept of L-stationarity and analyze its relationships with other existing conditions and Pareto optimality concepts. We then propose two novel algorithmic approaches: the first is an iterative hard thresholding method aiming to find a single L-stationary solution, while the second is a two-stage algorithm designed to construct an approximation of the whole Pareto front. Both methods have theoretical guarantees of convergence to points satisfying necessary conditions for Pareto optimality. Moreover, we report numerical results establishing the practical effectiveness of the proposed methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
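The iterative hard thresholding idea underlying the first method above can be sketched in its standard single-objective form: a gradient step on a least-squares objective followed by projection onto the sparsity constraint. The single-objective setting, function names, and test data are illustrative assumptions, not the paper's multi-objective algorithm.

```python
import numpy as np

def hard_threshold(x, s):
    """Project onto {x : ||x||_0 <= s}: keep the s largest-magnitude
    entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(A, b, s, step=None, iters=500):
    """Iterative hard thresholding for
        min ||A @ x - b||^2  subject to  ||x||_0 <= s:
    gradient step on the smooth objective, then hard thresholding."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x - step * A.T @ (A @ x - b), s)
    return x
```

Fixed points of this map are exactly the L-stationary points of the sparse least-squares problem, which is the single-objective ancestor of the L-stationarity concept the paper generalizes.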
20. The Difference of Convex Algorithm on Hadamard Manifolds.
- Author
-
Bergmann, Ronny, Ferreira, Orizon P., Santos, Elianderson M., and Souza, João Carlos O.
- Subjects
ALGORITHMS, PROBLEM solving
- Abstract
In this paper, we propose a Riemannian version of the difference of convex algorithm (DCA) to solve minimization problems involving the difference of two convex (DC) functions. The equivalence between the classical and simplified Riemannian versions of the DCA is established. We also prove that, under mild assumptions, the Riemannian version of the DCA is well defined and that every cluster point of the sequence generated by the proposed method, if any, is a critical point of the objective DC function. Some duality relations between the DC problem and its dual are also established. To illustrate the algorithm's effectiveness, some numerical experiments are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Matrix Optimization Problem Involving Group Sparsity and Nonnegativity Constraints.
- Author
-
Zhang, Xi, Li, Xinrong, and Zhang, Chao
- Subjects
THRESHOLDING algorithms, ALGORITHMS
- Abstract
In this paper, we consider the matrix optimization problem with group sparsity and nonnegativity constraints. We analyze the optimality conditions and develop two matrix-based improved iterative hard thresholding algorithms for the problem, using the projected gradient method with the Armijo-type stepsize rule and with a fixed stepsize, respectively. We then prove that the whole sequence generated by each of the proposed algorithms converges to a local minimizer of the optimization problem, and we establish linear convergence rates as well as iteration complexity results under mild conditions. Finally, numerical results are reported to demonstrate the usefulness of this type of problem and the efficiency of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. An Outer Space Approach to Tackle Generalized Affine Fractional Program Problems.
- Author
-
Jiao, Hongwei, Li, Binbin, and Shang, Youlin
- Subjects
OUTER space, COMPUTATIONAL complexity, GLOBAL optimization, ALGORITHMS
- Abstract
This paper aims to globally solve a generalized affine fractional program problem (GAFPP). First, by introducing some outer space variables and performing equivalent transformations, we derive the equivalent problem (EP) of the GAFPP. Second, by constructing a novel linear relaxation method, we deduce the affine relaxation problem (ARP) of the EP. Next, by solving the ARP to compute the lower bound, we propose a new outer space branch-and-bound algorithm for tackling the GAFPP. Then, the global convergence of the algorithm is proved, and the worst-case computational complexity of the algorithm is analyzed. Finally, numerical experimental results are reported to illustrate the effectiveness of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. A New Global Algorithm for Max-Cut Problem with Chordal Sparsity.
- Author
-
Lu, Cheng, Deng, Zhibin, Fang, Shu-Cherng, and Xing, Wenxun
- Subjects
INTERSECTION numbers, ALGORITHMS, INTERSECTION graph theory
- Abstract
In this paper, we develop a semidefinite relaxation-based branch-and-bound algorithm that exploits the chordal sparsity patterns of the max-cut problem. We first study how the chordal sparsity pattern affects the hardness of a max-cut problem. To do this, we derive a polyhedral relaxation based on the clique decomposition of the chordal sparsity patterns and prove some sufficient conditions for the tightness of this polyhedral relaxation. The theoretical results show that the max-cut problem is easy to solve when the sparsity pattern embedded in the problem has a small treewidth and the number of vertices in the intersections of maximal cliques is small. Based on these theoretical results, we propose a new branching rule, called the hierarchy branching rule, which utilizes the tree decomposition of the sparsity patterns. We also analyze how the proposed branching rule affects the chordal sparsity patterns embedded in the problem and explain why it can be effective. The numerical experiments show that the proposed algorithm is superior to known algorithms using classical branching rules and to the state-of-the-art solver BiqCrunch on most instances with sparsity patterns arising in practical applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Accelerated Stochastic Variance Reduction for a Class of Convex Optimization Problems.
- Author
-
He, Lulu, Ye, Jimin, and Jianwei, E.
- Subjects
ALGORITHMS
- Abstract
Katyusha momentum is a well-known and efficient acceleration method for stochastic optimization problems; it reduces the potential accumulation of error from random sampling induced by the classical Nesterov acceleration technique. The natural idea behind Katyusha momentum is to use a convex combination framework instead of the extrapolation framework used in Nesterov's momentum. In this paper, we design a Katyusha-like momentum step, i.e., a negative momentum framework, and incorporate it into the classical variance reduction stochastic gradient algorithm. Based on this negative momentum-based framework, we propose an accelerated stochastic algorithm, namely the negative momentum-based stochastic variance reduction gradient (NMSVRG) algorithm, for minimizing a class of convex finite-sum problems. Only one extra parameter needs to be tuned in NMSVRG, which makes it friendlier in parameter tuning than the original Katyusha momentum-based algorithm. We provide a rigorous theoretical analysis and show that the proposed NMSVRG algorithm is superior to the SVRG algorithm and comparable to the best in the existing literature in terms of convergence rate. Finally, experimental results verify our analysis and again show that our proposed algorithm is superior to the state-of-the-art related stochastic algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
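The variance reduction baseline that NMSVRG builds on is the standard SVRG loop: keep a snapshot point, compute its full gradient once per epoch, and correct each stochastic gradient with it. The sketch below shows that baseline only; the paper's Katyusha-like negative momentum step (a convex combination with the snapshot) is deliberately omitted, and all names and constants are illustrative.

```python
import numpy as np

def svrg(grads, full_grad, x0, lr=0.1, epochs=30, m=50, seed=0):
    """Vanilla SVRG for a finite sum f = (1/n) * sum_i f_i.
    grads[i] returns the gradient of f_i; full_grad returns the
    gradient of f.  Each inner step uses the variance-reduced
    estimator g = grad_i(x) - grad_i(x_tilde) + full_grad(x_tilde)."""
    rng = np.random.default_rng(seed)
    x_tilde = x0
    n = len(grads)
    for _ in range(epochs):
        mu = full_grad(x_tilde)                       # snapshot full gradient
        x = x_tilde
        for _ in range(m):
            i = rng.integers(n)
            g = grads[i](x) - grads[i](x_tilde) + mu  # variance-reduced gradient
            x = x - lr * g
        x_tilde = x                                   # refresh the snapshot
    return x_tilde
```

The estimator g is unbiased and its variance vanishes as both x and x_tilde approach the optimum, which is what allows a constant stepsize; NMSVRG additionally mixes the iterate back toward the snapshot with a negative momentum weight.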
25. A Greedy Newton-Type Method for Multiple Sparse Constraint Problem.
- Author
-
Sun, Jun, Kong, Lingchen, and Qu, Biao
- Subjects
ALGORITHMS
- Abstract
With the development of science and technology, many groups of data can be obtained for the same object, and there are relationships or structure between and within these datasets. To characterize the structure of the data in different datasets, in this paper we propose the multiple sparse constraint problem (MSCP) for problems with multiblock sparse structure. We give three types of stationary points and present the relationships among these three types of stationary points and global/local minimizers. We then design a gradient projection Newton algorithm, which is proven to enjoy global and quadratic convergence. Finally, numerical experiments on different examples illustrate the efficiency of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Bregman Three-Operator Splitting Methods.
- Author
-
Jiang, Xin and Vandenberghe, Lieven
- Subjects
ALGORITHMS
- Abstract
The paper presents primal–dual proximal splitting methods for convex optimization, in which generalized Bregman distances are used to define the primal and dual proximal update steps. The methods extend the primal and dual Condat–Vũ algorithms and the primal–dual three-operator (PD3O) algorithm. The Bregman extensions of the Condat–Vũ algorithms are derived from the Bregman proximal point method applied to a monotone inclusion problem. Based on this interpretation, a unified framework for the convergence analysis of the two methods is presented. We also introduce a line search procedure for stepsize selection in the Bregman dual Condat–Vũ algorithm applied to equality-constrained problems. Finally, we propose a Bregman extension of PD3O and analyze its convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. On the Parallelization Upper Bound for Asynchronous Stochastic Gradients Descent in Non-convex Optimization.
- Author
-
Wang, Lifu and Shen, Bo
- Subjects
ASYNCHRONOUS learning, DISTRIBUTED algorithms, DEEP learning, PARALLEL algorithms, ALGORITHMS, MATHEMATICAL optimization
- Abstract
In deep learning, asynchronous parallel stochastic gradient descent (APSGD) is a broadly used algorithm to speed up the training process. In an asynchronous system, the time delay of stale gradients is generally proportional to the total number of workers. When the number of workers is too large, the delayed gradients introduce additional deviation from the accurate gradient and may slow the convergence of the algorithm. One may ask: how many workers can be used at most while achieving both convergence and speedup? In this paper, we consider the asynchronous training problem in the non-convex case. We theoretically study finding an approximate second-order stationary point using asynchronous algorithms in non-convex optimization and investigate the behavior of APSGD near saddle points. This work gives the first theoretical guarantee for finding an approximate second-order stationary point with asynchronous algorithms, together with a provable upper bound on the time delay. The techniques we provide can be generalized to analyze other types of asynchronous algorithms and to understand their behavior in distributed asynchronous parallel training. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
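The staleness effect discussed in the abstract above can be reproduced with a toy sequential simulation: each applied gradient was computed several steps earlier, mimicking a worker reading an old copy of the parameters. This is a one-dimensional illustration under assumed constants, not the paper's distributed setting or analysis.

```python
from collections import deque

def delayed_sgd(grad, x0, lr=0.05, delay=5, iters=1000):
    """Gradient descent where each update applies a gradient computed
    `delay` steps ago (a toy stand-in for asynchronous workers: the
    queue plays the role of in-flight gradients).  Too large a product
    lr * delay destabilizes the iteration, which is the trade-off
    behind upper bounds on the tolerable time delay."""
    buf = deque()
    x = x0
    for _ in range(iters):
        buf.append(grad(x))            # "worker" reads the current x
        if len(buf) > delay:
            x -= lr * buf.popleft()    # "server" applies a stale gradient
    return x
```

For the smooth strongly convex test objective below the iteration still converges because the stepsize is small relative to the delay; raising lr or delay past the stability threshold makes the iterates oscillate and diverge.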
28. A Full-Newton Step Infeasible Interior-Point Method for the Special Weighted Linear Complementarity Problem.
- Author
-
Chi, Xiaoni and Wang, Guoqiang
- Subjects
LINEAR complementarity problem ,INTERIOR-point methods ,COMPLEMENTARITY constraints (Mathematics) ,ALGORITHMS ,MATHEMATICAL economics - Abstract
As an extension of the complementarity problem (CP), the weighted complementarity problem (wCP) is a large class of equilibrium problems with wide applications in science, economics, and engineering. If the weight vector is zero, the wCP reduces to a CP. In this paper, we present a full-Newton step infeasible interior-point method (IIPM) for the special weighted linear complementarity problem over the nonnegative orthant. One iteration of the algorithm consists of one feasibility step followed by a few centering steps. All of them are full-Newton steps, and hence, no calculation of the step size is necessary. The iteration bound of the algorithm is as good as the best-known polynomial complexity of IIPMs for linear optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
29. A Semismooth Newton-based Augmented Lagrangian Algorithm for Density Matrix Least Squares Problems.
- Author
-
Liu, Yong-Jin and Yu, Jing
- Subjects
DENSITY matrices ,QUANTUM states ,ALGORITHMS ,SIGNAL processing ,MACHINE learning ,LEAST squares - Abstract
The density matrix least squares problem arises from the quantum state tomography problem in experimental physics and has many applications in signal processing and machine learning, mainly including the phase recovery problem and the matrix completion problem. In this paper, we first reformulate the density matrix least squares problem as an equivalent convex optimization problem and then design an efficient semismooth Newton-based augmented Lagrangian (Ssnal) algorithm to solve the dual of its equivalent form, in which an inexact semismooth Newton (Ssn) algorithm with superlinear or even quadratic convergence is applied to solve the inner subproblems. Theoretically, the global convergence and locally asymptotically superlinear convergence of the Ssnal algorithm are established under very mild conditions. Computationally, the costs of the Ssn algorithm for solving the subproblem are significantly reduced by making full use of the low-rank or high-rank structure of optimal solutions of the density matrix least squares problem. In order to verify the performance of our algorithm, numerical experiments conducted on randomly generated quantum state tomography problems and density matrix least squares problems with real data demonstrate that the Ssnal algorithm is more effective and robust than the Qsdpnal solver and several state-of-the-art first-order algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. An Infeasible Interior-Point Algorithm for Stochastic Second-Order Cone Optimization.
- Author
-
Alzalg, Baha, Badarneh, Khaled, and Ababneh, Ayat
- Subjects
STOCHASTIC programming ,ALGORITHMS ,CONES - Abstract
Alzalg (J Optim Theory Appl 163(1):148-164, 2014) derived a homogeneous self-dual algorithm for stochastic second-order cone programs with finite event space. In this paper, we derive an infeasible interior-point algorithm for the same stochastic optimization problem by utilizing the work of Rangarajan (SIAM J Optim 16(4), 1211-1229, 2006) for deterministic symmetric cone programs. We show that the infeasible interior-point algorithm developed in this paper has complexity less than that of the homogeneous self-dual algorithm mentioned above. We implement the proposed algorithm to show that it is efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
31. Adaptive Sampling Stochastic Multigradient Algorithm for Stochastic Multiobjective Optimization.
- Author
-
Zhao, Yong, Chen, Wang, and Yang, Xinmin
- Subjects
- *
ALGORITHMS , *SAMPLE size (Statistics) , *CONJUGATE gradient methods , *CRITICAL point theory - Abstract
In this paper, we propose an adaptive sampling stochastic multigradient algorithm for solving stochastic multiobjective optimization problems. Instead of requiring additional storage or computation of full gradients, the proposed method reduces variance by adaptively controlling the sample size used. Without the convexity assumption on the objective functions, we show that the proposed algorithm converges almost surely to Pareto stationary points. We then analyze the convergence rates of the proposed algorithm. Numerical experiments are presented to demonstrate the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Bregman-Golden Ratio Algorithms for Variational Inequalities.
- Author
-
Tam, Matthew K. and Uteda, Daniel J.
- Subjects
- *
GOLDEN ratio , *GAUSSIAN channels , *ALGORITHMS , *VARIATIONAL inequalities (Mathematics) , *TELECOMMUNICATION systems - Abstract
Variational inequalities provide a framework through which many optimisation problems can be solved, in particular, saddle-point problems. In this paper, we study modifications to the so-called Golden RAtio ALgorithm (GRAAL) for variational inequalities, a method which uses a fully explicit adaptive step-size and provides convergence results under local Lipschitz assumptions without requiring backtracking. We present and analyse two Bregman modifications to GRAAL: the first uses a fixed step size and converges under global Lipschitz assumptions, and the second uses an adaptive step-size rule. Numerical performance of the former method is demonstrated on a bimatrix game arising in network communication, and of the latter on two problems, namely, power allocation in Gaussian communication channels and N-person Cournot competition games. In all of these applications, an appropriately chosen Bregman distance simplifies the projection steps computed as part of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
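For reference, the fixed-step Euclidean GRAAL that the Bregman variants above generalize can be sketched as follows. The step size λ = φ/(2L) for L-Lipschitz F is an assumption of this sketch; the paper's methods replace the Euclidean projection by Bregman proximal steps.

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio

def graal(F, proj, x0, lam, iters=1000):
    """Fixed-step Golden RAtio ALgorithm (GRAAL) for VI(F, C),
    here with the Euclidean distance (plain projections):
        xbar_k  = ((phi - 1) * x_k + xbar_{k-1}) / phi
        x_{k+1} = P_C(xbar_k - lam * F(x_k))
    with phi the golden ratio and lam <= phi/(2L) for L-Lipschitz F."""
    x = np.asarray(x0, dtype=float)
    xbar = x.copy()
    for _ in range(iters):
        xbar = ((PHI - 1) * x + xbar) / PHI
        x = proj(xbar - lam * F(x))
    return x
```

A standard test operator is F(x, y) = (y, −x) from the bilinear saddle-point problem min_x max_y xy, which is monotone with L = 1 and has the origin as its unique solution.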
33. Global Convergence of Algorithms Under Constant Rank Conditions for Nonlinear Second-Order Cone Programming.
- Author
-
Andreani, Roberto, Haeser, Gabriel, Mito, Leonardo M., Ramírez, C. Héctor, and Silveira, Thiago P.
- Subjects
SEMIDEFINITE programming ,ALGORITHMS ,NONLINEAR programming ,LAGRANGE multiplier - Abstract
In Andreani et al. (Weak notions of nondegeneracy in nonlinear semidefinite programming, 2020), the classical notion of nondegeneracy (or transversality) and Robinson's constraint qualification have been revisited in the context of nonlinear semidefinite programming exploiting the structure of the problem, namely its eigendecomposition. This allows formulating the conditions equivalently in terms of (positive) linear independence of significantly smaller sets of vectors. In this paper, we extend these ideas to the context of nonlinear second-order cone programming. For instance, for an m-dimensional second-order cone, instead of stating nondegeneracy at the vertex as the linear independence of m derivative vectors, we do it in terms of several statements of linear independence of 2 derivative vectors. This allows embedding the structure of the second-order cone into the formulation of nondegeneracy and, by extension, Robinson's constraint qualification as well. This point of view is shown to be crucial in defining significantly weaker constraint qualifications such as the constant rank constraint qualification and the constant positive linear dependence condition. Also, these conditions are shown to be sufficient for guaranteeing global convergence of several algorithms, while still implying metric subregularity and without requiring boundedness of the set of Lagrange multipliers. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
34. Proximal Gradient Algorithms Under Local Lipschitz Gradient Continuity: A Convergence and Robustness Analysis of PANOC.
- Author
-
De Marchi, Alberto and Themelis, Andreas
- Subjects
LIPSCHITZ continuity ,ALGORITHMS ,NONSMOOTH optimization ,PARTICLE swarm optimization - Abstract
Composite optimization offers a powerful modeling tool for a variety of applications and is often numerically solved by means of proximal gradient methods. In this paper, we consider fully nonconvex composite problems under only local Lipschitz gradient continuity for the smooth part of the objective function. We investigate an adaptive scheme for PANOC-type methods (Stella et al. in Proceedings of the IEEE 56th CDC, 2017), namely accelerated linesearch algorithms requiring only the simple oracle of proximal gradient. While including the classical proximal gradient method, our theoretical results cover a broader class of algorithms and provide convergence guarantees for accelerated methods with possibly inexact computation of the proximal mapping. These findings also have significant practical impact, as they widen the scope and improve the performance of existing, and possibly future, general purpose optimization software that invokes PANOC as inner solver. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. A Hybrid Differential Dynamic Programming Algorithm for Constrained Optimal Control Problems. Part 2: Application.
- Author
-
Lantoine, Gregory and Russell, Ryan
- Subjects
DIFFERENTIABLE dynamical systems ,CONTROL theory (Engineering) ,MATHEMATICAL optimization ,NONLINEAR theories ,ALGORITHMS - Abstract
In the first part of this paper series, a new solver, called HDDP, was presented for solving constrained, nonlinear optimal control problems. In the present paper, the algorithm is extended to include practical safeguards to enhance robustness, and four illustrative examples are used to evaluate the main algorithm and some variants. The experiments involve both academic and applied problems to show that HDDP is capable of solving a wide class of constrained, nonlinear optimization problems. First, the algorithm is verified to converge in a single iteration on a simple multi-phase quadratic problem with trivial dynamics. Successively more complicated constrained optimal control problems are then solved, demonstrating robust solutions to problems with as many as 7 states, 25 phases, 258 stages, 458 constraints, and 924 total control variables. The competitiveness of HDDP, with respect to general-purpose, state-of-the-art NLP solvers, is also demonstrated. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
36. Finding the Strong Nash Equilibrium: Computation, Existence and Characterization for Markov Games.
- Author
-
Clempner, Julio B. and Poznyak, Alexander S.
- Subjects
NASH equilibrium ,LAGRANGE problem ,NEWTON-Raphson method ,REGULARIZATION parameter ,TIKHONOV regularization - Abstract
This paper suggests a procedure to construct the Pareto frontier and efficiently compute the strong Nash equilibrium for a class of time-discrete ergodic controllable Markov chain games. The procedure finds the strong Nash equilibrium using the Newton optimization method, which presents a potential advantage for ill-conditioned problems. We formulate the solution of the problem based on the Lagrange principle, adding a Tikhonov regularization parameter to ensure both the strict convexity of the Pareto frontier and the existence of a unique strong Nash equilibrium. Then, any welfare optimum arises as a strong Nash equilibrium of the game. We prove the existence and characterization of the strong Nash equilibrium, which is one of the main results of this paper. The method is validated theoretically and illustrated with an application example. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
37. Generalized Inexact Proximal Algorithms: Routine's Formation with Resistance to Change, Following Worthwhile Changes.
- Author
-
Bento, G. and Soubeyran, A.
- Subjects
RESISTANCE to change ,PERTURBATION theory ,ALGORITHMS ,HUMAN behavior ,MATHEMATICAL inequalities ,GAME theory - Abstract
This paper shows how, in a quasi-metric space, an inexact proximal algorithm with a generalized perturbation term appears to be a nice tool for Behavioral Sciences (Psychology, Economics, Management, Game theory,...). More precisely, the new perturbation term represents an index of resistance to change, defined as a 'curved enough' function of the quasi-distance between two successive iterates. Using this behavioral point of view, the present paper shows how such a generalized inexact proximal algorithm can model the formation of habits and routines in a striking way. This idea comes from a recent 'variational rationality approach' of human behavior which links a lot of different theories of stability (habits, routines, equilibrium, traps,...) and changes (creations, innovations, learning and destructions,...) in Behavioral Sciences and a lot of concepts and algorithms in variational analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
38. Weiszfeld's Method: Old and New Results.
- Author
-
Beck, Amir and Sabach, Shoham
- Subjects
MATHEMATICAL optimization ,STOCHASTIC convergence ,LOCATION problems (Programming) ,ALGORITHMS - Abstract
In 1937, the 16-year-old Hungarian mathematician Endre Weiszfeld, in a seminal paper, devised a method for solving the Fermat-Weber location problem, a problem whose origins can be traced back to the seventeenth century. Weiszfeld's method stirred up an enormous amount of research in the optimization and location communities, and is still discussed and used to this day. In this paper, we review both the past and the ongoing research on Weiszfeld's method. The existing results are presented in a self-contained and concise manner; some are derived by new and simplified techniques. We also establish two new results using modern tools of optimization. First, we establish a non-asymptotic sublinear rate of convergence of Weiszfeld's method, and second, using an exact smoothing technique, we present a modification of the method with a proven better rate of convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
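Weiszfeld's classical iteration is simple enough to state in a few lines. The following is a minimal sketch with a small epsilon guard against iterates landing exactly on an anchor point, a known failure mode of the original 1937 scheme; it is not the smoothed variant the paper proposes.

```python
import numpy as np

def weiszfeld(points, weights=None, iters=200, tol=1e-10):
    """Approximate the (weighted) Fermat-Weber point by the iteration
        x_{k+1} = (sum_i w_i a_i / ||x_k - a_i||) / (sum_i w_i / ||x_k - a_i||),
    i.e. a distance-reweighted average of the anchor points a_i.
    Started from the centroid, the objective is non-increasing."""
    a = np.asarray(points, dtype=float)
    w = np.ones(len(a)) if weights is None else np.asarray(weights, dtype=float)
    x = a.mean(axis=0)                      # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(a - x, axis=1)
        d = np.maximum(d, 1e-12)            # guard: avoid division by zero at anchors
        coef = w / d
        x_new = (coef[:, None] * a).sum(axis=0) / coef.sum()
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For the vertices of an equilateral triangle the Fermat-Weber point is the centroid, which makes a convenient sanity check.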
39. An Explicit Extragradient Algorithm for Solving Variational Inequalities.
- Author
-
Hieu, Dang Van, Strodiot, Jean Jacques, and Muu, Le Dung
- Subjects
VARIATIONAL inequalities (Mathematics) ,HILBERT space ,ALGORITHMS ,MATHEMATICAL equivalence ,PRIOR learning - Abstract
In this paper, we introduce an explicit iterative algorithm for solving a (pseudo) monotone variational inequality under a Lipschitz condition in a Hilbert space. The algorithm is constructed from projection steps combined with inertial terms. It uses variable stepsizes which are generated at each iteration by some simple computations. Furthermore, it can be easily implemented without prior knowledge of the Lipschitz constant of the operator. Theorems of weak convergence are established under mild conditions, and some numerical results are reported for the purpose of comparison with other algorithms. The obtained results in this paper extend some related works in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
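For comparison with the explicit algorithm described above, the classical Korpelevich extragradient baseline (fixed step size, Lipschitz constant known in advance) can be sketched as follows; the paper's method replaces the fixed step by adaptively computed ones and adds inertial terms.

```python
import numpy as np

def extragradient(F, proj, x0, tau, iters=500):
    """Korpelevich extragradient for VI(F, C): find x* in C with
    <F(x*), x - x*> >= 0 for all x in C. Each iteration uses two
    F-evaluations and two projections:
        y_k     = P_C(x_k - tau * F(x_k))    # prediction step
        x_{k+1} = P_C(x_k - tau * F(y_k))    # correction step
    Converges for monotone, L-Lipschitz F when tau < 1/L."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj(x - tau * F(x))
        x = proj(x - tau * F(y))
    return x
```

The skew operator F(x, y) = (y, −x), restricted to a box containing the origin, is a standard monotone test case with unique solution 0.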
40. A Proximal Point Algorithm Revisited and Extended.
- Author
-
Moroşanu, Gheorghe and Petruşel, Adrian
- Subjects
MONOTONE operators ,HILBERT space ,ALGORITHMS ,FUNCTIONALS - Abstract
This note is a reaction to the recent paper by Rouhani and Moradi (J Optim Theory Appl 172:222–235, 2017), where a proximal point algorithm proposed by Boikanyo and Moroşanu (Optim Lett 7:415–420, 2013) is discussed. Noticing the inappropriate formulation of that algorithm, we propose a more general algorithm for approximating zeros of a maximal monotone operator on a Hilbert space. Besides the main result on the strong convergence of the sequences generated by this new algorithm, we discuss some particular cases, including the approximation of minimizers of convex functionals and present two examples to illustrate the applicability of the algorithm. The note clarifies and extends both the papers quoted above. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
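The basic proximal point iteration discussed in this note reduces, for subdifferential operators, to repeated application of a proximal mapping. A minimal Euclidean sketch (the note's algorithm is more general, with anchor and error terms):

```python
import numpy as np

def prox_point(prox, x0, iters):
    """Proximal point iteration x_{k+1} = (I + lam*A)^{-1} x_k, written
    in terms of a resolvent/prox oracle. For A the subdifferential of a
    convex f, the resolvent is prox_{lam f} and the iterates converge
    to a minimizer of f, i.e. a zero of A."""
    x = x0
    for _ in range(iters):
        x = prox(x)
    return x

# prox of lam*|.| is the soft-thresholding operator
soft = lambda lam: (lambda x: np.sign(x) * np.maximum(np.abs(x) - lam, 0.0))
```

With f(x) = |x|, each step shrinks the iterate by lam until it reaches the minimizer 0; with a strongly convex quadratic, the resolvent is a contraction toward the minimizer.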
41. Manifold Regularization Nonnegative Triple Decomposition of Tensor Sets for Image Compression and Representation.
- Author
-
Wu, Fengsheng, Li, Chaoqian, and Li, Yaotang
- Subjects
IMAGE representation ,IMAGE compression ,IMAGE processing ,ALGORITHMS - Abstract
Image processing usually depends on exploiting the structure and geometric information of the tensor objects generated by image data. In this process, the decomposition of tensor objects is very significant for dimension reduction and low-rank representation of image data. In this paper, based on the triple decomposition of third-order tensors and the correlation between different nonnegative tensor objects, a nonnegative triple decomposition model with manifold regularization terms is constructed. Then, an algorithm for the manifold regularization nonnegative triple decomposition is proposed, and the convergence of the algorithm is discussed. Furthermore, experiments on some real-world image data sets are given to illustrate the feasibility and effectiveness of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. A Tensor Regularized Nuclear Norm Method for Image and Video Completion.
- Author
-
Bentbib, A. H., Hachimi, A. El, Jbilou, K., and Ratnani, A.
- Subjects
ALGORITHMS ,VIDEOS - Abstract
In the present paper, we propose two new methods for tensor completion of third-order tensors. The proposed methods consist in minimizing the average rank of the underlying tensor using its approximate function, namely the tensor nuclear norm. The recovered data will be obtained by combining the minimization process with the total variation regularization technique. We will adopt the alternating direction method of multipliers, using the tensor T-product, to solve the main optimization problems associated with the two proposed algorithms. In the last section, we present some numerical experiments and comparisons with the best-known image and video completion methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
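Nuclear-norm minimization methods like those above reduce, at their core, to singular value thresholding, the proximal operator of the nuclear norm; tensor T-product variants apply it slice-wise in a transform domain. A hedged matrix-level sketch:

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_*
    (the nuclear norm). Computes an SVD, shrinks each singular value by
    tau (clipping at zero), and reassembles the matrix; small singular
    values are annihilated, which lowers the rank."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt  # broadcasting scales the columns of U by s
```

Applied to diag(3, 1) with tau = 1.5, the singular values (3, 1) shrink to (1.5, 0), so the output is the rank-one matrix diag(1.5, 0).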
43. Adaptive Global Algorithm for Solving Box-Constrained Non-convex Quadratic Minimization Problems.
- Author
-
Andjouh, Amar and Bibi, Mohand Ouamer
- Subjects
GLOBAL optimization ,ALGORITHMS ,PROBLEM solving ,EIGENVALUES ,CONVEXITY spaces - Abstract
In this paper, we propose a new adaptive method for solving the non-convex quadratic minimization problem subject to box constraints, where the associated matrix is indefinite, in particular with one negative eigenvalue. We investigate sufficient global optimality conditions derived by exploiting the particular form of the Moreau envelope (L-subdifferential) of the quadratic function and abstract convexity. We also develop a new algorithm, called the adaptive global algorithm, that solves the original problem without transforming it and can effectively find a global minimizer. Furthermore, studying the convex support of the objective function allows us to characterize the global optimum and reduce the complexity of large problems. We give some theoretical aspects of global optimization and present numerical examples with test problems to illustrate our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
44. A Class of Polynomial Interior Point Algorithms for the Cartesian P-Matrix Linear Complementarity Problem over Symmetric Cones.
- Author
-
Wang, G. and Bai, Y.
- Subjects
LINEAR complementarity problem ,SYMMETRIC matrices ,POLYNOMIALS ,ALGORITHMS ,KERNEL functions - Abstract
In this paper, we present a new class of polynomial interior point algorithms for the Cartesian P-matrix linear complementarity problem over symmetric cones based on a parametric kernel function, which determines both search directions and the proximity measure between the iterate and the central path. The symmetrization of the search directions used in this paper is based on the Nesterov and Todd scaling scheme. By using Euclidean Jordan algebras, we derive iteration bounds that match the currently best known iteration bounds for large- and small-update methods. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
45. A Sampling-and-Discarding Approach to Chance-Constrained Optimization: Feasibility and Optimality.
- Author
-
Campi, M. C. and Garatti, S.
- Subjects
MATHEMATICAL optimization ,MODULES (Algebra) ,MATHEMATICAL analysis ,CONSTRAINTS (Physics) ,ALGORITHMS - Abstract
In this paper, we study the link between a Chance-Constrained optimization Problem (CCP) and its sample counterpart (SP). SP has a finite number, say N, of sampled constraints. Further, some of these sampled constraints, say k, are discarded, and the final solution is indicated by $x^{\ast}_{N,k}$. Extending previous results on the feasibility of sample convex optimization programs, we establish the feasibility of $x^{\ast}_{N,k}$ for the initial CCP problem. Constraint removal allows one to improve the cost function at the price of decreased feasibility. The cost improvement can be inspected directly from the optimization result, while the theory developed here permits one to keep control of the other side of the coin, the feasibility of the obtained solution. In this way, trading feasibility for performance is put on solid mathematical grounds in this paper. The feasibility result obtained here applies to a vast class of chance-constrained optimization problems and has the distinctive feature that it holds true irrespective of the algorithm used to discard k constraints in the SP problem. For constraint discarding, one can thus, e.g., resort to one of the many methods introduced in the literature to solve chance-constrained problems with discrete distribution, or even use a greedy algorithm, which is computationally very low-demanding, and the feasibility result remains intact. We further prove that, if constraints in the SP problem are optimally removed, i.e., one deletes those constraints leading to the largest possible cost improvement, then a precise optimality link to the original chance-constrained problem CCP holds in addition. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
46. COSMO: A Conic Operator Splitting Method for Convex Conic Problems.
- Author
-
Garstka, Michael, Cannon, Mark, and Goulart, Paul
- Subjects
ALGORITHMS ,GRAPH theory ,CONVEX sets ,BENCHMARK problems (Computer science) ,ROBUST control ,SEMIDEFINITE programming ,ROBUST optimization - Abstract
This paper describes the conic operator splitting method (COSMO) solver, an operator splitting algorithm and associated software package for convex optimisation problems with quadratic objective function and conic constraints. At each step, the algorithm alternates between solving a quasi-definite linear system with a constant coefficient matrix and a projection onto convex sets. The low per-iteration computational cost makes the method particularly efficient for large problems, e.g. semidefinite programs that arise in portfolio optimisation, graph theory, and robust control. Moreover, the solver uses chordal decomposition techniques and a new clique merging algorithm to effectively exploit sparsity in large, structured semidefinite programs. Numerical comparisons with other state-of-the-art solvers for a variety of benchmark problems show the effectiveness of our approach. Our Julia implementation is open source, designed to be extended and customised by the user, and is integrated into the Julia optimisation ecosystem. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
47. A Class of Accelerated Subspace Minimization Conjugate Gradient Methods.
- Author
-
Sun, Wumei, Liu, Hongwei, and Liu, Zexian
- Subjects
CONJUGATE gradient methods ,ALGORITHMS ,MATHEMATICAL regularization ,INTERPOLATION - Abstract
The subspace minimization conjugate gradient methods based on the Barzilai–Borwein method (SMCG_BB) and regularization model (SMCG_PR), which were proposed by Liu and Liu (J Optim Theory Appl 180(3):879–906, 2019) and Zhao et al. (Numer Algorithm, 2020. https://doi.org/10.1007/s11075-020-01017-1), respectively, are very interesting and efficient for unconstrained optimization. In this paper, two accelerated subspace minimization conjugate gradient methods are proposed for unconstrained optimization. Motivated by the subspace minimization conjugate gradient methods and the finite termination of linear conjugate gradient methods, we derive an acceleration parameter by the quadratic interpolation function to improve the stepsize, and the modified stepsize may be closer to the stepsize obtained by exact line search. Moreover, several specific acceleration criteria to enhance the efficiency of the algorithm are designed. Under standard conditions, the global convergence of the proposed methods can be guaranteed. Numerical results show that the proposed accelerated methods are superior to two excellent subspace minimization conjugate gradient methods, SMCG_BB and SMCG_PR. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
48. Sequential Penalty Algorithm for Nonlinear Constrained Optimization.
- Author
-
Zhang, J.L. and Zhang, X.S.
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,NONLINEAR theories ,STOCHASTIC convergence ,MATHEMATICS ,MATHEMATICAL analysis - Abstract
In this paper, a new sequential penalty algorithm, based on the L∞ exact penalty function, is proposed for a general nonlinear constrained optimization problem. The algorithm has the following characteristics: it can start from an arbitrary initial point; the feasibility of the subproblem is guaranteed; the penalty parameter is adjusted automatically; global convergence without any regularity assumption is proved. The update formula of the penalty parameter is new. It is proved that the algorithm proposed in this paper behaves equivalently to the standard SQP method after sufficiently many iterations. Hence, the local convergence results of the standard SQP method can be applied to this algorithm. Preliminary numerical experiments show the efficiency and stability of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
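The exact-penalty principle behind the algorithm can be illustrated on a one-dimensional toy problem. This sketch uses a simple ternary search as the inner solver and a hardcoded search window, not the paper's SQP-type subproblems or its penalty update formula; only the exactness behavior (a finite penalty parameter recovers the constrained minimizer) is the same.

```python
def ternary_min(phi, lo, hi, iters=200):
    """1-D ternary search for the minimizer of a unimodal function."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def sequential_penalty(f, viol, x, mu=1.0, rho=10.0, rounds=6, tol=1e-8):
    """Sequential exact-penalty loop: minimize f(x) + mu * (max constraint
    violation) and increase mu until the penalized minimizer is (near)
    feasible. For mu above the optimal multiplier, the penalized
    minimizer solves the constrained problem (exactness)."""
    for _ in range(rounds):
        x = ternary_min(lambda t: f(t) + mu * viol(t), x - 10, x + 10)
        if viol(x) <= tol:
            break
        mu *= rho
    return x
```

For min x² s.t. x ≥ 1, the multiplier is 2, so mu = 1 leaves the penalized minimizer infeasible (at 0.5) while mu = 10 lands it exactly on the constraint at x = 1.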
49. Proximal Point Algorithms for Multi-criteria Optimization with the Difference of Convex Objective Functions.
- Author
-
Ji, Ying, Goh, Mark, and Souza, Robert
- Subjects
MATHEMATICAL optimization ,CONVEX functions ,SCALAR field theory ,NUMERICAL analysis ,ALGORITHMS - Abstract
This paper focuses on solving a class of multi-criteria optimization problems with the difference of convex objective functions. Proximal point algorithms, extensively studied for scalar optimization, are extended to our setting. We show that the proposed algorithms are well posed and globally convergent to a critical point. For an application, the new methods are applied to a multi-criteria model arising in portfolio optimization. The numerical results show the efficiency of our methods. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
50. Regularized Sample Average Approximation Approach for Two-Stage Stochastic Variational Inequalities.
- Author
-
Jiang, Jie and Li, Shengjie
- Subjects
CONTINUOUS distributions ,DISTRIBUTION (Probability theory) ,ALGORITHMS ,REGULARIZATION parameter ,SAMPLE size (Statistics) - Abstract
The sample average approximation (SAA) approach for two-stage stochastic variational inequalities (SVIs) with continuous probability distributions may fail to provide convergence guarantees as the sample size tends to infinity when the second-stage problems have multiple solutions. In this paper, a regularized SAA approach is proposed to numerically solve a class of two-stage SVIs with continuous probability distributions, where the second-stage problems are monotone and allowed to have multiple solutions. We first give some structural properties. After that, the convergence of the regularized SAA approach for two-stage SVIs is analyzed as the regularization parameter tends to zero and the sample size tends to infinity. Finally, we employ the progressive hedging algorithm to report some numerical results. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF