Search Results (31 results)
2. On Global Error Bounds for Convex Inequalities Systems.
- Author
- Long, Vo Si Trong
- Subjects
- CONVEX functions, LINEAR systems, SET functions, CONTINUOUS functions, SUBDIFFERENTIALS
- Abstract
In this paper, we first present necessary and sufficient conditions for the existence of global error bounds for a convex function, without additional conditions on the function or the solution set. In particular, we obtain characterizations of such global error bounds in Euclidean spaces which are often simple to check. Second, we prove that, under a suitable assumption, the subdifferential of the supremum function of an arbitrary family of convex continuous functions coincides with the convex hull of the subdifferentials of the functions corresponding to the active indices at given points. As applications, we study the existence of global error bounds for infinite systems of linear and convex inequalities. Several examples are provided as well to illustrate the advantages of our results over existing ones in the literature. [ABSTRACT FROM AUTHOR] (The two central notions are written out after this record.)
- Published
- 2024
- Full Text
- View/download PDF
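For reference, the two central objects in the abstract above can be written out in their standard forms. The display below is a sketch only: the abstract establishes the active-index formula under a suitable assumption that is not reproduced here.

```latex
% Global error bound for f with solution set S = {x : f(x) <= 0},
% where [t]_+ = max(t, 0):
\exists\, c > 0:\quad \mathrm{dist}(x, S) \le c\,[f(x)]_+ \quad \text{for all } x.
% Active-index formula for the supremum of a family (f_i)_{i \in I},
% with I(\bar{x}) = \{ i \in I : f_i(\bar{x}) = \sup_{j \in I} f_j(\bar{x}) \}:
\partial\Bigl(\sup_{i \in I} f_i\Bigr)(\bar{x})
  = \mathrm{co}\Bigl(\,\bigcup_{i \in I(\bar{x})} \partial f_i(\bar{x})\Bigr).
```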
3. Exterior-Point Optimization for Sparse and Low-Rank Optimization.
- Author
- Das Gupta, Shuvomoy, Stellato, Bartolomeo, and Van Parys, Bart P. G.
- Subjects
- CONVEX functions, MACHINE learning, PROBLEM solving, DATA science, ALGORITHMS
- Abstract
Many problems of substantial current interest in machine learning, statistics, and data science can be formulated as sparse and low-rank optimization problems. In this paper, we present the nonconvex exterior-point optimization solver (NExOS), a first-order algorithm tailored to sparse and low-rank optimization problems. We consider the problem of minimizing a convex function over a nonconvex constraint set, where the set can be decomposed as the intersection of a compact convex set and a nonconvex set involving sparse or low-rank constraints. Unlike convex relaxation approaches, NExOS finds a locally optimal point of the original problem by solving a sequence of penalized problems with strictly decreasing penalty parameters, exploiting the nonconvex geometry. NExOS solves each penalized problem by applying a first-order algorithm, which converges linearly to a local minimum of the corresponding penalized formulation under regularity conditions. Furthermore, the local minima of the penalized problems converge to a local minimum of the original problem as the penalty parameter goes to zero. We then implement and test NExOS on many instances from a wide variety of sparse and low-rank optimization problems, empirically demonstrating that our algorithm outperforms specialized methods. [ABSTRACT FROM AUTHOR] (The penalty-continuation pattern is sketched after this record.)
- Published
- 2024
- Full Text
- View/download PDF
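A minimal sketch of the penalty-continuation pattern the abstract describes: solve a sequence of penalized problems with strictly decreasing penalty parameters, warm-starting each inner first-order solve from the previous solution. The names `grad_f`, `prox_penalty`, and the `mus` schedule are placeholders, not NExOS's actual interface or penalty formulation.

```python
import numpy as np

def penalty_continuation(grad_f, prox_penalty, x0, mus, inner_iters=200, step=1e-2):
    """Generic penalty-continuation loop: for each penalty parameter mu
    (strictly decreasing), run a first-order inner method on the
    mu-penalized problem, warm-starting from the previous solution.
    prox_penalty(z, mu, step) stands in for the proximal/projection step
    of the mu-penalized nonconvex term."""
    x = np.asarray(x0, dtype=float)
    for mu in mus:                      # strictly decreasing penalty parameters
        for _ in range(inner_iters):    # inner first-order solver, warm-started
            x = prox_penalty(x - step * grad_f(x), mu, step)
        # x now approximates a local minimum of the mu-penalized problem
    return x
```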
4. Optimality Conditions in DC-Constrained Mathematical Programming Problems.
- Author
- Correa, Rafael, López, Marco A., and Pérez-Aros, Pedro
- Subjects
- SEMIDEFINITE programming, MATHEMATICAL programming, CONVEX functions, STOCHASTIC programming
- Abstract
This paper provides necessary and sufficient optimality conditions for abstract-constrained mathematical programming problems in locally convex spaces under new qualification conditions. Our approach exploits the geometric properties of certain mappings, in particular their structure as differences of convex functions, and uses techniques of generalized differentiation (subdifferentials and coderivatives). It turns out that these tools can be used fruitfully outside the scope of Asplund spaces. Applications to infinite, stochastic and semidefinite programming are developed in separate sections. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
5. Differential Stability Properties of Convex Optimization and Optimal Control Problems.
- Author
- Toan, Nguyen Thi and Thuy, Le Quang
- Subjects
- BANACH spaces, CONVEX functions, EQUATIONS of state, LINEAR equations
- Abstract
This paper studies the solution stability of convex optimization and discrete convex optimal control problems in Banach spaces, where the solution set may be empty. For both the optimization problem and the optimal control problem, formulas for the ε-subdifferential of the optimal value function are derived without qualification conditions. We first calculate the ε-subdifferential of the optimal value function of a parametric optimization problem with geometric and functional constraints. We then use the obtained results to derive a formula for computing the ε-subdifferential of the optimal value function of a discrete convex optimal control problem with linear state equations, control constraints, and initial and terminal conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Privacy-Preserving Dual Stochastic Push-Sum Algorithm for Distributed Constrained Optimization.
- Author
- Gu, Chuanye, Jiang, Lin, Li, Jueyou, and Wu, Zhiyou
- Subjects
- CONSTRAINED optimization, DISTRIBUTED algorithms, SUBGRADIENT methods, ELECTRIC charge, COST functions, CONVEX functions
- Abstract
This paper investigates a private distributed optimization problem over a multi-agent network, where the goal is to cooperatively minimize the sum of all local convex cost functions subject to coupled equality constraints over time-varying unbalanced directed networks while accounting for privacy concerns. To solve this problem, we integrate push-sum protocols with dual subgradient methods to design a private distributed dual stochastic push-sum algorithm. Under the assumption of convexity, we first establish the convergence of the algorithm in terms of dual variables, primal variables and constraint violations. We then show that the algorithm converges at a sublinear rate of order O(ln t / t). The result reveals a tradeoff between the privacy level and the accuracy of the algorithm. Finally, the efficiency of the algorithm is verified numerically on two applications: economic dispatch problems and electric-vehicle charging control problems. [ABSTRACT FROM AUTHOR] (The push-sum building block is sketched after this record.)
- Published
- 2023
- Full Text
- View/download PDF
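The push-sum protocol the algorithm builds on can be illustrated on its own. A minimal sketch, assuming a column-stochastic mixing matrix A over a strongly connected digraph; the dual subgradient updates, privacy mechanism, and time-varying networks of the paper are not reproduced.

```python
import numpy as np

def push_sum_average(values, A, iters=100):
    """Push-sum consensus over a directed network: each agent holds a value
    x_i and a weight y_i, both mixed with the same column-stochastic matrix A.
    The ratio x_i / y_i converges to the network-wide average even on
    unbalanced digraphs, since column stochasticity preserves the sums."""
    x = np.array(values, dtype=float)
    y = np.ones_like(x)
    for _ in range(iters):
        x = A @ x          # mix values along out-edges
        y = A @ y          # mix weights the same way
    return x / y           # each entry approximates mean(values)
```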
7. Role of Subgradients in Variational Analysis of Polyhedral Functions.
- Author
- Hang, Nguyen T. V., Jung, Woosuk, and Sarabi, Ebrahim
- Subjects
- CONVEX functions, POLYHEDRAL functions
- Abstract
Understanding the role that subgradients play in various second-order variational analysis constructions can help us uncover new properties of important classes of functions in variational analysis. Focusing mainly on the behavior of the second subderivative and subgradient proto-derivative of polyhedral functions, i.e., functions with polyhedral convex epigraphs, we demonstrate that choosing the underlying subgradient, utilized in the definitions of these concepts, from the relative interior of the subdifferential of polyhedral functions ensures stronger second-order variational properties such as strict twice epi-differentiability and strict subgradient proto-differentiability. This allows us to characterize continuous differentiability of the proximal mapping and twice continuous differentiability of the Moreau envelope of polyhedral functions. We close the paper by proving the equivalence of metric regularity and strong metric regularity of a class of generalized equations at their nondegenerate solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. A New Approach About Equilibrium Problems via Busemann Functions.
- Author
- de C. Bento, Glaydston, Neto, João X. Cruz, Lopes, Jurandir O., Melo, Ítalo D. L., and Filho, Pedro Silva
- Subjects
- EQUILIBRIUM, CONVEX functions, PROBLEM solving
- Abstract
In this paper, we consider the resolvent via Busemann functions introduced by Bento, Cruz Neto and Melo (J Optim Theory Appl 195:1087–1105, 2022), and we present a proximal point method for equilibrium problems on Hadamard manifolds. The resolvent under consideration is a natural extension of its counterpart in the linear setting, proposed and analyzed by Combettes and Hirstoaga (J Nonlinear Convex Anal 6:117–136, 2005). The advantage of using this resolvent is that the regularizing term is a convex function on general Hadamard manifolds, allowing us to explore the asymptotic behavior of the proximal point method for solving equilibrium problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Combinatorial Convexity in Hadamard Manifolds: Existence for Equilibrium Problems.
- Author
- Bento, Glaydston de Carvalho, Cruz Neto, João Xavier, and Melo, Ítalo Dowell Lira
- Subjects
- EQUILIBRIUM, CONVEX functions, HOPFIELD networks
- Abstract
In this paper, a resolvent for equilibrium problems in terms of the Busemann function is introduced. An advantage of this new proposal is that, in addition to being a natural extension of its counterpart in the linear setting introduced by Combettes and Hirstoaga (J Nonlinear Convex Anal 6(1): 117–136, 2005), the new term that performs regularization is a convex function on general Hadamard manifolds, a first step toward fully answering the problem posed by Cruz Neto et al. (J Convex Anal 24(2): 679–684, 2017, Section 5). During our study, some elements of convex analysis are explored in the context of Hadamard manifolds, which are interesting in their own right. In particular, we introduce a new definition of the convex combination (now commutative) of any finite collection of points and present an associated Jensen-type inequality. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
10. Convergence Results of a Nested Decentralized Gradient Method for Non-strongly Convex Problems.
- Author
- Choi, Woocheol, Kim, Doheon, and Yun, Seok-Bae
- Subjects
- LIPSCHITZ continuity, CONVEX functions, DISTRIBUTED algorithms
- Abstract
We are concerned with the convergence of the NEAR-DGD+ (Nested Exact Alternating Recursion Distributed Gradient Descent) method introduced to solve distributed optimization problems. Under the assumption of strong convexity of the local objective functions and Lipschitz continuity of their gradients, linear convergence was established in Berahas et al. (IEEE Trans Autom Control 64:3141-3155, 2019). In this paper, we investigate the convergence of NEAR-DGD+ in the absence of strong convexity. More precisely, we establish convergence results in the following two cases: (1) when only convexity is assumed on the objective function; (2) when the objective function is represented as a composition of a strongly convex function and a rank-deficient matrix, which falls into the class of convex and quasi-strongly convex functions. Numerical results are provided to support the convergence results. [ABSTRACT FROM AUTHOR] (The nested recursion is sketched after this record.)
- Published
- 2022
- Full Text
- View/download PDF
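A rough sketch of the nested "consensus then gradient" structure behind NEAR-DGD-type methods, in which the number of mixing rounds grows with the iteration counter. The matrix W, step size, and schedule here are illustrative assumptions, not the exact parameters analyzed in the paper.

```python
import numpy as np

def near_dgd_sketch(grads, W, x0, alpha=0.05, outer=200):
    """Nested recursion sketch: at iteration k, perform k rounds of mixing
    with a doubly stochastic matrix W, then one local gradient step per
    agent. grads[i] is agent i's local gradient oracle; x0 is an array with
    one row per agent."""
    x = x0.copy()
    for k in range(1, outer + 1):
        y = x
        for _ in range(k):            # increasing number of consensus rounds
            y = W @ y
        g = np.stack([grads[i](y[i]) for i in range(y.shape[0])])
        x = y - alpha * g             # local gradient step
    return x.mean(axis=0)             # averaged estimate across agents
```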
11. Tight Convex Relaxations for the Expansion Planning Problem.
- Author
- Lenz, Ralf and Serrano, Felipe
- Subjects
- CONVEX functions, NONLINEAR equations, NONLINEAR programming
- Abstract
Secure energy transport is considered highly relevant for the basic infrastructure of today's society and economy. To satisfy increasing demands and to handle more diverse transport situations, operators of energy networks regularly expand the capacity of their networks by building new network elements; this task is known as the expansion planning problem. A key constraint function in expansion planning problems is a nonlinear and nonconvex potential loss function. In order to improve the algorithmic performance of state-of-the-art MINLP solvers, this paper presents an algebraic description of the convex envelope of this function. Through a thorough computational study, we show that this tighter relaxation tremendously improves the performance of the MINLP solver SCIP on a large test set of practically relevant instances of the expansion planning problem. In particular, the results show that our achievements improve the performance of a development version of the solver by up to 58%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
12. Convexity of Non-homogeneous Quadratic Functions on the Hyperbolic Space.
- Author
- Ferreira, Orizon P., Németh, Sándor Z., and Zhu, Jinzhen
- Subjects
- HYPERBOLIC spaces, HYPERBOLIC functions, FUNCTION spaces, CONVEXITY spaces, CONVEX sets, CONVEX functions
- Abstract
In this paper, some concepts related to the intrinsic convexity of non-homogeneous quadratic functions on the hyperbolic space are studied. Unlike in the Euclidean space, the study of intrinsic convexity of non-homogeneous quadratic functions in the hyperbolic space is more elaborate than that of homogeneous quadratic functions. Several characterizations that allow the construction of many examples will be presented. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. New Tour on the Subdifferential of Supremum via Finite Sums and Suprema.
- Author
- Hantoute, A. and López-Cerdá, M. A.
- Subjects
- CONVEX functions, BANACH spaces, FINITE, The, TOURS
- Abstract
This paper provides new characterizations for the subdifferential of the pointwise supremum of an arbitrary family of convex functions. The main feature of our approach is that the normal cone to the effective domain of the supremum (or to finite-dimensional sections of it) does not appear in our formulas. Another aspect of our analysis is that it emphasizes the relationship with the subdifferential of the supremum of finite subfamilies, or equivalently, finite weighted sums. Some specific results are given in the setting of reflexive Banach spaces, showing that the subdifferential of the supremum can be reduced to the supremum of a countable family. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
14. Quaternion Matrix Optimization: Motivation and Analysis.
- Author
- Qi, Liqun, Luo, Ziyan, Wang, Qing-Wen, and Zhang, Xinzhen
- Subjects
- COLOR image processing, QUATERNIONS, MATHEMATICAL optimization, QUATERNION functions, CONVEX functions, IMAGE denoising, CALCULUS
- Abstract
The class of quaternion matrix optimization (QMO) problems, with quaternion matrices as decision variables, has been widely used in color image processing and other engineering areas in recent years. However, optimization theory for QMO is far from adequate. The main objective of this paper is to provide the necessary theoretical foundations for optimality analysis, in order to enrich the content of optimization theory and to pave the way for the design of efficient numerical algorithms. We achieve this goal by conducting a thorough study of the first-order and second-order (sub)differentiation of real-valued functions of quaternion matrices, with a newly introduced operation called the R-product as the key tool for our calculus. Combining this with classical optimization theory, we establish first-order and second-order optimality analysis for QMO. Particular treatments of convex functions, the ℓ0-norm and the rank function of quaternion matrices are tailored to a sparse low-rank QMO model, arising from color image denoising, to establish its optimality conditions via stationarity. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. Generalized Bregman Envelopes and Proximity Operators.
- Author
- Burachik, Regina S., Dao, Minh N., and Lindstrom, Scott B.
- Subjects
- MONOTONE operators, CONVEX functions, COERCIVE fields (Electronics)
- Abstract
Every maximally monotone operator can be associated with a family of convex functions, called the Fitzpatrick family or family of representative functions. Surprisingly, in 2017, Burachik and Martínez-Legaz showed that the well-known Bregman distance is a particular case of a general family of distances, each one induced by a specific maximally monotone operator and a specific choice of one of its representative functions. For the family of generalized Bregman distances, sufficient conditions for convexity, coercivity, and supercoercivity have recently been furnished. Motivated by these advances, we introduce in the present paper the generalized left and right envelopes and proximity operators, and we provide asymptotic results for parameters. Certain results extend readily from the more specific Bregman context, while others only extend for certain generalized cases. To illustrate, we construct examples from the Bregman generalizing case, together with the natural "extreme" cases that highlight the importance of which generalized Bregman distance is chosen. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
16. Convergence of Inertial Dynamics Driven by Sums of Potential and Nonpotential Operators with Implicit Newton-Like Damping.
- Author
- Adly, Samir, Attouch, Hedy, and Vo, Van Nam
- Subjects
- NONSMOOTH optimization, MONOTONE operators, CONVEX functions, EVOLUTION equations, DIFFERENTIABLE functions, TIME management
- Abstract
We analyze the convergence properties, as the time t tends to infinity, of the trajectories generated by damped inertial dynamics driven by the sum of potential and nonpotential operators. Precisely, we seek to reach asymptotically the zeros of a maximally monotone operator which is the sum of a potential operator (the gradient of a continuously differentiable convex function) and of a monotone and cocoercive nonpotential operator. As an original feature, in addition to viscous friction, the dynamics involve implicit Newton-type damping. This contrasts with the authors' previous study, where explicit Newton-type damping was considered, which, for the potential term, corresponds to Hessian-driven damping. We show the weak convergence, as time tends to infinity, of the generated trajectories toward the zeros of the sum of the potential and nonpotential operators. Our results are based on Lyapunov analysis and an appropriate setting of the damping parameters. The introduction of geometric damping allows us to control and attenuate the oscillations known to occur with the viscous damping of inertial methods. Rewriting the second-order evolution equation as a system involving only first-order derivatives in time and space allows us to extend the convergence analysis to nonsmooth convex potentials. The main part of our study concerns the autonomous case with positive fixed parameters. We complete it with some first results concerning the nonautonomous case, based on a recent acceleration method using time scaling and averaging. These results open the door to the design of new first-order accelerated optimization algorithms taking into account the specific properties of potential and nonpotential terms. The proofs and techniques are original, due to the presence of the nonpotential term. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. A Forward–Backward Algorithm With Different Inertial Terms for Structured Non-Convex Minimization Problems.
- Author
- László, Szilárd Csaba
- Subjects
- FORWARD-backward algorithm, IMAGE reconstruction, GLOBAL optimization, CONVEX functions
- Abstract
We investigate an inertial forward–backward algorithm in connection with the minimization of the sum of a non-smooth, possibly non-convex function and a non-convex differentiable function. The algorithm is formulated in the spirit of the famous FISTA method; however, the setting is non-convex and we allow different inertial terms. Moreover, the inertial parameters in our algorithm can also take negative values. We also treat the case when the non-smooth function is convex, and we show that in this case a better step size can be allowed. Further, we show that our numerical schemes can successfully be used in DC programming. We prove some abstract convergence results which, applied to our numerical schemes, allow us to show that the generated sequences converge to a critical point of the objective function, provided a regularization of the objective function satisfies the Kurdyka–Łojasiewicz property. Further, we obtain a general result which, applied to our numerical schemes, ensures convergence rates for the generated sequences and for the objective function values, formulated in terms of the KL exponent of a regularization of the objective function. Finally, we apply our results to image restoration. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Smoothing Accelerated Proximal Gradient Method with Fast Convergence Rate for Nonsmooth Convex Optimization Beyond Differentiability.
- Author
- Wu, Fan and Bian, Wei
- Subjects
- NONSMOOTH optimization, SMOOTHING (Numerical analysis), CONVEX sets, CONVEX functions, EXTRAPOLATION
- Abstract
We propose a smoothing accelerated proximal gradient (SAPG) method with a fast convergence rate for finding a minimizer of a decomposable nonsmooth convex function over a closed convex set. The proposed algorithm combines the smoothing method with the proximal gradient algorithm using extrapolation coefficient (k − 1)/(k + α − 1) with α > 3. The updating rule for the smoothing parameter μ_k is carefully designed and guarantees a global convergence rate of o(ln^σ k / k) on the objective function values, with σ ∈ (1/2, 1]. Moreover, we prove that the sequence of iterates converges to an optimal solution of the problem. We then introduce an error term into the SAPG algorithm to obtain an inexact smoothing accelerated proximal gradient algorithm, and we obtain the same convergence results as for the SAPG algorithm under a summability condition on the errors. Finally, numerical experiments show the effectiveness and efficiency of the proposed algorithm. [ABSTRACT FROM AUTHOR] (The extrapolation-plus-smoothing loop is sketched after this record.)
- Published
- 2023
- Full Text
- View/download PDF
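A minimal sketch of the smoothing-plus-acceleration loop described in the abstract: FISTA-type extrapolation with weight (k−1)/(k+α−1) applied to a μ_k-smoothed surrogate of the nonsmooth part. Here `grad_smooth`, `prox_g` (e.g., projection onto the closed convex set), and the μ_k = 1/k schedule are placeholders; the paper's precise updating rule for μ_k is not reproduced.

```python
def sapg_sketch(grad_smooth, prox_g, x0, L, alpha=4.0, iters=500):
    """Smoothing accelerated proximal gradient sketch: momentum step with
    weight (k-1)/(k+alpha-1), alpha > 3, then a proximal-gradient step on
    the mu-smoothed objective. grad_smooth(y, mu) is the gradient of the
    smoothed function; prox_g(z, t) is the proximal map (or projection)."""
    x_prev = x = x0
    for k in range(1, iters + 1):
        beta = (k - 1.0) / (k + alpha - 1.0)   # extrapolation weight
        y = x + beta * (x - x_prev)            # momentum step
        mu = 1.0 / k                           # illustrative decreasing smoothing
        x_prev, x = x, prox_g(y - grad_smooth(y, mu) / L, 1.0 / L)
    return x
```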
19. Improved Convex and Concave Relaxations of Composite Bilinear Forms.
- Author
- Wilhelm, Matthew E. and Stuber, Matthew D.
- Subjects
- GLOBAL optimization, SUBGRADIENT methods, MODEL validation, NONCONVEX programming, SUPPLY chains, BILINEAR forms, DETERMINISTIC algorithms, CONVEX functions
- Abstract
Deterministic nonconvex optimization solvers generate convex relaxations of nonconvex functions by making use of underlying factorable representations. One approach introduces auxiliary variables assigned to each factor, lifting the problem into a higher-dimensional decision space. In contrast, a generalized McCormick relaxation approach offers the significant advantage of constructing relaxations in the lower-dimensional space of the original problem without introducing auxiliary variables, often referred to as a "reduced-space" approach. Recent contributions illustrated how additional nontrivial inequality constraints may be used in factorable programming to tighten relaxations of the ubiquitous bilinear term. In this work, we exploit an analogous representation of McCormick relaxations and factorable programming to formulate tighter relaxations in the original decision space. We develop the underlying theory to generate necessarily tighter reduced-space McCormick relaxations when a priori convex/concave relaxations are known for intermediate bilinear terms. We then show how these rules can be generalized within a McCormick relaxation scheme via three different approaches: the use of McCormick relaxations coupled with affine arithmetic, the propagation of affine relaxations implied by subgradients, and an enumerative approach that directly uses relaxations of each factor. The developed approaches are benchmarked on a library of optimization problems using the EAGO.jl optimizer. Two case studies are also considered to demonstrate the developments: an application in advanced manufacturing to optimize supply chain quality metrics and a global dynamic optimization application for rigorous model validation of a kinetic mechanism. The presented subgradient method reduces the CPU time required to solve the considered problems to ϵ-global optimality. [ABSTRACT FROM AUTHOR] (The baseline McCormick cuts for a bilinear term are sketched after this record.)
- Published
- 2023
- Full Text
- View/download PDF
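For context, the classical McCormick envelope of the bilinear term w = xy on a box consists of four affine cuts; these are the baseline relaxations the paper tightens. A minimal sketch:

```python
def mccormick_bilinear(xL, xU, yL, yU, x, y):
    """Classical McCormick under- and over-estimators of the bilinear term
    x*y on the box [xL, xU] x [yL, yU]. Returns the convex lower bound and
    concave upper bound evaluated at the point (x, y)."""
    lower = max(xL * y + x * yL - xL * yL,    # two affine underestimators
                xU * y + x * yU - xU * yU)
    upper = min(xU * y + x * yL - xU * yL,    # two affine overestimators
                xL * y + x * yU - xL * yU)
    return lower, upper
```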
20. Error Bound and Isocost Imply Linear Convergence of DCA-Based Algorithms to D-Stationarity.
- Author
- Tao, Min and Li, Jiang-Ning
- Subjects
- SMOOTHNESS of functions, CONVEX functions, ALGORITHMS, EXTRAPOLATION, LINEAR statistical models, COMPUTER simulation
- Abstract
We consider a class of structured nonsmooth difference-of-convex minimization problems, which can be written as the difference of two convex functions, possibly nonsmooth, where the second is the maximum of finitely many smooth convex functions. We propose two extrapolated proximal difference-of-convex-based algorithms for potential acceleration of convergence to a weak/standard d-stationary point of the structured nonsmooth problem, and we prove linear convergence of these algorithms under assumptions of a piecewise error bound and a piecewise isocost condition. As a by-product, we refine the linear convergence analysis of sDCA and ε-DCA in a recent work of Dong and Tao (J Optim Theory Appl 189:190–220, 2021) by removing the assumption of local linear regularity regarding the intersection of certain stationary sets and dominance regions. We also discuss sufficient conditions that guarantee these assumptions and illustrate that several sparse learning models satisfy all of them. Finally, we conduct some elementary numerical simulations on sparse recovery to verify the theoretical results empirically. [ABSTRACT FROM AUTHOR] (An extrapolated proximal DC step is sketched after this record.)
- Published
- 2023
- Full Text
- View/download PDF
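A rough sketch of one family of extrapolated proximal DCA iterations for minimizing g − h with g, h convex: linearize h at an extrapolated point and take a proximal step on g. The fixed extrapolation weight and step size are illustrative assumptions; the paper's two algorithms and their d-stationarity analysis are not reproduced here.

```python
def prox_dca_extrapolated(grad_h, prox_g, x0, beta=0.3, step=1.0, iters=300):
    """Extrapolated proximal DC sketch for min g(x) - h(x): pick a
    subgradient of the subtracted convex part h at the extrapolated point,
    then take a proximal step on g. prox_g(z, t) is the proximal map of
    t*g; grad_h(y) returns an element of the subdifferential of h at y."""
    x_prev = x = x0
    for _ in range(iters):
        y = x + beta * (x - x_prev)                 # extrapolated point
        v = grad_h(y)                               # subgradient of h at y
        x_prev, x = x, prox_g(y + step * v, step)   # proximal step on g
    return x
```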
21. Continuous Fréchet Differentiability of the Moreau Envelope of Convex Functions on Banach Spaces.
- Author
- Khanh, Pham Duy and Nguyen, Bao Tran
- Subjects
- BANACH spaces, FUNCTION spaces, CONVEX functions, DIFFERENTIABLE functions
- Abstract
It is shown that the Moreau envelope of a convex lower semicontinuous function on a real Banach space with strictly convex dual is Fréchet differentiable at every minimizer, and continuously Fréchet differentiable at every non-minimizer for which the dual space is uniformly convex at every norm-one element around its normalized gradient vector. As an application, we obtain continuous Fréchet differentiability of Moreau envelope functions on Banach spaces with locally uniformly convex duals, and continuity of the corresponding proximal mappings, provided that both the primal and dual spaces are locally uniformly convex. [ABSTRACT FROM AUTHOR] (The envelope and the classical gradient identity are written out after this record.)
- Published
- 2022
- Full Text
- View/download PDF
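For reference, the Moreau envelope and proximal mapping discussed above, together with the gradient identity that is classical in Hilbert spaces; the paper's contribution is the Banach-space setting with (locally) uniformly convex duals.

```latex
% Moreau envelope and proximal mapping of f with parameter \lambda > 0:
e_\lambda f(x) = \inf_{y}\Bigl\{ f(y) + \tfrac{1}{2\lambda}\|x - y\|^2 \Bigr\},
\qquad
P_\lambda f(x) = \operatorname*{argmin}_{y}\Bigl\{ f(y) + \tfrac{1}{2\lambda}\|x - y\|^2 \Bigr\}.
% Classical gradient identity in the Hilbert-space case:
\nabla e_\lambda f(x) = \tfrac{1}{\lambda}\bigl(x - P_\lambda f(x)\bigr).
```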
22. An Adaptive Alternating Direction Method of Multipliers.
- Author
- Bartz, Sedi, Campoy, Rubén, and Phan, Hung M.
- Subjects
- SIGNAL denoising, CONVEX functions, MULTIPLIERS (Mathematical analysis), CONSTRAINED optimization
- Abstract
The alternating direction method of multipliers (ADMM) is a powerful splitting algorithm for linearly constrained convex optimization problems. In view of its popularity and applicability, growing attention has been drawn to the ADMM in nonconvex settings. Recent studies of minimization problems for nonconvex functions include various combinations of assumptions on the objective function, including, in particular, a Lipschitz gradient assumption. We consider the case where the objective is the sum of a strongly convex function and a weakly convex function. To this end, we present and study an adaptive version of the ADMM which incorporates generalized notions of convexity and penalty parameters adapted to the convexity constants of the functions. We prove convergence of the scheme under natural assumptions. To this end, we employ the recent adaptive Douglas–Rachford algorithm by revisiting the well-known duality relation between the classical ADMM and the Douglas–Rachford splitting algorithm, generalizing this connection to our setting. We illustrate our approach by relating and comparing it to alternatives, and by numerical experiments on a signal denoising problem. [ABSTRACT FROM AUTHOR] (The classical ADMM template is sketched after this record.)
- Published
- 2022
- Full Text
- View/download PDF
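For orientation, the classical ADMM template that the adaptive variant above generalizes (there, penalty parameters are adapted to the convexity constants of the two functions). A minimal sketch, assuming the two augmented-subproblem solvers `argmin_f` and `argmin_g` are supplied with the penalty parameter folded inside them:

```python
import numpy as np

def admm(argmin_f, argmin_g, A, B, c, iters=200):
    """Textbook ADMM for min f(x) + g(z) s.t. Ax + Bz = c, in scaled dual
    form. argmin_f(z, u) solves min_x f(x) + (rho/2)||Ax + Bz - c + u||^2;
    argmin_g(x, u) solves the analogous z-subproblem; rho lives inside
    these solvers in this sketch."""
    z = np.zeros(B.shape[1])
    u = np.zeros(c.shape[0])
    for _ in range(iters):
        x = argmin_f(z, u)             # x-update (augmented subproblem)
        z = argmin_g(x, u)             # z-update (augmented subproblem)
        u = u + A @ x + B @ z - c      # scaled dual ascent on the residual
    return x, z
```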
23. Dualize, Split, Randomize: Toward Fast Nonsmooth Optimization Algorithms.
- Author
- Salim, Adil, Condat, Laurent, Mishchenko, Konstantin, and Richtárik, Peter
- Subjects
- MATHEMATICAL optimization, LINEAR operators, NONSMOOTH optimization, CONVEX functions, MONOTONE operators, IMAGE processing
- Abstract
We consider minimizing the sum of three convex functions, where the first one, F, is smooth, the second one is nonsmooth and proximable, and the third one is the composition of a nonsmooth proximable function with a linear operator L. This template problem has many applications, for instance, in image processing and machine learning. First, we propose a new primal–dual algorithm, which we call PDDY, for this problem. It is constructed by applying Davis–Yin splitting to a monotone inclusion in a primal–dual product space, where the operators are monotone under a specific metric depending on L. We show that three existing algorithms (the two forms of the Condat–Vũ algorithm and the PD3O algorithm) have the same structure, so that PDDY is the fourth missing link in this self-consistent class of primal–dual algorithms. This representation eases the convergence analysis: it allows us to derive sublinear convergence rates in general, and linear convergence results in the presence of strong convexity. Moreover, within our broad and flexible analysis framework, we propose new stochastic generalizations of the algorithms, in which a variance-reduced random estimate of the gradient of F is used instead of the true gradient. Furthermore, we obtain, as a special case of PDDY, a linearly converging algorithm for the minimization of a strongly convex function F under a linear constraint; we discuss its important application to decentralized optimization. [ABSTRACT FROM AUTHOR] (The underlying Davis–Yin iteration is sketched after this record.)
- Published
- 2022
- Full Text
- View/download PDF
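The Davis–Yin splitting that PDDY builds on (applied in the paper to a monotone inclusion in a primal–dual product space) can be sketched in its basic form, here for minimizing F + G + H with F smooth and G, H proximable, without the linear operator L:

```python
def davis_yin(grad_F, prox_G, prox_H, z0, gamma=0.5, iters=500):
    """Davis-Yin three-operator splitting sketch for min F + G + H with F
    smooth and G, H proximable. prox_G(z, t) and prox_H(z, t) are the
    proximal maps of t*G and t*H; gamma is the step size."""
    z = z0
    for _ in range(iters):
        x = prox_G(z, gamma)                              # first prox step
        y = prox_H(2 * x - z - gamma * grad_F(x), gamma)  # second prox, gradient inside
        z = z + y - x                                     # correction of the shadow point
    return x
```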
24. A Norm Minimization-Based Convex Vector Optimization Algorithm.
- Author
- Ararat, Çağın, Ulus, Firdevs, and Umer, Muhammad
- Subjects
- MATHEMATICAL optimization, APPROXIMATION algorithms, BENCHMARK problems (Computer science), POLYHEDRAL functions, CONVEX functions, FINITE, The
- Abstract
We propose an algorithm to generate inner and outer polyhedral approximations to the upper image of a bounded convex vector optimization problem. It is an outer approximation algorithm and is based on solving norm-minimizing scalarizations. Unlike Pascoletti–Serafini scalarization used in the literature for similar purposes, it does not involve a direction parameter. Therefore, the algorithm is free of direction-biasedness. We also propose a modification of the algorithm by introducing a suitable compact subset of the upper image, which helps in proving for the first time the finiteness of an algorithm for convex vector optimization. The computational performance of the algorithms is illustrated using some of the benchmark test problems, which shows promising results in comparison to a similar algorithm that is based on Pascoletti–Serafini scalarization. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. An Invariant-Point Theorem in Banach Space with Applications to Nonconvex Optimization.
- Author
- Long, Vo Si Trong
- Subjects
- BANACH spaces, MATHEMATICAL optimization, CONVEX functions, VARIATIONAL principles
- Abstract
An invariant-point theorem and its equivalent formulation are established in which distance functions are replaced by minimal time functions. It is worth emphasizing that the class of minimal time functions can be interpreted as a general type of directional distance function recently used to develop new applications in optimization theory. The obtained results are applied in two directions. First, we derive sufficient conditions for the existence of solutions to optimization-related problems without convexity. As an easy corollary, we get a directional Ekeland variational principle. Second, we propose a new type of global error bound for inequalities which allows us to study nonconvex and convex functions simultaneously. Several examples and comparison remarks are included as well to illustrate the advantages of our results over existing ones in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. The Equivalence of Three Types of Error Bounds for Weakly and Approximately Convex Functions.
- Author
- Bai, Sixuan, Li, Minghua, Lu, Chengwu, Zhu, Daoli, and Deng, Sien
- Subjects
- SMOOTHNESS of functions, CONVEX functions, EXPONENTS
- Abstract
We start by establishing the equivalence of three types of error bounds: weak sharp minima, level-set subdifferential error bounds and Łojasiewicz (for short, Ł) inequalities for weakly convex functions with exponent α ∈ [0, 1] and approximately convex functions. Then we apply these equivalence results to a class of nonconvex optimization problems, whose objective functions are the sum of a convex function and a composite function involving a locally Lipschitz function and a smooth vector-valued function. Finally, applying a characterization of lower-order regularization problems, we show that the level-set subdifferential error bound with exponent 1 and the Ł inequality with exponent 1/2 hold at a local minimum point. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. Optimal Control of an Evolution Problem Involving Time-Dependent Maximal Monotone Operators.
- Author
- Bouhali, Nesrine, Azzam-Laouir, Dalila, and Monteiro Marques, Manuel D. P.
- Subjects
- MONOTONE operators, DIFFERENTIAL operators, DIFFERENTIAL inclusions, ORDINARY differential equations, CONVEX functions, DIFFERENTIAL evolution
- Abstract
We consider a control problem in a finite-dimensional setting, which consists in finding a minimizer of a standard functional defined by way of two continuous functions that are bounded below and a convex function, where the control functions take values in a closed convex set and the state functions solve a differential system made up of a differential inclusion governed by a maximal monotone operator and an ordinary differential equation with a Lipschitz mapping on the right-hand side. We first show the existence of a unique absolutely continuous solution of our system by transforming it into a single evolution differential inclusion and then using a result from the literature. Second, we prove the existence of an optimal solution to our problem. The main novelties are the presence of time-dependent maximal monotone operators, whose domains may also depend on the time variable, and the discretization scheme for the approximation of the solution. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
28. Restarting Frank–Wolfe: Faster Rates under Hölderian Error Bounds.
- Author
- Kerdreux, Thomas, d'Aspremont, Alexandre, and Pokutta, Sebastian
- Subjects
- CONVEX functions, CONJUGATE gradient methods, POLYTOPES
- Abstract
Conditional gradient algorithms (aka Frank–Wolfe algorithms) form a classical set of methods for constrained smooth convex minimization due to their simplicity, the absence of projection steps, and competitive numerical performance. While the vanilla Frank–Wolfe algorithm only ensures a worst-case rate of O(1/ϵ), various recent results have shown that for strongly convex functions on polytopes, the method can be slightly modified to achieve linear convergence. However, this still leaves a huge gap between sublinear O(1/ϵ) convergence and linear O(log 1/ϵ) convergence to reach an ϵ-approximate solution. Here, we present a new variant of conditional gradient algorithms that can dynamically adapt to the function's geometric properties using restarts and smoothly interpolates between the sublinear and linear regimes. These interpolated convergence rates are obtained when the optimization problem satisfies a new type of error bound, which we call strong Wolfe primal bounds. They combine geometric information on the constraint set with Hölderian error bounds on the objective function. [ABSTRACT FROM AUTHOR] (The vanilla method is sketched after this record.)
- Published
- 2022
- Full Text
- View/download PDF
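For contrast with the restarted variant, the vanilla Frank–Wolfe method with its classical step size. A minimal sketch, assuming a linear minimization oracle `lmo` over the compact convex constraint set:

```python
def frank_wolfe(grad_f, lmo, x0, iters=500):
    """Vanilla Frank-Wolfe (conditional gradient): projection-free, one
    linear minimization per iteration. lmo(g) returns argmin over the
    constraint set C of <g, v>."""
    x = x0
    for k in range(iters):
        v = lmo(grad_f(x))            # linear minimization step (no projection)
        step = 2.0 / (k + 2.0)        # classical step size, worst-case O(1/k)
        x = x + step * (v - x)        # convex combination keeps x feasible
    return x
```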
29. Linear Convergence of Prox-SVRG Method for Separable Non-smooth Convex Optimization Problems under Bounded Metric Subregularity.
- Author
- Zhang, Jin and Zhu, Xide
- Subjects
- CALMNESS, CONVEX functions, LINEAR systems, NONSMOOTH optimization, STOCHASTIC convergence
- Abstract
With the help of bounded metric subregularity, which is weaker than strong convexity, we show the linear convergence of the proximal stochastic variance-reduced gradient (Prox-SVRG) method for solving a class of separable non-smooth convex optimization problems in which the smooth term is a composition of a strongly convex function and a linear function. We introduce an equivalent characterization of bounded metric subregularity by taking into account the calmness condition of a perturbed linear system. This equivalent characterization allows us to provide a verifiable sufficient condition ensuring linear convergence of Prox-SVRG and randomized block-coordinate proximal gradient methods. Furthermore, we verify that these sufficient conditions hold automatically when the non-smooth term is the generalized sparse group Lasso regularizer. [ABSTRACT FROM AUTHOR] (The Prox-SVRG template is sketched after this record.)
- Published
- 2022
- Full Text
- View/download PDF
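A minimal sketch of the Prox-SVRG template analyzed above: one full gradient snapshot per epoch plus variance-reduced stochastic inner steps, each followed by the proximal map of the non-smooth term. The names `grad_i`, `prox_R`, and the parameter choices are placeholders, not the paper's verified conditions.

```python
import numpy as np

def prox_svrg(grad_i, prox_R, x0, n, step=0.01, epochs=30, m=100, rng=None):
    """Prox-SVRG sketch: grad_i(x, i) is the i-th component gradient of the
    smooth term (average of n components); prox_R(z, t) is the proximal
    operator of t*R for the separable non-smooth term R."""
    rng = rng or np.random.default_rng(0)
    x_tilde = x0
    for _ in range(epochs):
        full = np.mean([grad_i(x_tilde, i) for i in range(n)], axis=0)  # snapshot
        x = x_tilde
        for _ in range(m):
            i = rng.integers(n)
            g = grad_i(x, i) - grad_i(x_tilde, i) + full  # variance-reduced gradient
            x = prox_R(x - step * g, step)                # proximal step
        x_tilde = x                                       # new snapshot point
    return x_tilde
```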
30. Efficient Proximal Mapping Computation for Low-Rank Inducing Norms.
- Author
- Grussler, Christian and Giselsson, Pontus
- Subjects
- LOW-rank matrices, CONVEX functions, PROBLEM solving
- Abstract
Low-rank inducing unitarily invariant norms have been introduced to convexify problems with a low-rank/sparsity constraint. The most well-known member of this family is the so-called nuclear norm. To solve optimization problems involving such norms with proximal splitting methods, efficient ways of evaluating the proximal mappings of the low-rank inducing norms are needed. This is known for the nuclear norm, but not for most other members of the low-rank inducing family. This work supplies a framework that reduces the proximal mapping evaluation to a nested binary search, in which each iteration requires the solution of a much simpler problem. The simpler problem can often be solved analytically, as demonstrated for the so-called low-rank inducing Frobenius and spectral norms. The framework also allows computing the proximal mappings of increasing convex functions composed with these norms, as well as projections onto their epigraphs. [ABSTRACT FROM AUTHOR] (The known nuclear-norm case is sketched after this record.)
- Published
- 2022
- Full Text
- View/download PDF
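The one member of the low-rank inducing family whose proximal mapping is classical is the nuclear norm, computed by soft-thresholding the singular values; the framework above covers the remaining members via a nested binary search instead. A minimal sketch of the known case:

```python
import numpy as np

def prox_nuclear(X, t):
    """Proximal mapping of t * (nuclear norm) via singular value
    thresholding: shrink each singular value by t and truncate at zero."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt
```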
31. Subdifferentials of the Marginal Functions in Parametric Convex Optimization via Intersection Formulas.
- Author
- An, Duong Thi Viet and Jourani, Abderrahim
- Subjects
- CONVEX functions, SET-valued maps, CONVEX programming, SUBDIFFERENTIALS, CONVEX sets, SET functions
- Abstract
The aim of the present work is to use a metric intersection formula to estimate the subdifferential of the marginal function in the convex setting. This intersection formula includes many interesting situations in parametric convex programming, including the polyhedral one. It is expressed in terms of the objective function and the constrained multivalued mapping which govern the parametric program. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF