335 results for "subgradient"
Search Results
2. Directional Derivative and Subgradient of Cone-Convex Set-Valued Mappings with Applications in Set Optimization Problems
- Author
- Han, Yu
- Published
- 2024
- Full Text
- View/download PDF
3. A new double inertial subgradient extragradient algorithm for solving split pseudomonotone equilibrium problems and fixed point problems
- Author
- Mebawondu, A. A., Ofem, A. E., Akutsah, F., Agbonkhese, C., Kasali, F., and Narain, O. K.
- Published
- 2024
- Full Text
- View/download PDF
4. A density result for doubly nonlinear operators in $ L^1 $.
- Author
- Collier, Timothy Allen and Hauer, Daniel
- Subjects
- NONLINEAR operators, DENSITY, POROUS materials
- Abstract
The aim of this paper is to provide sufficient conditions implying that the effective domain $ D(A\phi) $ of an $ m $-accretive operator $ A\phi $ in $ L^1 $ is dense in $ L^1 $. Here, $ A\phi $ refers to the composition $ A\circ \phi $ in $ L^1 $ of the part $ A = (\partial\mathcal{E})_{\vert L^{1\cap \infty}} $ in $ L^{1\cap\infty} $ of the subgradient $ \partial\mathcal{E} $ in $ L^2 $ of a convex, proper, lower semicontinuous functional $ \mathcal{E} $ on $ L^2 $ and a continuous, strictly increasing function $ \phi $ on the real line $ \mathbb{R} $. To illustrate the role of the sufficient conditions, we apply our main result to the class of doubly nonlinear operators $ A\phi $, where $ A $ is a classical Leray-Lions operator. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. A subgradient supported ellipsoid method for convex multiobjective optimization problems
- Author
- Muthukani, M. and Paramanathan, P.
- Published
- 2024
- Full Text
- View/download PDF
6. Smooth Approximation of Lipschitz Maps and Their Subgradients.
- Author
- EDALAT, ABBAS
- Subjects
- DIRECTIONAL derivatives, LIPSCHITZ spaces, VECTOR fields, BANACH spaces, MAPS, DIFFERENTIABLE functions
- Abstract
We derive new representations for the generalised Jacobian of a locally Lipschitz map between finite dimensional real Euclidean spaces as the lower limit (i.e., limit inferior) of the classical derivative of the map where it exists. The new representations lead to significantly shorter proofs for the basic properties of the subgradient and the generalised Jacobian, including the chain rule. We establish that a sequence of locally Lipschitz maps between finite dimensional Euclidean spaces converges to a given locally Lipschitz map in the L-topology--that is, the weakest refinement of the sup norm topology on the space of locally Lipschitz maps that makes the generalised Jacobian a continuous functional--if and only if the limit superior of the sequence of directional derivatives of the maps in a given vector direction coincides with the generalised directional derivative of the given map in that direction, with the convergence to the limit superior being uniform for all unit vectors. We then prove our main result that the subspace of Lipschitz C∞ maps between finite dimensional Euclidean spaces is dense in the space of Lipschitz maps equipped with the L-topology, and, for a given Lipschitz map, we explicitly construct a sequence of Lipschitz C∞ maps converging to it in the L-topology, allowing global smooth approximation of a Lipschitz map and its differential properties. As an application, we obtain a short proof of the extension of Green's theorem to interval-valued vector fields. For infinite dimensions, we show that the subgradient of a Lipschitz map on a Banach space is upper continuous, and, for a given real-valued Lipschitz map on a separable Banach space, we construct a sequence of Gateaux differentiable functions that converges to the map in the sup norm topology such that the limit superior of the directional derivatives in any direction coincides with the generalised directional derivative of the Lipschitz map in that direction. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
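For context on the preceding entry: the classical representation it refines is Clarke's construction of the generalised Jacobian as a convex hull of limits of derivatives (stated here from standard nonsmooth analysis, not quoted from the paper):

$$\partial f(x) \;=\; \mathrm{co}\Bigl\{\, \lim_{i\to\infty} Df(x_i) \;:\; x_i \to x,\ x_i \in D_f \,\Bigr\},$$

where $D_f$ is the full-measure set on which the locally Lipschitz map $f$ is differentiable (Rademacher's theorem). The paper's contribution is to replace this convex-hull-of-limits description with lower-limit representations.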
7. Statistical Performance of Subgradient Step-Size Update Rules in Lagrangian Relaxations of Chance-Constrained Optimization Models
- Author
- Ritter, Charlotte, Singh, Bismark, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Olenev, Nicholas, editor, Evtushenko, Yuri, editor, Jaćimović, Milojica, editor, Khachay, Michael, editor, and Malkova, Vlasta, editor
- Published
- 2023
- Full Text
- View/download PDF
8. A Cutting Method with Successive Use of Constraint Functions in Constructing Approximating Sets
- Author
- Yarullin, Rashid, Zabotin, Igor, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Khachay, Michael, editor, Kochetov, Yury, editor, Eremeev, Anton, editor, Khamisov, Oleg, editor, Mazalov, Vladimir, editor, and Pardalos, Panos, editor
- Published
- 2023
- Full Text
- View/download PDF
9. The slope robustly determines convex functions.
- Author
- Daniilidis, Aris and Drusvyatskiy, Dmitriy
- Subjects
- CONSTANTS of integration
- Abstract
We show that the deviation between the slopes of two convex functions controls the deviation between the functions themselves. This result reveals that the slope—a one-dimensional construct—robustly determines convex functions, up to a constant of integration. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
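A note on the preceding entry: the slope in question is the standard metric slope (stated here as background, not quoted from the paper),

$$|\nabla f|(x) \;=\; \limsup_{y \to x,\; y \neq x} \frac{\bigl(f(x) - f(y)\bigr)^{+}}{\|x - y\|},$$

which for a convex function $f$ equals $\operatorname{dist}(0, \partial f(x))$, the norm of the minimal-norm subgradient.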
10. An indefinite proximal subgradient-based algorithm for nonsmooth composite optimization.
- Author
- Liu, Rui, Han, Deren, and Xia, Yong
- Subjects
- ALGORITHMS, NONSMOOTH optimization, MONOTONE operators, GENERALIZATION
- Abstract
We propose an indefinite proximal subgradient-based algorithm (IPSB) for solving nonsmooth composite optimization problems. IPSB generalizes Nesterov's dual algorithm by adding an indefinite proximal term to the subproblems, which can make the subproblems easier and the algorithm efficient when an appropriate proximal operator is judiciously chosen. Under mild assumptions, we establish sublinear convergence of IPSB to a region of the optimal value. We also report some numerical results demonstrating the efficiency of IPSB compared with classical dual averaging-type algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
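A generic sketch of the proximal subgradient pattern underlying entries like the one above (this is the textbook composite iteration, not IPSB itself, which adds an indefinite proximal term; `subgrad_f`, `prox_h`, and `step` are hypothetical callables):

```python
import numpy as np

def proximal_subgradient(x0, subgrad_f, prox_h, step, n_iter=200):
    """Minimize f(x) + h(x) with f nonsmooth (subgradient oracle) and h
    'prox-friendly'. A minimal sketch, not the IPSB method above."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        t = step(k)                 # e.g. lambda k: 1.0 / (k + 1)
        g = subgrad_f(x)            # any subgradient of f at x
        x = prox_h(x - t * g, t)    # prox of t*h at the subgradient step
    return x
```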
11. One Relaxed Variant of the Proximal Level Method
- Author
- Yarullin, Rashid, Zabotin, Igor, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Kochetov, Yury, editor, Eremeev, Anton, editor, Khamisov, Oleg, editor, and Rettieva, Anna, editor
- Published
- 2022
- Full Text
- View/download PDF
12. Spectral projected subgradient method for nonsmooth convex optimization problems.
- Author
- Krejić, Nataša, Krklec Jerinkić, Nataša, and Ostojić, Tijana
- Subjects
- SUBGRADIENT methods, NONSMOOTH optimization, MATHEMATICAL forms, CONSTRAINED optimization, MATHEMATICAL functions, LEARNING problems
- Abstract
We consider constrained optimization problems with a nonsmooth objective function in the form of a mathematical expectation. The Sample Average Approximation (SAA) is used to estimate the objective function, and a variable sample size strategy is employed. The proposed algorithm combines an SAA subgradient with a spectral coefficient in order to provide a suitable direction which improves the performance of the first-order method, as shown by numerical results. The step sizes are chosen from a predefined interval, and almost sure convergence of the method is proved under standard assumptions in a stochastic environment. To enhance the performance of the proposed algorithm, we further specify the choice of the step size by introducing an Armijo-like procedure adapted to this framework. Considering the computational cost on machine learning problems, we conclude that the line search improves the performance significantly. Numerical experiments conducted on finite sum problems also reveal that the variable sample strategy outperforms the full sample approach. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
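A minimal sketch of the kind of iteration the preceding abstract describes, assuming hypothetical oracles `subgrad_saa(x, k)` (an SAA subgradient with an iteration-dependent sample size) and `project(x)` (projection onto the feasible set); the Armijo-like line search is simplified away:

```python
import numpy as np

def spectral_projected_subgradient(x0, subgrad_saa, project,
                                   n_iter=200, t_min=1e-4, t_max=1.0):
    """Sketch only: projected subgradient step scaled by a spectral
    (Barzilai-Borwein-like) coefficient safeguarded to [t_min, t_max]."""
    x = project(np.asarray(x0, dtype=float))
    x_prev = g_prev = None
    for k in range(n_iter):
        g = subgrad_saa(x, k)          # SAA subgradient estimate
        if x_prev is None:
            t = t_max
        else:
            s, y = x - x_prev, g - g_prev
            sy = s @ y
            # Spectral coefficient <s,s>/<s,y>, kept in the safeguard interval.
            t = np.clip(s @ s / sy, t_min, t_max) if sy > 0 else t_max
        x_prev, g_prev = x, g
        x = project(x - t * g)
    return x
```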
13. A subgradient-based neurodynamic algorithm to constrained nonsmooth nonconvex interval-valued optimization.
- Author
- Liu, Jingxin, Liao, Xiaofeng, Dong, Jin-song, and Mansoori, Amin
- Subjects
- NONSMOOTH optimization, DIFFERENTIAL inclusions, MATRIX inversion, CONVEX functions, ALGORITHMS, LINEAR orderings
- Abstract
In this paper, a subgradient-based neurodynamic algorithm is presented to solve the nonsmooth nonconvex interval-valued optimization problem with both partial order and linear equality constraints, where the interval-valued objective function is nonconvex and the interval-valued partial order constraint functions are convex. The designed neurodynamic system is constructed by a differential inclusion with upper semicontinuous right-hand side, whose computational load is reduced by avoiding penalty parameter estimation and complex matrix inversion. Based on nonsmooth analysis and the extension theorem for solutions of differential inclusions, we obtain the global existence and boundedness of the state solution of the neurodynamic system, as well as its asymptotic convergence to the feasible region and to the set of LU-critical points of the interval-valued nonconvex optimization problem. Several numerical experiments, together with applications to emergency supplies distribution and nondeterministic fractional continuous static games, illustrate the applicability of the proposed neurodynamic algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Properties of the Quadratic Transformation of Dual Variables.
- Author
- Krutikov, Vladimir, Tovbis, Elena, Bykov, Anatoly, Stanimirovic, Predrag, Chernova, Ekaterina, and Kazakovtsev, Lev
- Subjects
- NONSMOOTH optimization, SUBGRADIENT methods, CONJUGATE gradient methods, SMOOTHNESS of functions, CONVEX programming, CONVEX functions, QUADRATIC programming
- Abstract
We investigate a solution of a convex programming problem with a strongly convex objective function based on the dual approach. A dual optimization problem has constraints on the positivity of variables. We study the methods and properties of transformations of dual variables that enable us to obtain an unconstrained optimization problem. We investigate the previously known method of transforming the components of dual variables in the form of their modulus (modulus method). We show that in the case of using the modulus method, the degree of the degeneracy of the function increases as it approaches the optimal point. Taking into account the ambiguity of the gradient in the boundary regions of the sign change of the new dual function variables and the increase in the degree of the function degeneracy, we need to use relaxation subgradient methods (RSM), which are difficult to implement and which can solve non-smooth non-convex optimization problems with a high degree of elongation of level surfaces. We propose to use the transformation of the components of dual variables in the form of their square (quadratic method). We prove that the transformed dual function has a Lipschitz gradient under the quadratic transformation. This enables us to use efficient gradient methods to find the extremum. The above properties are confirmed by a computational experiment. With a quadratic transformation, compared to a modulus transformation, it is possible to obtain a solution of the problem by relaxation subgradient methods and smooth function minimization methods (the conjugate gradient method and the quasi-Newton method) with higher accuracy and lower computational costs. The noted transformations of dual variables were used in the program module for calculating the maximum permissible emissions (MPE) of enterprises in the software package for environmental monitoring of atmospheric air (ERA-AIR). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
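A sketch of the substitution the preceding abstract studies, assuming a dual problem posed as maximization over nonnegative multipliers (notation mine, not the paper's): with the componentwise change of variables $\lambda_i = u_i^2$,

$$\max_{\lambda \ge 0} \theta(\lambda) \;\longrightarrow\; \max_{u \in \mathbb{R}^m} \theta(u^2), \qquad \frac{\partial}{\partial u_i}\,\theta(u^2) \;=\; 2u_i\,\frac{\partial \theta}{\partial \lambda_i}(u^2),$$

so the positivity constraints disappear, whereas the modulus substitution $\lambda_i = |u_i|$ leaves a nondifferentiable kink at $u_i = 0$.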
15. A redistributed cutting plane bundle-type algorithm for multiobjective nonsmooth optimization
- Author
- Jia-Tong Li
- Subjects
- multiobjective nonsmooth optimization, cutting-plane model, redistributed bundle method, approximate calculation, subgradient, Mathematics, QA1-939
- Abstract
I construct a new cutting-plane model for approximating nonsmooth nonconvex functions in multiobjective optimization and propose a new bundle-type method with the help of an improvement function. The presented bundle method possesses three features. Firstly, the objective and constraint functions are approximated by a new cutting-plane model, which is a local convexification of the corresponding functions, instead of the entire approximation of the functions, as in most bundle methods. Secondly, the subgradients and values of the objective and constraint functions are computed approximately. In other words, approximate calculation is applied throughout, and the proposed algorithm is doubly approximate to some extent. Thirdly, the introduction of the improvement function eliminates the need for any scalarization, which is the usual approach when dealing with multiobjective optimization. Under reasonable conditions, satisfactory convergence results are obtained.
- Published
- 2022
- Full Text
- View/download PDF
16. Real-Time Pricing Method Based on Lyapunov for Stochastic and Electric Power Demand in Smart Grid
- Author
- Zhang, Yake, Li, Yucong, Ran, Diya, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Liu, Qi, editor, Liu, Xiaodong, editor, Li, Lang, editor, Zhou, Huiyu, editor, and Zhao, Hui-Huang, editor
- Published
- 2021
- Full Text
- View/download PDF
17. An Improved Lagrangian Relaxation Algorithm for Solving the Lower Bound of Production Logistics
- Author
- Yu, Nai-Kang, Hu, Rong, Qian, Bin, Wang, Ling, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Huang, De-Shuang, editor, Jo, Kang-Hyun, editor, Li, Jianqiang, editor, Gribova, Valeriya, editor, and Bevilacqua, Vitoantonio, editor
- Published
- 2021
- Full Text
- View/download PDF
18. Inertial Subgradient Projection Algorithms Extended to Equilibrium Problems.
- Author
- Van Thang, Tran
- Subjects
- SUBGRADIENT methods, EQUILIBRIUM, HILBERT space, ALGORITHMS
- Abstract
In this paper, we introduce a new inertial subgradient projection algorithm for finding a solution of an equilibrium problem in a real Hilbert space. The algorithm combines subgradient projection methods with self-adaptive and inertial techniques to generate an iteration sequence converging weakly to a solution of the equilibrium problem. Based on the proposed algorithm, we develop a modified algorithm for finding a common solution of the equilibrium problem and fixed point problems. Several fundamental experiments are presented to illustrate our algorithms and compare them with some others. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
19. Halpern Subgradient Method for Pseudomonotone Equilibrium Problems in Hilbert Space.
- Author
- TRAN VAN THANG and NGUYEN MINH KHOA
- Subjects
- SUBGRADIENT methods, HILBERT space, VARIATIONAL inequalities (Mathematics), EQUILIBRIUM
- Abstract
In this paper, we introduce a new algorithm for finding a solution of an equilibrium problem in a real Hilbert space. Our paper extends the single-projection method for pseudomonotone variational inequalities, from a 2018 paper of Shehu et al., to pseudomonotone equilibrium problems in a real Hilbert space. On the basis of the given algorithm for the equilibrium problem, we develop a new algorithm for finding a common solution of an equilibrium problem and a fixed point problem. The strong convergence of the algorithm is established under mild assumptions. Several fundamental experiments in finite- and infinite-dimensional spaces are provided to illustrate the numerical behavior of the algorithm for the equilibrium problem and to compare it with other algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. Two inertial proximal coordinate algorithms for a family of nonsmooth and nonconvex optimization problems.
- Author
- Dang, Ya Zheng, Sun, Jie, and Teo, Kok Lay
- Abstract
The inertial proximal method is extended to minimize the sum of a series of separable nonconvex and possibly nonsmooth objective functions and a smooth nonseparable function (possibly nonconvex). Here, we propose two new algorithms. The first one is an inertial proximal coordinate subgradient algorithm, which updates the variables by employing the proximal subgradients of each separable function at the current point. The second one is an inertial proximal block coordinate method, which updates the variables by using the subgradients of the separable functions at the partially updated points. Global convergence is guaranteed under the Kurdyka–Łojasiewicz (KŁ) property and some additional mild assumptions. Convergence rate is derived based on the Łojasiewicz exponent. Two numerical examples are given to illustrate the effectiveness of the algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
21. Newton projection method as applied to assembly simulation.
- Author
- Baklanov, S., Stefanova, M., and Lupuleac, S.
- Subjects
- NEWTON-Raphson method, QUADRATIC programming, JOINING processes, HESSIAN matrices, SPARSE matrices, NONSMOOTH optimization
- Abstract
In this paper, we consider the Newton projection method for solving the quadratic programming problem that emerges in the simulation of the joining process for assembly with compliant parts. This particular class of problems has specific features such as an ill-conditioned Hessian and a sparse matrix of constraints, as well as a requirement for large-scale computations. We use the projected Newton method with a quadratic rate of convergence and suggest some improvements to reduce the solving time: a method for solving the system of linear equations (the so-called constraint recalculation method) and a comparison of different approaches for step-size selection. We use the duality principle to formulate alternative forms of the minimization problem that, as a rule, can be solved faster. We describe how to solve the considered nonlinear minimization problem with a nonsmooth objective function by modifying the Newton projection method and employing subgradients. In addition, we prove the convergence of the suggested algorithm. Finally, we compare the Newton projection method with other quadratic programming techniques on a number of assembly simulation problems. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
22. Re-Weighted Algorithms within the Lagrange Duality Framework: Bringing Interpretability to Weights
- Author
- Valdés, Matías, Fiori, Marcelo, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Morales, Aythami, editor, Fierrez, Julian, editor, Sánchez, José Salvador, editor, and Ribeiro, Bernardete, editor
- Published
- 2019
- Full Text
- View/download PDF
23. Bregman subgradient extragradient method with monotone self-adjustment stepsize for solving pseudo-monotone variational inequalities and fixed point problems.
- Author
- Jolaoso, Lateef Olakunle and Aphane, Maggie
- Subjects
- SUBGRADIENT methods, VARIATIONAL inequalities (Mathematics), HILBERT space, PRIOR learning
- Abstract
Using the concept of Bregman divergence, we propose a new subgradient extragradient method for approximating a common solution of a pseudo-monotone and Lipschitz continuous variational inequality and a fixed point problem in real Hilbert spaces. The algorithm uses a new self-adjustment rule for selecting the stepsize in each iteration, and we prove a strong convergence result for the sequence generated by the algorithm without prior knowledge of the Lipschitz constant. Finally, we provide some numerical examples to illustrate the performance and accuracy of our algorithm in finite- and infinite-dimensional spaces. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
24. Robust Food–Energy–Water–Environmental Security Management: Stochastic Quasigradient Procedure for Linkage of Distributed Optimization Models under Asymmetric Information and Uncertainty.
- Author
- Ermoliev, Y., Zagorodny, A. G., Bogdanov, V. L., Ermolieva, T., Havlik, P., Rovenskaya, E., Komendantova, N., and Obersteiner, M.
- Subjects
- INFORMATION asymmetry, SECURITY management, NONSMOOTH optimization
- Abstract
The paper presents a consistent algorithm for the linkage and optimization of regional and sectoral distributed models under asymmetric information, based on an iterative stochastic quasigradient (SQG) solution procedure for, in general, nonsmooth nondifferentiable optimization. The procedure is used to link individual sectoral and regional models for integrated and interdependent Food–Energy–Water–Environmental security analysis and management. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. Energy Efficient User Association, Power, and Flow Control in Millimeter Wave Backhaul Heterogeneous Networks
- Author
- Sylvester Aboagye, Ahmed Ibrahim, and Telex M. N. Ngatched
- Subjects
- Flow control, generalized assignment problem, Lagrangian multipliers, multiplier adjustment method, power control, subgradient, Telecommunication, TK5101-6720, Transportation and communications, HE1-9990
- Abstract
This paper studies the problem of energy efficiency (EE) maximization via user association, power, and backhaul (BH) flow control in the downlink of millimeter wave BH heterogeneous networks. This problem is mathematically formulated as a mixed-integer non-linear program, which is non-convex. To get a tractable solution, the initial problem is separated into two sub-problems and optimized sequentially. The first is a joint user association and power control sub-problem for the access network (AN) (AN sub-problem). The second is a joint flow and power control sub-problem for the BH network (BH sub-problem). While the BH sub-problem is a convex optimization problem and hence can be efficiently solved, the AN sub-problem assumes the form of a generalized assignment problem, which is known to be NP-hard. To that end, we utilize Lagrangian decomposition to propose two polynomial time solution techniques that obtain a high-quality solution for the AN sub-problem. The first, referred to as Technique A, uses dynamic programming, the subgradient method, and a heuristic. The second, named Technique B, uses the multiplier adjustment method, the sorting algorithm, and a heuristic. Simulation results are used to demonstrate the effectiveness of the proposed energy efficient user association, power, and BH flow control algorithms as compared with benchmark user association schemes that incorporate the BH sub-problem algorithm, in terms of the total AN power, BH power, and overall network (AN plus BH) EE. The computational complexity and practical implementation of the proposed algorithms are discussed.
- Published
- 2020
- Full Text
- View/download PDF
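The multiplier update at the heart of Lagrangian decomposition schemes like the one above follows a standard pattern; a minimal sketch (not the paper's Technique A or B; `solve_lagrangian` and `constraint_violation` are hypothetical oracles):

```python
import numpy as np

def dual_subgradient_ascent(lam0, solve_lagrangian, constraint_violation,
                            n_iter=100, a0=1.0):
    """Generic dual-multiplier update in Lagrangian decomposition."""
    lam = np.asarray(lam0, dtype=float)
    for k in range(1, n_iter + 1):
        x = solve_lagrangian(lam)        # primal minimizer for fixed multipliers
        g = constraint_violation(x)      # a subgradient of the dual function
        lam = np.maximum(0.0, lam + (a0 / k) * g)  # projected ascent step
    return lam
```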
26. Properties of the Quadratic Transformation of Dual Variables
- Author
- Vladimir Krutikov, Elena Tovbis, Anatoly Bykov, Predrag Stanimirovic, Ekaterina Chernova, and Lev Kazakovtsev
- Subjects
- convex programming, strongly convex functions, nonlinear programming, dual problem, subgradient, subgradient methods, Industrial engineering. Management engineering, T55.4-60.8, Electronic computers. Computer science, QA75.5-76.95
- Abstract
We investigate a solution of a convex programming problem with a strongly convex objective function based on the dual approach. A dual optimization problem has constraints on the positivity of variables. We study the methods and properties of transformations of dual variables that enable us to obtain an unconstrained optimization problem. We investigate the previously known method of transforming the components of dual variables in the form of their modulus (modulus method). We show that in the case of using the modulus method, the degree of the degeneracy of the function increases as it approaches the optimal point. Taking into account the ambiguity of the gradient in the boundary regions of the sign change of the new dual function variables and the increase in the degree of the function degeneracy, we need to use relaxation subgradient methods (RSM), which are difficult to implement and which can solve non-smooth non-convex optimization problems with a high degree of elongation of level surfaces. We propose to use the transformation of the components of dual variables in the form of their square (quadratic method). We prove that the transformed dual function has a Lipschitz gradient under the quadratic transformation. This enables us to use efficient gradient methods to find the extremum. The above properties are confirmed by a computational experiment. With a quadratic transformation, compared to a modulus transformation, it is possible to obtain a solution of the problem by relaxation subgradient methods and smooth function minimization methods (the conjugate gradient method and the quasi-Newton method) with higher accuracy and lower computational costs. The noted transformations of dual variables were used in the program module for calculating the maximum permissible emissions (MPE) of enterprises in the software package for environmental monitoring of atmospheric air (ERA-AIR).
- Published
- 2023
- Full Text
- View/download PDF
27. Low-Rank Matrix Recovery with Composite Optimization: Good Conditioning and Rapid Convergence.
- Author
- Charisopoulos, Vasileios, Chen, Yudong, Davis, Damek, Díaz, Mateo, Ding, Lijun, and Drusvyatskiy, Dmitriy
- Subjects
- NONSMOOTH optimization, RESTRICTED isometry property, LOW-rank matrices, SUBGRADIENT methods, LENGTH measurement, MATHEMATICAL optimization
- Abstract
The task of recovering a low-rank matrix from its noisy linear measurements plays a central role in computational science. Smooth formulations of the problem often exhibit an undesirable phenomenon: the condition number, classically defined, scales poorly with the dimension of the ambient space. In contrast, we here show that in a variety of concrete circumstances, nonsmooth penalty formulations do not suffer from the same type of ill-conditioning. Consequently, standard algorithms for nonsmooth optimization, such as subgradient and prox-linear methods, converge at a rapid dimension-independent rate when initialized within constant relative error of the solution. Moreover, nonsmooth formulations are naturally robust against outliers. Our framework subsumes such important computational tasks as phase retrieval, blind deconvolution, quadratic sensing, matrix completion, and robust PCA. Numerical experiments on these problems illustrate the benefits of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
28. Variant of the Cutting Plane Method with Approximation of the Set of Constraints and Auxiliary Functions Epigraphs
- Author
- Zabotin, I. Ya., Kazaeva, K. E., Barbosa, Simone Diniz Junqueira, Series Editor, Chen, Phoebe, Series Editor, Filipe, Joaquim, Series Editor, Kotenko, Igor, Series Editor, Sivalingam, Krishna M., Series Editor, Washio, Takashi, Series Editor, Yuan, Junsong, Series Editor, Zhou, Lizhu, Series Editor, Eremeev, Anton, editor, Khachay, Michael, editor, Kochetov, Yury, editor, and Pardalos, Panos, editor
- Published
- 2018
- Full Text
- View/download PDF
29. Improved Sub-gradient Algorithm for Solving the Lower Bound of No-Wait Flow-Shop Scheduling with Sequence-Dependent Setup Times and Release Dates
- Author
- Yu, Nai-Kang, Hu, Rong, Qian, Bin, Zhang, Zi-Qi, Wang, Ling, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Weikum, Gerhard, Series Editor, Huang, De-Shuang, editor, Gromiha, M. Michael, editor, Han, Kyungsook, editor, and Hussain, Abir, editor
- Published
- 2018
- Full Text
- View/download PDF
30. On a minimum enclosing ball of a collection of linear subspaces.
- Author
- Marrinan, Tim, Absil, P.-A., and Gillis, Nicolas
- Subjects
- GRASSMANN manifolds, PROBLEM solving, VECTOR spaces, MAXIMA & minima, COLLECTIONS
- Abstract
This paper concerns the minimax center of a collection of linear subspaces. For k-dimensional subspaces of an n-dimensional vector space, this can be cast as finding the center of a minimum enclosing ball on a Grassmann manifold. For subspaces of differing dimension, the setting becomes a disjoint union of Grassmannians rather than a single manifold, and the problem is no longer well-defined. However, natural geometric maps exist between these manifolds with a well-defined notion of distance for the images of the subspaces under the mappings. Solving the problem in this context leads to a candidate minimax center on each of the constituent manifolds, but does not provide intuition about which candidate is the best representation of the data. Additionally, the solutions of different rank are generally not nested, so a deflationary approach will not suffice, and the problem must be solved independently on each manifold. We propose an optimization problem parametrized by the rank of the minimax center. The solution is computed with a subgradient algorithm applied to the dual problem. By scaling the objective and penalizing the information lost by the rank-k minimax center, we jointly recover an optimal dimension, k*, and a subspace at the center of the minimum enclosing ball, U*, that best represents the data. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
31. Weak subgradient method for solving nonsmooth nonconvex optimization problems.
- Author
- Dinc Yalcin, Gulcin and Kasimbeyli, Refail
- Subjects
- SUBGRADIENT methods, NONSMOOTH optimization, SET functions, HYPERPLANES, CONES
- Abstract
This paper presents a weak subgradient based method for solving nonconvex optimization problems. The method uses a weak subgradient of the objective function at the current point to generate a new one at every iteration. The concept of the weak subgradient is based on the idea of using supporting cones to the graph of the function under consideration, which replace, in some sense, the supporting hyperplanes used in the subgradient notion of convex analysis. For this reason, the method developed in this paper requires a convexity assumption neither on the objective function nor on the set of feasible solutions. The new method is similar to the subgradient methods of convex analysis and can be considered a generalization of those methods. The paper investigates different stepsize parameters and provides convergence theorems for all cases. A significant difficulty of subgradient methods is the estimation of subgradients at every iteration. In this paper, a method for estimating weak subgradients is also presented. The new method is tested on well-known test problems from the literature, and computational results are reported and interpreted. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
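Background for the preceding entry, in the form this literature usually uses (my notation, not quoted from the paper): a pair $(v, c) \in \mathbb{R}^n \times \mathbb{R}_{+}$ is a weak subgradient of $f$ at $\bar{x}$ if

$$f(x) \;\ge\; f(\bar{x}) + \langle v, x - \bar{x} \rangle - c\,\|x - \bar{x}\| \qquad \text{for all } x,$$

so the graph of $f$ is supported from below by a cone with vertex at $(\bar{x}, f(\bar{x}))$ rather than by a hyperplane, which is why no convexity is needed.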
32. One method for minimization a convex Lipschitz-continuous function of two variables on a fixed square
- Author
- Dmitry Arkad'evich Pasechnyuk and Fedor Sergeevich Stonyakin
- Subjects
- minimization problem, convex functional, Lipschitz-continuous functional, Lipschitz-continuous gradient, non-smooth functional, subgradient, gradient method, ellipsoid method, rate of convergence, Applied mathematics. Quantitative methods, T57-57.97, Mathematics, QA1-939
- Abstract
In the article we obtain some estimates of the rate of convergence for the method recently proposed by Yu. E. Nesterov for minimizing a convex Lipschitz-continuous function of two variables on a square with a fixed side. The idea of the method is to divide the square into smaller parts and gradually remove them so that the remaining part is sufficiently small. The method consists in solving auxiliary problems of one-dimensional minimization along separating segments and does not require computing the exact value of the gradient of the objective functional. The main result of the paper is proved for the class of smooth convex functions having a Lipschitz-continuous gradient. Moreover, it is noted that it suffices to require the Lipschitz continuity of the gradient not on the whole square but only on some segments. It is shown that the method can work in the presence of errors in solving the auxiliary one-dimensional problems, as well as in calculating the direction of gradients. We also describe the situation when it is possible to neglect or reduce the time spent on solving the auxiliary one-dimensional problems. For some examples, experiments have demonstrated that the method can work effectively on some classes of non-smooth functions. In this case, an example of a simple non-smooth function is constructed for which, if the subgradient is chosen incorrectly, the convergence property of the method may fail even when the auxiliary one-dimensional problem is solved exactly. Experiments have shown that the method under consideration can achieve the desired accuracy in less time than the other methods considered (gradient descent and the ellipsoid method). In particular, it is noted that as the accuracy of the desired solution increases, the running time of Nesterov's method can grow more slowly than that of the ellipsoid method.
- Published
- 2019
- Full Text
- View/download PDF
33. Accelerating Two Projection Methods via Perturbations with Application to Intensity-Modulated Radiation Therapy.
- Author
- Bonacker, Esther, Gibali, Aviv, and Küfer, Karl-Heinz
- Subjects
- RADIOTHERAPY, NONLINEAR equations, CONSTRAINED optimization, INTENSITY modulated radiotherapy
- Abstract
Constrained convex optimization problems arise naturally in many real-world applications. One strategy to solve them in an approximate way is to translate them into a sequence of convex feasibility problems via the recently developed level set scheme and then solve each feasibility problem using projection methods. However, if the problem is ill-conditioned, projection methods often show zigzagging behavior and therefore converge slowly. To address this issue, we exploit the bounded perturbation resilience of the projection methods and introduce two new perturbations which avoid zigzagging behavior. The first perturbation is in the spirit of k-step methods and uses gradient information from previous iterates. The second uses the approach of surrogate constraint methods combined with relaxed, averaged projections. We apply two different projection methods in the unperturbed version, as well as the two perturbed versions, to linear feasibility problems along with nonlinear optimization problems arising from intensity-modulated radiation therapy (IMRT) treatment planning. We demonstrate that for all the considered problems the perturbations can significantly accelerate the convergence of the projection methods and hence the overall procedure of the level set scheme. For the IMRT optimization problems the perturbed projection methods found an approximate solution up to 4 times faster than the unperturbed methods while at the same time achieving objective function values which were 0.5 to 5.1% lower. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
34. New projection methods for equilibrium problems over fixed point sets.
- Author
- Anh, Pham Ngoc and Van Hong, Nguyen
- Abstract
In this paper, we introduce some new approximation projection algorithms for solving monotone equilibrium problems over the intersection of fixed point sets of demicontractive mappings. By combining subgradient projection methods and hybrid steepest descent methods, strong convergence of the algorithms to a solution is shown in a real Hilbert space. Some numerical illustrations and comparisons are also reported to show the efficiency and advantage of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. Worst-Case Robust MIMO Transmission Based on Subgradient Projection.
- Author
- Zhou, Wen, Li, Chunguo, and Hua, Min
- Abstract
This letter studies robust multi-input multi-output (MIMO) transmission strategies, where the transmitter only has imperfect channel state information (CSI) and the transmission covariance matrix is designed to maximize the worst-case system capacity. We first extend Danskin's Theorem to complex spaces with the aid of Wirtinger subgradients. Then, with the extended theorem, we propose a subgradient projection (SP) based algorithm to solve for the covariance matrix, in which finding the worst channel error is key and can be formulated as a nonconvex minimization problem. We deploy the successive convex approximation (SCA) technique and create a sequence of convex problems to solve this problem. Simulation results demonstrate the robustness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
36. Path-based system optimal dynamic traffic assignment: A subgradient approach.
- Author
- Zhang, Pinchao and Qian, Sean
- Subjects
- TRAFFIC assignment, DYNAMICAL systems, DIRECT costing, ANALYTICAL solutions, HEURISTIC algorithms, PROBLEM solving, NETWORK hubs
- Abstract
• We solve for the system-optimal dynamic traffic assignment considering subgradients with respect to path flow.
• The system cost can be non-differentiable with respect to the link/path flow, especially when the flow is close to or under the SO conditions.
• Overlooking the non-differentiability can lead to convergence failure or incorrect solutions even in toy networks.
• We show how to compute the subgradients, namely the lower and upper limits of path marginal costs.
• We develop PMC-based necessary conditions for SO solutions and design heuristic solution algorithms.
The system-optimal dynamic traffic assignment (SO-DTA) problem aims at solving for the time-dependent link and path flow of a network that yields the minimal total system cost, provided with the Origin-Destination (O-D) demand. The key to solving the path-based formulation of SO-DTA is to efficiently compute the path marginal cost (PMC). Existing studies implicitly assume that the total system cost (TC) is always differentiable with respect to the path flow when computing the PMC. We show that the TC can be non-differentiable with respect to the link/path flow in some cases, especially when the flow is close to or under the SO conditions. Overlooking this fact can lead to convergence failure or incorrect solutions when numerically solving the SO-DTA problem. In this paper we demonstrate when the TC is non-differentiable and how to compute the subgradients, namely the lower and upper limits of path marginal costs. We examine the relations between the discontinuity of the PMC and the SO conditions, develop PMC-based necessary conditions for SO solutions, and finally design heuristic solution algorithms for solving SO in general networks with multi-origin multi-destination O-D demands. These algorithms are tested against existing algorithms in four numerical experiments: two toy networks where we compare analytical with numerical solutions, one small network, and one sizable real-world network. We show that the proposed heuristic algorithms outperform existing ones in terms of the total TC, convergence, and the resultant path/link flow. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
37. Lagrangian relaxation of the generic materials and operations planning model.
- Author
- Rius-Sorolla, G., Maheut, J., Coronado-Hernandez, Jairo R., and Garcia-Sabater, J. P.
- Subjects
- SUBGRADIENT methods, MATHEMATICAL programming, SUPPLY chain management, PRODUCTION planning, MANUFACTURING processes
- Abstract
Supply chain management increasingly requires production planning proposals that bring together its special singularities. Mathematical programming models must allow for coexisting products and alternative processes or by-products. The generic materials and operations planning (GMOP) formulation allows operating with different material and process lists. The paper presents a procedure to solve the versatile GMOP model by Lagrangian relaxation. The subgradient update method for the Lagrangian multipliers is compared with a linear update method. The subgradient method obtains lower bounds faster than the linear method, but the linear method provides better solutions after a certain number of iterations. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
38. Network coded multicast for multi-hop D2D communication systems.
- Author
- Mei, Zhonghui, Lu, Youxiong, and Huang, Shuanghong
- Subjects
- TELECOMMUNICATION systems, LINEAR network coding, MULTICASTING (Computer networks), WIRELESS communications, GREEDY algorithms, CELL phone systems, MATHEMATICAL optimization
- Abstract
In this paper, we consider wireless communication systems in which multi-hop Device-to-Device (D2D) networks can coexist with conventional cellular networks by sharing the downlink resource of cellular users (CUs). A multicast data flow is distributed over the multi-hop D2D networks, where network coding (NC) can be employed at the intermediate nodes. To maximize the utility of the multicast flow, we formulate a joint optimization problem for these systems while guaranteeing the quality-of-service (QoS) for regular CUs. We propose a subgradient algorithm to solve the optimization problem by decomposing it into three sub-problems: multicast rate control, NC subgraph selection, and downlink resource reusing. In particular, we develop a greedy algorithm to deal with the downlink resource reusing sub-problem, since it is NP-hard. Numerical and simulation results prove the superior performance of the proposed techniques compared with the conventional routing scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
39. Stochastic Subgradient Method Converges on Tame Functions.
- Author
- Davis, Damek, Drusvyatskiy, Dmitriy, Kakade, Sham, and Lee, Jason D.
- Subjects
- SUBGRADIENT methods, DEEP learning, DIFFERENTIAL inclusions, DATA science, LYAPUNOV functions
- Abstract
This work considers the question: what convergence guarantees does the stochastic subgradient method have in the absence of smoothness and convexity? We prove that the stochastic subgradient method, on any semialgebraic locally Lipschitz function, produces limit points that are all first-order stationary. More generally, our result applies to any function with a Whitney stratifiable graph. In particular, this work endows the stochastic subgradient method, and its proximal extension, with rigorous convergence guarantees for a wide class of problems arising in data science—including all popular deep learning architectures. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
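Background for the preceding entry (the standard setup in this line of analysis, not quoted from the paper): the stochastic subgradient method iterates

$$x_{k+1} \;=\; x_k - \alpha_k\,(g_k + \xi_k), \qquad g_k \in \partial f(x_k),$$

with diminishing steps $\alpha_k$ and zero-mean noise $\xi_k$, and its limit points are analyzed by showing that the iterates asymptotically track trajectories of the differential inclusion $\dot{x}(t) \in -\partial f(x(t))$.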
40. Integrality of subgradients and biconjugates of integrally convex functions.
- Author
- Murota, Kazuo and Tamura, Akihisa
- Abstract
Integrally convex functions constitute a fundamental function class in discrete convex analysis. This paper shows that an integer-valued integrally convex function admits an integral subgradient and that the integral biconjugate of an integer-valued integrally convex function coincides with itself. The proof is based on the Fourier–Motzkin elimination. The latter result provides a unified proof of integral biconjugacy for various classes of integer-valued discrete convex functions, including L-convex, M-convex, L2-convex, M2-convex, BS-convex, and UJ-convex functions as well as multimodular functions. Our results of integral subdifferentiability and integral biconjugacy make it possible to extend the theory of discrete DC (difference of convex) functions developed for L- and M-convex functions to that for integrally convex functions, including an analogue of the Toland–Singer duality for integrally convex functions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
41. An Optimal Data Fusion Algorithm in the Presence of Unknown Cross Covariances.
- Author
- Zhang, Xiaohai
- Subjects
- ALGORITHMS, COVARIANCE matrices, SYMMETRIC matrices, MULTISENSOR data fusion, CROSSES, DATA integration
- Abstract
This paper presents an optimal data fusion formulation and algorithm, in the sense of minimum mean-square error, for the case when some or all cross covariances are unknown. The algorithm is generic in that it is capable of processing any number of measurement vectors of any dimension with any pattern of unknown cross covariances. A closed-form solution is provided for the case when all cross covariances are unknown and all measurement covariance matrices are diagonal. A numerical projected-subgradient optimal fusion algorithm is provided for the most generic case. The well-known covariance intersection method is shown to be a weaker upper bound of this formulation. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
42. The Optimization Model and Algorithm for High-Speed Railway Station Operation Based on Lagrangian Relaxation
- Author
- Bai, Zixi, Zhou, Leishan, Jia, Limin, editor, Liu, Zhigang, editor, Qin, Yong, editor, Zhao, Minghua, editor, and Diao, Lijun, editor
- Published
- 2014
- Full Text
- View/download PDF
43. Approximations of Subdifferentials
- Author
- Bagirov, Adil, Karmitsa, Napsu, and Mäkelä, Marko M.
- Published
- 2014
- Full Text
- View/download PDF
44. Convex Analysis
- Author
- Bagirov, Adil, Karmitsa, Napsu, and Mäkelä, Marko M.
- Published
- 2014
- Full Text
- View/download PDF
45. PROXIMALLY GUIDED STOCHASTIC SUBGRADIENT METHOD FOR NONSMOOTH, NONCONVEX PROBLEMS.
- Author
- DAVIS, DAMEK and GRIMMER, BENJAMIN
- Subjects
- SUBGRADIENT methods, STOCHASTIC analysis, ALGORITHMS, CONVEX functions, NONSMOOTH optimization
- Abstract
In this paper, we introduce a stochastic projected subgradient method for weakly convex (i.e., uniformly prox-regular) nonsmooth, nonconvex functions, a wide class which includes the additive and convex composite classes. At a high level, the method is an inexact proximal-point iteration in which the strongly convex proximal subproblems are quickly solved with a specialized stochastic projected subgradient method. The primary contribution of this paper is a simple proof that the proposed algorithm converges at the same rate as the stochastic gradient method for smooth nonconvex problems. This result appears to be the first convergence rate analysis of a stochastic (or even deterministic) subgradient method for the class of weakly convex functions. In addition, a two-phase variant is proposed that significantly reduces the variance of the solutions returned by the algorithm. Finally, preliminary numerical experiments are also provided. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
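A minimal sketch of the inexact proximal-point pattern the preceding abstract describes; the step sizes and averaging weights below are illustrative placeholders, not the paper's exact choices, and `stoch_subgrad` is a hypothetical stochastic subgradient oracle:

```python
import numpy as np

def proximally_guided_ssg(x0, stoch_subgrad, rho=1.0,
                          outer_iters=50, inner_iters=100):
    """Outer proximal-point loop; each strongly convex subproblem is
    approximately solved by an inner stochastic subgradient loop."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        center = x.copy()
        y = x.copy()
        y_avg = np.zeros_like(x)
        for t in range(1, inner_iters + 1):
            # Subgradient of the strongly convex model f(y) + rho*||y - center||^2.
            g = stoch_subgrad(y) + 2.0 * rho * (y - center)
            y = y - (2.0 / (rho * (t + 1))) * g
            # Weighted averaging; the weights sum to 1 over the inner loop.
            y_avg += (2.0 * t / (inner_iters * (inner_iters + 1))) * y
        x = y_avg  # approximate proximal point becomes the next center
    return x
```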
46. On Solving Nonsmooth Mixed-Integer Nonlinear Programming Problems by Outer Approximation and Generalized Benders Decomposition.
- Author
- Wei, Zhou, Ali, M. Montaz, Xu, Liang, Zeng, Bo, and Yao, Jen-Chih
- Subjects
- NONLINEAR programming, NONSMOOTH optimization, NONLINEAR equations, CONVEX functions, APPROXIMATION algorithms, PROBLEM solving
- Abstract
In this paper, we mainly study nonsmooth mixed-integer nonlinear programming problems and solution algorithms based on outer approximation and generalized Benders decomposition. Outer approximation and generalized Benders algorithms are provided to solve these problems with nonsmooth convex functions and with conic constraints, respectively. We illustrate these two algorithms by providing detailed procedures for solving several examples. The numerical examples show that outer approximation and generalized Benders decomposition provide a feasible alternative for solving such problems without differentiability. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
47. STOCHASTIC MODEL-BASED MINIMIZATION OF WEAKLY CONVEX FUNCTIONS.
- Author
- DAVIS, DAMEK and DRUSVYATSKIY, DMITRIY
- Subjects
- SMOOTHNESS of functions, CONVEX functions, NEWTON-Raphson method, STOCHASTIC approximation, STOCHASTIC convergence, CONVEX sets, STOCHASTIC models, ALGORITHMS
- Abstract
We consider a family of algorithms that successively sample and minimize simple stochastic models of the objective function. We show that under reasonable conditions on approximation quality and regularity of the models, any such algorithm drives a natural stationarity measure to zero at the rate $O(k^{-1/4})$. As a consequence, we obtain the first complexity guarantees for the stochastic proximal point, proximal subgradient, and regularized Gauss–Newton methods for minimizing compositions of convex functions with smooth maps. The guiding principle, underlying the complexity guarantees, is that all algorithms under consideration can be interpreted as approximate descent methods on an implicit smoothing of the problem, given by the Moreau envelope. Specializing to classical circumstances, we obtain the long-sought convergence rate of the stochastic projected gradient method, without batching, for minimizing a smooth function on a closed convex set. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
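The implicit smoothing mentioned in the preceding abstract is the Moreau envelope; as background (standard definition, not quoted from the paper), for $\lambda > 0$,

$$f_\lambda(x) \;=\; \min_{y}\Bigl\{ f(y) + \frac{1}{2\lambda}\,\|y - x\|^{2} \Bigr\},$$

and the natural stationarity measure driven to zero at rate $O(k^{-1/4})$ is the envelope's gradient norm $\|\nabla f_\lambda(x_k)\|$.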
48. On the Properties of the Method of Minimization for Convex Functions with Relaxation on the Distance to Extremum.
- Author
- Krutikov, V. N., Samoilenko, N. S., and Meshechkin, V. V.
- Subjects
- GEOMETRIC series, SUBGRADIENT methods, CONVEX functions, DISTANCES, EQUATIONS
- Abstract
We present a subgradient minimization method, similar to the method of minimal iterations for solving systems of equations, which inherits from the latter its convergence properties on quadratic functions. The proposed algorithm, for a certain set of parameters, coincides with the previously known method for minimizing piecewise linear functions and is an element of the family of minimization methods with relaxation of the distance to the extremum, developed by B. T. Polyak, in which the step length is calculated from a predefined minimum value of the function. We link the parameters of this method to the constraint on the degree of homogeneity of the function and obtain estimates of its convergence rate on convex functions. We prove that on some classes of functions it converges at the rate of a geometric progression. We also discuss the computational capabilities of this approach for solving high-dimensional problems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
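The step-length rule this family of methods relies on is the classical Polyak step with a known minimum value $f^{*}$ (standard background, not the paper's modified method):

$$x_{k+1} \;=\; x_k - t_k\,g_k, \qquad t_k \;=\; \frac{f(x_k) - f^{*}}{\|g_k\|^{2}}, \quad g_k \in \partial f(x_k).$$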
49. A subgradient heuristic road-traffic compensation strategy based on delay functions.
- Author
- 刘艳锐, 姚 迪, and 李金培
- Subjects
- TRAFFIC congestion, TOLL roads, PRICING, TRAFFIC flow, CONGESTION pricing, TRAVEL time (Traffic engineering), CONSTRAINT programming, NONLINEAR programming
- Abstract
This paper considers congestion pricing as an effective way to relieve traffic congestion: the main idea is to charge for roads that easily become congested and to provide appropriate compensation for other, underutilized roads. It presents a subgradient heuristic traffic compensation strategy based on delay functions. First, it formulates road tolls/subsidies with a nonlinear programming model, realized mainly by minimizing a Beckmann-type objective function, and constructs the constraint conditions with a Kuhn–Tucker Lagrange multiplier model. Second, the algorithm uses the pricing compensation strategy to establish a road traffic model based on a heuristic algorithm, and uses a marginal-cost model to establish a delay-function analysis model. Finally, simulation experiments on real road networks show that the proposed algorithm performs better in terms of travel time, traffic flow, and convergence, verifying its effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
50. The Ellipsoid Method for Minimizing a Convex Function.
- Author
- Стецюк, П. І., Фішер, А., and Ляшко, В. І.
- Abstract
We consider the generalized ellipsoid method, an algorithm with space dilation. For a certain choice of the dilation coefficient, this is a method of outer approximation of semi-ellipsoids by ellipsoids with a monotone decrease in their volume. The Yudin-Nemirovski-Shor ellipsoid method is a special case. The paper provides properties of two algorithmic realizations of the generalized ellipsoid method. The first algorithm is based on updating a nonsymmetric matrix B, as in the Shor ellipsoid method, and the second is based on updating a symmetric matrix H = BB^T, as in the Yudin-Nemirovski ellipsoid method. We present the Emshor algorithm (Ellipsoid Method of Shor) for computing a solution of the problem of unconstrained minimization of a convex function. It updates a nonsymmetric matrix B and uses a stopping criterion that, for a convex function, guarantees finding a point at which the function value deviates by no more than a specified tolerance from the optimal value. It is shown that the Emshor algorithm finds sufficiently accurate approximations to the minimum point of a ravine convex function, and for functions of twenty variables it takes no longer than a few seconds on an ordinary PC. Hence, the algorithm can be useful for solving small optimization problems. The Emshor algorithm is planned to be used to develop a dual method for solving a two-stage transportation problem, when the number of intermediate points is not greater than twenty. The computational complexity of the dual method is determined by the complexity of calculating the value of the dual function and its supergradient, for which we need to find, no more than twenty times, the minimum elements of two one-dimensional arrays whose lengths m and n correspond to the numbers of suppliers and consumers. This means that the dual method can be oriented to the case of large m and n (thousands, tens of thousands), for which solving the linear programming problems corresponding to the two-stage transportation problem by general-purpose programs is impossible or requires significant computational resources. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
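A minimal sketch of the classical Yudin-Nemirovski-Shor ellipsoid method, the special case the preceding abstract mentions (not the Emshor algorithm itself); `subgrad` is a hypothetical subgradient oracle, and `x0`, `R` describe a ball assumed to contain a minimizer:

```python
import numpy as np

def ellipsoid_method(x0, subgrad, R=10.0, n_iter=500):
    """Central-cut ellipsoid method for a convex function; requires
    dimension n >= 2 for the volume-shrinking update below."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = (R ** 2) * np.eye(n)   # ellipsoid {y : (y-x)^T H^{-1} (y-x) <= 1}
    for _ in range(n_iter):
        g = subgrad(x)
        Hg = H @ g
        denom = np.sqrt(g @ Hg)
        if denom <= 1e-15:     # degenerate cut: x is (near-)optimal
            break
        b = Hg / denom         # cut direction scaled to the ellipsoid metric
        x = x - b / (n + 1.0)
        H = (n ** 2 / (n ** 2 - 1.0)) * (H - (2.0 / (n + 1.0)) * np.outer(b, b))
    return x
```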