1,568 results
Search Results
2. Correspondence between a new pair of nondifferentiable mixed dual vector programs and higher-order generalized convexity.
- Author
Kailey, N., Sonali Sethi, and Dhingra, Vivek
- Abstract
In this paper, a new pair of higher-order nondifferentiable multiobjective mixed symmetric dual programs over arbitrary cones is formulated, where each of the objective functions contains a support function of a compact convex set. The usual duality theorems are established under higher-order K-(F, α, ρ, d)-convexity assumptions. An example of a higher-order dual pair is also given, showing that the higher-order formulation provides tighter bounds on the objective values of the primal and dual problems. Several known results are also discussed as special cases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
3. Inexact-restoration modelling with monotone interpolation and parameter estimation.
- Author
Martínez, J. M. and Santos, L. T.
- Abstract
Complex real-life problems may be simplified in many possible ways. In several recent papers exactness requirements were considered as constraints of optimization problems and were handled with the tools of inexact restoration. Such techniques are revisited, generalized and simplified in the present paper. As a consequence, a new algorithm is introduced and applied to the estimation of parameters in hydraulic one-dimensional models. Moreover, the simplification includes the representation of well-known hydraulic parameters by a new family of monotone interpolatory functions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
4. Sequential M-Stationarity Conditions for General Optimization Problems.
- Author
Movahedian, Nooshin and Pourahmad, Fatemeh
- Abstract
In this paper, we investigate sequential M-stationarity conditions for a class of nonsmooth nonconvex general optimization problems. We introduce various types of such conditions and compare them with previously established conditions in smooth or convex cases. The application of the derived results is demonstrated in the context of nonsmooth sparsity-constrained optimization problems. Additionally, we devise a Lagrangian-type algorithm for a specific case of smooth sparsity problems. Several examples are presented throughout the paper to illustrate the results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
5. Another hybrid conjugate gradient method as a convex combination of WYL and CD methods.
- Author
Guefassa, Imane, Chaib, Yacine, and Bechouat, Tahar
- Subjects
NUMERICAL functions, LINEAR equations, LINEAR systems, PROBLEM solving, ALGORITHMS, NONLINEAR equations
- Abstract
Conjugate gradient (CG) methods are a popular class of iterative methods for solving linear systems of equations and nonlinear optimization problems. In this paper, a new hybrid CG method is presented and analyzed for solving unconstrained optimization problems, where the parameter β_k is a convex combination of β_k^WYL and β_k^CD. Under the strong Wolfe line search, the new method possesses the sufficient descent condition and global convergence properties. Preliminary numerical results show the efficiency of our method in comparison with other CG methods. Furthermore, the proposed algorithm HWYLCD is extended to solve a mode function problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
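The convex-combination parameter described in the abstract above can be sketched in a few lines. The fixed weight `theta` and the test vectors below are illustrative assumptions; the paper derives its own combination parameter and line search.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def beta_wyl(g_new, g_old):
    # Wei-Yao-Liu conjugate parameter
    ratio = math.sqrt(dot(g_new, g_new) / dot(g_old, g_old))
    return dot(g_new, [a - ratio * b for a, b in zip(g_new, g_old)]) / dot(g_old, g_old)

def beta_cd(g_new, g_old, d_old):
    # Conjugate Descent (Fletcher) parameter
    return dot(g_new, g_new) / (-dot(d_old, g_old))

def beta_hybrid(g_new, g_old, d_old, theta):
    # convex combination of the two parameters, theta in [0, 1]
    return theta * beta_wyl(g_new, g_old) + (1 - theta) * beta_cd(g_new, g_old, d_old)
```

The hybrid parameter then defines the search direction d_k = -g_k + β_k d_{k-1} in the usual CG iteration.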
6. Effective matrix adaptation strategy for noisy derivative-free optimization.
- Author
Kimiaei, Morteza and Neumaier, Arnold
- Abstract
In this paper, we introduce a new effective matrix adaptation evolution strategy (MADFO) for noisy derivative-free optimization problems. Like every MAES solver, MADFO consists of three phases: mutation, selection and recombination. MADFO improves the mutation phase by generating good step sizes, neither too small nor too large, that increase the probability of selecting mutation points with small inexact function values in the selection phase. In the recombination phase, a recombination point with the lowest inexact function value found among all evaluated points so far may be found by a new randomized non-monotone line search method and accepted as the best point. If no such point is found, a heuristic point may be accepted as the best point. We compare MADFO with state-of-the-art DFO solvers on noisy test problems obtained by adding various kinds and levels of noise to all unconstrained CUTEst test problems with dimensions n ≤ 20, and find that MADFO solves the highest number of problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
7. A trust-region framework for derivative-free mixed-integer optimization.
- Author
Torres, Juan J., Nannicini, Giacomo, Traversi, Emiliano, and Wolfler Calvo, Roberto
- Abstract
This paper overviews the development of a framework for the optimization of black-box mixed-integer functions subject to bound constraints. Our methodology is based on the use of tailored surrogate approximations of the unknown objective function, in combination with a trust-region method. To construct suitable model approximations, we assume that the unknown objective is locally quadratic, and we prove that this leads to fully-linear models in restricted discrete neighborhoods. We show that the proposed algorithm converges to a first-order mixed-integer stationary point according to several natural definitions of mixed-integer stationarity, depending on the structure of the objective function. We present numerical results to illustrate the computational performance of different implementations of this methodology in comparison with the state-of-the-art derivative-free solver NOMAD. [ABSTRACT FROM AUTHOR]
- Published
- 2024
8. Complexity of a projected Newton-CG method for optimization with bounds.
- Author
Xie, Yue and Wright, Stephen J.
- Subjects
NEWTON-Raphson method, CONJUGATE gradient methods, LOW-rank matrices, DEFINITIONS, ALGORITHMS
- Abstract
This paper describes a method for solving smooth nonconvex minimization problems subject to bound constraints with good worst-case complexity guarantees and practical performance. The method contains elements of two existing methods: the classical gradient projection approach for bound-constrained optimization and a recently proposed Newton-conjugate gradient algorithm for unconstrained nonconvex optimization. Using a new definition of approximate second-order optimality parametrized by some tolerance ϵ (which is compared with related definitions from previous works), we derive complexity bounds in terms of ϵ for both the number of iterations required and the total amount of computation. The latter is measured by the number of gradient evaluations or Hessian-vector products. We also describe illustrative computational results on several test problems from low-rank matrix optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2024
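The classical gradient projection component mentioned in the abstract above admits a very small sketch: step along the negative gradient, then clip back into the box [lo, hi]. The quadratic objective and step size below are illustrative assumptions, not the paper's Newton-CG machinery.

```python
def clip(x, lo, hi):
    # componentwise projection onto the box [lo, hi]
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def projected_gradient(x, grad, step, lo, hi, iters=100):
    # gradient projection: move along the negative gradient, project back onto the box
    for _ in range(iters):
        g = grad(x)
        x = clip([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x
```

For instance, minimizing ||x - c||^2 over [0, 1]^2 returns the projection of c onto the box.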
9. Iterative methods for solving tensor equations based on exponential acceleration.
- Author
Liang, Maolin, Dai, Lifang, and Zhao, Ruijuan
- Subjects
NEWTON-Raphson method, SIGNAL processing, EQUATIONS, STATISTICS
- Abstract
The tensor equation A x^{m-1} = b, where A is a tensor of order m and dimension n and b is a vector, has practical applications in several fields including signal processing, high-dimensional PDEs, and high-order statistics. In this paper, a class of exponentially accelerated iterative methods is proposed for solving this tensor equation when the coefficient tensor A is a symmetric nonsingular or singular M-tensor. The obtained iterative schemes include the classical Newton's method as a special case. It is shown that the proposed method is superlinearly convergent in the nonsingular case and linearly convergent in the singular case. The performed numerical experiments demonstrate that our methods outperform some existing ones. [ABSTRACT FROM AUTHOR]
- Published
- 2024
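For intuition on the equation A x^{m-1} = b above, here is a plain Newton sketch (not the paper's exponentially accelerated scheme) for a tiny order-3, dimension-2 case. The diagonal test tensor and the hand-rolled 2x2 Cramer solve are assumptions chosen to keep the example dependency-free; the Jacobian formula assumes A is symmetric in its last two indices.

```python
def apply(A, x):
    # (A x^2)_i for a third-order tensor A
    n = len(x)
    return [sum(A[i][j][k] * x[j] * x[k] for j in range(n) for k in range(n)) for i in range(n)]

def jacobian(A, x):
    # J_ij = 2 * sum_k A[i][j][k] x_k, valid when A is symmetric in its last two indices
    n = len(x)
    return [[2.0 * sum(A[i][j][k] * x[k] for k in range(n)) for j in range(n)] for i in range(n)]

def solve2x2(J, r):
    # Cramer's rule for the 2x2 Newton system J d = r
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(r[0] * J[1][1] - r[1] * J[0][1]) / det,
            (J[0][0] * r[1] - J[1][0] * r[0]) / det]

def newton(A, b, x, iters=30):
    # classical Newton iteration for F(x) = A x^2 - b = 0
    for _ in range(iters):
        r = [bi - fi for bi, fi in zip(b, apply(A, x))]
        d = solve2x2(jacobian(A, x), r)
        x = [xi + di for xi, di in zip(x, d)]
    return x
```

With a diagonal tensor the equation reduces to x_i^2 = b_i, so the iteration is exactly the textbook square-root Newton method.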
10. A stochastic two-step inertial Bregman proximal alternating linearized minimization algorithm for nonconvex and nonsmooth problems.
- Author
Guo, Chenzheng, Zhao, Jing, and Dong, Qiao-Li
- Subjects
NONNEGATIVE matrices, MATRIX decomposition, SPARSE matrices, ALGORITHMS, NONSMOOTH optimization
- Abstract
In this paper, for solving a broad class of large-scale nonconvex and nonsmooth optimization problems, we propose a stochastic two-step inertial Bregman proximal alternating linearized minimization (STiBPALM) algorithm with variance-reduced stochastic gradient estimators, and we show that SAGA and SARAH qualify as such variance-reduced estimators. Under expectation conditions with the Kurdyka–Łojasiewicz property and suitable conditions on the parameters, we show that the sequence generated by the proposed algorithm converges to a critical point, and a general convergence rate is also provided. Numerical experiments on sparse nonnegative matrix factorization and blind image deblurring demonstrate the performance of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
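The variance-reduced estimators mentioned above can be illustrated with SAGA in its simplest scalar form: a table stores the last gradient seen for each component, and the estimator g_new - table[i] + mean(table) is unbiased with vanishing variance as the iterates settle. The finite-sum least-squares objective, step size, and iteration count below are assumptions for the sketch, not the STiBPALM setting.

```python
import random

def saga_minimize(grad_i, n, x0, step, iters, seed=0):
    # SAGA on a scalar finite-sum problem (1/n) * sum_i f_i(x)
    random.seed(seed)
    x = x0
    table = [grad_i(i, x) for i in range(n)]   # last gradient seen per component
    avg = sum(table) / n
    for _ in range(iters):
        i = random.randrange(n)
        g_new = grad_i(i, x)
        v = g_new - table[i] + avg             # unbiased variance-reduced estimator
        avg += (g_new - table[i]) / n          # keep the running mean in sync
        table[i] = g_new
        x -= step * v
    return x
```

Minimizing (1/n) Σ (x - a_i)^2 / 2 this way drives x to the mean of the a_i, with the constant step size that plain SGD could not sustain.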
11. On New Generalized Differentials with Respect to a Set and Their Applications.
- Author
Qin, Xiaolong, Duc Thinh, Vo, and Yao, Jen-Chih
- Abstract
The notions and certain fundamental characteristics of the proximal and limiting normal cones with respect to a set are first presented in this paper. Based on these normal cones, we introduce the limiting coderivative and subdifferential with respect to a set for multifunctions and singleton mappings, respectively. Necessary and sufficient conditions for the Aubin property with respect to a set of multifunctions are then described using the limiting coderivative with respect to a set. Using the limiting subdifferential with respect to a set, we derive the requisite optimality criteria for local solutions to optimization problems. In addition, we provide examples to demonstrate the results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
12. Fractional semi-infinite programming problems: optimality conditions and duality via tangential subdifferentials.
- Author
Tripathi, Indira P. and Arora, Mahamadsohil A.
- Abstract
In this paper, we focus on multi-objective fractional semi-infinite programming problems in which the constraints and objective functions are tangentially convex. A result is established to find the tangential subdifferential of a fractional function, assuming the numerator and the negative of the denominator to be tangentially convex functions. With this, optimality conditions are derived using a non-parametric approach under ϝ-convexity assumptions. Further, a Mond–Weir type dual is considered and weak and strong duality relations are developed. Moreover, an application in robot trajectory planning is considered and solved using MATLAB. In addition, considering the same trajectory as in Vaz et al. (Eur J Oper Res 153(3):607–617, 2004), we compare the results obtained in MATLAB with those available in Vaz et al. (Eur J Oper Res 153(3):607–617, 2004) and Haaren-Retagne (A semi-infinite programming algorithm for robot trajectory planning, 1992), where the problems were solved using AMPL. Our results prove more efficient than the previously available ones, as the MATLAB implementation substantially reduces the computational time. Throughout the paper, nontrivial examples are provided to justify the theorems developed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
13. A test instance generator for multiobjective mixed-integer optimization.
- Author
Eichfelder, Gabriele, Gerlach, Tobias, and Warnow, Leo
- Subjects
NONLINEAR equations, INTEGERS, ALGORITHMS
- Abstract
Application problems often cannot be solved adequately by numerical algorithms, as several difficulties may arise at the same time. When developing and improving algorithms that will hopefully handle those difficulties in the future, good test instances are required. These can then be used to detect the strengths and weaknesses of different algorithmic approaches. In this paper we present a generator for test instances to evaluate solvers for multiobjective mixed-integer linear and nonlinear optimization problems. Based on test instances for purely continuous and purely integer problems with known efficient solutions and known nondominated points, suitable multiobjective mixed-integer test instances can be generated. The special structure allows us to construct instances scalable in the number of variables and objective functions. Moreover, it allows us to control the resulting efficient and nondominated sets as well as the number of efficient integer assignments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
14. A criterion space algorithm for solving linear multiplicative programming problems.
- Author
Shen, Peiping, Deng, Yaping, and Wu, Dianxiao
- Subjects
BRANCH & bound algorithms, LINEAR programming, ALGORITHMS, RELAXATION techniques
- Abstract
In this paper, we develop a branch and bound algorithm for globally solving linear multiplicative programs (LMP). The problem LMP is firstly converted to an equivalent problem (EP2), and then a novel linear relaxation technique is constructed for (EP2) to get the lower bound of the optimum of LMP. Subsequently, the presented region pruning technique aims to eliminate as many areas as possible where the optimal solution does not exist. The convergence analysis of the algorithm is provided, and the number of worst-case iterations required is estimated to obtain an ε -optimal solution. Finally, the numerical experiments show the effectiveness of our presented algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
15. An inertial method for solving split equality quasimonotone Minty variational inequality problems in reflexive Banach spaces.
- Author
Belay, Yirga A., Zegeye, Habtu, Boikanyo, Oganeditse A., Gidey, Hagos H., and Kagiso, Dintle
- Abstract
In this paper, we introduce the split equality Minty variational inequality problem in reflexive real Banach spaces. Then we construct a single projection inertial algorithm for solving the introduced problem. We establish a strong convergence result with the assumption that the mappings under consideration are Lipschitz continuous and quasimonotone. We give some specific applications of the main result and finally provide a numerical example to demonstrate the workability of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
16. An Iterative Method for Horizontal Tensor Complementarity Problems.
- Author
Sun, Chen, Wang, Yong, and Huang, Zheng-Hai
- Subjects
COMPLEMENTARITY constraints (Mathematics), ALGORITHMS, EQUATIONS
- Abstract
In this paper, we focus on a class of horizontal tensor complementarity problems (HTCPs). By introducing the block representative tensor, we show that finding a solution of an HTCP is equivalent to finding a nonnegative solution of a related tensor equation. We establish the existence and uniqueness of solutions of HTCPs under proper assumptions. In particular, when the block representative tensor possesses the strong M-property, we propose an algorithm that solves HTCPs by efficiently exploiting the beneficial properties of the block representative tensor, and show that the iterative sequence generated by the algorithm is monotonically decreasing and converges to a solution of the HTCP. Final numerical experiments verify the theory developed in this paper and show the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
17. Arc-dependent networks: theoretical insights and a computational study.
- Author
Velasquez, Alvaro, Wojciechowski, P., Subramani, K., and Williamson, Matthew
- Subjects
INTEGER programming, MATHEMATICAL programming, FAMILY size, EMPIRICAL research, COST
- Abstract
In this paper, we study the efficacy of several mathematical programming formulations for the single-source shortest path problem, the negative cost cycle detection problem, and the shortest negative cost cycle problem in arc-dependent networks. In an arc-dependent network, the cost of an arc a depends upon the arc preceding a. These networks differ from traditional networks in which the cost associated with an arc is a fixed constant and part of the input. Arc-dependent networks are useful for modeling a number of real-world problems, such as the turn-penalty shortest path problem, which cannot be captured in the traditional network setting. We present new integer and non-linear programming formulations for each problem. We also perform the first known empirical study for arc-dependent networks to contrast the execution times of the two formulations on a set of graphs with varying families and sizes. Our experiments indicate that although non-linear programming formulations are more compact, integer programming formulations are more efficient for the problems studied in this paper. Additionally, we introduce a number of cuts for each integer programming formulation and examine their effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
18. Correction of nonmonotone trust region algorithm based on a modified diagonal regularized quasi-Newton method.
- Author
Mirzaei, Seyed Hamzeh and Ashrafi, Ali
- Subjects
QUASI-Newton methods, HESSIAN matrices, ALGORITHMS
- Abstract
In this paper, a new appropriate diagonal matrix estimation of the Hessian is introduced by minimizing the Byrd and Nocedal function subject to the weak secant equation. The Hessian estimate is used to correct the framework of a nonmonotone trust region algorithm with the regularized quasi-Newton method. Moreover, to counteract the adverse effect of monotonicity, we introduce a new nonmonotone strategy. The global and superlinear convergence of the suggested algorithm is established under some standard conditions. The numerical experiments on unconstrained optimization test functions show that the new algorithm is efficient and robust. [ABSTRACT FROM AUTHOR]
- Published
- 2024
19. A variational formulation for a trust and reputation system.
- Author
Colajanni, Gabriella, Daniele, Patrizia, Giuffrè, Sofia, and Marcianò, Attilio
- Abstract
Trust and reputation systems play a pivotal role in modern decentralized environments, fostering cooperation and mitigating risks in various online interactions. This paper introduces a variational formulation approach to model and analyze trust and reputation systems. By formulating trust and reputation as variational problems, this approach offers a novel perspective on understanding the underlying mechanisms governing trust establishment. The variational formulation provides a mathematical framework to determine the equilibrium weighted trust values, taking into account that each trustee tries to maximize its gain, namely the benefit minus the costs. The paper illustrates the applicability of this variational formulation through a plethora of simulations, demonstrating its effectiveness in modeling trust and reputation systems. Insights gained from this approach offer valuable guidance for the design and implementation of more reliable and efficient trust and reputation mechanisms in decentralized environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
20. A Proximal Augmented Lagrangian Method for Linearly Constrained Nonconvex Composite Optimization Problems.
- Author
Melo, Jefferson G., Monteiro, Renato D. C., and Wang, Hairong
- Subjects
LAGRANGE multiplier
- Abstract
This paper proposes and establishes the iteration complexity of an inexact proximal accelerated augmented Lagrangian (IPAAL) method for solving linearly constrained smooth nonconvex composite optimization problems. Each IPAAL iteration consists of inexactly solving a proximal augmented Lagrangian subproblem by an accelerated composite gradient (ACG) method followed by a suitable Lagrange multiplier update. For any given (possibly infeasible) initial point and tolerance ρ > 0, it is shown that IPAAL generates an approximate stationary solution in O(ρ^{-3} log(ρ^{-1})) ACG iterations, which can be improved to O(ρ^{-2.5} log(ρ^{-1})) if it is further assumed that a certain Slater condition holds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
21. Convexity of Sets and Quadratic Functions on the Hyperbolic Space.
- Author
Ferreira, Orizon P., Németh, Sándor Z., and Zhu, Jinzhen
- Subjects
HYPERBOLIC spaces, HYPERBOLIC functions, FUNCTION spaces, SET functions, CONVEX sets, CONVEXITY spaces
- Abstract
In this paper, some concepts of convex analysis on hyperbolic spaces are studied. We first study properties of the intrinsic distance, for instance, we present the spectral decomposition of its Hessian. Next, we study the concept of convex sets and the intrinsic projection onto these sets. We also study the concept of convex functions and present first- and second-order characterizations of these functions, as well as some optimization concepts related to them. An extensive study of the hyperbolically convex quadratic functions is also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
22. A fuzzy mathematical model to solve multi-objective trapezoidal fuzzy fractional programming problems.
- Author
Maharana, Sujit and Nayak, Suvasis
- Abstract
Decision-making problems with ambiguous data often arise in numerous practical fields and can be formulated as optimization models in a fuzzy environment. This paper develops a new mathematical model, using a proposed methodology, to efficiently solve a multi-objective linear fuzzy fractional programming problem in a trapezoidal fuzzy environment and generate a set of nondominated solutions. The concept of fuzzy cuts with different degrees of satisfaction is implemented, which transforms the fuzzy optimization into an equivalent interval-valued optimization. Subsequently, interval-valued linear functions approximate the fuzzy-valued fractional functions based on Taylor's series expansion. Finally, a weighted-sum approach with varying weight vectors is utilized to design a mathematical model which generates the set of nondominated solutions. Two numerical examples, including an existing problem and a practical problem from the field of production, are solved to illustrate the proposed model. The results of the numerical problems are discussed comparatively with graphical analysis to justify the feasibility and applicability of the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
23. An inertial Fletcher–Reeves-type conjugate gradient projection-based method and its spectral extension for constrained nonlinear equations.
- Author
Zheng, Haiyan, Li, Jiayi, Liu, Pengjie, and Rong, Xianglin
- Abstract
In this paper, we initially enhance the Fletcher–Reeves (FR) conjugate parameter through a shrinkage multiplier, leading to a derivative-free two-term search direction and its extended spectral version. The results indicate that both search directions demonstrate sufficient descent and trust-region properties, irrespective of the line search method utilized. Then, by combining the hyperplane projection-based approach and inertia technique, we present two inertial FR-type conjugate gradient projection-based methods for solving constrained nonlinear equations. The global convergence of our methods is theoretically established, without requiring the monotonicity or pseudo-monotonicity of the underlying mapping, nor the Lipschitz continuity condition. Numerical experiments conducted on constrained nonlinear equations, as well as applications in regularized decentralized logistic regression problems and sparse signal restoration problems, have demonstrated the numerical efficacy of our methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
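The hyperplane projection-based approach mentioned above has a compact core step: once the line search finds a trial point z with F(z)^T (x - z) > 0, the current iterate x is projected onto the hyperplane through z with normal F(z), which separates x from the solution set. The concrete vectors below are illustrative assumptions.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def hyperplane_projection(x, z, Fz):
    # project x onto {y : Fz^T (y - z) = 0}; when Fz^T (x - z) > 0 this
    # hyperplane separates x from the zeros of the underlying mapping
    t = dot(Fz, [a - b for a, b in zip(x, z)]) / dot(Fz, Fz)
    return [a - t * f for a, f in zip(x, Fz)]
```

The projected point always lies on the hyperplane, which is what guarantees the Fejér-monotone decrease of the distance to the solution set in such methods.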
24. Smoothing composite proximal gradient algorithm for sparse group Lasso problems with nonsmooth loss functions.
- Author
Shen, Huiling, Peng, Dingtao, and Zhang, Xian
- Abstract
In recent years, sparse and group sparse optimization problems have attracted extensive attention due to their wide applications in statistics, bioinformatics, signal interpretation and machine learning, yielding sparsity both group-wise and element-wise. In this paper, the sparse and group sparse optimization problem with a nonsmooth loss function is considered, where the sparsity and group sparsity are induced by a penalty composed of a combination of the ℓ1 norm and the ℓ2,1 norm, so it is called the sparse group Lasso (SGLasso) problem. To solve this problem, the nonsmooth loss function is smoothed first. Then, based on the smooth approximation of the loss function, a smoothing composite proximal gradient (SCPG) algorithm is proposed. It is shown that any accumulation point of the sequence generated by the SCPG algorithm is a global optimal solution of the problem. Moreover, it is proved that the convergence rate of the objective function value is O(1/k^{1-σ}), where σ ∈ (0.5, 1) is a constant. Finally, numerical results illustrate that the proposed SCPG algorithm is effective and robust for sparse and group sparse optimization problems. In particular, compared with some popular algorithms, the SCPG algorithm has obvious advantages in robustness to outliers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
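The combined ℓ1 + ℓ2,1 penalty above has a well-known closed-form proximal operator on each group: elementwise soft-thresholding followed by group-wise shrinkage. The sketch treats a single group with illustrative parameters; the smoothing of the nonsmooth loss, which is the paper's main device, is not shown.

```python
import math

def soft(v, t):
    # elementwise soft-thresholding (prox of t * ||.||_1)
    return [math.copysign(max(abs(a) - t, 0.0), a) for a in v]

def group_shrink(v, t):
    # group-wise shrinkage (prox of t * ||.||_2 on the whole vector)
    norm = math.sqrt(sum(a * a for a in v))
    if norm <= t:
        return [0.0] * len(v)
    scale = 1.0 - t / norm
    return [scale * a for a in v]

def prox_sparse_group(v, lam1, lam2):
    # prox of lam1 * ||.||_1 + lam2 * ||.||_2 on one group:
    # the composition of the two individual prox operators
    return group_shrink(soft(v, lam1), lam2)
```

This composition is what a proximal gradient iteration applies group by group after the smooth gradient step.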
25. A global interior point method for nonconvex geometric programming.
- Author
do Nascimento, Roberto Quirino, de Oliveira Santos, Rubia Mara, and Maculan, Nelson
- Abstract
The strategy presented in this paper differs significantly from existing approaches, as we formulate the problem as a standard difference-of-convex optimization problem. We have developed the necessary and sufficient conditions for global solutions in this standard form. The main challenge in the standard form arises from a constraint of the form g(t) ≥ 1, where g is a convex function. We utilize the classical inequality between the weighted arithmetic and harmonic means to overcome this challenge. This enables us to express the optimality conditions as a convex geometric programming problem and employ a predictor-corrector primal-dual interior point method for its solution, with weights updated during the predictor phase. The interior point method solves the dual problem of geometric programming and obtains the primal solution through exponential transformation. We have implemented the algorithm in Fortran 90 and validated it using a set of test problems from the literature. The proposed method successfully solved all the test problems, and the computational results are presented alongside the tested problems and the corresponding solutions found. [ABSTRACT FROM AUTHOR]
- Published
- 2024
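The weighted arithmetic-harmonic mean inequality that the abstract above leans on is easy to state and check numerically; the weights and sample points below are arbitrary illustrative choices, not data from the paper.

```python
def weighted_am(w, x):
    # weighted arithmetic mean; weights w are positive and sum to 1
    return sum(wi * xi for wi, xi in zip(w, x))

def weighted_hm(w, x):
    # weighted harmonic mean of positive numbers
    return 1.0 / sum(wi / xi for wi, xi in zip(w, x))
```

For any positive weights summing to 1 and positive values, weighted_am(w, x) ≥ weighted_hm(w, x), with equality only when all values coincide; bounding a convex constraint from below via the harmonic-mean side is, roughly, the mechanism that recovers a geometric-programming structure.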
26. Alleviating limit cycling in training GANs with an optimization technique.
- Author
Li, Keke, Tang, Liping, and Yang, Xinmin
- Abstract
In this paper, we undertake further investigation to alleviate the issue of limit cycling behavior in training generative adversarial networks (GANs) through the proposed predictive centripetal acceleration algorithm (PCAA). Specifically, we first derive the upper and lower complexity bounds of PCAA for a general bilinear game, with the last-iterate convergence rate notably improving upon previous results. Then, we combine PCAA with the adaptive moment estimation algorithm (Adam) to propose PCAA-Adam, for practical training of GANs to enhance their generalization capability. Finally, we validate the effectiveness of the proposed algorithm through experiments conducted on bilinear games, multivariate Gaussian distributions, and the CelebA dataset, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
27. Forward–Reflected–Backward Splitting Algorithms with Momentum: Weak, Linear and Strong Convergence Results.
- Author
Yao, Yonghong, Adamu, Abubakar, and Shehu, Yekini
- Subjects
MONOTONE operators, HILBERT space, ALGORITHMS
- Abstract
This paper studies the forward–reflected–backward splitting algorithm with momentum terms for monotone inclusion problem of the sum of a maximal monotone and Lipschitz continuous monotone operators in Hilbert spaces. The forward–reflected–backward splitting algorithm is an interesting algorithm for inclusion problems with the sum of maximal monotone and Lipschitz continuous monotone operators due to the inherent feature of one forward evaluation and one backward evaluation per iteration it possesses. The results in this paper further explore the convergence behavior of the forward–reflected–backward splitting algorithm with momentum terms. We obtain weak, linear, and strong convergence results under the same inherent feature of one forward evaluation and one backward evaluation at each iteration. Numerical results show that forward–reflected–backward splitting algorithms with momentum terms are efficient and promising over some related splitting algorithms in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
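The one-forward-one-backward structure described above, in the special case where the maximal monotone part is zero (so the backward resolvent is the identity), reduces to the reflected update x_{k+1} = x_k - λ(2F(x_k) - F(x_{k-1})). The rotation operator and step size below are illustrative assumptions; convergence requires F Lipschitz monotone and λ small relative to its Lipschitz constant.

```python
def F(v):
    # monotone, 1-Lipschitz rotation operator F(x, y) = (y, -x); its unique zero is the origin
    return [v[1], -v[0]]

def frb(x0, lam, iters):
    # forward-reflected-backward with identity resolvent:
    # x_{k+1} = x_k - lam * (2 F(x_k) - F(x_{k-1})), initialized with x_{-1} = x_0
    x_prev, x = x0[:], x0[:]
    for _ in range(iters):
        Fx, Fp = F(x), F(x_prev)
        x_prev, x = x, [xi - lam * (2.0 * a - b) for xi, a, b in zip(x, Fx, Fp)]
    return x
```

Note that plain forward steps x - λF(x) spiral away for this rotation operator; the reflected term is what tames it with only one evaluation of F per iteration.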
28. Generalized Derivatives and Optimality Conditions in Nonconvex Optimization.
- Author
Yalcin, Gulcin Dinc and Kasimbeyli, Refail
- Abstract
In this paper, we study the radial epiderivative notion for nonconvex functions, which extends the (classical) directional derivative concept. The paper presents a new definition and new properties for this notion and establishes relationships between the radial epiderivative, Clarke’s directional derivative, Rockafellar’s subderivative and the directional derivative. The radial epiderivative notion is used to establish new regularity conditions without convexity assumptions. The paper presents explicit formulations for computing radial epiderivatives in terms of weak subgradients and vice versa. We also present an iterative algorithm for the approximate computation of radial epiderivatives and show that the algorithm terminates in a finite number of iterations. The paper analyzes necessary and sufficient conditions for global optima in nonconvex optimization via radial epiderivatives. We formulate a necessary and sufficient condition for a global descent direction for radially epidifferentiable nonconvex functions. All the properties and theorems presented in this paper are illustrated and interpreted with examples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
29. Exact penalty method for knot selection of B-spline regression.
- Author
Yagishita, Shotaro and Gotoh, Jun-ya
- Abstract
This paper presents a new approach to selecting knots at the same time as estimating the B-spline regression model. Such simultaneous selection of knots and model is not trivial, but our strategy makes it possible by employing a nonconvex regularization on the least squares method that is usually applied. More specifically, motivated by the constraint that directly designates (the upper bound of) the number of knots to be used, we present an (unconstrained) regularized least squares reformulation, which is later shown to be equivalent to the motivating cardinality-constrained formulation. The obtained formulation is further modified so that we can employ a proximal gradient-type algorithm, known as GIST, for a class of nonconvex nonsmooth optimization problems. Under a mild technical assumption, the algorithm is shown to reach a local minimum of the problem. Since any local minimum of the problem is shown to satisfy the cardinality constraint, the proposed algorithm can be used to obtain a spline regression model that depends on at most a designated number of knots. Numerical experiments demonstrate how our approach performs on synthetic and real data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Application of a globally convergent hybrid conjugate gradient method in portfolio optimization.
- Author
-
Mtagulwa, P., Kaelo, P., Diphofu, T., and Kaisara, K.
- Abstract
In this paper, we propose a modification that improves the efficiency, robustness and reliability of the well-known HS conjugate gradient method. In particular, we propose a hybrid of the HS and DHS methods, where DHS is a recent modification of the HS method. Irrespective of the line search, the search direction of the proposed method satisfies the sufficient descent condition. Moreover, the new approach guarantees global convergence for general functions under the strong Wolfe line search. Numerical results and performance profiles are reported and indicate that the new approach outperforms three similar methods in the literature. We also give a practical application of the new approach to minimizing risk in portfolio selection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
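The classical Hestenes-Stiefel (HS) update that the entry above hybridizes can be sketched as follows. This is a minimal illustration of the HS direction formula with exact line search on a toy quadratic, not the paper's HS/DHS hybrid, whose precise formulas and strong Wolfe line search are not given in the abstract:

```python
# Minimal Hestenes-Stiefel (HS) conjugate gradient sketch on the toy
# quadratic f(x) = 0.5 * x^T A x with A = diag(1, 10). Exact line search
# stands in for the strong Wolfe search used in the paper; illustrative only.
def A_times(v):
    return [v[0], 10.0 * v[1]]  # A = diag(1, 10)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def hs_cg(x, iters=5):
    g = A_times(x)                      # gradient of f at x is A x
    d = [-gi for gi in g]               # initial steepest-descent direction
    for _ in range(iters):
        Ad = A_times(d)
        dAd = dot(d, Ad)
        if dAd == 0.0:
            break
        alpha = -dot(g, d) / dAd        # exact line search for a quadratic
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = A_times(x)
        y = [gn - go for gn, go in zip(g_new, g)]
        den = dot(d, y)
        beta = dot(g_new, y) / den if den != 0.0 else 0.0  # HS parameter
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```

On a quadratic with exact line search, conjugate gradient terminates in at most n iterations, so a two-dimensional start such as [1.0, 1.0] is driven to the minimizer essentially exactly.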
31. A Unified Analysis of a Class of Proximal Bundle Methods for Solving Hybrid Convex Composite Optimization Problems.
- Author
-
Liang, Jiaming and Monteiro, Renato D. C.
- Subjects
AIR forces - Abstract
This paper presents a proximal bundle (PB) framework based on a generic bundle update scheme for solving the hybrid convex composite optimization (HCCO) problem and establishes a common iteration-complexity bound for any variant belonging to it. As a consequence, iteration-complexity bounds for three PB variants based on different bundle update schemes are obtained in the HCCO context for the first time and in a unified manner. Although two of the PB variants are universal (i.e., their implementations do not require parameters associated with the HCCO instance), the other newly (as far as the authors are aware) proposed one is not, but has the advantage that it generates simple—namely, one-cut—bundle models. The paper also presents a universal adaptive PB variant (which is not necessarily an instance of the framework) based on one-cut models and shows that its iteration-complexity is the same as the two aforementioned universal PB variants. Funding: Financial support from the Office of Naval Research [N00014-18-1-2077] and the Air Force Office of Scientific Research [Grant FA9550-22-1-0088] is gratefully acknowledged. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Notes on the value function approach to multiobjective bilevel optimization.
- Author
-
Hoff, Daniel and Mehlitz, Patrick
- Subjects
BILEVEL programming ,PROBLEM solving - Abstract
This paper is concerned with the value function approach to multiobjective bilevel optimization which exploits a lower-level frontier-type mapping in order to replace the hierarchical model of two interdependent multiobjective optimization problems by a single-level multiobjective optimization problem. As a starting point, different value-function-type reformulations are suggested and their relations are discussed. Here, we focus on the situations where the lower-level problem is solved up to efficiency or weak efficiency, and an intermediate solution concept is suggested as well. We study the graph-closedness of the associated efficiency-type and frontier-type mappings. These findings are then used for two purposes. First, we investigate existence results in multiobjective bilevel optimization. Second, for the derivation of necessary optimality conditions via the value function approach, it is inherent to differentiate frontier-type mappings in a generalized way. Here, we are concerned with the computation of upper coderivative estimates for the frontier-type mapping associated with the setting where the lower-level problem is solved up to weak efficiency. We proceed in two ways, relying, on the one hand, on a weak domination property and, on the other hand, on a scalarization approach. Illustrative examples visualize our findings and some flaws in the related literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. On the convergence of local solutions for the method quADAPT.
- Author
-
Seidel, Tobias and Küfer, Karl-Heinz
- Subjects
DISCRETIZATION methods ,ALGORITHMS - Abstract
A classical solution approach to semi-infinite programming, which is easy to implement, is based on discretizing the semi-infinite index set. The Blankenship and Falk algorithm adaptively chooses a small discretization. In every iteration, a solution based on the current discretization is calculated. In a second step, the most violated constraint is determined and added to the discretization. In a previous work, the authors showed that the algorithm can converge slowly and introduced a new method that exhibits a quadratic rate [Seidel and Küfer, An adaptive discretization method solving semi-infinite optimization problems with quadratic rate of convergence. Optimization. 2022;71(8):2211–2239]. In this paper, we further investigate the introduced method. The method introduces new constraints that can cut off parts of the feasible set, and we study the effect on local minima, assuming that in each iteration local solutions to the discretized problems are computed. We give an example showing that in general a limit point is not necessarily a local solution of the original semi-infinite problem, but only of an approximate problem. We then study second-order conditions and show that they coincide for both problems. We use this to develop conditions under which local solutions converge to a local solution in the limit. Finally, we present quadratic convergence results for the case of local solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
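The Blankenship and Falk loop described above can be sketched on a one-dimensional toy semi-infinite program: maximize x subject to x ≤ cos(t) for all t ∈ [0, 2], with the inner "most violated constraint" search done on a fixed fine grid. This illustrates the classical adaptive discretization only, not the quadratically convergent quADAPT method studied in the paper:

```python
import math

def blankenship_falk(ts_grid, tol=1e-8, max_iter=50):
    # Toy SIP: maximize x subject to x <= cos(t) for all t in [0, 2].
    # The outer problem over a finite discretization T has the closed-form
    # solution x = min_{t in T} cos(t).
    T = [0.0]                                   # initial discretization
    x = min(math.cos(t) for t in T)
    for _ in range(max_iter):
        x = min(math.cos(t) for t in T)
        # find the most violated constraint over the (gridded) index set
        t_worst = max(ts_grid, key=lambda t: x - math.cos(t))
        if x - math.cos(t_worst) <= tol:
            return x, T                         # no significant violation left
        T.append(t_worst)                       # refine the discretization
    return x, T
```

Since cos is decreasing on [0, 2], a single refinement step adds t = 2 and the method stops at x = cos(2), i.e. the true semi-infinite optimum.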
34. First- and second-order high probability complexity bounds for trust-region methods with noisy oracles.
- Author
-
Cao, Liyuan, Berahas, Albert S., and Scheinberg, Katya
- Subjects
NOISE ,PROBABILITY theory ,ALGORITHMS ,LITERATURE ,DESIGN - Abstract
In this paper, we present convergence guarantees for a modified trust-region method designed for minimizing objective functions whose value and gradient and Hessian estimates are computed with noise. These estimates are produced by generic stochastic oracles, which are not assumed to be unbiased or consistent. We introduce these oracles and show that they are more general and have more relaxed assumptions than the stochastic oracles used in prior literature on stochastic trust-region methods. Our method utilizes a relaxed step acceptance criterion and a cautious trust-region radius updating strategy which allows us to derive exponentially decaying tail bounds on the iteration complexity for convergence to points that satisfy approximate first- and second-order optimality conditions. Finally, we present two sets of numerical results. We first explore the tightness of our theoretical results on an example with adversarial zeroth- and first-order oracles. We then investigate the performance of the modified trust-region algorithm on standard noisy derivative-free optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Optimal gradient tracking for decentralized optimization.
- Author
-
Song, Zhuoqing, Shi, Lei, Pu, Shi, and Yan, Ming
- Subjects
UNDIRECTED graphs ,CONVEX functions ,ALGORITHMS ,DISTRIBUTED algorithms - Abstract
In this paper, we focus on solving the decentralized optimization problem of minimizing the sum of n objective functions over a multi-agent network. The agents are embedded in an undirected graph where they can only send/receive information directly to/from their immediate neighbors. Assuming smooth and strongly convex objective functions, we propose an Optimal Gradient Tracking (OGT) method that simultaneously achieves the optimal gradient computation complexity O(√κ log(1/ε)) and the optimal communication complexity O(√(κ/θ) log(1/ε)), where κ and 1/θ denote the condition numbers related to the objective functions and the communication graph, respectively. To our best knowledge, OGT is the first single-loop decentralized gradient-type method that is optimal in both gradient computation and communication complexities. The development of OGT involves two building blocks that are also of independent interest. The first one is another new decentralized gradient tracking method termed "Snapshot" Gradient Tracking (SS-GT), which achieves gradient computation and communication complexities of O(√κ log(1/ε)) and O((√κ/θ) log(1/ε)), respectively. SS-GT can potentially be extended to more general settings than OGT. The second one is a technique termed Loopless Chebyshev Acceleration (LCA), which can be implemented "looplessly" but achieves an effect similar to adding multiple inner loops of Chebyshev acceleration to the algorithm. In addition to SS-GT, the LCA technique can accelerate many other gradient tracking based methods with respect to the graph condition number 1/θ. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
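The standard gradient tracking recursion that OGT builds on can be sketched as follows. This is the plain scheme x ← Wx − αy, y ← Wy + ∇f(x⁺) − ∇f(x), not OGT or SS-GT themselves; the toy network, quadratic objectives, and step size are illustrative choices:

```python
# Minimal sketch of standard decentralized gradient tracking (not OGT/SS-GT):
# each agent i holds an estimate x_i and a tracker y_i of the network-average
# gradient. Toy problem: f_i(x) = 0.5 * (x - b_i)^2 over a complete 3-agent
# graph, whose averaging matrix W is doubly stochastic; minimizer is mean(b).
import numpy as np

def gradient_tracking(b, W, alpha=0.1, iters=300):
    n = len(b)
    x = np.zeros(n)
    g = x - b              # local gradients of f_i(x_i) = 0.5 * (x_i - b_i)^2
    y = g.copy()           # tracker initialized at the local gradients
    for _ in range(iters):
        x = W @ x - alpha * y          # consensus step + tracked-gradient step
        g_new = x - b
        y = W @ y + g_new - g          # track the average gradient
        g = g_new
    return x
```

With W the averaging matrix of a complete 3-agent graph, every agent's iterate converges linearly to the global minimizer mean(b).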
36. A new class of generalized Nash-population games via variational inequalities and fixed points.
- Author
-
Zhan, Yue-tian, Li, Xue-song, and Huang, Nan-jing
- Abstract
In this paper, we propose a new class of generalized Nash-population games (GNPGs) which can be used to capture the desired features of both population games (PGs) and generalized Nash games within the same framework. We introduce the concept of generalized inertial Nash equilibrium (GINE) for the GNPG and show the existence of GINE by using the method of the system of variational inequalities and fixed point theorems both in the compact and noncompact cases. Moreover, we introduce a slightly altruistic generalized inertial Nash equilibrium (SAGINE) as a refinement concept of the GINE and prove that the GNPG has at least an SAGINE under some mild assumptions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Proximal Point Algorithms with Inertial Extrapolation for Quasi-convex Pseudo-monotone Equilibrium Problems.
- Author
-
Izuchukwu, Chinedu, Ogwo, Grace N., and Shehu, Yekini
- Subjects
EXTRAPOLATION ,ALGORITHMS ,EQUILIBRIUM ,POSSIBILITY ,ARGUMENT - Abstract
In this paper, we study the proximal point algorithm with inertial extrapolation to approximate a solution to the quasi-convex pseudo-monotone equilibrium problem. In the proposed algorithm, the inertial parameter is allowed to take both negative and positive values during implementations. The possibility of the choice of negative values for the inertial parameter sheds more light on the range of values of the inertial parameter for the proximal point algorithm. Under standard assumptions, we prove that the sequence of iterates generated by the proposed algorithm converges to a solution of the equilibrium problem when the bifunction is strongly quasi-convex in its second argument. Sublinear and linear rates of convergence are also given under standard conditions. Numerical results are reported for both cases of negative and positive inertial factor of the proposed algorithm and comparison with related algorithm is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. An approximation proximal gradient algorithm for nonconvex-linear minimax problems with nonconvex nonsmooth terms.
- Author
-
He, Jiefei, Zhang, Huiling, and Xu, Zi
- Subjects
MIMO systems ,WIRELESS communications ,MACHINE learning ,ALGORITHMS ,NONCONVEX programming - Abstract
Nonconvex minimax problems have attracted significant attention in machine learning, wireless communication and many other fields. In this paper, we propose an efficient approximation proximal gradient algorithm for solving a class of nonsmooth nonconvex-linear minimax problems with a nonconvex nonsmooth term, and the number of iterations needed to find an ε-stationary point is upper bounded by O(ε⁻³). Numerical results on the one-bit precoding problem in massive MIMO systems and a distributed nonconvex optimization problem demonstrate the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
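The proximal gradient building block underlying methods like the one above can be sketched on a one-dimensional toy problem. This is the generic step x ← prox(x − α∇f(x)) with an ℓ1 proximal operator (soft-thresholding), not the paper's approximation algorithm for minimax problems:

```python
# Minimal proximal gradient sketch (not the paper's minimax algorithm):
# minimize f(x) + lam * |x| with the smooth toy choice f(x) = 0.5*(x - 3)^2.
# For lam = 1 the unique minimizer is x* = 2 (optimality: x - 3 + 1 = 0).
def soft_threshold(v, t):
    # proximal operator of t * |.|: shrink v toward zero by t
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def proximal_gradient(x=0.0, lam=1.0, step=0.5, iters=60):
    for _ in range(iters):
        grad = x - 3.0                                # gradient of 0.5*(x-3)^2
        x = soft_threshold(x - step * grad, step * lam)  # prox-gradient step
    return x
```

Each iteration is a contraction toward x* = 2, so the iterates converge linearly from any starting point.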
39. Applying tangential subdifferentials in bilevel optimization.
- Author
-
Gadhi, Nazih Abderrazzak and Ohda, Mohamed
- Subjects
BILEVEL programming ,SUBDIFFERENTIALS ,CALMNESS - Abstract
The problem considered in this paper is a sequence of two optimization problems in which the feasible region of the upper-level optimization problem is determined implicitly by the solution set of the lower-level optimization problem. Using the optimal value reformulation, together with the partial calmness property, we give necessary optimality conditions in terms of tangential subdifferentials. An example that illustrates our finding is also given. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Simple proximal-type algorithms for equilibrium problems.
- Author
-
Yao, Yonghong, Adamu, Abubakar, Shehu, Yekini, and Yao, Jen-Chih
- Subjects
EQUILIBRIUM ,PROBLEM solving - Abstract
This paper proposes two simple and elegant proximal-type algorithms to solve equilibrium problems with pseudo-monotone bifunctions in the setting of Hilbert spaces. The proposed algorithms use one proximal point evaluation of the bifunction at each iteration. We prove that the sequence of iterates generated by the first algorithm converges weakly to a solution of the equilibrium problem (assuming existence) and obtain a linear convergence rate under standard assumptions. We also design a viscosity version of the first algorithm and obtain its corresponding strong convergence result. Some popular existing algorithms in the literature are recovered as special cases. We finally give some numerical tests and compare our algorithms with related ones to show their performance and efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. A Globally Convergent Inertial First-Order Optimization Method for Multidimensional Scaling.
- Author
-
Ram, Noga and Sabach, Shoham
- Subjects
MULTIDIMENSIONAL scaling ,NONSMOOTH optimization ,DATA reduction ,DATA visualization ,ALGORITHMS - Abstract
Multidimensional scaling (MDS) is a popular tool for dimensionality reduction and data visualization. Given distances between data points and a target low dimension, the MDS problem seeks a configuration of these points in the low-dimensional space such that the inter-point distances are preserved as well as possible. We focus on the most common formulation of the MDS problem, known as stress minimization, which results in a challenging non-smooth and non-convex optimization problem. In this paper, we propose an inertial version of the well-known SMACOF algorithm, which we call AI-SMACOF. This algorithm is proven to be globally convergent, and to the best of our knowledge this is the first result of its kind for algorithms aiming at solving the stress MDS minimization problem. In addition to the theoretical findings, numerical experiments provide further evidence of the superiority of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
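The plain (non-inertial) SMACOF iteration that AI-SMACOF accelerates can be sketched via the Guttman transform; the inertial step of AI-SMACOF is not specified in the abstract, so only the classical scheme is shown. The classical guarantee is that each Guttman step never increases the stress:

```python
# Minimal sketch of the plain SMACOF iteration for stress minimization with
# unit weights (AI-SMACOF adds an inertial step not detailed in the abstract).
import numpy as np

def pairwise_dist(X):
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def stress(X, D):
    # raw stress: sum over unordered pairs of (target - embedded distance)^2
    return 0.5 * ((D - pairwise_dist(X)) ** 2).sum()

def smacof_step(X, D):
    n = X.shape[0]
    E = pairwise_dist(X)
    ratio = np.divide(D, E, out=np.zeros_like(D), where=E > 0)
    B = -ratio
    np.fill_diagonal(B, 0.0)
    np.fill_diagonal(B, -B.sum(axis=1))   # rows of B sum to zero
    return (B @ X) / n                    # Guttman transform (unit weights)
```

Starting from a perturbed copy of an exactly embeddable configuration, the stress decreases monotonically along the iterates, which is the majorization property SMACOF is built on.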
42. Exterior-Point Optimization for Sparse and Low-Rank Optimization.
- Author
-
Das Gupta, Shuvomoy, Stellato, Bartolomeo, and Van Parys, Bart P. G.
- Subjects
CONVEX functions ,MACHINE learning ,PROBLEM solving ,DATA science ,ALGORITHMS - Abstract
Many problems of substantial current interest in machine learning, statistics, and data science can be formulated as sparse and low-rank optimization problems. In this paper, we present the nonconvex exterior-point optimization solver (NExOS)—a first-order algorithm tailored to sparse and low-rank optimization problems. We consider the problem of minimizing a convex function over a nonconvex constraint set, where the set can be decomposed as the intersection of a compact convex set and a nonconvex set involving sparse or low-rank constraints. Unlike the convex relaxation approaches, NExOS finds a locally optimal point of the original problem by solving a sequence of penalized problems with strictly decreasing penalty parameters by exploiting the nonconvex geometry. NExOS solves each penalized problem by applying a first-order algorithm, which converges linearly to a local minimum of the corresponding penalized formulation under regularity conditions. Furthermore, the local minima of the penalized problems converge to a local minimum of the original problem as the penalty parameter goes to zero. We then implement and test NExOS on many instances from a wide variety of sparse and low-rank optimization problems, empirically demonstrating that our algorithm outperforms specialized methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Third Order Dynamical Systems for the Sum of Two Generalized Monotone Operators.
- Author
-
Hai, Pham Viet and Vuong, Phan Tu
- Subjects
DYNAMICAL systems ,HILBERT space ,ALGORITHMS - Abstract
In this paper, we propose and analyze a third-order dynamical system for finding zeros of the sum of two generalized operators in a Hilbert space H. We establish the existence and uniqueness of the trajectories generated by the system under appropriate continuity conditions, and prove exponential convergence to the unique zero when the sum of the operators is strongly monotone. Additionally, we derive an explicit discretization of the dynamical system, which results in a forward-backward algorithm with double inertial effects and a larger range of stepsizes. We establish the linear convergence of the iterates to the unique solution using this algorithm. Furthermore, we provide a convergence analysis for the class of strongly pseudo-monotone variational inequalities. We illustrate the effectiveness of our approach by applying it to structured optimization and pseudo-convex optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. On first and second order multiobjective programming with interval-valued objective functions.
- Author
-
Antczak, Tadeusz
- Subjects
SET-valued maps ,DIFFERENTIABLE functions ,DECISION making - Abstract
The growing use of optimization models to support decision making has created a demand for tools that allow formulating and solving models of real-world processes and systems related to human activity in which the hypotheses cannot be verified in the way specific to classical optimization. One approach to real-world extremum problems under uncertainty is interval-valued optimization. In this paper, a twice differentiable vector optimization problem with a multiple interval-valued objective function and both inequality and equality constraints is considered. First-order necessary optimality conditions of Karush-Kuhn-Tucker type are proved for differentiable interval-valued vector optimization problems under the first-order constraint qualification. If the interval-valued objective function is assumed to be twice weakly differentiable and the constraint functions are assumed to be twice differentiable, then two types of second-order necessary optimality conditions are proved for such smooth interval-valued vector optimization problems under two different constraint qualifications. Finally, in order to illustrate the Karush-Kuhn-Tucker type necessary optimality conditions established in the paper, an example of an interval-valued optimization problem is given. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Relaxed method for optimization problems with cardinality constraints.
- Author
-
Liang, Yan-Chao and Lin, Gui-Hua
- Abstract
In this paper, we review optimality conditions and constraint qualifications for optimization problems with cardinality constraints (OPCC), a class of optimization problems with important applications, and provide a relaxed method for OPCC. We show that the Mangasarian-Fromovitz constraint qualification or the constant positive linear dependence constraint qualification holds for the relaxed problem under some mild conditions. We prove that local solutions of the relaxed problem converge to M-stationary points of OPCC under appropriate conditions. Furthermore, we show that inexact stationary points of the relaxed problem converge to M-stationary points of OPCC under much weaker conditions. Numerical experiments show the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Variational Analysis Based on Proximal Subdifferential on Smooth Banach Spaces.
- Author
-
Zheng, Xi Yin
- Subjects
BANACH spaces ,HILBERT space ,NONSMOOTH optimization ,DIFFERENTIABLE functions - Abstract
This paper first shows that for any p ∈ (1, 2) there exists a continuously differentiable function f on l_p (and L_p) such that the proximal subdifferential of f is empty everywhere, and hence it is not suitable to develop a theory of the proximal subdifferential in the classical Banach spaces l_p and L_p with p ∈ (1, 2). On the other hand, this paper establishes variational analysis based on the proximal subdifferential in the framework of smooth Banach spaces of power type 2, which include all Hilbert spaces and all the classical spaces l_p and L_p with p ∈ (2, +∞). In particular, in such a smooth space, we provide proximal subdifferential rules for sum functions, product functions, composite functions and supremum functions, which extend the basic results on the proximal subdifferential established in the framework of Hilbert spaces. Some of our main results are new even in the Hilbert space case. As applications, we provide KKT-like conditions for nonsmooth optimization problems in terms of the proximal subdifferential. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Gradient regularization of Newton method with Bregman distances.
- Author
-
Doikov, Nikita and Nesterov, Yurii
- Subjects
NEWTON-Raphson method ,LIPSCHITZ continuity ,REGULARIZATION parameter ,CONVEX functions ,EUCLIDEAN distance ,SQUARE root - Abstract
In this paper, we propose a first second-order scheme based on arbitrary non-Euclidean norms, incorporated via Bregman distances. They are introduced directly into the Newton iterate with a regularization parameter proportional to the square root of the norm of the current gradient. For the basic scheme, as applied to the composite convex optimization problem, we establish a global convergence rate of order O(k⁻²) both in terms of the functional residual and in the norm of subgradients. Our main assumption on the smooth part of the objective is Lipschitz continuity of its Hessian. For uniformly convex functions of degree three, we justify a global linear rate, and for strongly convex functions we prove a local superlinear rate of convergence. Our approach can be seen as a relaxation of the Cubic Regularization of the Newton method (Nesterov and Polyak in Math Program 108(1):177–205, 2006) for convex minimization problems. This relaxation preserves the convergence properties and global complexities of the Cubic Newton method in the convex case, while the auxiliary subproblem at each iteration is simpler. We equip our method with an adaptive search procedure for choosing the regularization parameter. We also propose an accelerated scheme with convergence rate O(k⁻³), where k is the iteration counter. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
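The core step of gradient regularization of Newton's method can be sketched in the Euclidean special case (Bregman distance equal to the squared Euclidean norm): the Hessian is shifted by a multiple of the square root of the current gradient norm. The toy objective and the constant c below are illustrative assumptions, not values from the paper:

```python
# Minimal scalar sketch of gradient-regularized Newton in the Euclidean case:
# the regularizer added to the Hessian is c * sqrt(|f'(x)|), proportional to
# the square root of the gradient norm. Toy objective f(x) = x^4; the choice
# c = 1.0 is illustrative, not taken from the paper.
def grad_reg_newton(x=2.0, c=1.0, iters=100):
    for _ in range(iters):
        g = 4.0 * x ** 3                 # f'(x)
        h = 12.0 * x ** 2                # f''(x)
        lam = c * abs(g) ** 0.5          # regularization ~ sqrt(|gradient|)
        x = x - g / (h + lam)            # regularized Newton step
    return x
```

Since the regularized step is always shorter than the pure Newton step, the iterates decrease monotonically toward the minimizer x* = 0 without any line search.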
48. A convergence analysis of hybrid gradient projection algorithm for constrained nonlinear equations with applications in compressed sensing.
- Author
-
Li, Dandan, Wang, Songhua, Li, Yong, and Wu, Jiaqi
- Subjects
NONLINEAR equations ,COMPRESSED sensing ,IMAGE reconstruction ,CONJUGATE gradient methods ,ALGORITHMS ,ORTHOGONAL matching pursuit - Abstract
In this paper, we propose a projection-based hybrid spectral gradient algorithm for nonlinear equations with convex constraints, built on a certain line search strategy. A convex combination technique is used to design a novel spectral parameter inspired by some classical spectral gradient methods. The search direction also satisfies the sufficient descent condition and the trust region property. The global convergence of the proposed algorithm is established under reasonable assumptions. The experimental results demonstrate that the proposed algorithm is more promising and robust than some similar methods and is capable of handling large-scale optimization problems. Furthermore, we apply it to problems involving sparse signal recovery and blurred image restoration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. On vector variational E-inequalities and differentiable vector optimization problem.
- Author
-
Antczak, Tadeusz and Abdulaleem, Najeeb
- Abstract
In this paper, in order to characterize optimality for a class of nonconvex differentiable multiobjective programming problems, we introduce two new types of vector variational-like inequalities, namely, a weak variational E-inequality and a vector variational E-inequality. Under (strict) E-convexity, we prove relationships between the aforesaid vector variational-like inequalities and differentiable vector optimization problems. Further, under (strict) pseudo-E-convexity, we identify vector critical E-points, weak E-Pareto (E-Pareto) solutions of differentiable vector optimization problems and solutions of the introduced vector variational-like inequalities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Geometric Programming Problems with Triangular and Trapezoidal Twofold Uncertainty Distributions.
- Author
-
Mondal, Tapas, Ojha, Akshay Kumar, and Pani, Sabyasachi
- Subjects
GEOMETRIC programming - Abstract
Geometric programming is a well-known optimization tool for dealing with a wide range of nonlinear optimization and engineering problems. In general, it is assumed that the parameters of a geometric programming problem are deterministic and accurate. However, in the real-world geometric programming problem, the parameters are frequently inaccurate and ambiguous. To tackle the ambiguity, this paper investigates the geometric programming problem in an uncertain environment, with the coefficients as triangular and trapezoidal twofold uncertain variables. In this paper, we introduce uncertain measures in a generalized version and focus on more complicated twofold uncertainties to propose triangular and trapezoidal twofold uncertain variables within the context of uncertainty theory. We develop three reduction methods to convert triangular and trapezoidal twofold uncertain variables into singlefold uncertain variables using optimistic, pessimistic, and expected value criteria. Reduction methods are used to convert the geometric programming problem with twofold uncertainty into the geometric programming problem with singlefold uncertainty. Furthermore, the chance-constrained uncertain-based framework is used to solve the reduced singlefold uncertain geometric programming problem. Finally, a numerical example is provided to demonstrate the effectiveness of the procedures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF