711 results
Search Results
2. An area-type nonmonotone filter method for nonlinear constrained optimization.
- Author
- Ke Su, Wei Lu, and Shaohua Liu
- Subjects
- NONMONOTONIC logic, MATHEMATICAL optimization, ALGORITHMS, STOCHASTIC convergence, PARAMETERS (Statistics)
- Abstract
In this paper, we define a new area-type filter algorithm based on the trust-region method. A relaxed trust-region quadratic correction subproblem is proposed to compute the trial direction at the current point. We consider the objective function value and the constraint violation at the current point as a point pair. The point pairs are divided into different partitions by the dominant region of the filter, and the contribution of each point pair to the area of the filter is calculated separately. Unlike the conventional filter, we use this contribution as the filter acceptance criterion for the trial point. A nonmonotone area-average form is also adopted in the filter mechanism. Both monotone and nonmonotone versions of the method are proposed and compared numerically. Furthermore, the algorithm is proved to be convergent under some reasonable assumptions. Numerical experiments show the effectiveness of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
3. Two modified spectral conjugate gradient methods and their global convergence for unconstrained optimization.
- Author
- Sun, Zhongbo, Li, Hongyang, Wang, Jing, and Tian, Yantao
- Subjects
- SPECTRAL theory, CONJUGATE gradient methods, STOCHASTIC convergence, MATHEMATICAL optimization, CONVEX functions, ALGORITHMS
- Abstract
In this paper, two modified spectral conjugate gradient methods that satisfy the sufficient descent property are developed for unconstrained optimization problems. For uniformly convex problems, the first modified spectral conjugate gradient algorithm is proposed under the Wolfe line search rule, and its search direction is sufficiently descent for uniformly convex functions. Furthermore, based on the Dai-Liao conjugacy condition, the second spectral conjugate gradient algorithm generates a sufficient descent direction at each iteration for general functions; it can therefore be considered a modified version of the Dai-Liao algorithm. Under suitable conditions, the proposed algorithms are globally convergent for uniformly convex functions and general functions. The numerical results show that the approaches presented in this paper are feasible and efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
4. Subgradient-Based Neural Networks for Nonsmooth Nonconvex Optimization Problems.
- Author
- Wei Bian and Xiaoping Xue
- Subjects
- ARTIFICIAL neural networks, NONCONVEX programming, STOCHASTIC convergence, MATHEMATICAL optimization, ALGORITHMS, SIGNAL processing
- Abstract
This paper presents a subgradient-based neural network to solve a nonsmooth nonconvex optimization problem with a nonsmooth nonconvex objective function, a class of affine equality constraints, and a class of nonsmooth convex inequality constraints. The proposed neural network is modeled with a differential inclusion. Under a suitable assumption on the constraint set and a proper assumption on the objective function, it is proved that for a sufficiently large penalty parameter, there exists a unique global solution to the neural network and the trajectory of the network can reach the feasible region in finite time and stay there thereafter. It is proved that the trajectory of the neural network converges to the set which consists of the equilibrium points of the neural network, and coincides with the set which consists of the critical points of the objective function in the feasible region. A condition is given to ensure the convergence to the equilibrium point set in finite time. Moreover, under suitable assumptions, the coincidence between the solution to the differential inclusion and the "slow solution" of it is also proved. Furthermore, three typical examples are given to present the effectiveness of the theoretic results obtained in this paper and the good performance of the proposed neural network. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
5. Weiszfeld's Method: Old and New Results.
- Author
- Beck, Amir and Sabach, Shoham
- Subjects
- MATHEMATICAL optimization, STOCHASTIC convergence, LOCATION problems (Programming), ALGORITHMS
- Abstract
In 1937, the 16-year-old Hungarian mathematician Endre Weiszfeld, in a seminal paper, devised a method for solving the Fermat-Weber location problem, a problem whose origins can be traced back to the seventeenth century. Weiszfeld's method stirred up an enormous amount of research in the optimization and location communities, and is still being discussed and used to this day. In this paper, we review both the past and the ongoing research on Weiszfeld's method. The existing results are presented in a self-contained and concise manner; some are derived by new and simplified techniques. We also establish two new results using modern tools of optimization. First, we establish a non-asymptotic sublinear rate of convergence of Weiszfeld's method, and second, using an exact smoothing technique, we present a modification of the method with a provably better rate of convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
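For readers unfamiliar with the method this entry surveys, the classical Weiszfeld iteration replaces the current point by a distance-weighted average of the anchor points. A minimal sketch, in which the function name, stopping rule, and symmetric test instance are illustrative rather than taken from the paper:

```python
import numpy as np

def weiszfeld(anchors, weights=None, iters=200, tol=1e-9):
    """Sketch of Weiszfeld's iteration for the Fermat-Weber problem:
    minimize sum_i w_i * ||x - a_i|| over x."""
    a = np.asarray(anchors, dtype=float)
    w = np.ones(len(a)) if weights is None else np.asarray(weights, float)
    x = a.mean(axis=0)                       # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(a - x, axis=1)
        if np.any(d < tol):                  # iterate landed on an anchor:
            return x                         # the update is undefined there
        inv = w / d
        x_new = (inv[:, None] * a).sum(axis=0) / inv.sum()
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# For anchors at the corners of a square, the minimizer is the center.
print(weiszfeld([[0, 0], [2, 0], [0, 2], [2, 2]]))
```

The early return when an iterate coincides with an anchor point reflects the well-known degeneracy of the method; handling that case rigorously is part of the literature the survey covers.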
6. A novel hybrid trust region algorithm based on nonmonotone and LOOCV techniques.
- Author
- Ahmadvand, M., Esmaeilbeigi, M., Kamandi, A., and Yaghoobi, F. M.
- Subjects
- ALGORITHMS, RADIAL basis functions, INTERPOLATION, MATHEMATICAL optimization, STOCHASTIC convergence
- Abstract
In this paper, a novel hybrid trust-region algorithm using radial basis function (RBF) interpolation is proposed. The new algorithm is an improved version of the ORBIT algorithm based on two novel ideas. Because the accuracy and stability of RBF interpolation depend on a shape parameter, it is more appropriate to select this parameter according to the optimization problem. In the new algorithm, the appropriate shape parameter value is determined for each optimization problem by an effective statistical approach (leave-one-out cross-validation, LOOCV), whereas the ORBIT algorithm uses a fixed shape parameter value for all problems. In addition, the new algorithm is equipped with a new intelligent nonmonotone strategy that improves the speed of convergence, whereas the monotonicity of the sequence of objective function values in ORBIT may decrease the rate of convergence, especially when an iteration is trapped near a narrow curved valley. The global convergence of the new hybrid algorithm is analyzed under some mild assumptions. The numerical results clearly indicate the superiority of the new algorithm over the original version. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
7. Efficient Design of Microstrip Antennas for SDR Applications Using Modified PSO Algorithm.
- Author
- Modiri, A. and Kiasaleh, K.
- Subjects
- MICROSTRIP antennas, ALGORITHMS, PARTICLE swarm optimization, SOFTWARE radio, STOCHASTIC convergence, MATHEMATICAL optimization, NUMERICAL analysis
- Abstract
In this paper, several irregularly shaped microstrip antenna structures are designed using a new version of the Binary Particle Swarm Optimization (BPSO) algorithm in which the velocity calculation is modified toward a more accelerated yet still robust search procedure, aimed particularly at software-defined radio (SDR) applications. The optimization results obtained with the modified and the conventional BPSO algorithms are compared, and the pros and cons are studied in terms of optimization length, convergence speed, and conformability of the final design to the desired objectives. It is shown that the modified BPSO achieves the design criterion considerably faster than the conventional one, at the cost of slightly limiting the particles' exploration ability. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
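The abstract modifies the BPSO velocity calculation; the paper's specific modification is not reproduced here, but the conventional binary PSO step it departs from (Kennedy and Eberhart's sigmoid-transfer variant) can be sketched as follows. All parameter values and the function name are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def bpso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, vmax=4.0):
    """One iteration of the *conventional* binary PSO update:
    real-valued velocity, sigmoid transfer, stochastic bit flip."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -vmax, vmax)          # keep the sigmoid out of saturation
    prob = 1.0 / (1.0 + np.exp(-v))      # sigmoid transfer function
    x = (rng.random(x.shape) < prob).astype(int)
    return x, v
```

The velocity clamp `vmax` is the usual device for preserving exploration; the paper's claim is precisely that its altered velocity rule trades a little of that exploration for faster convergence.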
8. Improving Classical and Decentralized Differential Evolution With New Mutation Operator and Population Topologies.
- Author
- Dorronsoro, Bernabé and Bouvry, Pascal
- Subjects
- EVOLUTIONARY computation, MUTATIONS (Algebra), TOPOLOGY, OPERATOR theory, ALGORITHMS, MATHEMATICAL optimization, HEURISTIC algorithms, STOCHASTIC convergence
- Abstract
Differential evolution (DE) algorithms constitute an efficient class of evolutionary algorithm (EA) for global optimization. Although it is well known that the population structure has a major influence on the behavior of EAs, few works have studied its effect in DE algorithms. In this paper, we propose and analyze several DE variants using different panmictic and decentralized population schemes. As with other EAs, we demonstrate that the population scheme has a marked influence on the behavior of DE algorithms too. Additionally, a new operator for generating the mutant vector is proposed and compared with a classical one on all the proposed population models. After that, a new heterogeneous decentralized DE algorithm combining the two studied operators in the best-performing population structure is designed and evaluated. In total, 13 new DE algorithms are presented and evaluated in this paper. Summarizing our results, all the studied algorithms are highly competitive with state-of-the-art DE algorithms from the literature on most of the considered problems, and the best ones implement a decentralized population. With respect to the population structure, the proposed decentralized versions clearly outperform the panmictic ones. The new mutation operator demonstrates faster convergence on most of the studied problems than a classical operator taken from the DE literature. Finally, the new heterogeneous decentralized DE is shown to improve on the previously obtained results and to outperform the compared state-of-the-art DEs. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
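The classical mutant-vector operator that papers in this line typically compare against is DE/rand/1 from Storn and Price. A minimal sketch (the paper's own new operator is not reproduced; the function name and scale factor are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def de_rand_1(pop, i, F=0.5):
    """Classical DE/rand/1 mutant vector:
    v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct
    indices, all different from the target index i."""
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(idx, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])
```

The mutant is then crossed over with the target vector `pop[i]` and the better of the two survives; the population topology studied in the paper decides which individuals are eligible as `r1`, `r2`, `r3`.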
9. A nonmonotone scaled conjugate gradient algorithm for large-scale unconstrained optimization.
- Author
- Ou, Yigui and Zhou, Xin
- Subjects
- CONJUGATE gradient methods, ALGORITHMS, MATHEMATICAL optimization, STOCHASTIC convergence, SMOOTHNESS of functions
- Abstract
This paper proposes a nonmonotone scaled conjugate gradient algorithm for solving large-scale unconstrained optimization problems, which combines the scaled memoryless Broyden-Fletcher-Goldfarb-Shanno preconditioned conjugate gradient method with a nonmonotone technique. An attractive property of the proposed method is that its search direction is always a sufficient descent direction at each iteration, independently of the line search used. Under appropriate assumptions, the method is proven to possess global convergence for nonconvex smooth functions, and R-linear convergence for strongly convex functions. Preliminary numerical results and comparisons show the efficiency of the proposed method in practical computation. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
10. Outdoor WLAN planning via non-monotone derivative-free optimization: algorithm adaptation and case study.
- Author
- González-Castaño, Francisco J., Costa-Montenegro, Enrique, Burguillo-Rial, Juan C., and García-Palomares, Ubaldo
- Subjects
- WIRELESS LANs, MATHEMATICAL optimization, ALGORITHMS, SIMULATED annealing, STOCHASTIC convergence, MOBILE communication systems
- Abstract
In this paper, we study the application of non-monotone derivative-free optimization algorithms to wireless local area networks (WLAN) planning, which can be modeled as an unconstrained minimization problem. We wish to determine the access point (AP) positions that maximize coverage in order to provide connectivity to static and mobile users. As the objective function of the optimization model is not everywhere differentiable, previous research has discarded gradient methods and employed heuristics such as neighborhood search (NS) and simulated annealing (SA). In this paper, we show that the model fulfills the conditions required by recently proposed non-monotone derivative-free (DF) algorithms. Unlike SA, DF has guaranteed convergence. The numerical tests reveal that a tailored DF implementation (termed "zone search") outperforms NS and SA. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
11. Sequential Penalty Algorithm for Nonlinear Constrained Optimization.
- Author
- Zhang, J.L. and Zhang, X.S.
- Subjects
- ALGORITHMS, MATHEMATICAL optimization, NONLINEAR theories, STOCHASTIC convergence, MATHEMATICS, MATHEMATICAL analysis
- Abstract
In this paper, a new sequential penalty algorithm, based on the L∞ exact penalty function, is proposed for a general nonlinear constrained optimization problem. The algorithm has the following characteristics: it can start from an arbitrary initial point; the feasibility of the subproblem is guaranteed; the penalty parameter is adjusted automatically; and global convergence is proved without any regularity assumption. The update formula for the penalty parameter is new. It is proved that the proposed algorithm behaves equivalently to the standard SQP method after sufficiently many iterations; hence, the local convergence results of the standard SQP method apply to this algorithm. Preliminary numerical experiments show the efficiency and stability of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
12. Parameter optimization in iterative learning control.
- Author
- Owens, D. H. and Feng, K.
- Subjects
- MATHEMATICAL optimization, ALGORITHMS, MONOTONIC functions, ZERO (The number), STOCHASTIC convergence
- Abstract
In this paper parameter optimization through a quadratic performance index is introduced as a method to establish a new iterative learning control law. With this new algorithm, monotonic convergence of the error to zero is guaranteed if the original system is a discrete-time LTI system and it satisfies a positivity condition. If the original system is not positive, two methods are derived to make the system positive. The effect of the choice of weighting parameters in the performance index on convergence rate is analysed. As a result adaptive weights are introduced as a method to improve the convergence properties of the algorithm. A high-order version of the algorithm is also derived and its convergence analysed. The theoretical findings in this paper are highlighted with simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
13. Balance Between Genetic Search and Local Search in Memetic Algorithms for Multiobjective Permutation Flowshop Scheduling.
- Author
- Ishibuchi, Hisao, Yoshida, Tadashi, and Murata, Tadahiko
- Subjects
- MATHEMATICAL optimization, ALGORITHMS, STOCHASTIC convergence, SCIENTIFIC experimentation, MEMETICS
- Abstract
This paper shows how the performance of evolutionary multiobjective optimization (EMO) algorithms can be improved by hybridization with local search. The main positive effect of the hybridization is the improvement in the convergence speed to the Pareto front. On the other hand, the main negative effect is the increase in the computation time per generation. Thus, the number of generations is decreased when the available computation time is limited. As a result, the global search ability of EMO algorithms is not fully utilized. These positive and negative effects are examined by computational experiments on multiobjective permutation flowshop scheduling problems. Results of our computational experiments clearly show the importance of striking a balance between genetic search and local search. In this paper, we first modify our former multiobjective genetic local search (MOGLS) algorithm by choosing only good individuals as initial solutions for local search and assigning an appropriate local search direction to each initial solution. Next, we demonstrate the importance of striking a balance between genetic search and local search through computational experiments. Then we compare the modified MOGLS with recently developed EMO algorithms: strength Pareto evolutionary algorithm and revised nondominated sorting genetic algorithm. Finally, we demonstrate that local search can be easily combined with those EMO algorithms for designing multiobjective memetic algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
14. The robust constant and its applications in random global search for unconstrained global optimization.
- Author
- Peng, Zheng, Wu, Donghua, and Zhu, Wenxing
- Subjects
- ROBUST optimization, GLOBAL optimization, MATHEMATICAL optimization, MATHEMATICAL analysis, ALGORITHMS, STOCHASTIC convergence
- Abstract
Robustness analysis is important for designing and analyzing algorithms for global optimization. In this paper, we introduce a new concept, the robust constant, to quantitatively characterize the robustness of measurable sets and functions. The new concept is consistent with the notion of theoretical robustness presented in the literature. This paper shows that, from the perspectives of convergence theory and numerical computational cost, the robust constant is of significant value for analyzing random global search methods for unconstrained global optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
15. A new linearization technique for minimax linear fractional programming.
- Author
- Jiao, Hongwei and Liu, Sanyang
- Subjects
- LINEAR programming, NONCONVEX programming, MATHEMATICAL optimization, ALGORITHMS, RELAXATION methods (Mathematics), STOCHASTIC convergence, ROBUST control
- Abstract
This paper presents a deterministic global optimization algorithm for solving minimax linear fractional programming (MLFP) problems. In this algorithm, a new linearization technique is proposed, which uses more information about the objective function of the MLFP than other techniques. Using this new linearization technique, the initial nonconvex programming problem is reduced to a sequence of linear relaxation programming (LRP) problems, which provide reliable lower bounds on the optimal value. The proposed algorithm converges to the global minimum of the MLFP through successive refinement of the feasible region and the solutions of a series of LRP problems. Numerical results show that, compared with known algorithms, the proposed algorithm is robust and effective. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
16. A convergent relaxation of the Douglas-Rachford algorithm.
- Author
- Thao, Nguyen Hieu
- Subjects
- ALGORITHMS, MATHEMATICAL optimization, STOCHASTIC convergence, NUMERICAL analysis, PROBLEM solving
- Abstract
This paper proposes an algorithm for solving structured optimization problems, which covers both the backward-backward and the Douglas-Rachford algorithms as special cases, and analyzes its convergence. The set of fixed points of the corresponding operator is characterized in several cases. Convergence criteria of the algorithm in terms of general fixed point iterations are established. When applied to nonconvex feasibility including potentially inconsistent problems, we prove local linear convergence results under mild assumptions on regularity of individual sets and of the collection of sets. In this special case, we refine known linear convergence criteria for the Douglas-Rachford (DR) algorithm. As a consequence, for feasibility problem with one of the sets being affine, we establish criteria for linear and sublinear convergence of convex combinations of the alternating projection and the DR methods. These results seem to be new. We also demonstrate the seemingly improved numerical performance of this algorithm compared to the RAAR algorithm for both consistent and inconsistent sparse feasibility problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
17. A skin membrane-driven membrane algorithm for many-objective optimization.
- Author
- Li, Zhangxiao, Zhang, Lei, Su, Yansen, Li, Jun, and Wang, Xun
- Subjects
- MATHEMATICAL optimization, STOCHASTIC convergence, ALGORITHMS, NATURAL computation, CELL membranes
- Abstract
Many-objective optimization problems are problems with more than three conflicting objectives, which are more challenging than those with two or three objectives. Membrane computing models, usually termed P systems, are a class of living-cell-inspired computing models that provide a rich framework for solving a variety of challenging problems. In this paper, a membrane computing model-based algorithm is proposed for many-objective optimization. Specifically, the population in the skin membrane is divided into two subpopulations: one guides the convergence of the populations in the internal membranes, while the other takes charge of their diversity. Experimental results on benchmark test problems for many-objective optimization indicate the superiority of the developed algorithm over existing evolutionary many-objective optimization algorithms and P-system-based multi-objective optimization algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
18. On the Theoretical Analysis of the Plant Propagation Algorithms.
- Author
- Sulaiman, Muhammad, Salhi, Abdellah, Khan, Asfandyar, Muhammad, Shakoor, and Khan, Wali
- Subjects
- MATHEMATICAL optimization, ALGORITHMS, HEURISTIC algorithms, PROBLEM solving, STOCHASTIC convergence
- Abstract
Plant Propagation Algorithms (PPA) are powerful and flexible solvers for optimisation problems. They are nature-inspired heuristics which can be applied to any optimisation/search problem. There is a growing body of research on PPA in the literature, but it is mainly experimental; little has been done on the theoretical front. Given the prominence this algorithm is gaining in terms of performance on benchmark problems as well as practical ones, some theoretical insight into its convergence is needed. The current paper aims to fill this gap by providing a sketch of a global convergence analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
19. MM algorithms for geometric and signomial programming.
- Author
- Lange, Kenneth and Zhou, Hua
- Subjects
- GEOMETRIC programming, ALGORITHMS, GENERALIZATION, MATHEMATICAL optimization, ARITHMETIC, MATHEMATICAL inequalities, STOCHASTIC convergence
- Abstract
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
20. Fast and scalable Lasso via stochastic Frank-Wolfe methods with a convergence guarantee.
- Author
- Frandi, Emanuele, Ñanculef, Ricardo, Lodi, Stefano, Sartori, Claudio, and Suykens, Johan
- Subjects
- STOCHASTIC convergence, ALGORITHMS, MACHINE learning, MATHEMATICAL optimization, REGRESSION analysis, MATHEMATICAL regularization
- Abstract
Frank-Wolfe (FW) algorithms have been often proposed over the last few years as efficient solvers for a variety of optimization problems arising in the field of machine learning. The ability to work with cheap projection-free iterations and the incremental nature of the method make FW a very effective choice for many large-scale problems where computing a sparse model is desirable. In this paper, we present a high-performance implementation of the FW method tailored to solve large-scale Lasso regression problems, based on a randomized iteration, and prove that the convergence guarantees of the standard FW method are preserved in the stochastic setting. We show experimentally that our algorithm outperforms several existing state of the art methods, including the Coordinate Descent algorithm by Friedman et al. (one of the fastest known Lasso solvers), on several benchmark datasets with a very large number of features, without sacrificing the accuracy of the model. Our results illustrate that the algorithm is able to generate the complete regularization path on problems of size up to four million variables in <1 min. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
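A deterministic Frank-Wolfe iteration for the l1-constrained Lasso is the building block that the paper randomizes. The sketch below uses the standard step size 2/(k+2) and the exact linear minimization oracle over the l1-ball; the function name and iteration budget are illustrative, and the paper's randomized oracle is not reproduced:

```python
import numpy as np

def fw_lasso(A, b, t=1.0, iters=500):
    """Frank-Wolfe for: minimize ||Ax - b||^2 subject to ||x||_1 <= t.
    Each iterate is a convex combination of l1-ball vertices, so the
    constraint holds throughout and iterates stay sparse."""
    n = A.shape[1]
    x = np.zeros(n)
    for k in range(iters):
        g = 2 * A.T @ (A @ x - b)        # gradient of the quadratic loss
        j = np.argmax(np.abs(g))         # linear minimization oracle over
        s = np.zeros(n)                  # the l1-ball: one signed vertex
        s[j] = -t * np.sign(g[j])
        gamma = 2.0 / (k + 2.0)          # standard FW step size
        x = (1 - gamma) * x + gamma * s
    return x
```

Each iteration touches a single coordinate of the model, which is the projection-free, incremental behavior the abstract credits for FW's scalability on very wide Lasso problems.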
21. On the Solution of Generalized Multiplicative Extremum Problems.
- Author
- Ashtiani, Alireza M. and Ferreira, Paulo A. V.
- Subjects
- MATHEMATICAL optimization, ALGORITHMS, EIGENVALUES, MATHEMATICAL inequalities, STOCHASTIC convergence, QUADRATIC programming
- Abstract
The paper addresses the problem of maximizing a sum of products of positive and concave real-valued functions over a convex feasible set. A reformulation based on the image of the feasible set under the vector-valued function describing the problem, combined with an adequate application of results from convex analysis, leads to an equivalent indefinite quadratic extremum problem with infinitely many linear constraints. Special properties of this latter problem allow it to be solved by an efficient relaxation algorithm. Some numerical tests illustrate the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
22. Pseudotransient Continuation for Solving Systems of Nonsmooth Equations with Inequality Constraints.
- Author
- Chen, J. and Qi, L.
- Subjects
- NUMERICAL solutions to nonlinear evolution equations, NONSMOOTH optimization, MATHEMATICAL inequalities, ALGORITHMS, STOCHASTIC convergence, CONTINUATION methods, MATHEMATICAL optimization, CONSTRAINT satisfaction
- Abstract
This paper investigates a pseudotransient continuation algorithm for solving a system of nonsmooth equations with inequality constraints. We first transform the inequality constrained system of nonlinear equations to an augmented nonsmooth system, and then employ the pseudotransient continuation algorithm for solving the corresponding augmented nonsmooth system. The method gets its global convergence properties from the dynamics, and inherits its local convergence properties from the semismooth Newton method. Finally, we illustrate the behavior of our approach by some numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
23. A nonmonotone quasi-Newton trust-region method of conic model for unconstrained optimization.
- Author
- Qu, Shao-Jian, Zhang, Qing-Pu, and Jiang, Su-Da
- Subjects
- CONIC sections, ALGORITHMS, MATHEMATICAL optimization, VECTOR analysis, MATRICES (Mathematics), STOCHASTIC convergence
- Abstract
This paper presents a new nonmonotone quasi-Newton trust-region algorithm of the conic model for the solution of unconstrained optimization problems, in which the computation of the horizontal vectors is easier and the approximate Hessian matrices can be kept positive definite. Under reasonable assumptions, the global convergence and the locally linear and superlinear convergence of the proposed algorithm are established. It is well known that in applying trust-region algorithms, the basic issue is how to solve the trust-region subproblem efficiently; to deal with this issue, an approximate solution method is developed in this paper. Note that the approximate solution method is not only computationally cheap but also preserves the strong convergence properties of exact solution methods. Numerical results are shown for a number of test problems from the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
24. A mixed strategy of combining evolutionary algorithms with multigrid methods.
- Author
- He, Jun and Kang, Lishan
- Subjects
- MULTIGRID methods (Numerical analysis), MATHEMATICAL optimization, NUMERICAL analysis, STOCHASTIC convergence, ALGORITHMS
- Abstract
Multigrid methods have been proven to be an efficient approach in accelerating the convergence rate of numerical algorithms for solving partial differential equations. This paper investigates whether multigrid methods are helpful to accelerate the convergence rate of evolutionary algorithms for solving global optimization problems. A novel multigrid evolutionary algorithm is proposed and its convergence is proven. The algorithm is tested on a set of 13 well-known benchmark functions. Experiment results demonstrate that multigrid methods can accelerate the convergence rate of evolutionary algorithms and improve their performance. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
25. Injection moulding optimisation of multi-class design variables using a PSO algorithm.
- Author
- Deng, Y.-M., Zheng, D., and Lu, X.-J.
- Subjects
- MOLDING (Founding), ALGORITHMS, MATHEMATICAL optimization, STOCHASTIC convergence, COMPUTER software
- Abstract
Injection moulding optimisation seeks to achieve the highest possible moulding quality under the specified constraints. To this end, the factors (design variables) affecting the moulding quality should be adjusted, including process parameters, mould design, part geometry, etc. Past work in this area has primarily focused on tuning the process parameters and mould design (e.g., gate location, runner and cooling channel layout), with less attention on the part geometry, and none addressing all of them together. To address this problem, this paper presents a PSO (particle swarm optimisation) algorithm for the optimisation of multi-class design variables, such as the part thickness, process parameters (melt temperature, mould temperature, injection time) and gate location. The optimisation targets different aspects of moulding quality, including part warpage, weld lines, air traps, and so on. In applying the PSO algorithm, the paper proposes a modified elite archiving method, which can expedite the convergence speed, hence improving the efficiency of the algorithm. A computer program was developed that automates steps such as adjusting the part thickness, the injection moulding process parameters and the gate location, activating the CAE software to simulate the injection moulding process, retrieving the simulation results, and evaluating the objective functions. The whole procedure iterates over a number of generations following the search process of the algorithm. A case study was also presented to illustrate as well as to test the proposed methodology, which was demonstrated to be both effective and efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
26. Guaranteeing Practical Convergence in Algorithms for Sensor and Source Localization.
- Author
- Fidan, Bariş, Dasgupta, Soura, and Anderson, Brian D. O.
- Subjects
- STOCHASTIC convergence, DETECTORS, ALGORITHMS, SENSOR networks, MATHEMATICAL optimization, MATHEMATICAL analysis, SIMULATION methods & models, ENGINEERING instruments, MATHEMATICAL functions
- Abstract
This paper considers the localization of a source or a sensor from distance measurements. We argue that linear algorithms proposed for this purpose are susceptible to poor noise performance. Instead, given a set of sensors/anchors of known positions and measured distances from them to the source/sensor to be localized, we propose a potentially nonconvex weighted cost function whose global minimum estimates the location one seeks. The contribution of this paper is to provide nontrivial ellipsoidal and polytopic regions surrounding these sensors/anchors of known positions, such that if the object to be localized is in this region, localization occurs by globally exponentially convergent gradient descent in the noise-free case. Exponential convergence in the noise-free case represents practical convergence, as it ensures graceful performance degradation in the presence of noise. These results guide the deployment of sensors/anchors so that small subsets can be made responsible for practical localization in geographical areas determined by our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
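The gradient descent localization in the entry above can be illustrated on a squared-range weighted cost J(x) = sum_i w_i (||x - a_i||^2 - d_i^2)^2. The exact cost function the authors minimize may differ, so this is a hedged sketch; the fixed step size, iteration count, and anchor layout are assumptions.

```python
import numpy as np

def localize(anchors, dists, x0, step=0.01, iters=5000, w=None):
    """Gradient descent on J(x) = sum_i w_i (||x - a_i||^2 - d_i^2)^2."""
    anchors = np.asarray(anchors, float)
    d2 = np.asarray(dists, float) ** 2
    w = np.ones(len(anchors)) if w is None else np.asarray(w, float)
    x = np.asarray(x0, float).copy()
    for _ in range(iters):
        diff = x - anchors                       # (n_anchors, dim)
        resid = np.sum(diff**2, axis=1) - d2     # squared-range residuals
        grad = 4.0 * (w * resid) @ diff          # gradient of J at x
        x -= step * grad
    return x
```

In the noise-free case, with the initial guess inside the anchor hull, the iterate recovers the true source position.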
27. On the Convergence of Multiplicative Update Algorithms for Nonnegative Matrix Factorization.
- Author
-
Chih-Jen Lin
- Subjects
STOCHASTIC convergence ,NONNEGATIVE matrices ,FACTORIZATION ,ALGORITHMS ,STATIONARY processes ,MATHEMATICAL optimization - Abstract
Nonnegative matrix factorization (NMF) is useful to find basis information of nonnegative data. Currently, multiplicative updates are a simple and popular way to find the factorization. However, for the common NMF approach of minimizing the Euclidean distance between approximate and true values, no proof has shown that multiplicative updates converge to a stationary point of the NMF optimization problem. Stationarity is important as it is a necessary condition of a local minimum. This paper discusses the difficulty of proving the convergence. We propose slight modifications of existing updates and prove their convergence. Techniques invented in this paper may be applied to prove the convergence for other bound-constrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
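The multiplicative updates whose convergence the entry above discusses are, in the Euclidean-distance case, the classical Lee-Seung rules. The sketch below shows those standard updates (with a small `eps` added to the denominators to avoid division by zero), not the paper's modified convergent variant.

```python
import numpy as np

def nmf_multiplicative(V, r, iters=2000, eps=1e-9, seed=0):
    """Standard Lee-Seung multiplicative updates for min ||V - W H||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1     # positive initialization
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

Because the updates multiply by nonnegative ratios, W and H stay nonnegative throughout, which is the property that makes the scheme popular despite the convergence subtleties the paper analyzes.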
28. A generating set search method using curvature information.
- Author
-
Frimannslund, Lennart and Steihaug, Trond
- Subjects
CURVATURE ,SMOOTHING (Numerical analysis) ,ALGORITHMS ,CALCULUS ,MATHEMATICAL optimization ,STOCHASTIC convergence - Abstract
Direct search methods have been an area of active research in recent years. On many real-world problems involving computationally expensive and often noisy functions, they are one of the few applicable alternatives. However, although these methods are usually easy to implement, robust and provably convergent in many cases, they suffer from a slow rate of convergence. Usually these methods do not take the local topography of the objective function into account. We present a new algorithm for unconstrained optimisation which is a modification to a basic generating set search method. The new algorithm tries to adapt its search directions to the local topography by accumulating curvature information about the objective function as the search progresses. The curvature information is accumulated over a region thus smoothing out noise and minor discontinuities. We present some theory regarding its properties, as well as numerical results. Preliminary numerical testing shows that the new algorithm outperforms the basic method most of the time, sometimes by significant relative margins, on noisy as well as smooth problems. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
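The basic generating set search that the entry above modifies can be sketched as a compass search: poll the positive and negative coordinate directions, move on improvement, and halve the step when no direction improves. The curvature-accumulation mechanism of the paper is not included.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Basic generating set (compass) search with the directions +/- e_i."""
    x = np.asarray(x0, float).copy()
    fx = f(x)
    n = len(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for d in np.vstack((np.eye(n), -np.eye(n))):  # poll the generating set
            trial = x + step * d
            ft = f(trial)
            if ft < fx:
                x, fx = trial, ft
                improved = True
                break
        if not improved:
            step *= 0.5                               # contract on failure
    return x, fx
```

No derivatives are evaluated, which is why such methods suit expensive or noisy objectives, at the cost of the slow convergence the abstract mentions.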
29. A novel algorithm to optimize complicated low-thrust trajectory.
- Author
-
Yuan Ren, Pingyuan Cui, and Enjie Luan
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,COMPUTATIONAL complexity ,GENETIC algorithms ,STOCHASTIC convergence ,TRAJECTORIES (Mechanics) - Abstract
Purpose - This paper aims to investigate a new optimization algorithm for complex orbit transfer missions with a low-thrust propulsion system, one that minimizes the drawbacks of traditional optimization methods, such as poor convergence, the difficulty of supplying initial guesses, and local optimality. Design/methodology/approach - First, the trajectory optimization problem is reduced to a nonlinear constrained parameter optimization using the concept of the traditional hybrid method. Then, a genetic algorithm (GA) is used to solve this parameter optimization problem after handling the constraints with simulated annealing (SA) and a random penalty function. Finally, a local optimization is applied to improve the precision of the final solutions. Findings - This algorithm not only keeps the advantages of the traditional hybrid method, such as high precision and smooth solutions, but also inherits the merits of the GA, which avoids initial guesswork and can obtain a globally optimal solution. Research limitations/implications - Further research is required to reduce the computational complexity when the transfer trajectory is very complex and/or has many adjustable variables. Practical implications - By using this method, globally optimal solutions of some complex missions, which could not be obtained by traditional methods, can be found. Originality/value - This method combines the GA with the traditional hybrid method, handles constraints with SA and random penalty functions, and thereby provides a rapidly convergent way to find the globally optimal low-thrust transfer orbit. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
30. Global Convergence of a Class of Discrete-Time Interconnected Pendulum-Like Systems.
- Author
-
Yang, Y., Duan, Z. S., and Huang, L.
- Subjects
STOCHASTIC convergence ,DISCRETE-time systems ,DIGITAL control systems ,SYSTEM analysis ,MATRIX inequalities ,LARGE scale systems ,ALGORITHMS ,MATHEMATICAL optimization ,MATHEMATICAL analysis - Abstract
This paper focuses on a class of discrete-time interconnected pendulum-like systems. Sufficient conditions for the global convergence of systems with both structured and unstructured uncertainties in its linear part are established in terms of linear matrix inequalities (LMIs). In the traditional decentralized control of large scale systems, the effects of interconnections were seldom studied. In this paper, a square matrix specifying the interconnecting structure is introduced in order to discuss the effects of nonlinear interconnections. It is shown that the global convergence of the whole interconnected system can be achieved by designing an appropriate interconnection matrix. In order to solve the nonlinear matrix inequalities (NMIs) arising in the synthesis problem, a global optimization algorithm is presented which can be used to handle a class of NMI problems. An illustrative example is given to demonstrate the applicability and validity of the main results and the presented algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
31. On the Method of Shortest Residuals for Unconstrained Optimization.
- Author
-
Pytlak, R. and Tarnawski, T.
- Subjects
CONJUGATE gradient methods ,APPROXIMATION theory ,NUMERICAL solutions to equations ,ITERATIVE methods (Mathematics) ,MATHEMATICAL optimization ,ALGORITHMS ,STOCHASTIC convergence ,NUMERICAL analysis ,CALCULUS of variations - Abstract
The paper discusses several versions of the method of shortest residuals, a specific variant of the conjugate gradient algorithm, first introduced by Lemaréchal and Wolfe and discussed by Hestenes in a quadratic case. In the paper we analyze the global convergence of the versions considered. Numerical comparison of these versions of the method of shortest residuals and an implementation of a standard Polak–Ribière conjugate gradient algorithm is also provided. It supports the claim that the method of shortest residuals is a viable technique, competitive to other conjugate gradient algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
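The standard Polak-Ribiere conjugate gradient algorithm used as the comparison baseline in the entry above can be sketched as follows. The Armijo backtracking line search, the PR+ truncation of the coefficient, and the steepest-descent restart are common illustrative choices, not the paper's shortest-residuals construction.

```python
import numpy as np

def pr_cg(f, grad, x0, iters=200, tol=1e-8):
    """Polak-Ribiere (PR+) nonlinear conjugate gradient with Armijo backtracking."""
    x = np.asarray(x0, float).copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:            # restart if d is not a descent direction
            d = -g
        t, fx = 1.0, f(x)
        # Armijo backtracking line search along d
        while t > 1e-12 and f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic the iterates reach the unique minimizer, which gives a quick sanity check of any CG implementation.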
32. Globally Convergent Optimization Algorithms on Riemannian Manifolds: Uniform Framework for Unconstrained and Constrained Optimization.
- Author
-
Yang, Y.
- Subjects
RIEMANNIAN manifolds ,MANIFOLDS (Mathematics) ,DIFFERENTIAL geometry ,MATHEMATICAL optimization ,MATHEMATICAL analysis ,ACCELERATION of convergence in numerical analysis ,STOCHASTIC convergence ,ALGORITHMS ,MATHEMATICS - Abstract
This paper proposes several globally convergent geometric optimization algorithms on Riemannian manifolds, which extend some existing geometric optimization techniques. Since any set of smooth constraints in the Euclidean space R^n (corresponding to constrained optimization) and the R^n space itself (corresponding to unconstrained optimization) are both special Riemannian manifolds, and since these algorithms are developed on general Riemannian manifolds, the techniques discussed in this paper provide a uniform framework for constrained and unconstrained optimization problems. Unlike some earlier works, the new algorithms have fewer restrictions, both in the convergence results and in practice. For example, global minimization in the one-dimensional search is not required. All the algorithms addressed in this paper are globally convergent. For some special Riemannian manifolds other than R^n, the new algorithms are very efficient. Convergence rates are obtained. Applications are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
33. Toward the Training of Feed-Forward Neural Networks With the D-Optimum Input Sequence.
- Author
-
Witczak, Marcin
- Subjects
STOCHASTIC convergence ,EXPERIMENTAL design ,MATHEMATICAL optimization ,TRAINING ,ARTIFICIAL neural networks ,ALGORITHMS - Abstract
The problem under consideration is to obtain a measurement schedule for training neural networks. This task is perceived as an experimental design in a given design space that is obtained in such a way as to minimize the difference between the neural network and the system being considered. This difference can be expressed in many different ways, and one of them, namely the D-optimality criterion, is used in this paper. In particular, the paper presents a unified and comprehensive treatment of this problem by discussing the existing and previously unpublished properties of the optimum experimental design (OED) for neural networks. The consequences of the above properties are discussed as well. A hybrid algorithm that can be used for both the training and data development of neural networks is another important contribution of this paper. A careful analysis of the algorithm is presented, and a comprehensive convergence analysis, carried out with the help of the Lyapunov method, is given. The paper contains a number of numerical examples that justify the application of the OED theory for neural networks. Moreover, an industrial application example is given that deals with the valve actuator. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
34. New global optimization algorithm via dynamic control.
- Author
-
Kiyotaka Shimizu and Kosuke Suzaki
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,DYNAMIC programming ,MATHEMATICAL programming ,STOCHASTIC convergence ,MATHEMATICS - Abstract
This paper proposes a new algorithm for the unconstrained optimization problem, which is derived as an application of the control theory called Direct Gradient Descent Control. The static optimization problem is solved by using a dynamic controller in which the convergence speed can be greatly accelerated. The major idea is to consider the objective function F(x) and its time derivative dF(x(t))/dt as the evaluation criteria, and to apply the gradient descent method. It is verified by simulation that the proposed dynamic mathematical programming has a very great capability to converge to the optimal solution. It is also interesting to note that the method has to a certain extent the ability to find not only local optimal solutions, but also global optimal solutions. This paper improves the basic algorithm of dynamic mathematical programming so that it is suited to global optimization. The new algorithm generates movement toward the global optimal solution by utilizing two different trajectories with interaction (the chaotic trajectory and the convergence trajectory). It is verified by simulation that the new algorithm functions as a global optimization procedure with a high success probability. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(6): 20–27, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20094 [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
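The dynamic-controller viewpoint in the entry above treats optimization as a continuous-time system steered toward a minimizer. A minimal sketch of that view is a forward-Euler integration of the plain gradient flow dx/dt = -grad F(x); the paper's interacting chaotic and convergence trajectories are not modelled, and the step size and step count are assumptions.

```python
import numpy as np

def gradient_flow(grad, x0, dt=0.01, steps=5000):
    """Forward-Euler integration of the gradient flow dx/dt = -grad F(x)."""
    x = np.asarray(x0, float).copy()
    for _ in range(steps):
        x -= dt * grad(x)   # one Euler step of the continuous dynamics
    return x
```

For F(x) = (x_0 - 3)^2 + (x_1 + 1)^2 the flow contracts exponentially onto the minimizer (3, -1).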
35. Denoising of Smooth Images Using L1-Fitting.
- Author
-
Kärkkäinen, T., Kunisch, K., and Majava, K.
- Subjects
ALGORITHMS ,NUMERICAL analysis ,STOCHASTIC convergence ,MATHEMATICAL optimization ,PROBLEM solving - Abstract
In this paper, denoising of smooth (H^1_0-regular) images is considered. The purpose of the paper is basically twofold: first, to compare the denoising methods based on L1- and L2-fitting; second, to analyze and realize an active-set method for solving the non-smooth optimization problem arising from the former approach. More precisely, we formulate the algorithm, prove its convergence, and give an efficient numerical realization. Several numerical experiments are presented, in which the convergence of the proposed active-set algorithm is studied and the denoising properties of the methods based on L1- and L2-fitting are compared. A heuristic method for determining the regularization parameter is also presented and tested. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
36. A new nonmonotone filter Barzilai–Borwein method for solving unconstrained optimization problems.
- Author
-
Arzani, F. and Peyghami, M. Reza
- Subjects
MONOTONE operators ,MATHEMATICAL optimization ,ALGORITHMS ,STOCHASTIC convergence ,NUMERICAL analysis ,MATHEMATICAL analysis - Abstract
In this paper, a finite filter is used in the structure of the Barzilai–Borwein (BB) gradient method in order to propose a new modified BB algorithm for solving large-scale unconstrained optimization problems. Our algorithm is equipped with a relaxed nonmonotone line search technique which allows the algorithm to enjoy the nonmonotonicity properties from scratch. Under some suitable conditions, the global convergence property of the new proposed algorithm is established. Numerical results on some test problems in the CUTEr library show the efficiency and effectiveness of the new algorithm in practice too. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
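The Barzilai-Borwein gradient method that the filter algorithm in the entry above builds on can be sketched as follows. Only the plain BB1 step is shown; the finite filter and the relaxed nonmonotone line search of the paper are not included, and the fallback step length is an assumption.

```python
import numpy as np

def bb_gradient(grad, x0, iters=100, alpha0=0.1):
    """Plain Barzilai-Borwein gradient method with the BB1 step length."""
    x = np.asarray(x0, float).copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # BB1 step: alpha = s's / s'y, with a fallback when s'y is tiny
        alpha = (s @ s) / sy if sy > 1e-12 else alpha0
        x, g = x_new, g_new
    return x
```

The step lengths make the objective values nonmonotone along the iterates, which is exactly why BB methods are usually paired with nonmonotone line searches such as the one in the paper.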
37. A globally and quadratically convergent algorithm with efficient implementation for unconstrained optimization.
- Author
-
Yang, Yaguang
- Subjects
STOCHASTIC convergence ,NONLINEAR theories ,GLOBAL analysis (Mathematics) ,ALGORITHMS ,MATHEMATICAL optimization ,HESSIAN matrices - Abstract
In this paper, an efficient modified Newton-type algorithm is proposed for nonlinear unconstrained optimization problems. The modified Hessian is a convex combination of the identity matrix (for the steepest descent algorithm) and the Hessian matrix (for the Newton algorithm). The coefficients of the convex combination are dynamically chosen in every iteration. The algorithm is proved to be globally and quadratically convergent for (convex and non-convex) nonlinear functions. Efficient implementation is described. Numerical tests on widely used CUTEr test problems are conducted for the new algorithm. The test results are compared with those obtained by the MATLAB optimization toolbox function fminunc. The test results are also compared with those obtained by some established and state-of-the-art algorithms, such as a limited memory BFGS, a descent and conjugate gradient algorithm, and a limited memory and descent conjugate gradient algorithm. The comparisons show that the new algorithm is promising. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
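The modified Hessian in the entry above is a convex combination theta*I + (1 - theta)*H of the identity and the Hessian. The sketch below illustrates the idea with a simple eigenvalue-based rule for theta and a full Newton-type step; both the rule and the absence of a line search are illustrative assumptions, not the paper's dynamic coefficient choice or globalization.

```python
import numpy as np

def modified_newton(grad, hess, x0, iters=50, delta=1e-3):
    """Newton-type step with modified Hessian M = theta*I + (1 - theta)*H.

    theta is chosen so the smallest eigenvalue of M is at least delta,
    i.e. M is safely positive definite (an illustrative rule)."""
    x = np.asarray(x0, float).copy()
    n = len(x)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:
            break
        H = np.asarray(hess(x), float)
        lam_min = np.linalg.eigvalsh(H)[0]
        if lam_min >= delta:
            theta = 0.0                                  # pure Newton is safe
        else:
            theta = min(1.0, (delta - lam_min) / (1.0 - lam_min))
        M = theta * np.eye(n) + (1.0 - theta) * H        # blended Hessian
        x -= np.linalg.solve(M, g)
    return x
```

When theta = 0 this is Newton's method; as theta approaches 1 the direction approaches steepest descent, which is the trade-off the abstract describes.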
38. General Split Equality Equilibrium Problems with Application to Split Optimization Problems.
- Author
-
Chang, Shih-sen, Wang, Lin, Wang, Xiong, and Wang, Gang
- Subjects
MATHEMATICAL optimization ,HILBERT space ,STOCHASTIC convergence ,ALGORITHMS ,MATHEMATICAL sequences - Abstract
The purpose of this paper is to introduce and study the general split equality equilibrium problem and the general split equilibrium problem in Hilbert spaces. In order to solve these problems, a new simultaneous iterative algorithm is proposed and several strong convergence theorems for the sequences generated by the algorithm are proved. As applications, we utilize our results to study the general split equality optimization problem and the general split optimization problem. The results presented in the paper extend and improve some recent results. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
39. A Numerical Analysis of Electromagnetic Scattering of Perfect Conducting Cylinders by Means of Discrete Singularity Method Improved by Optimization Process.
- Author
-
Nishimura, Mampei, Takamatsu, Shuji, and Shigesawa, Hiroshi
- Subjects
SCATTERING (Physics) ,NUMERICAL analysis ,MATHEMATICAL optimization ,WAVE functions ,ALGORITHMS ,STOCHASTIC convergence - Abstract
As a method of solving the electromagnetic scattering by perfectly conducting cylinders, the discrete singularity method is discussed. In this method, a finite number of singular points is distributed within the scatterer, and the wave function, composed of a linear combination of solutions (each of which is derived from a Helmholtz equation with a single one of the many singularities), is matched with the incident wave function on the boundary. The remaining important problem in this method is how to distribute these singularities in order to obtain an accurate solution with faster convergence and fewer singular points for an arbitrary configuration of the scatterer. The purpose of this paper is to present a reasonable solution to this problem. The essence of the method is that, by taking as unknown quantities not only the unknown amplitude coefficients involved in the linear combination of the solutions but also the coordinates of the singularities, the scattered wave is matched on the boundary in the least-squares sense. This problem is linear in the amplitude coefficients but nonlinear in the singularity coordinates. Thus we solve this problem using the algorithm of Marquardt et al., a nonlinear optimization process. This paper gives the general description and indicates its effectiveness by showing a few examples of the scattering problem for rectangular cylinders. [ABSTRACT FROM AUTHOR]
- Published
- 1984
- Full Text
- View/download PDF
40. Linesearch methods for bilevel split pseudomonotone variational inequality problems.
- Author
-
Anh, Tran Viet
- Subjects
VARIATIONAL inequalities (Mathematics) ,MATHEMATICAL mappings ,STOCHASTIC convergence ,ALGORITHMS ,FIXED point theory ,MATHEMATICAL optimization - Abstract
In this paper, we propose Linesearch methods for solving a bilevel split variational inequality problem (BSVIP) involving a strongly monotone mapping in the upper-level problem and pseudomonotone mappings in the lower-level one. A strongly convergent algorithm for such a BSVIP is proposed and analyzed. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
41. THREE CONVERGENCE RESULTS FOR INEXACT ORBITS OF NONEXPANSIVE MAPPINGS.
- Author
-
ZASLAVSKI, ALEXANDER J.
- Subjects
STOCHASTIC convergence ,MATHEMATICAL regularization ,MATHEMATICS theorems ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
In this paper we study the convergence of inexact iterates of nonexpansive mappings which take a nonempty, closed subset of a complete metric space into the space, under the presence of summable errors, and generalize the known results in the literature for nonexpansive self-mappings of the complete metric space. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
42. Handling many-objective optimisation problems with R2 indicator and decomposition-based particle swarm optimiser.
- Author
-
Liu, Jianchang, Li, Fei, Kong, Xiangyong, and Huang, Peiqiu
- Subjects
PARTICLE swarm optimization ,STOCHASTIC convergence ,EVOLUTIONARY algorithms ,ALGORITHMS ,MATHEMATICAL optimization - Abstract
An R2 indicator-based multi-objective particle swarm optimiser (R2-MOPSO) can obtain well-converged and well-distributed solutions when solving two- and three-objective optimisation problems. However, R2-MOPSO has difficulty tackling many-objective optimisation problems, because balancing convergence and diversity is a key issue in high-dimensional objective space. In order to address this issue, this paper proposes a novel algorithm, named R2-MaPSO, which combines the R2 indicator and a decomposition-based archiving pruning strategy in a particle swarm optimiser for many-objective optimisation problems. The innovations of the proposed algorithm mainly comprise three crucial factors: (1) a bi-level archive maintenance approach based on the R2 indicator and an objective space decomposition strategy is designed to balance convergence and diversity; (2) the global-best leader selection is based on the R2 indicator and the personal-best leader selection is based on Pareto dominance, while the objective space decomposition leader selection adopts feedback information from the bi-level archive; (3) a new velocity update method is introduced to enhance the exploration and exploitation ability. In addition, an elitist learning strategy and a smart Gaussian learning strategy are embedded into R2-MaPSO to help the algorithm jump out of local optimal fronts. The performance of the proposed algorithm is validated and compared with some algorithms on a number of unconstrained benchmark problems, i.e. the DTLZ1-DTLZ4 and WFG test suites from 3 to 15 objectives. Experimental results have demonstrated a better performance of the proposed algorithm compared with several multi-objective particle swarm optimisers and multi-objective evolutionary algorithms for many-objective optimisation problems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
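The unary R2 indicator used for leader selection and archiving in the entry above can be computed, in its common weighted-Tchebycheff form, as follows: average over the weight vectors the best scalarized value achievable by the approximation set (lower is better for minimization). The weight vectors and ideal point are user-chosen inputs, and this is the standard indicator rather than the authors' full bi-level archiving scheme.

```python
import numpy as np

def r2_indicator(points, weights, ideal):
    """Unary R2 indicator with the weighted Tchebycheff utility.

    points:  approximation set, shape (n_points, n_objectives)
    weights: weight vectors, shape (n_weights, n_objectives)
    ideal:   ideal (reference) point, shape (n_objectives,)"""
    A = np.asarray(points, float) - np.asarray(ideal, float)
    # for each weight vector, the best (smallest) Tchebycheff value in the set
    best = [min(np.max(w * a) for a in A) for w in np.asarray(weights, float)]
    return float(np.mean(best))
```

A set closer to the ideal point in every scalarized direction receives a strictly smaller R2 value, which is what makes the indicator usable as a selection criterion.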
43. General parameterized proximal point algorithm with applications in statistical learning.
- Author
-
Bai, Jianchao, Li, Jicheng, Dai, Pingfan, and Li, Jiaofen
- Subjects
ALGORITHMS ,STATISTICAL learning ,CONVEX functions ,MATHEMATICAL optimization ,STOCHASTIC convergence - Abstract
In the literature, there have been only a few attempts to design the parameters in the proximal point algorithm (PPA), especially for multi-objective convex optimization. Introducing some parameters into the PPA can make it more flexible and attractive. Mainly motivated by our recent work [Bai et al. A parameterized proximal point algorithm for separable convex optimization. Optim Lett. (2017) doi:10.1007/s11590-017-1195-9], in this paper we develop a general parameterized PPA with a relaxation step for solving multi-block separable structured convex programming. By making use of the variational inequality and some mathematical identities, the global convergence and the worst-case convergence rate of the proposed algorithm are established. Preliminary numerical experiments on solving a sparse matrix minimization problem from statistical learning validate that our algorithm is more efficient than several state-of-the-art algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
44. Optimization over the Pareto outcome set associated with a convex bi-objective optimization problem: theoretical results, deterministic algorithm and application to the stochastic case.
- Author
-
Bonnel, Henri and Collonge, Julien
- Subjects
PARETO optimum ,MATHEMATICAL optimization ,ALGORITHMS ,RANDOM functions (Mathematics) ,STOCHASTIC convergence - Abstract
Our paper consists of two main parts. In the first one, we deal with the deterministic problem of minimizing a real valued function f over the Pareto outcome set associated with a deterministic convex bi-objective optimization problem (BOP), in the particular case where f depends on the objectives of (BOP), i.e. we optimize over the Pareto set in the outcome space. In general, the optimal value U of such a kind of problem cannot be computed directly, so we propose a deterministic outcome space algorithm whose principle is to give at every step a range (lower bound, upper bound) that contains U. Then we show that for any given error bound, the algorithm terminates in a finite number of steps. In the second part of our paper, in order to handle also the stochastic case, we consider the situation where the two objectives of (BOP) are given by expectations of random functions, and we deal with the stochastic problem (S) of minimizing a real valued function f over the Pareto outcome set associated with this Stochastic bi-objective Optimization Problem (SBOP). Because of the presence of random functions, the Pareto set associated with this type of problem cannot be explicitly given, and thus it is not possible to compute the optimal value V of problem (S). That is why we consider a sequence of Sample Average Approximation problems (SAA-N, where N is the sample size) whose optimal values converge almost surely to V as the sample size N goes to infinity. Assuming f nondecreasing, we show that the convergence rate is exponential, and we propose a confidence interval for V. Finally, some computational results are given to illustrate the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
45. Recent advances in trust region algorithms.
- Author
-
Yuan, Ya-xiang
- Subjects
MATHEMATICAL optimization ,NUMERICAL analysis ,ITERATIVE methods (Mathematics) ,ALGORITHMS ,STOCHASTIC convergence ,MATHEMATICAL regularization - Abstract
Trust region methods are a class of numerical methods for optimization. Unlike line search type methods where a line search is carried out in each iteration, trust region methods compute a trial step by solving a trust region subproblem where a model function is minimized within a trust region. Due to the trust region constraint, nonconvex models can be used in trust region subproblems, and trust region algorithms can be applied to nonconvex and ill-conditioned problems. Normally it is easier to establish the global convergence of a trust region algorithm than that of its line search counterpart. In the paper, we review recent results on trust region methods for unconstrained optimization, constrained optimization, nonlinear equations and nonlinear least squares, nonsmooth optimization and optimization without derivatives. Results on trust region subproblems and regularization methods are also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
46. Convergence analysis of a general iterative algorithm for finding a common solution of split variational inclusion and optimization problems.
- Author
-
Sitthithakerngkiet, Kanokwan, Deepho, Jitsupa, Martínez-Moreno, Juan, and Kumam, Poom
- Subjects
STOCHASTIC convergence ,ALGORITHMS ,MATHEMATICAL optimization ,FIXED point theory ,HILBERT space ,NONEXPANSIVE mappings - Abstract
The purpose of this paper is to introduce a general iterative method for finding a common element of the set of common fixed points of an infinite family of nonexpansive mappings and the solution set of the split variational inclusion problem in the framework of Hilbert spaces. A strong convergence theorem for the sequences generated by the proposed iterative scheme is obtained. In the last section, we present some computational examples to illustrate the assumptions of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
47. A Derivative-Free Trust Region Algorithm with Nonmonotone Filter Technique for Bound Constrained Optimization.
- Author
-
Gao, Jing, Cao, Jian, and Yang, Yueting
- Subjects
NONDIFFERENTIABLE functions ,MATHEMATICAL optimization ,ALGORITHMS ,MATHEMATICAL bounds ,STOCHASTIC convergence - Abstract
We propose a derivative-free trust region algorithm with a nonmonotone filter technique for bound constrained optimization. The derivative-free strategy is applied to special minimization functions whose derivatives are not all available. The nonmonotone filter technique ensures not only the trust region feature but also global convergence under reasonable assumptions. Numerical experiments demonstrate that the new algorithm is effective for bound constrained optimization. Locally optimal parameters with respect to overall computational time on a set of test problems are identified. The best parameter values found by the presented algorithm differ from the traditionally used values, which indicates that the algorithm proposed in this paper has a certain advantage for nondifferentiable optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
48. Modified multiple search cooperative foraging strategy for improved artificial bee colony optimization with robustness analysis.
- Author
-
Harfouchi, F., Habbi, H., Ozturk, C., and Karaboga, D.
- Subjects
BEE swarming ,ALGORITHMS ,MATHEMATICAL optimization ,STOCHASTIC convergence ,GROUP work in education - Abstract
Extending the concept of partitioning bees into subgroups of foragers to the onlooker phase of the cooperative learning artificial bee colony (CLABC) strategy is not only feasible from an algorithmic viewpoint but may also reflect real foraging behaviour of bee swarms. With this in mind, this paper studies whether the performance of CLABC can be enhanced by developing a new model for the proposed cooperative foraging scheme. Relying on this idea, we design a modified cooperative learning artificial bee colony algorithm, referred to as mCLABC. The design procedure is built upon a partitioning scheme for onlookers that generates subgroups of foragers which may evolve differently by using specific solution search rules. In order to improve the interplay of local and global search at both the employed and onlooker levels, the multiple search mechanism is further tuned and scheduled according to a random selection strategy defined on the evolving parameters. Moreover, a detailed performance and robustness study of the proposed algorithm, analysing the impact of different structural and parametric settings, is conducted on benchmark optimization problems. Superior convergence performance, better solution quality, and strong robustness are the main features of the proposed strategy in comparison with recent ABC variants and advanced methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
49. A Line-Search Algorithm Inspired by the Adaptive Cubic Regularization Framework and Complexity Analysis.
- Author
-
Bergou, El Houcine, Diouane, Youssef, and Gratton, Serge
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,NONCONVEX programming ,MATHEMATICAL regularization ,STOCHASTIC convergence - Abstract
Adaptive regularized framework using cubics has emerged as an alternative to line-search and trust-region algorithms for smooth nonconvex optimization, with an optimal complexity among second-order methods. In this paper, we propose and analyze the use of an iteration dependent scaled norm in the adaptive regularized framework using cubics. Within such a scaled norm, the obtained method behaves as a line-search algorithm along the quasi-Newton direction with a special backtracking strategy. Under appropriate assumptions, the new algorithm enjoys the same convergence and complexity properties as adaptive regularized algorithm using cubics. The complexity for finding an approximate first-order stationary point can be improved to be optimal whenever a second-order version of the proposed algorithm is regarded. In a similar way, using the same scaled norm to define the trust-region neighborhood, we show that the trust-region algorithm behaves as a line-search algorithm. The good potential of the obtained algorithms is shown on a set of large-scale optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
50. Block SOR methods for the solution of indefinite least squares problems.
- Author
-
Liu, Qiaohua and Liu, Aijing
- Subjects
LEAST squares ,STOCHASTIC convergence ,JACOBI method ,QUADRATIC forms ,ALGORITHMS ,MATHEMATICAL optimization ,FACTORIZATION - Abstract
This paper describes a technique for constructing block SOR methods for the solution of the large and sparse indefinite least squares problem which involves minimizing a certain type of indefinite quadratic form. Two block SOR-based algorithms and convergence results are presented. The optimum parameters for the methods are also given. It has been shown both theoretically and numerically that the optimum block SOR methods have a faster convergence than block Jacobi and Gauss-Seidel methods. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
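The point (non-block) SOR sweep underlying the block methods in the entry above can be sketched for a generic linear system Ax = b. The paper's block partitioning and the indefinite least squares structure are not reproduced, and the relaxation parameter below is an arbitrary choice rather than the optimum parameter the paper derives.

```python
import numpy as np

def sor(A, b, omega=1.2, iters=200, x0=None):
    """Classical (point) successive over-relaxation sweeps for A x = b."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, float).copy()
    for _ in range(iters):
        for i in range(n):
            # Gauss-Seidel value for component i, using latest updates
            sigma = A[i] @ x - A[i, i] * x[i]
            gs = (b[i] - sigma) / A[i, i]
            # relax between the old value and the Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * gs
    return x
```

With omega = 1 this reduces to Gauss-Seidel; for symmetric positive definite A, any 0 < omega < 2 converges, and a well-chosen omega accelerates convergence, which is the effect the block variants in the paper optimize.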