Search Results (25 results)
2. A Derivative-Free Trust Region Algorithm with Nonmonotone Filter Technique for Bound Constrained Optimization.
- Author
Gao, Jing, Cao, Jian, and Yang, Yueting
- Subjects
NONDIFFERENTIABLE functions, MATHEMATICAL optimization, ALGORITHMS, MATHEMATICAL bounds, STOCHASTIC convergence
- Abstract
We propose a derivative-free trust region algorithm with a nonmonotone filter technique for bound constrained optimization. The derivative-free strategy is applied to minimization problems in which not all derivatives are available. The nonmonotone filter technique preserves the trust region feature and ensures global convergence under reasonable assumptions. Numerical experiments demonstrate that the new algorithm is effective for bound constrained optimization. Locally optimal parameters with respect to overall computational time on a set of test problems are identified. The best parameter values found by the algorithm differ from traditionally used values, and their performance indicates that the proposed algorithm has a certain advantage for nondifferentiable optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
3. A Three-Term Conjugate Gradient Algorithm with Quadratic Convergence for Unconstrained Optimization Problems.
- Author
Wu, Gaoyi, Li, Yong, and Yuan, Gonglin
- Subjects
CONJUGATE gradient methods, STOCHASTIC convergence, MATHEMATICAL optimization, CONVEX functions
- Abstract
This paper further studies the WYL conjugate gradient (CG) formula with β_k^WYL ≥ 0 and presents a three-term WYL CG algorithm, which has the sufficient descent property without any conditions. The global convergence and the linear convergence are proved; moreover, the n-step quadratic convergence with a restart strategy is established if the initial step length is appropriately chosen. Numerical experiments on large-scale problems, including standard unconstrained optimization problems and engineering (benchmark) problems, show that the new algorithm is competitive with other similar CG algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
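A generic three-term CG iteration of the kind described in this abstract can be sketched as follows. This is an illustrative assumption, not the authors' exact algorithm: the β formula mimics the WYL style (clipped at zero), the third term enforces sufficient descent, and the line search is plain Armijo backtracking rather than the paper's choices.

```python
import numpy as np

def three_term_cg(f, grad, x0, iters=500, tol=1e-8):
    """Sketch of a three-term CG method with a WYL-style nonnegative beta.
    Illustrative only; the paper's exact formulas may differ."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        gn2 = g @ g
        if np.sqrt(gn2) < tol:
            break
        # Armijo backtracking line search along the descent direction d
        alpha, fx, slope = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        # WYL-style beta (nonnegative by construction)
        y = g_new - (np.linalg.norm(g_new) / np.sqrt(gn2)) * g
        beta = max(0.0, (g_new @ y) / gn2)
        # Third term enforces g_new . d = -||g_new||^2 (sufficient descent)
        theta = beta * (g_new @ d) / (g_new @ g_new + 1e-16)
        d = -g_new + beta * d - theta * g_new
        x, g = x_new, g_new
    return x

# Usage on a convex quadratic f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = three_term_cg(f, grad, np.zeros(2))
```

The choice of the third term here makes the descent property hold identically, independent of the line search, which is the kind of "sufficient descent without any conditions" the abstract refers to.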
4. On the Theoretical Analysis of the Plant Propagation Algorithms.
- Author
Sulaiman, Muhammad, Salhi, Abdellah, Khan, Asfandyar, Muhammad, Shakoor, and Khan, Wali
- Subjects
MATHEMATICAL optimization, ALGORITHMS, HEURISTIC algorithms, PROBLEM solving, STOCHASTIC convergence
- Abstract
Plant Propagation Algorithms (PPA) are powerful and flexible solvers for optimisation problems. They are nature-inspired heuristics which can be applied to any optimisation/search problem. There is a growing body of research on PPA in the literature, but it is mainly experimental; little has been done on the theoretical front. Given the prominence this algorithm is gaining in terms of performance on benchmark problems as well as practical ones, some theoretical insight into its convergence is needed. The current paper aims to fulfil this need by providing a sketch of a global convergence analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
5. Total Variation Image Restoration Method Based on Subspace Optimization.
- Author
Liu, XiaoGuang and Gao, XingBao
- Subjects
IMAGE reconstruction, SUBSPACES (Mathematics), MATHEMATICAL optimization, STOCHASTIC convergence, ENERGY function
- Abstract
The alternating direction method is widely applied in total variation image restoration; however, its search directions are not accurate enough. In this paper, a method based on subspace optimization is proposed to improve its performance. The method corrects the search directions of the primal alternating direction method by using the energy function and a linear combination of the previous search directions. In addition, the convergence of the primal alternating direction method is proven under weaker conditions. The convergence of the corrected method then follows easily, since it shares the convergence properties of the primal alternating direction method. Finally, numerical examples are given to show the performance of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
6. A Conjugate Gradient Algorithm under Yuan-Wei-Lu Line Search Technique for Large-Scale Minimization Optimization Models.
- Author
Li, Xiangrong, Wang, Songhua, Jin, Zhongzhou, and Pham, Hongtruong
- Subjects
CONJUGATE gradient methods, ALGORITHMS, STOCHASTIC convergence, CONVEX functions, MATHEMATICAL optimization
- Abstract
This paper gives a modified Hestenes-Stiefel (HS) conjugate gradient algorithm under the Yuan-Wei-Lu inexact line search technique for large-scale unconstrained optimization problems. The proposed algorithm has the following properties: (1) the new search direction possesses not only a sufficient descent property but also a trust region feature; (2) the algorithm is globally convergent for nonconvex functions; (3) numerical experiments show that the new algorithm is more effective than similar algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
7. An Improved SPEA2 Algorithm with Adaptive Selection of Evolutionary Operators Scheme for Multiobjective Optimization Problems.
- Author
Zhao, Fuqing, Lei, Wenchang, Ma, Weimin, Liu, Yang, and Zhang, Chuck
- Subjects
EVOLUTIONARY algorithms, MATHEMATICAL optimization, MARKOV processes, POLYNOMIALS, STOCHASTIC convergence
- Abstract
A fixed evolutionary mechanism is usually adopted in multiobjective evolutionary algorithms, and their operators remain static during the evolutionary process, which prevents the algorithm from fully exploiting the search space and makes it prone to becoming trapped in local optima. In this paper, an SPEA2 algorithm based on adaptive selection of evolutionary operators (AOSPEA) is proposed. The proposed algorithm adaptively selects among simulated binary crossover, polynomial mutation, and the differential evolution operator during the evolutionary process according to their contribution to the external archive. Meanwhile, the convergence of the proposed algorithm is analyzed with a Markov chain. Simulation results on standard benchmark functions reveal that the proposed algorithm outperforms other classical multiobjective evolutionary algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
8. Enhanced Simulated Annealing for Solving Aggregate Production Planning.
- Author
Abu Bakar, Mohd Rizam, Bakheet, Abdul Jabbar Khudhur, Kamil, Farah, Kalaf, Bayda Atiya, Abbas, Iraq T., and Soon, Lee Lai
- Subjects
SIMULATED annealing, PRODUCTION planning, LINEAR programming, MATHEMATICAL optimization, MATHEMATICAL variables, STOCHASTIC convergence
- Abstract
Simulated annealing (SA) is an effective means of addressing difficult optimisation problems and is now a common research discipline with several productive applications, such as production planning. Because aggregate production planning (APP) is one of the most important problems in production planning, in this paper we present a multiobjective linear programming model for APP and optimise it with SA. In the course of optimising the APP problem, we found that the capability of SA was inadequate and its performance substandard, particularly for a sizable constrained APP problem with many decision variables and constraints. Because the algorithm works sequentially, each current state generates only one successor state, which slows the search; moreover, the search may fall into a local minimum that is the best solution in only part of the solution space. To enhance its performance and alleviate these deficiencies, a modified SA (MSA) is proposed that augments the search by starting with N+1 solutions instead of one. To analyse and compare the operation of MSA with standard SA and harmony search (HS), real data from an industrial company and simulations are used for evaluation. The results show that, compared to SA and HS, MSA offers better-quality solutions with regard to convergence and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
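The core modification in this abstract, running SA from several states instead of one, can be sketched as follows. The cooling schedule, neighbour move, and test function here are illustrative assumptions, not the paper's APP model.

```python
import math
import random

def multi_start_sa(f, dim, n_states=5, t0=10.0, cooling=0.95, steps=200, seed=1):
    """Sketch of SA run from several states in parallel (the MSA idea of
    starting with N+1 solutions); moves and schedule are illustrative."""
    rng = random.Random(seed)
    states = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_states)]
    best = min(states, key=f)
    t = t0
    for _ in range(steps):
        for i, x in enumerate(states):
            cand = [xi + rng.gauss(0, 0.5) for xi in x]  # Gaussian neighbour move
            delta = f(cand) - f(x)
            # Accept improvements always, worse moves with Boltzmann probability
            if delta < 0 or rng.random() < math.exp(-delta / t):
                states[i] = cand
                if f(cand) < f(best):
                    best = cand
        t *= cooling  # geometric cooling
    return best

sphere = lambda x: sum(v * v for v in x)
sol = multi_start_sa(sphere, dim=2)
```

Keeping several states widens the explored region early on, which is exactly the local-minimum weakness of single-chain SA that the abstract describes.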
9. Particle Swarm and Bacterial Foraging Inspired Hybrid Artificial Bee Colony Algorithm for Numerical Function Optimization.
- Author
Mao, Li, Mao, Yu, Zhou, Changxi, Li, Chaofeng, Wei, Xiao, and Yang, Hong
- Subjects
PARTICLE swarm optimization, ANT algorithms, NUMERICAL functions, MATHEMATICAL optimization, STOCHASTIC convergence
- Abstract
The artificial bee colony (ABC) algorithm performs well in discovering optimal solutions to difficult optimization problems, but it has weak local search ability and easily becomes trapped in local optima. In this paper, we introduce the chemotactic behavior of Bacterial Foraging Optimization into the employed bees and adopt the particle swarm optimization principle of moving particles toward the best solutions to improve the global search ability of the onlooker bees, yielding a hybrid artificial bee colony (HABC) algorithm. To obtain a global optimal solution efficiently, HABC converges rapidly in the early stages of the search, and its search range contracts dynamically during the late stages. Our experimental results on 16 benchmark functions of CEC 2014 show that HABC achieves significant improvement in accuracy and convergence rate compared with the standard ABC, best-so-far ABC, directed ABC, Gaussian ABC, improved ABC, and memetic ABC algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
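The PSO-inspired onlooker move, pulling a candidate toward the best-so-far solution on top of the usual ABC neighbour term, can be sketched like this. It is a simplified fragment with assumed weights, not the full HABC of the paper.

```python
import random

def onlooker_move(x, partner, gbest, rng, phi=1.0, psi=1.5):
    """One onlooker-bee update: the standard ABC neighbour term plus a
    PSO-style attraction toward the global best (illustrative weights)."""
    j = rng.randrange(len(x))  # ABC perturbs a single randomly chosen dimension
    new = list(x)
    new[j] = (x[j]
              + phi * rng.uniform(-1, 1) * (x[j] - partner[j])   # ABC term
              + psi * rng.uniform(0, 1) * (gbest[j] - x[j]))     # PSO-style term
    return new

rng = random.Random(0)
x, partner, gbest = [2.0, -1.0], [1.5, 0.5], [0.0, 0.0]
moved = onlooker_move(x, partner, gbest, rng)
```

Only one coordinate changes per move, as in canonical ABC; the added attraction term is what biases the onlookers toward promising regions.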
10. A Spectral Dai-Yuan-Type Conjugate Gradient Method for Unconstrained Optimization.
- Author
Zhou, Guanghui and Ni, Qin
- Subjects
CONJUGATE gradient methods, SPECTRAL theory, MATHEMATICAL optimization, STOCHASTIC convergence, NONLINEAR programming
- Abstract
A new spectral conjugate gradient method (SDYCG) is presented for solving unconstrained optimization problems in this paper. Our method provides a new expression of spectral parameter. This formula ensures that the sufficient descent condition holds. The search direction in the SDYCG can be viewed as a combination of the spectral gradient and the Dai-Yuan conjugate gradient. The global convergence of the SDYCG is also obtained. Numerical results show that the SDYCG may be capable of solving large-scale nonlinear unconstrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
11. A Hybrid Mutation Chemical Reaction Optimization Algorithm for Global Numerical Optimization.
- Author
Ngambusabongsopa, Ransikarn, Li, Zhiyong, and Eldesouky, Esraa
- Subjects
MUTATIONS (Algebra), CHEMICAL reactions, MATHEMATICAL optimization, METAHEURISTIC algorithms, STOCHASTIC convergence, COMPUTER science
- Abstract
This paper proposes a hybrid metaheuristic approach that improves global numerical optimization by increasing solution quality and accelerating convergence. The algorithm combines a recently developed chemical reaction optimization process with two adjustment operators (turning and mutation operators). Three types of mutation operators (uniform, nonuniform, and polynomial) were combined with chemical reaction optimization and the turning operator to find the most appropriate framework. The best of these three options was selected as the hybrid mutation chemical reaction optimization algorithm for global numerical optimization. The solution quality, convergence speed, and statistical hypothesis tests of our algorithm are superior to those of previous high-performance algorithms such as RCCRO, HP-CRO2, and OCRO. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
12. Weight Optimization in Recurrent Neural Networks with Hybrid Metaheuristic Cuckoo Search Techniques for Data Classification.
- Author
Nawi, Nazri Mohd, Khan, Abdullah, Rehman, M. Z., Chiroma, Haruna, and Herawan, Tutut
- Subjects
RECURRENT neural networks, MATHEMATICAL optimization, BACK propagation, STOCHASTIC convergence, METAHEURISTIC algorithms, SEARCH algorithms
- Abstract
The recurrent neural network (RNN) has been widely used as a tool in data classification and can be trained with gradient descent back propagation. However, traditional training algorithms have drawbacks such as slow convergence and no guarantee of finding the global minimum of the error function, since gradient descent may get stuck in local minima. As a solution, nature-inspired metaheuristic algorithms provide derivative-free means of optimizing complex problems. This paper proposes using the metaheuristic Cuckoo Search (CS) algorithm, based on cuckoo birds' behavior, to train the Elman recurrent network (ERN) and the back propagation Elman recurrent network (BPERN), aiming at a fast convergence rate and avoidance of the local minima problem. The proposed CSERN and CSBPERN algorithms are compared with the artificial bee colony using the BP algorithm and with other hybrid variant algorithms on selected benchmark classification problems. The simulation results show that the computational efficiency of the ERN and BPERN training process is greatly enhanced when coupled with the proposed hybrid method. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
13. A Novel Tournament Selection Based Differential Evolution Variant for Continuous Optimization Problems.
- Author
Abbas, Qamar, Ahmad, Jamil, and Jabeen, Hajira
- Subjects
DIFFERENTIAL evolution, CONTINUOUS functions, MATHEMATICAL optimization, STOCHASTIC convergence, PERFORMANCE evaluation
- Abstract
Differential evolution (DE) is a powerful global optimization algorithm that has been studied intensively in recent years, and a number of variants have been established that make DE more applicable. However, most variants suffer from slow convergence and entrapment in local optima. A novel tournament-based parent selection variant of the DE algorithm is proposed in this research. The proposed variant enhances the searching capability and improves the convergence speed of the DE algorithm. This paper also presents a novel statistical comparison of existing DE mutation variants, categorizing them in terms of overall performance. Experimental results show that the proposed DE variant performs significantly better than the other DE mutation variants. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
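A minimal sketch of DE with tournament-based parent selection follows; the tournament size, F, CR, and test function are assumptions, and the paper's exact variant may differ.

```python
import random

def tournament_de(f, dim, pop_size=20, gens=100, F=0.6, CR=0.9, k=3, seed=0):
    """DE/rand/1/bin where the base vector is picked by a size-k tournament
    (fittest of k random individuals) instead of uniformly at random."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Tournament selection of the base vector
            base = min(rng.sample(range(pop_size), k), key=lambda j: fit[j])
            r1, r2 = rng.sample([j for j in range(pop_size) if j != base], 2)
            jrand = rng.randrange(dim)  # ensures at least one mutated dimension
            trial = [pop[base][d] + F * (pop[r1][d] - pop[r2][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one survivor selection
                pop[i], fit[i] = trial, ft
    return min(zip(fit, pop))[0]

sphere = lambda x: sum(v * v for v in x)
best = tournament_de(sphere, dim=5)
```

Biasing the base vector toward fitter individuals is what speeds up convergence relative to uniform parent selection, at some cost in diversity.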
14. A Problem-Reduction Evolutionary Algorithm for Solving the Capacitated Vehicle Routing Problem.
- Author
Liu, Wanfeng and Li, Xia
- Subjects
EVOLUTIONARY algorithms, VEHICLE routing problem, MATHEMATICAL optimization, ALGORITHMS, STOCHASTIC convergence, ROBUST control
- Abstract
Assessment of the components of a solution provides useful information for an optimization problem. This paper presents a new population-based problem-reduction evolutionary algorithm (PREA) based on solution-component assessment. An individual solution is regarded as constructed from basic elements, and the concept of acceptability is introduced to evaluate them. The PREA consists of a searching phase and an evaluation phase. The acceptability of basic elements is calculated in the evaluation phase and passed to the searching phase, where, for each individual solution, the original optimization problem is reduced to a new smaller-size problem. As the algorithm evolves, the number of common basic elements in the population increases until all individual solutions are exactly the same, which is taken to be the near-optimal solution of the optimization problem. The new algorithm is applied to a large variety of capacitated vehicle routing problems (CVRP) with up to nearly 500 customers. Experimental results show that the proposed algorithm has the advantages of fast convergence and robust solution quality over the comparative algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
15. Adaptive Randomness: A New Population Initialization Method.
- Author
Weifeng Pan, Kangshun Li, Muchou Wang, Jing Wang, and Bo Jiang
- Subjects
MATHEMATICAL optimization, INFORMATION theory, RANDOM numbers, MATHEMATICAL functions, STOCHASTIC convergence, GENERALIZATION
- Abstract
Population initialization is a crucial task in population-based optimization methods; it can affect both the convergence speed and the quality of the final solutions. Generally, if no a priori information about the solutions is available, the initial population is selected randomly using random numbers. This paper presents a new initialization method that applies the concept of adaptive randomness (AR) to distribute the individuals as evenly over the search space as possible. To verify the performance of AR, a comprehensive set of 34 benchmark functions with a wide range of dimensions is utilized. The experiments demonstrate that AR-based population initialization outperforms other population initialization methods, such as random, opposition-based, and generalized opposition-based population initialization, in convergence speed and final-solution quality. Further, the influences of the problem dimensionality, the new control parameter, and the number of trial individuals are also investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
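The spreading-out idea can be sketched as a maximin selection: each new individual is chosen from several random trials, keeping the one whose nearest neighbour in the current population is farthest away. The trial count and distance measure are assumptions; the paper's AR details may differ.

```python
import random

def ar_init(pop_size, dim, lo=-5.0, hi=5.0, n_trials=10, seed=0):
    """Spread-out initialization: each new member is the trial point whose
    nearest existing neighbour is farthest away (maximin criterion)."""
    rng = random.Random(seed)
    dist2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    pop = [[rng.uniform(lo, hi) for _ in range(dim)]]
    while len(pop) < pop_size:
        trials = [[rng.uniform(lo, hi) for _ in range(dim)]
                  for _ in range(n_trials)]
        # Keep the trial that maximizes distance to its closest existing member
        pop.append(max(trials, key=lambda t: min(dist2(t, p) for p in pop)))
    return pop

pop = ar_init(pop_size=8, dim=2)
```

Compared to purely uniform sampling, this costs a few extra evaluations of a distance function per individual but avoids the clumping that random initialization can produce.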
16. A Free Search Krill Herd Algorithm for Functions Optimization.
- Author
Liangliang Li, Yongquan Zhou, and Jian Xie
- Subjects
SEARCH algorithms, MATHEMATICAL optimization, STOCHASTIC convergence, ROBUST control, PRECISION (Information retrieval), MATHEMATICAL analysis
- Abstract
To simulate the freedom and uncertain individual behavior of a krill herd, this paper introduces the opposition-based learning (OBL) strategy and a free search operator into the krill herd optimization algorithm (KH) and proposes a novel opposition-based free search krill herd optimization algorithm (FSKH). In FSKH, each krill individual searches according to its own perception and scope of activities. The free search strategy strongly encourages individuals to escape from local optimal solutions, improving the diversity and exploration ability of the krill population, so FSKH achieves a better balance between local and global search. Experimental results on fourteen benchmark functions indicate that the proposed algorithm is effective and feasible in both low-dimensional and high-dimensional cases, with higher convergence speed and precision. Compared to the PSO, DE, KH, HS, FS, and BA algorithms, the proposed algorithm shows better optimization performance and robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
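Opposition-based learning, one of the two ingredients added to KH here, is simple to state: for a point x in the box [lo, hi], its opposite is lo + hi − x, and one keeps whichever of the pair scores better. A sketch with an assumed (shifted-sphere) objective:

```python
def obl_filter(pop, f, lo, hi):
    """Opposition-based learning step: replace each individual by its
    opposite point lo + hi - x whenever the opposite evaluates better."""
    out = []
    for x in pop:
        opp = [lo + hi - v for v in x]
        out.append(min(x, opp, key=f))  # keep the better of the pair
    return out

# Illustrative objective (assumption): sphere shifted to optimum at (1, 1)
shifted = lambda x: sum((v - 1) ** 2 for v in x)
refined = obl_filter([[4.0, 4.5], [-3.0, 1.0]], shifted, -5.0, 5.0)
```

Evaluating each candidate together with its opposite doubles the chance of landing near the optimum at negligible cost, which is why OBL is a popular bolt-on for population methods.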
17. A Local and Global Search Combined Particle Swarm Optimization Algorithm and Its Convergence Analysis.
- Author
Weitian Lin, Zhigang Lian, Xingsheng Gu, and Bin Jiao
- Subjects
PARTICLE swarm optimization, STOCHASTIC convergence, COMPUTER algorithms, BENCHMARKING (Management), MATHEMATICAL analysis, MATHEMATICAL optimization
- Abstract
The particle swarm optimization algorithm (PSOA) is a powerful optimization tool. However, it tends to get stuck in near-optimal solutions, especially for middle- and large-size problems, and it is difficult to improve solution accuracy by fine-tuning parameters. To address this shortcoming, this paper studies a local and global search combined particle swarm optimization algorithm (LGSCPSOA), analyzes its convergence, and obtains its convergence conditions. The algorithm is tested on a set of 8 benchmark continuous functions, and its optimization results are compared with those of the original particle swarm optimization algorithm (OPSOA). Experimental results indicate that the LGSCPSOA improves search performance significantly, especially on the middle- and large-size benchmark functions. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
18. Flower Pollination Algorithm with Dimension by Dimension Improvement.
- Author
Rui Wang and Yongquan Zhou
- Subjects
POLLINATION, STOCHASTIC convergence, DIMENSIONS, PROBLEM solving, MATHEMATICAL optimization
- Abstract
The flower pollination algorithm (FPA) is a new nature-inspired intelligent algorithm which uses a whole-vector update and evaluation strategy on solutions. For multi-dimensional function optimization problems, this strategy may degrade the convergence speed and the solution quality due to interference among dimensions. To overcome this shortcoming, this paper proposes a dimension-by-dimension improved flower pollination algorithm. During the iterations of the improved algorithm, a dimension-by-dimension update and evaluation strategy is used on solutions, and a local neighborhood search strategy is also applied to enhance the local search ability. The simulation experiments show that the proposed strategies effectively improve the convergence speed and the quality of solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
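The dimension-by-dimension evaluation strategy, accepting an update to one coordinate only if it improves the solution instead of replacing the whole vector at once, can be sketched independently of FPA's pollination operators. The per-dimension move used below is an assumption:

```python
import random

def dim_by_dim_improve(x, f, rng, step=0.5):
    """Update a solution one dimension at a time, keeping each coordinate
    change only if it lowers f (greedy per-dimension acceptance)."""
    x, fx = list(x), f(x)
    for d in range(len(x)):
        old = x[d]
        x[d] = old + rng.gauss(0, step)  # illustrative per-dimension move
        fnew = f(x)
        if fnew < fx:
            fx = fnew       # keep the improved coordinate
        else:
            x[d] = old      # revert, so other dimensions see no interference
    return x, fx

rng = random.Random(42)
sphere = lambda x: sum(v * v for v in x)
x, fx = dim_by_dim_improve([2.0, -2.0, 1.0], sphere, rng)
```

Because each coordinate is judged on its own, a good change in one dimension can never be discarded due to a bad change in another, which is the interference problem the abstract describes.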
19. Efficient LED-SAC Sparse Estimator Using Fast Sequential Adaptive Coordinate-Wise Optimization (LED-2SAC).
- Author
Yousefi Rezaii, T., Beheshti, S., and Tinati, M. A.
- Subjects
MATHEMATICAL optimization, LINEAR equations, SIGNAL processing, LEAST squares, SEQUENTIAL analysis, STOCHASTIC convergence
- Abstract
Solving underdetermined systems of linear equations is of great interest in signal processing applications, particularly when the underlying signal to be estimated is sparse. Recently, a new sparsity-encouraging penalty function was introduced, the Linearized Exponentially Decaying penalty (LED), which yields the sparsest solution of an underdetermined system of equations subject to minimizing the least squares loss function. A sequential solution is available for the LED-based objective function, denoted the LED-SAC algorithm. This solution, which aims to solve the LED-based objective function sequentially, ignores the sparsity of the solution. In this paper, we present a new sparse solution that benefits from the sparsity of the signal both in the optimization criterion (LED) and in its solution path, denoted Sparse SAC (2SAC). The resulting reconstruction method, denoted LED-2SAC (LED-Sparse SAC), is consequently more efficient and considerably faster than the LED-SAC algorithm in terms of adaptability and convergence rate. In addition, the computational complexity of both LED-SAC and LED-2SAC is shown to be of order O(d²), which is better than batch solutions like LARS, whose complexity is of order O(d³ + nd²), where d is the dimension of the sparse signal and n is the number of observations. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
20. A Convergent Differential Evolution Algorithm with Hidden Adaptation Selection for Engineering Optimization.
- Author
Zhongbo Hu, Shengwu Xiong, Zhixiang Fang, and Qinghua Su
- Subjects
DIFFERENTIAL evolution, ALGORITHMS, STOCHASTIC convergence, PROBABILITY theory, MATHEMATICAL optimization, PARAMETERS (Statistics)
- Abstract
Improved differential evolution (DE) algorithms emerged as a very competitive class of evolutionary computation more than a decade ago; however, few of them guarantee global convergence in theory. This paper develops a theoretically convergent DE algorithm, which employs a self-adaptation scheme for the parameters and two operators, namely uniform mutation and hidden adaptation selection (haS). The parameter self-adaptation and the uniform mutation operator enhance the diversity of the population and guarantee ergodicity. The haS operator can automatically remove some inferior individuals while enhancing population diversity, and it causes the proposed algorithm to break the loop of the current generation with a small probability. This breaking probability is a hidden adaptation, proportional to the change in the number of inferior individuals. The proposed algorithm is tested on ten engineering optimization problems taken from IEEE CEC2011. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
21. Sensor Scheduling with Intelligent Optimization Algorithm Based on Quantum Theory.
- Author
Zhiguo Chen, Yi Fu, and Wenbo Xu
- Subjects
COMPUTER scheduling, MATHEMATICAL optimization, COMPUTER algorithms, QUANTUM theory, PARTICLE swarm optimization, STOCHASTIC convergence
- Abstract
The particle swarm optimization (PSO) algorithm excels in convergence rate, but it tends to get stuck in local optima. An improved PSO algorithm is proposed using a best-dimension mutation technique based on quantum theory, and it is applied to the sensor scheduling problem for target tracking. The dynamics of the target are assumed to follow a linear Gaussian model, and the sensor measurements show a linear correlation with the state of the target. This paper discusses the single-target tracking problem with multiple sensors using the proposed best-dimension mutation particle swarm optimization (BDMPSO) algorithm for various cases. Our experimental results verify that the proposed algorithm tracks the target more reliably and accurately than previous ones. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
22. Lévy-Flight Krill Herd Algorithm.
- Author
Gaige Wang, Lihong Guo, Gandomi, Amir Hossein, Lihua Cao, Alavi, Amir Hossein, Hong Duan, and Jiang Li
- Subjects
MATHEMATICAL optimization, STOCHASTIC convergence, PROBLEM solving, MATHEMATICAL functions, METAHEURISTIC algorithms, OPTIMAL control theory
- Abstract
To improve the performance of the krill herd (KH) algorithm, a Lévy-flight krill herd (LKH) algorithm is proposed in this paper for solving optimization tasks within limited computing time. The improvement adds a new local Lévy-flight (LLF) operator during the krill update process to improve its efficiency and reliability on global numerical optimization problems. The LLF operator encourages exploitation and makes the krill individuals search the space carefully at the end of the search. An elitism scheme is also applied to keep the best krill during the update process. Fourteen standard benchmark functions are used to verify the effects of these improvements, and it is shown that, in most cases, the performance of this novel metaheuristic LKH method is superior to, or at least highly competitive with, the standard KH and other population-based optimization methods. In particular, the new method can accelerate the global convergence speed to the true global optimum while preserving the main features of the basic KH. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
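Lévy-flight steps like those behind the LLF operator are commonly generated with Mantegna's algorithm; a sketch follows, where the β value and the lack of any problem-specific scaling are assumptions, not the paper's exact operator.

```python
import math
import random

def levy_step(dim, rng, beta=1.5):
    """One Levy-flight step via Mantegna's algorithm: step = u / |v|^(1/beta)
    with u ~ N(0, sigma_u^2) and v ~ N(0, 1). The heavy-tailed distribution
    yields mostly small moves with occasional long jumps."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    return [rng.gauss(0, sigma_u) / abs(rng.gauss(0, 1)) ** (1 / beta)
            for _ in range(dim)]

rng = random.Random(7)
step = levy_step(3, rng)
```

The occasional long jumps are what let a Lévy-flight operator escape local optima, while the many short steps provide the careful local search the abstract attributes to LLF near the end of the run.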
23. A Simple and Efficient Artificial Bee Colony Algorithm.
- Author
Yunfeng Xu, Ping Fan, and Ling Yuan
- Subjects
BEES algorithm, MATHEMATICAL optimization, STOCHASTIC analysis, STOCHASTIC convergence, PERFORMANCE evaluation, MATHEMATICAL functions, SIMULATION methods & models, POPULATION research
- Abstract
Artificial bee colony (ABC) is a new population-based stochastic algorithm which has shown good search ability on many optimization problems. However, the original ABC exhibits slow convergence during the search process. In order to enhance its performance, this paper proposes a new artificial bee colony (NABC) algorithm, which modifies the search pattern of both employed and onlooker bees. A solution pool is constructed by storing some of the best solutions of the current swarm, and new candidate solutions are generated by searching the neighborhood of solutions randomly chosen from the pool. Experiments are conducted on a set of twelve benchmark functions. Simulation results show that our approach is significantly better than, or at least comparable to, the original ABC and seven other stochastic algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
24. An Improved Harmony Search Based on Teaching-Learning Strategy for Unconstrained Optimization Problems.
- Author
Shouheng Tuo, Longquan Yong, and Tao Zhou
- Subjects
SEARCH algorithms, MATHEMATICAL optimization, METAHEURISTIC algorithms, MUSIC improvisation, STOCHASTIC convergence, ROBUST control, MACHINE learning
- Abstract
The harmony search (HS) algorithm is an emerging population-based metaheuristic inspired by the music improvisation process; it has developed rapidly and been applied widely during the past decade. In this paper, an improved global harmony search algorithm, named harmony search based on teaching-learning (HSTL), is presented for high-dimensional complex optimization problems. In the HSTL algorithm, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to maintain a proper balance between convergence and population diversity, and a dynamic strategy is adopted to change the parameters. The proposed HSTL algorithm is investigated and compared with three other state-of-the-art HS optimization algorithms. Furthermore, to demonstrate robustness and convergence, the success rate and a convergence analysis are also studied. The experimental results on 31 complex benchmark functions demonstrate that the HSTL method has strong convergence and robustness and a better balance between space exploration and local exploitation on high-dimensional complex optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
25. Stabilizing of Subspaces Based on DPGA and Chaos Genetic Algorithm for Optimizing State Feedback Controller.
- Author
Hosseinpour, M., Nikdel, P., Badamchizadeh, M. A., and Poor, M. A.
- Subjects
STATE feedback (Feedback control systems), ALGORITHMS, FEEDBACK control systems, MATHEMATICAL optimization, COMBINATORIAL optimization, STOCHASTIC convergence
- Abstract
The main purpose of this paper is to optimize state feedback parameters using intelligent methods: GA, the Hermite-Biehler theorem, and a chaos algorithm. GA is employed for local search, but it has deficiencies such as trapping in local minima and slow convergence, so a combination of the Hermite-Biehler theorem and a chaos algorithm is added to GA to avoid them. Dividing the search space is usually done by a distributed population genetic algorithm (DPGA); moreover, the generalized Hermite-Biehler Theorem can find the domain of the parameters. To speed up convergence, in the first step the Hermite-Biehler method finds some intervals for the controller, in the next step the GA is applied, and finally a chaos disturbance helps the algorithm reach a global minimum. The proposed method can thereby optimize the parameters of the state feedback controller. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF