51 results
Search Results
2. A Derivative-Free Trust Region Algorithm with Nonmonotone Filter Technique for Bound Constrained Optimization.
- Author
Gao, Jing, Cao, Jian, and Yang, Yueting
- Subjects
NONDIFFERENTIABLE functions; MATHEMATICAL optimization; ALGORITHMS; MATHEMATICAL bounds; STOCHASTIC convergence
- Abstract
We propose a derivative-free trust region algorithm with a nonmonotone filter technique for bound constrained optimization. The derivative-free strategy is applied to minimization problems in which not all derivatives are available. The nonmonotone filter technique preserves the trust region feature and also ensures global convergence under reasonable assumptions. Numerical experiments demonstrate that the new algorithm is effective for bound constrained optimization. Locally optimal parameters with respect to overall computational time on a set of test problems are identified. The performance of the best parameter values found by the presented algorithm, which differ from traditionally used values, indicates that the proposed algorithm has a certain advantage for nondifferentiable optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
3. A Modified Sine-Cosine Algorithm Based on Neighborhood Search and Greedy Levy Mutation.
- Author
Qu, Chiwen, Zeng, Zhiliu, Dai, Jun, Yi, Zhongjun, and He, Wei
- Subjects
COSINE function; MATHEMATICAL optimization; STOCHASTIC convergence; PARAMETERS (Statistics); GREEDY algorithms
- Abstract
To address the deficiencies of the basic sine-cosine algorithm in dealing with global optimization problems, such as low solution precision and slow convergence speed, a new improved sine-cosine algorithm is proposed in this paper. The improvement involves three optimization strategies. Firstly, an exponentially decreasing conversion parameter and a linearly decreasing inertia weight are adopted to balance the global exploration and local exploitation ability of the algorithm. Secondly, random individuals near the optimal individuals are used to replace the optimal individuals of the original algorithm, which allows the algorithm to jump out of local optima more easily and effectively increases the search range. Finally, a greedy Levy mutation strategy is applied to the optimal individuals to enhance the local exploitation ability of the algorithm. The experimental results show that the proposed algorithm can effectively avoid falling into local optima, and it has faster convergence speed and higher optimization accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
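For readers unfamiliar with the sine-cosine update referenced in result 3, the following is a minimal sketch of one iteration with an exponentially decreasing conversion parameter and a linearly decreasing inertia weight. The decay constant, weight bounds, and sphere objective are illustrative assumptions, not the settings of the cited paper.

```python
import numpy as np

def sphere(x):
    # Illustrative objective; the cited paper uses standard benchmark suites.
    return float(np.sum(x ** 2))

def modified_sca(obj, dim=10, pop=30, iters=500, lb=-10.0, ub=10.0,
                 a=2.0, w_max=0.9, w_min=0.4, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(pop, dim))
    fitness = np.apply_along_axis(obj, 1, X)
    best, best_f = X[fitness.argmin()].copy(), fitness.min()
    for t in range(iters):
        r1 = a * np.exp(-4.0 * t / iters)          # exponentially decreasing conversion parameter (assumed decay rate)
        w = w_max - (w_max - w_min) * t / iters    # linearly decreasing inertia weight
        for i in range(pop):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.uniform(size=dim)
            step = np.where(r4 < 0.5,
                            r1 * np.sin(r2) * np.abs(r3 * best - X[i]),
                            r1 * np.cos(r2) * np.abs(r3 * best - X[i]))
            X[i] = np.clip(w * X[i] + step, lb, ub)
            f = obj(X[i])
            if f < best_f:
                best, best_f = X[i].copy(), f
    return best, best_f

if __name__ == "__main__":
    print(modified_sca(sphere)[1])
```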
4. A Three-Term Conjugate Gradient Algorithm with Quadratic Convergence for Unconstrained Optimization Problems.
- Author
Wu, Gaoyi, Li, Yong, and Yuan, Gonglin
- Subjects
CONJUGATE gradient methods; STOCHASTIC convergence; MATHEMATICAL optimization; CONVEX functions
- Abstract
This paper further studies the WYL conjugate gradient (CG) formula with β_k^WYL ≥ 0 and presents a three-term WYL CG algorithm, which has the sufficient descent property without any conditions. The global convergence and the linear convergence are proved; moreover, the n-step quadratic convergence with a restart strategy is established if the initial step length is appropriately chosen. Numerical experiments on large-scale problems, including normal unconstrained optimization problems and engineering (benchmark) problems, show that the new algorithm is competitive with other similar CG algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
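As background for result 4, the Wei-Yao-Liu (WYL) conjugate gradient parameter referred to in the abstract is commonly written as follows, recalled here from the CG literature with g_k denoting the gradient at iterate k; the three-term direction of the cited paper itself is not reproduced:

```latex
\beta_k^{\mathrm{WYL}} \;=\;
\frac{g_k^{\top}\!\left(g_k - \dfrac{\|g_k\|}{\|g_{k-1}\|}\, g_{k-1}\right)}{\|g_{k-1}\|^{2}}
\;\ge\; 0 .
```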
5. On the Theoretical Analysis of the Plant Propagation Algorithms.
- Author
Sulaiman, Muhammad, Salhi, Abdellah, Khan, Asfandyar, Muhammad, Shakoor, and Khan, Wali
- Subjects
MATHEMATICAL optimization; ALGORITHMS; HEURISTIC algorithms; PROBLEM solving; STOCHASTIC convergence
- Abstract
Plant Propagation Algorithms (PPA) are powerful and flexible solvers for optimisation problems. They are nature-inspired heuristics which can be applied to any optimisation/search problem. There is a growing body of research, mainly experimental, on PPA in the literature. Little, however, has been done on the theoretical front. Given the prominence this algorithm is gaining in terms of performance on benchmark problems as well as practical ones, some theoretical insight into its convergence is needed. The current paper is aimed at fulfilling this by providing a sketch for a global convergence analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2018
6. Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems.
- Author
Wang, Hailong, Hu, Zhongbo, Sun, Yuqiu, Su, Qinghua, and Xia, Xuewen
- Subjects
SEARCH algorithms; MATHEMATICAL optimization; SIMULATED annealing; EVOLUTIONARY algorithms; STOCHASTIC convergence
- Abstract
The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity while its local exploitation capability is relatively poor. This affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome the deficiency of BSA. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F could be adaptively decreased as the number of iterations increases and it does not introduce extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrated that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed. [ABSTRACT FROM AUTHOR]
- Published
- 2018
7. Total Variation Image Restoration Method Based on Subspace Optimization.
- Author
Liu, XiaoGuang and Gao, XingBao
- Subjects
IMAGE reconstruction; SUBSPACES (Mathematics); MATHEMATICAL optimization; STOCHASTIC convergence; ENERGY function
- Abstract
The alternating direction method is widely applied in total variation image restoration. However, the search directions of the method are not accurate enough. In this paper, a method based on subspace optimization is proposed to improve its optimization performance. This method corrects the search directions of the primal alternating direction method by using the energy function and a linear combination of the previous search directions. In addition, the convergence of the primal alternating direction method is proven under some weaker conditions. The convergence of the corrected method then follows easily, since it has the same convergence as the primal alternating direction method. Finally, numerical examples are given to show the performance of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
8. A Conjugate Gradient Algorithm under Yuan-Wei-Lu Line Search Technique for Large-Scale Minimization Optimization Models.
- Author
Li, Xiangrong, Wang, Songhua, Jin, Zhongzhou, and Pham, Hongtruong
- Subjects
CONJUGATE gradient methods; ALGORITHMS; STOCHASTIC convergence; CONVEX functions; MATHEMATICAL optimization
- Abstract
This paper gives a modified Hestenes and Stiefel (HS) conjugate gradient algorithm under the Yuan-Wei-Lu inexact line search technique for large-scale unconstrained optimization problems, where the proposed algorithm has the following properties: (1) the new search direction possesses not only a sufficient descent property but also a trust region feature; (2) the presented algorithm has global convergence for nonconvex functions; (3) the numerical experiment showed that the new algorithm is more effective than similar algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
9. A General Approximation Method for a Kind of Convex Optimization Problems in Hilbert Spaces.
- Author
Ming Tian and Li-Hua Huang
- Subjects
APPROXIMATION theory; CONVEX functions; HILBERT space; MATHEMATICAL optimization; ITERATIVE methods (Mathematics); STOCHASTIC convergence
- Abstract
The constrained convex minimization problem is to find a point xn with the property that xn ∈ C and h(xn) = min h(x), ∀x ∈ C, where C is a nonempty, closed, and convex subset of a real Hilbert space H and h(x) is a real-valued convex function that is not Fréchet differentiable but lower semicontinuous. In this paper, we discuss an iterative algorithm which is different from traditional gradient-projection algorithms. We first construct a bifunction F1(x, y) defined as F1(x, y) = h(y) - h(x), and we show that the equilibrium problem for F1(x, y) is equivalent to the above optimization problem. Then we use iterative methods for equilibrium problems to study the above optimization problem. Based on Jung's method (2011), we propose a general approximation method and prove the strong convergence of our algorithm to a solution of the above optimization problem. In addition, we apply the proposed iterative algorithm to finding a solution of the split feasibility problem and establish the strong convergence theorem. The results of this paper extend and improve some existing results. [ABSTRACT FROM AUTHOR]
- Published
- 2014
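The equivalence invoked in result 9 between the equilibrium problem for F1(x, y) = h(y) - h(x) and the constrained minimization is a one-line observation; in the abstract's own notation it reads:

```latex
x^{*}\ \text{solves}\ \mathrm{EP}(F_{1})
\iff h(y) - h(x^{*}) \ge 0 \quad \forall\, y \in C
\iff h(x^{*}) \le h(y) \quad \forall\, y \in C
\iff x^{*} \in \arg\min_{x \in C} h(x).
```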
10. Modification of Nonlinear Conjugate Gradient Method with Weak Wolfe-Powell Line Search.
- Author
Alhawarat, Ahmad and Salleh, Zabidin
- Subjects
CONJUGATE gradient methods; MATHEMATICAL functions; MATHEMATICAL optimization; STOCHASTIC convergence; NUMERICAL analysis
- Abstract
The conjugate gradient (CG) method is used to find optimal solutions of large-scale unconstrained optimization problems. Because of its simple algorithm, low memory requirement, and speed in obtaining the solution, this method is widely used in many fields, such as engineering, computer science, and medical science. In this paper, we modify the CG method to achieve global convergence with various line searches. In addition, it satisfies the sufficient descent condition without any line search. The numerical computations under the weak Wolfe-Powell line search show that the efficiency of the new method is superior to other conventional methods. [ABSTRACT FROM AUTHOR]
- Published
- 2017
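Result 10 does not reproduce its modified conjugation parameter, so the sketch below shows only the generic nonlinear CG skeleton such a modification plugs into. The PRP+ parameter is used as a stand-in for the paper's formula, and SciPy's strong-Wolfe line search stands in for the weak Wolfe-Powell conditions; both substitutions are assumptions.

```python
import numpy as np
from scipy.optimize import line_search

def cg_minimize(f, grad, x0, max_iter=200, tol=1e-6):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]   # strong-Wolfe step (stand-in for weak Wolfe-Powell)
        if alpha is None:
            alpha = 1e-4                        # crude fallback if the line search fails
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)  # PRP+ parameter as a placeholder
        d = -g_new + beta * d
        if g_new @ d >= 0:                      # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    rosen = lambda z: (1 - z[0])**2 + 100*(z[1] - z[0]**2)**2
    rosen_grad = lambda z: np.array([-2*(1 - z[0]) - 400*z[0]*(z[1] - z[0]**2),
                                     200*(z[1] - z[0]**2)])
    print(cg_minimize(rosen, rosen_grad, [-1.2, 1.0]))
```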
11. An Improved SPEA2 Algorithm with Adaptive Selection of Evolutionary Operators Scheme for Multiobjective Optimization Problems.
- Author
Zhao, Fuqing, Lei, Wenchang, Ma, Weimin, Liu, Yang, and Zhang, Chuck
- Subjects
EVOLUTIONARY algorithms; MATHEMATICAL optimization; MARKOV processes; POLYNOMIALS; STOCHASTIC convergence
- Abstract
A fixed evolutionary mechanism is usually adopted in multiobjective evolutionary algorithms, and their operators are static during the evolutionary process, which prevents the algorithm from fully exploiting the search space and makes it easy to become trapped in local optima. In this paper, a SPEA2 algorithm based on adaptive selection of evolution operators (AOSPEA) is proposed. The proposed algorithm can adaptively select the simulated binary crossover, polynomial mutation, and differential evolution operators during the evolutionary process according to their contribution to the external archive. Meanwhile, the convergence performance of the proposed algorithm is analyzed with a Markov chain. Simulation results on the standard benchmark functions reveal that the proposed algorithm outperforms the other classical multiobjective evolutionary algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2016
12. Enhanced Simulated Annealing for Solving Aggregate Production Planning.
- Author
Abu Bakar, Mohd Rizam, Bakheet, Abdul Jabbar Khudhur, Kamil, Farah, Kalaf, Bayda Atiya, Abbas, Iraq T., and Soon, Lee Lai
- Subjects
SIMULATED annealing; PRODUCTION planning; LINEAR programming; MATHEMATICAL optimization; MATHEMATICAL variables; STOCHASTIC convergence
- Abstract
Simulated annealing (SA) has been an effective means of addressing difficulties related to optimisation problems. SA is now a common discipline for research with several productive applications, such as production planning. Because aggregate production planning (APP) is one of the most considerable problems in production planning, in this paper we present a multiobjective linear programming model for APP and optimise it by SA. During the course of optimising the APP problem, it was found that the capability of SA was inadequate and its performance was substandard, particularly for a sizable controlled APP problem with many decision variables and plenty of constraints. Since the algorithm works sequentially, the current state generates only one next state, which makes the search slower, and the search may fall into a local minimum that represents the best solution in only part of the solution space. In order to enhance its performance and alleviate these deficiencies, a modified SA (MSA) is proposed. We attempt to augment the search space by starting with N+1 solutions instead of one. To analyse and investigate the behaviour of the MSA against the standard SA and harmony search (HS), evaluations are made on the real performance of an industrial company and on simulations. The results show that, compared to SA and HS, MSA offers better quality solutions with regard to convergence and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2016
13. Stability and Probability 1 Convergence for Queueing Networks via Lyapunov Optimization.
- Author
Neely, Michael J.
- Subjects
QUEUEING networks; STABILITY theory; PROBABILITY theory; STOCHASTIC convergence; LYAPUNOV functions; MATHEMATICAL optimization; STOCHASTIC processes
- Abstract
Lyapunov drift is a powerful tool for optimizing stochastic queueing networks subject to stability. However, the most convenient drift conditions often provide results in terms of a time average expectation, rather than a pure time average. This paper provides an extended drift-plus-penalty result that ensures stability with desired time averages with probability 1. The analysis uses the law of large numbers for martingale differences. This is applied to quadratic and subquadratic Lyapunov methods for minimizing the time average of a network penalty function subject to stability and to additional time average constraints. Similar to known results for time average expectations, this paper shows that pure time average penalties can be pushed arbitrarily close to optimality, with a corresponding tradeoff in average queue size. Further, in the special case of quadratic Lyapunov functions, the basic drift condition is shown to imply all major forms of queue stability. [ABSTRACT FROM AUTHOR]
- Published
- 2012
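The drift-plus-penalty construction that result 13 extends is standard in Lyapunov optimization; a compact statement of the per-slot quantity being minimized is sketched below (notation assumed: Q_i(t) are queue backlogs, p(t) is the penalty, V > 0 the tradeoff parameter). The classical result, consistent with the abstract, is that the achieved time-average penalty is within O(1/V) of optimal while average queue size grows as O(V).

```latex
L(t) = \tfrac{1}{2}\sum_{i} Q_i(t)^2, \qquad
\Delta(t) = L(t+1) - L(t), \qquad
\text{each slot, greedily minimize}\quad \Delta(t) + V\, p(t).
```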
14. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization.
- Author
Zhu, Binglian, Zhu, Wenyong, Liu, Zijuan, Duan, Qingyan, and Cao, Long
- Subjects
QUANTUM theory; MATHEMATICAL optimization; STOCHASTIC convergence; COMPUTER algorithms; STATISTICS
- Abstract
This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which can enhance the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of local optimal solutions, keeps the quantum-behaved bats from falling into local optima easily, and gives them a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions that the bats have experienced to generate better quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of solutions. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution. [ABSTRACT FROM AUTHOR]
- Published
- 2016
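Result 14 does not spell out its update rule; the "mean best position" device it names is the same construction used in quantum-behaved PSO, and the sketch below shows only that construction. Presenting the move in QPSO form is an assumption about QMBA's details, not a reproduction of the paper.

```python
import numpy as np

def mean_best_position(personal_bests):
    # mbest: coordinate-wise average of all individuals' personal best positions.
    return np.mean(personal_bests, axis=0)

def quantum_update(x, p_attract, mbest, beta, rng):
    # QPSO-style move: contract toward a local attractor, perturbed by the
    # distance to the mean best position (sign chosen at random).
    u = rng.uniform(1e-12, 1.0, size=x.shape)
    sign = np.where(rng.uniform(size=x.shape) < 0.5, 1.0, -1.0)
    return p_attract + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pbests = rng.uniform(-5, 5, size=(20, 3))
    mbest = mean_best_position(pbests)
    print(quantum_update(pbests[0], pbests[0], mbest, beta=0.75, rng=rng))
```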
15. Chaotic Teaching-Learning-Based Optimization with Lévy Flight for Global Numerical Optimization.
- Author
He, Xiangzhu, Huang, Jida, Rao, Yunqing, and Gao, Liang
- Subjects
NEUROSCIENCES; COMPUTATIONAL intelligence; MATHEMATICAL optimization; NONLINEAR analysis; METAHEURISTIC algorithms; STOCHASTIC convergence
- Abstract
Recently, teaching-learning-based optimization (TLBO), as one of the emerging nature-inspired heuristic algorithms, has attracted increasing attention. In order to enhance its convergence rate and prevent it from getting stuck in local optima, a novel metaheuristic has been developed in this paper, where particular characteristics of the chaos mechanism and Lévy flight are introduced to the basic framework of TLBO. The new algorithm is tested on several large-scale nonlinear benchmark functions with different characteristics and compared with other methods. Experimental results show that the proposed algorithm outperforms other algorithms and achieves a satisfactory improvement over TLBO. [ABSTRACT FROM AUTHOR]
- Published
- 2016
16. Particle Swarm and Bacterial Foraging Inspired Hybrid Artificial Bee Colony Algorithm for Numerical Function Optimization.
- Author
Mao, Li, Mao, Yu, Zhou, Changxi, Li, Chaofeng, Wei, Xiao, and Yang, Hong
- Subjects
PARTICLE swarm optimization; ANT algorithms; NUMERICAL functions; MATHEMATICAL optimization; STOCHASTIC convergence
- Abstract
The artificial bee colony (ABC) algorithm has good performance in discovering the optimal solutions of difficult optimization problems, but it has weak local search ability and easily plunges into local optima. In this paper, we introduce the chemotactic behavior of Bacterial Foraging Optimization into the employed bees and adopt the principle of moving the particles toward the best solutions from particle swarm optimization to improve the global search ability of the onlooker bees, obtaining a hybrid artificial bee colony (HABC) algorithm. To obtain a global optimal solution efficiently, we make the HABC algorithm converge rapidly in the early stages of the search process, while the search range contracts dynamically during the late stages. Our experimental results on 16 benchmark functions of CEC 2014 show that HABC achieves significant improvement in accuracy and convergence rate, compared with the standard ABC, best-so-far ABC, directed ABC, Gaussian ABC, improved ABC, and memetic ABC algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2016
17. A Spectral Dai-Yuan-Type Conjugate Gradient Method for Unconstrained Optimization.
- Author
Zhou, Guanghui and Ni, Qin
- Subjects
CONJUGATE gradient methods; SPECTRAL theory; MATHEMATICAL optimization; STOCHASTIC convergence; NONLINEAR programming
- Abstract
A new spectral conjugate gradient method (SDYCG) is presented for solving unconstrained optimization problems in this paper. Our method provides a new expression of spectral parameter. This formula ensures that the sufficient descent condition holds. The search direction in the SDYCG can be viewed as a combination of the spectral gradient and the Dai-Yuan conjugate gradient. The global convergence of the SDYCG is also obtained. Numerical results show that the SDYCG may be capable of solving large-scale nonlinear unconstrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2015
18. A Hybrid Mutation Chemical Reaction Optimization Algorithm for Global Numerical Optimization.
- Author
Ngambusabongsopa, Ransikarn, Li, Zhiyong, and Eldesouky, Esraa
- Subjects
MUTATIONS (Algebra); CHEMICAL reactions; MATHEMATICAL optimization; METAHEURISTIC algorithms; STOCHASTIC convergence; COMPUTER science
- Abstract
This paper proposes a hybrid metaheuristic approach that improves global numerical optimization by increasing optimal quality and accelerating convergence. The algorithm combines a recently developed process for chemical reaction optimization with two adjustment operators (turning and mutation operators). Three types of mutation operators (uniform, nonuniform, and polynomial) were combined with chemical reaction optimization and the turning operator to find the most appropriate framework. The best of these three options was selected as the hybrid mutation chemical reaction optimization algorithm for global numerical optimization. The optimal quality, convergence speed, and statistical hypothesis testing of our algorithm are superior to those of previous high-performance algorithms such as RCCRO, HP-CRO2, and OCRO. [ABSTRACT FROM AUTHOR]
- Published
- 2015
19. Weight Optimization in Recurrent Neural Networks with Hybrid Metaheuristic Cuckoo Search Techniques for Data Classification.
- Author
Nawi, Nazri Mohd, Khan, Abdullah, Rehman, M. Z., Chiroma, Haruna, and Herawan, Tutut
- Subjects
RECURRENT neural networks; MATHEMATICAL optimization; BACK propagation; STOCHASTIC convergence; METAHEURISTIC algorithms; SEARCH algorithms
- Abstract
The recurrent neural network (RNN) has been widely used as a tool in data classification. This network can be trained with gradient descent back propagation. However, traditional training algorithms have some drawbacks, such as slow convergence and no guarantee of finding the global minimum of the error function, since gradient descent may get stuck in local minima. As a solution, nature-inspired metaheuristic algorithms provide derivative-free ways to optimize complex problems. This paper proposes a metaheuristic search algorithm called Cuckoo Search (CS), based on the cuckoo bird's behavior, to train the Elman recurrent network (ERN) and the back propagation Elman recurrent network (BPERN) in order to achieve a fast convergence rate and avoid the local minima problem. The proposed CSERN and CSBPERN algorithms are compared with the artificial bee colony using the BP algorithm and other hybrid variant algorithms. Specifically, some selected benchmark classification problems are used. The simulation results show that the computational efficiency of the ERN and BPERN training process is highly enhanced when coupled with the proposed hybrid method. [ABSTRACT FROM AUTHOR]
- Published
- 2015
20. A Novel Tournament Selection Based Differential Evolution Variant for Continuous Optimization Problems.
- Author
Abbas, Qamar, Ahmad, Jamil, and Jabeen, Hajira
- Subjects
DIFFERENTIAL evolution; CONTINUOUS functions; MATHEMATICAL optimization; STOCHASTIC convergence; PERFORMANCE evaluation
- Abstract
Differential evolution (DE) is a powerful global optimization algorithm which has been studied intensively by many researchers in recent years. A number of variants have been established for the algorithm that make DE more applicable. However, most of the variants suffer from problems of convergence speed and local optima. A novel tournament-based parent selection variant of the DE algorithm is proposed in this research. The proposed variant enhances the searching capability and improves the convergence speed of the DE algorithm. This paper also presents a novel statistical comparison of existing DE mutation variants which categorizes these variants in terms of their overall performance. Experimental results show that the proposed DE variant has significantly better performance than other DE mutation variants. [ABSTRACT FROM AUTHOR]
- Published
- 2015
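A minimal illustration of the idea in result 20: the parents entering DE's mutation are chosen by tournaments rather than uniformly at random. The tournament size and the DE/rand/1 mutation are assumptions, since the abstract does not fix them.

```python
import numpy as np

def tournament_pick(fitness, k, rng, exclude=()):
    # Pick the best of k randomly drawn candidates (lower fitness is better).
    pool = [i for i in range(len(fitness)) if i not in exclude]
    candidates = rng.choice(pool, size=k, replace=False)
    return int(candidates[np.argmin(fitness[candidates])])

def de_mutant_tournament(pop, fitness, target_idx, F, k, rng):
    # DE/rand/1 mutation whose three parents come from tournaments instead of uniform sampling.
    r1 = tournament_pick(fitness, k, rng, exclude={target_idx})
    r2 = tournament_pick(fitness, k, rng, exclude={target_idx, r1})
    r3 = tournament_pick(fitness, k, rng, exclude={target_idx, r1, r2})
    return pop[r1] + F * (pop[r2] - pop[r3])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pop = rng.uniform(-5, 5, size=(30, 10))
    fit = np.array([np.sum(x**2) for x in pop])
    print(de_mutant_tournament(pop, fit, target_idx=0, F=0.5, k=3, rng=rng))
```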
21. A Problem-Reduction Evolutionary Algorithm for Solving the Capacitated Vehicle Routing Problem.
- Author
Liu, Wanfeng and Li, Xia
- Subjects
EVOLUTIONARY algorithms; VEHICLE routing problem; MATHEMATICAL optimization; ALGORITHMS; STOCHASTIC convergence; ROBUST control
- Abstract
Assessment of the components of a solution helps provide useful information for an optimization problem. This paper presents a new population-based problem-reduction evolutionary algorithm (PREA) based on the solution components assessment. An individual solution is regarded as being constructed by basic elements, and the concept of acceptability is introduced to evaluate them. The PREA consists of a searching phase and an evaluation phase. The acceptability of basic elements is calculated in the evaluation phase and passed to the searching phase. In the searching phase, for each individual solution, the original optimization problem is reduced to a new smaller-size problem. With the evolution of the algorithm, the number of common basic elements in the population increases until all individual solutions are exactly the same which is supposed to be the near-optimal solution of the optimization problem. The new algorithm is applied to a large variety of capacitated vehicle routing problems (CVRP) with customers up to nearly 500. Experimental results show that the proposed algorithm has the advantages of fast convergence and robustness in solution quality over the comparative algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2015
22. An Improved Central Force Optimization Algorithm for Multimodal Optimization.
- Author
Jie Liu and Yu-ping Wang
- Subjects
MATHEMATICAL optimization; ALGORITHMS; SWARM intelligence; NUMERICAL analysis; EVOLUTIONARY algorithms; STOCHASTIC convergence
- Abstract
This paper proposes the hybrid CSM-CFO algorithm based on the simplex method (SM), a clustering technique, and central force optimization (CFO) for unconstrained optimization. CSM-CFO is still a deterministic swarm intelligence algorithm, such that the complex statistical analysis of the numerical results can be omitted, and its convergence is made faster and more accurate by the clustering technique and a good point set. When tested against benchmark functions, in low and high dimensions, the CSM-CFO algorithm has competitive performance in terms of accuracy and convergence speed compared to other evolutionary algorithms: particle swarm optimization, evolutionary programming, and simulated annealing. The comparison results demonstrate that the proposed algorithm is effective and efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2014
23. Several Guaranteed Descent Conjugate Gradient Methods for Unconstrained Optimization.
- Author
San-Yang Liu and Yuan-Yuan Huang
- Subjects
CONJUGATE gradient methods; MATHEMATICAL optimization; LARGE scale systems; NUMERICAL analysis; STOCHASTIC convergence
- Abstract
This paper investigates a general form of guaranteed descent conjugate gradient methods which satisfies the descent condition g_k^T d_k ≤ -(1 - 1/(4θ_k))‖g_k‖² (θ_k > 1/4) and which is strongly convergent whenever the weak Wolfe line search is fulfilled. Moreover, we present several specific guaranteed descent conjugate gradient methods and give their numerical results for large-scale unconstrained optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2014
24. A New Subband Adaptive Filtering Algorithm for Sparse System Identification with Impulsive Noise.
- Author
Young-Seok Choi
- Subjects
ADAPTIVE filters; MATHEMATICAL optimization; ROBUST control; COMPUTER simulation; STOCHASTIC convergence
- Abstract
This paper presents a novel subband adaptive filter (SAF) for system identification where an impulse response is sparse and disturbed by an impulsive noise. Benefiting from the use of l1-norm optimization and an l0-norm penalty on the weight vector in the cost function, the proposed l0-norm sign SAF (l0-SSAF) achieves both robustness against impulsive noise and remarkably improved convergence behavior compared with the classical adaptive filters. Simulation results in the system identification scenario confirm that the proposed l0-norm SSAF is not only more robust but also faster and more accurate than its counterparts in sparse system identification in the presence of impulsive noise. [ABSTRACT FROM AUTHOR]
- Published
- 2014
25. Adaptive Randomness: A New Population Initialization Method.
- Author
Weifeng Pan, Kangshun Li, Muchou Wang, Jing Wang, and Bo Jiang
- Subjects
MATHEMATICAL optimization; INFORMATION theory; RANDOM numbers; MATHEMATICAL functions; STOCHASTIC convergence; GENERALIZATION
- Abstract
Population initialization is a crucial task in population-based optimization methods, which can affect the convergence speed and also the quality of the final solutions. Generally, if no a priori information about the solutions is available, the initial population is often selected randomly using random numbers. This paper presents a new initialization method by applying the concept of adaptive randomness (AR) to distribute the individuals as spaced out as possible over the search space. To verify the performance of AR, a comprehensive set of 34 benchmark functions with a wide range of dimensions is utilized. Conducted experiments demonstrate that AR-based population initialization performs better than other population initialization methods such as random population initialization, opposition-based population initialization, and generalized opposition-based population initialization in the convergence speed and the quality of the final solutions. Further, the influences of the problem dimensionality, the new control parameter, and the number of trial individuals are also investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2014
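Result 25 describes spreading the initial individuals out over the search space. One plausible realization, sketched below and not the paper's exact AR rule, generates several random trial individuals for each population slot and keeps the one farthest from the individuals already accepted; the number of trials per slot stands in for the paper's "number of trial individuals" control parameter.

```python
import numpy as np

def adaptive_random_init(pop_size, dim, lb, ub, trials=5, seed=0):
    rng = np.random.default_rng(seed)
    population = [rng.uniform(lb, ub, dim)]              # first individual is purely random
    while len(population) < pop_size:
        candidates = rng.uniform(lb, ub, size=(trials, dim))
        chosen = np.asarray(population)
        # distance from each candidate to its nearest already-selected individual
        d_min = np.min(np.linalg.norm(candidates[:, None, :] - chosen[None, :, :], axis=2), axis=1)
        population.append(candidates[np.argmax(d_min)])  # keep the most isolated candidate
    return np.asarray(population)

if __name__ == "__main__":
    print(adaptive_random_init(pop_size=10, dim=2, lb=-1.0, ub=1.0))
```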
26. A Free Search Krill Herd Algorithm for Functions Optimization.
- Author
Liangliang Li, Yongquan Zhou, and Jian Xie
- Subjects
SEARCH algorithms; MATHEMATICAL optimization; STOCHASTIC convergence; ROBUST control; PRECISION (Information retrieval); MATHEMATICAL analysis
- Abstract
To simulate the freedom and uncertain individual behavior of a krill herd, this paper introduces the opposition-based learning (OBL) strategy and a free search operator into the krill herd optimization algorithm (KH) and proposes a novel opposition-based free search krill herd optimization algorithm (FSKH). In FSKH, each krill individual can search according to its own perception and scope of activities. The free search strategy highly encourages the individuals to escape from being trapped in local optimal solutions, so the diversity and exploration ability of the krill population are improved, and FSKH can achieve a better balance between local search and global search. The experimental results on fourteen benchmark functions indicate that the proposed algorithm can be effective and feasible in both low-dimensional and high-dimensional cases, and the convergence speed and precision of FSKH are higher. Compared to the PSO, DE, KH, HS, FS, and BA algorithms, the proposed algorithm shows better optimization performance and robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2014
27. A Local and Global Search Combined Particle Swarm Optimization Algorithm and Its Convergence Analysis.
- Author
Weitian Lin, Zhigang Lian, Xingsheng Gu, and Bin Jiao
- Subjects
PARTICLE swarm optimization; STOCHASTIC convergence; COMPUTER algorithms; BENCHMARKING (Management); MATHEMATICAL analysis; MATHEMATICAL optimization
- Abstract
The particle swarm optimization algorithm (PSOA) is an advantageous optimization tool. However, it has a tendency to get stuck in near-optimal solutions, especially for middle- and large-size problems, and it is difficult to improve solution accuracy by fine-tuning parameters. To address this insufficiency, this paper studies the local and global search combined particle swarm optimization algorithm (LGSCPSOA), analyzes its convergence, and obtains its convergence qualification. At the same time, it is tested on a set of 8 benchmark continuous functions, and its optimization results are compared with those of the original particle swarm optimization algorithm (OPSOA). Experimental results indicate that the LGSCPSOA improves the search performance significantly, especially on the middle- and large-size benchmark functions. [ABSTRACT FROM AUTHOR]
- Published
- 2014
28. Flower Pollination Algorithm with Dimension by Dimension Improvement.
- Author
Rui Wang and Yongquan Zhou
- Subjects
POLLINATION; STOCHASTIC convergence; DIMENSIONS; PROBLEM solving; MATHEMATICAL optimization
- Abstract
The flower pollination algorithm (FPA) is a new nature-inspired intelligent algorithm which uses a whole update and evaluation strategy on solutions. For solving multidimensional function optimization problems, this strategy may deteriorate the convergence speed and the quality of the solutions due to interference phenomena among dimensions. To overcome this shortcoming, a dimension-by-dimension improvement based flower pollination algorithm is proposed in this paper. During the iterations of the improved algorithm, a dimension-by-dimension update and evaluation strategy on solutions is used, and, in order to enhance the local search ability, a local neighborhood search strategy is also applied. The simulation experiments show that the proposed strategies can improve the convergence speed and the quality of solutions effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2014
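The dimension-by-dimension strategy of result 28 can be read as: generate a candidate move, then apply it one coordinate at a time, keeping each coordinate change only if it improves the objective. The sketch below shows just that greedy per-dimension evaluation; the surrounding pollination moves of FPA are omitted, and the objective is an assumption.

```python
import numpy as np

def dimension_by_dimension_accept(x, candidate, obj):
    """Apply 'candidate' to 'x' one coordinate at a time, keeping only improving changes."""
    best = x.copy()
    best_f = obj(best)
    for j in range(len(x)):
        trial = best.copy()
        trial[j] = candidate[j]
        f = obj(trial)
        if f < best_f:            # greedy per-dimension acceptance
            best, best_f = trial, f
    return best, best_f

if __name__ == "__main__":
    sphere = lambda v: float(np.sum(v ** 2))
    x = np.array([1.0, -2.0, 3.0])
    cand = np.array([0.5, -2.5, 0.1])
    print(dimension_by_dimension_accept(x, cand, sphere))
```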
29. Efficient LED-SAC Sparse Estimator Using Fast Sequential Adaptive Coordinate-Wise Optimization (LED-2SAC).
- Author
Yousefi Rezaii, T., Beheshti, S., and Tinati, M. A.
- Subjects
MATHEMATICAL optimization; LINEAR equations; SIGNAL processing; LEAST squares; SEQUENTIAL analysis; STOCHASTIC convergence
- Abstract
Solving underdetermined systems of linear equations is of great interest in signal processing applications, particularly when the underlying signal to be estimated is sparse. Recently, a new sparsity-encouraging penalty function was introduced, the Linearized Exponentially Decaying penalty (LED), which yields the sparsest solution of an underdetermined system of equations subject to the minimization of the least squares loss function. A sequential solution is available for the LED-based objective function, denoted the LED-SAC algorithm. This solution, which aims to sequentially solve the LED-based objective function, ignores the sparsity of the solution. In this paper, we present a new sparse solution. The new method benefits from the sparsity of the signal both in the optimization criterion (LED) and in its solution path, denoted Sparse SAC (2SAC). The new reconstruction method, denoted LED-2SAC (LED-Sparse SAC), is consequently more efficient and considerably faster than the LED-SAC algorithm, in terms of adaptability and convergence rate. In addition, the computational complexity of both LED-SAC and LED-2SAC is shown to be of order O(d²), which is better than other batch solutions like LARS. The LARS algorithm has complexity of order O(d³ + nd²), where d is the dimension of the sparse signal and n is the number of observations. [ABSTRACT FROM AUTHOR]
- Published
- 2014
30. A Convergent Differential Evolution Algorithm with Hidden Adaptation Selection for Engineering Optimization.
- Author
Zhongbo Hu, Shengwu Xiong, Zhixiang Fang, and Qinghua Su
- Subjects
DIFFERENTIAL evolution; ALGORITHMS; STOCHASTIC convergence; PROBABILITY theory; MATHEMATICAL optimization; PARAMETERS (Statistics)
- Abstract
Many improved differential evolution (DE) algorithms have emerged since DE became a very competitive class of evolutionary computation more than a decade ago. However, few improved DE algorithms guarantee global convergence in theory. This paper develops a DE algorithm that is convergent in theory, employing a self-adaptation scheme for the parameters and two operators, that is, a uniform mutation operator and a hidden adaptation selection (haS) operator. The parameter self-adaptation and the uniform mutation operator enhance the diversity of populations and guarantee ergodicity. The haS can automatically remove some inferior individuals in the process of enhancing population diversity. The haS controls the proposed algorithm to break the loop of the current generation with a small probability. The breaking probability is a hidden adaptation and proportional to the changes in the number of inferior individuals. The proposed algorithm is tested on ten engineering optimization problems taken from IEEE CEC2011. [ABSTRACT FROM AUTHOR]
- Published
- 2014
31. An Implementable First-Order Primal-Dual Algorithm for Structured Convex Optimization.
- Author
Feng Ma, Mingfang Ni, Lei Zhu, and Zhanke Yu
- Subjects
MATHEMATICAL optimization; CONVEX functions; ALGORITHMS; STOCHASTIC convergence; PRINCIPAL components analysis
- Abstract
Many application problems of practical interest can be posed as structured convex optimization models. In this paper, we study a new first-order primal-dual algorithm. The method is easily implementable, provided that the resolvent operators of the component objective functions are simple to evaluate. We show that the proposed method can be interpreted as a proximal point algorithm with a customized metric proximal parameter. The convergence property is established under the analytic contraction framework. Finally, we verify the efficiency of the algorithm by solving the stable principal component pursuit problem. [ABSTRACT FROM AUTHOR]
- Published
- 2014
32. On the Strong Convergence of a Sufficient Descent Polak-Ribière-Polyak Conjugate Gradient Method.
- Author
Min Sun and Jing Liu
- Subjects
STOCHASTIC convergence; MATHEMATICAL optimization; LINEAR complementarity problem; CONJUGATE gradient methods; LINEAR differential equations
- Abstract
Recently, Zhang et al. proposed a sufficient descent Polak-Ribière-Polyak (SDPRP) conjugate gradient method for large-scale unconstrained optimization problems and proved its global convergence in the sense that lim inf_{k→∞} ‖g_k‖ = 0 when an Armijo-type line search is used. In this paper, motivated by the line searches proposed by Shi et al. and Zhang et al., we propose two new Armijo-type line searches and show that the SDPRP method has strong convergence in the sense that lim_{k→∞} ‖g_k‖ = 0 under the two new line searches. Numerical results are reported to show the efficiency of the SDPRP method with the new Armijo-type line searches in practical computation. [ABSTRACT FROM AUTHOR]
- Published
- 2014
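Result 32's two new line searches are Armijo-type; their exact acceptance tests are not given in the abstract, so the sketch below shows only the classical Armijo backtracking rule they modify. The constants c and the shrink factor are conventional choices, not the paper's.

```python
import numpy as np

def armijo_backtracking(f, grad, x, d, alpha0=1.0, c=1e-4, shrink=0.5, max_backtracks=50):
    """Return a step length satisfying f(x + a*d) <= f(x) + c * a * grad(x)^T d."""
    fx = f(x)
    slope = grad(x) @ d            # must be negative for a descent direction
    alpha = alpha0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= fx + c * alpha * slope:
            return alpha
        alpha *= shrink
    return alpha

if __name__ == "__main__":
    quad = lambda z: 0.5 * z @ z
    quad_grad = lambda z: z
    x = np.array([3.0, -4.0])
    print(armijo_backtracking(quad, quad_grad, x, d=-quad_grad(x)))
```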
33. An Improved Differential Evolution Method Based on the Dynamic Search Strategy to Solve Dynamic Economic Dispatch Problem with Valve-Point Effects.
- Author
Guangyu Chen and Xiaoqun Ding
- Subjects
DIFFERENTIAL evolution; MATHEMATICAL optimization; PARAMETERS (Statistics); STOCHASTIC convergence; ALGORITHMS
- Abstract
An improved differential evolution (DE) method based on the dynamic search strategy (IDEBDSS) is proposed to solve the dynamic economic dispatch problem with valve-point effects in this paper. The proposed method combines the DE algorithm with the dynamic search strategy, which improves the performance of the algorithm. DE is the main optimizer in the proposed method. Chaotic sequences are applied to obtain the dynamic parameter settings in DE, while the dynamic search strategy, which consists of two steps, a global search strategy and a local search strategy, is used to improve algorithm efficiency. To accelerate convergence, a new infeasible solution handling method is adopted in the local search strategy; meanwhile, an orthogonal crossover (OX) operator is added to the global search strategy to enhance the optimization search ability. Finally, the feasibility and effectiveness of the proposed methods are demonstrated on three test systems, and the simulation results reveal that the IDEBDSS method can obtain better solutions with higher efficiency than the standard DE and other methods reported in the recent literature. [ABSTRACT FROM AUTHOR]
- Published
- 2014
34. A Generalized Gradient Projection Filter Algorithm for Inequality Constrained Optimization.
- Author
Wei Wang, Shaoli Hua, and Junjie Tang
- Subjects
GENERALIZATION; ALGORITHMS; MATHEMATICAL inequalities; CONSTRAINTS (Physics); MATHEMATICAL optimization; STOCHASTIC convergence; ITERATIVE methods (Mathematics)
- Abstract
A generalized gradient projection filter algorithm for inequality constrained optimization is presented. It has three merits. The first is that the amount of computation is lower, since the gradient matrix only needs to be computed once at each iteration. The second is that the filter technique is used instead of any penalty function for constrained programming. The third is that the algorithm is globally convergent and locally superlinearly convergent under some mild conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2013
35. A Conjugate Gradient Method with Global Convergence for Large-Scale Unconstrained Optimization Problems.
- Author
Shengwei Yao, Xiwen Lu, and Zengxin Wei
- Subjects
CONJUGATE gradient methods; STOCHASTIC convergence; LARGE scale systems; MATHEMATICAL optimization; NUMERICAL analysis
- Abstract
The conjugate gradient (CG) method has played a special role in solving large-scale nonlinear optimization problems due to its simplicity and very low memory requirements. This paper proposes a conjugate gradient method which is similar to the Dai-Liao conjugate gradient method (Dai and Liao, 2001) but has stronger convergence properties. The given method possesses the sufficient descent condition and is globally convergent under the strong Wolfe-Powell (SWP) line search for general functions. Our numerical results show that the proposed method is very efficient for the test problems. [ABSTRACT FROM AUTHOR]
- Published
- 2013
36. Sufficient Conditions for Global Convergence of Differential Evolution Algorithm.
- Author
Zhongbo Hu, Shengwu Xiong, Qinghua Su, and Xiaowei Zhang
- Subjects
EVOLUTIONARY algorithms; STOCHASTIC convergence; DIFFERENTIAL equations; PARAMETERS (Statistics); MATHEMATICAL optimization; MATHEMATICAL functions
- Abstract
The differential evolution algorithm (DE) is one of the most powerful stochastic real-parameter optimization algorithms. Theoretical studies on DE have gradually attracted the attention of more and more researchers. However, little theoretical research has been done on the convergence conditions for DE. In this paper, a sufficient condition and a corollary for the convergence of DE to the global optima are derived by using an infinite product. A DE algorithm framework satisfying the convergence conditions is then established, and it is proved that two common mutation operators satisfy the algorithm framework. Numerical experiments are conducted in two parts. One visualizes the process by which five convergent DE variants, based on the classical DE algorithms, escape from a local optimal set on two low-dimensional functions. The other tests the performance of a modified DE algorithm, inspired by the convergent algorithm framework, on the benchmarks of CEC2005. [ABSTRACT FROM AUTHOR]
- Published
- 2013
37. A Global Convergence of LS-CD Hybrid Conjugate Gradient Method.
- Author
Xiangfei Yang, Zhijun Luo, and Xiaoyu Dai
- Subjects
STOCHASTIC convergence; HYBRID systems; CONJUGATE gradient methods; MATHEMATICAL optimization; SEARCH algorithms
- Abstract
Conjugate gradient method is one of the most effective algorithms for solving unconstrained optimization problem. In this paper, a modified conjugate gradient method is presented and analyzed which is a hybridization of known LS and CD conjugate gradient algorithms. Under some mild conditions, the Wolfe-type line search can guarantee the global convergence of the LS-CD method. The numerical results show that the algorithm is efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2013
38. Sensor Scheduling with Intelligent Optimization Algorithm Based on Quantum Theory.
- Author
Zhiguo Chen, Yi Fu, and Wenbo Xu
- Subjects
COMPUTER scheduling; MATHEMATICAL optimization; COMPUTER algorithms; QUANTUM theory; PARTICLE swarm optimization; STOCHASTIC convergence
- Abstract
The particle swarm optimization (PSO) algorithm's superiority lies in its convergence rate, but it tends to get stuck in local optima. An improved PSO algorithm is proposed using a best dimension mutation technique based on quantum theory, and it is applied to the sensor scheduling problem for target tracking. The dynamics of the target are assumed to follow a linear Gaussian model, and the sensor measurements show a linear correlation with the state of the target. This paper discusses the single target tracking problem with multiple sensors using the proposed best dimension mutation particle swarm optimization (BDMPSO) algorithm for various cases. Our experimental results verify that the proposed algorithm is able to track the target more reliably and accurately than previous ones. [ABSTRACT FROM AUTHOR]
- Published
- 2013
39. Lévy-Flight Krill Herd Algorithm.
- Author
Gaige Wang, Lihong Guo, Gandomi, Amir Hossein, Lihua Cao, Alavi, Amir Hossein, Hong Duan, and Jiang Li
- Subjects
MATHEMATICAL optimization; STOCHASTIC convergence; PROBLEM solving; MATHEMATICAL functions; METAHEURISTIC algorithms; OPTIMAL control theory
- Abstract
To improve the performance of the krill herd (KH) algorithm, in this paper, a Lévy-flight krill herd (LKH) algorithm is proposed for solving optimization tasks within limited computing time. The improvement includes the addition of a new local Lévy-flight (LLF) operator during the process of updating the krill in order to improve its efficiency and reliability in coping with global numerical optimization problems. The LLF operator encourages exploitation and makes the krill individuals search the space carefully at the end of the search. The elitism scheme is also applied to keep the best krill during the updating process. Fourteen standard benchmark functions are used to verify the effects of these improvements, and it is illustrated that, in most cases, the performance of this novel metaheuristic LKH method is superior to, or at least highly competitive with, the standard KH and other population-based optimization methods. In particular, this new method can accelerate the global convergence speed to the true global optimum while preserving the main features of the basic KH. [ABSTRACT FROM AUTHOR]
- Published
- 2013
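Lévy-flight steps such as those added in result 39 are commonly generated with Mantegna's algorithm; the sketch below uses the usual exponent β = 1.5, a conventional default rather than necessarily the paper's setting.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Draw one Lévy-distributed step vector via Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)   # numerator draws with the Mantegna scale
    v = rng.normal(0.0, 1.0, dim)       # denominator draws with unit scale
    return u / np.abs(v) ** (1 / beta)

if __name__ == "__main__":
    print(levy_step(5, rng=np.random.default_rng(42)))
```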
40. A Simple and Efficient Artificial Bee Colony Algorithm.
- Author
Yunfeng Xu, Ping Fan, and Ling Yuan
- Subjects
BEES algorithm; MATHEMATICAL optimization; STOCHASTIC analysis; STOCHASTIC convergence; PERFORMANCE evaluation; MATHEMATICAL functions; SIMULATION methods & models; POPULATION research
- Abstract
Artificial bee colony (ABC) is a new population-based stochastic algorithm which has shown good search abilities on many optimization problems. However, the original ABC shows slow convergence speed during the search process. In order to enhance the performance of ABC, this paper proposes a new artificial bee colony (NABC) algorithm, which modifies the search pattern of both employed and onlooker bees. A solution pool is constructed by storing some of the best solutions of the current swarm. New candidate solutions are generated by searching the neighborhood of solutions randomly chosen from the solution pool. Experiments are conducted on a set of twelve benchmark functions. Simulation results show that our approach is significantly better than or at least comparable to the original ABC and seven other stochastic algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2013
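The search pattern described in result 40, generating candidates in the neighborhood of solutions drawn from a pool of the current best solutions, can be sketched as below; the pool size, the particular neighborhood equation, and the objective are illustrative assumptions.

```python
import numpy as np

def best_pool(population, fitness, pool_size):
    # Keep the pool_size best solutions of the current swarm (minimization).
    order = np.argsort(fitness)[:pool_size]
    return population[order]

def neighborhood_candidate(x_i, pool, rng):
    # New candidate: move x_i toward a randomly chosen pool member along one random dimension.
    guide = pool[rng.integers(len(pool))]
    j = rng.integers(x_i.shape[0])
    phi = rng.uniform(-1.0, 1.0)
    candidate = x_i.copy()
    candidate[j] = guide[j] + phi * (guide[j] - x_i[j])
    return candidate

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pop = rng.uniform(-5, 5, size=(20, 4))
    fit = np.array([np.sum(x**2) for x in pop])
    pool = best_pool(pop, fit, pool_size=5)
    print(neighborhood_candidate(pop[0], pool, rng))
```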
41. An Improved Harmony Search Based on Teaching-Learning Strategy for Unconstrained Optimization Problems.
- Author
Shouheng Tuo, Longquan Yong, and Tao Zhou
- Subjects
SEARCH algorithms; MATHEMATICAL optimization; METAHEURISTIC algorithms; MUSIC improvisation; STOCHASTIC convergence; ROBUST control; MACHINE learning
- Abstract
The harmony search (HS) algorithm is an emerging population-based metaheuristic algorithm inspired by the music improvisation process. The HS method has been developed rapidly and applied widely during the past decade. In this paper, an improved global harmony search algorithm, named harmony search based on teaching-learning (HSTL), is presented for high-dimensional complex optimization problems. In the HSTL algorithm, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to maintain a proper balance between convergence and population diversity, and a dynamic strategy is adopted to change the parameters. The proposed HSTL algorithm is investigated and compared with three other state-of-the-art HS optimization algorithms. Furthermore, to demonstrate robustness and convergence, the success rate and convergence analysis are also studied. The experimental results on 31 complex benchmark functions demonstrate that the HSTL method has strong convergence and robustness and a better balance between space exploration and local exploitation on high-dimensional complex optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2013
42. A Self-Adjusting Spectral Conjugate Gradient Method for Large-Scale Unconstrained Optimization.
- Author
Yuanying Qiu, Dandan Cui, Wei Xue, and Gaohang Yu
- Subjects
MATHEMATICAL optimization; STOCHASTIC convergence; PROBLEM solving; NUMERICAL analysis; MATHEMATICAL models
- Abstract
This paper presents a hybrid spectral conjugate gradient method for large-scale unconstrained optimization, which possesses a self-adjusting property. Under the standard Wolfe conditions, its global convergence result is established. Preliminary numerical results are reported on a set of large-scale problems in CUTEr to show the convergence and efficiency of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2013
43. Nonlinear Conjugate Gradient Methods with Wolfe Type Line Search.
- Author
Yuan-Yuan Chen and Shou-Qiang Du
- Subjects
NONLINEAR systems; CONJUGATE gradient methods; MATHEMATICAL optimization; PROBLEM solving; STOCHASTIC convergence; GLOBAL analysis (Mathematics)
- Abstract
The nonlinear conjugate gradient method is one of the useful methods for unconstrained optimization problems. In this paper, we consider three kinds of nonlinear conjugate gradient methods with Wolfe-type line search for unconstrained optimization problems. Under some mild assumptions, the global convergence results of the given methods are proved. The numerical results show that the nonlinear conjugate gradient methods with Wolfe-type line search are efficient for some unconstrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2013
44. Single Directional SMO Algorithm for Least Squares Support Vector Machines.
- Author
Xigao Shao, Kun Wu, and Bifeng Liao
- Subjects
ALGORITHMS; LEAST squares; SUPPORT vector machines; MATHEMATICAL decomposition; STOCHASTIC convergence; PROOF theory; MATHEMATICAL optimization
- Abstract
Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for the selection of working set in sequential minimal optimization- (SMO-) type decomposition methods is proposed. By the new method, we can select a single direction to achieve the convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the classification accuracy of the new method is not largely different from the existing methods, but the training speed is faster than existing ones. [ABSTRACT FROM AUTHOR]
- Published
- 2013
45. Strong Convergence of a Projected Gradient Method.
- Author
Shunhou Fan and Yonghong Yao
- Subjects
STOCHASTIC convergence; CONJUGATE gradient methods; MATHEMATICAL optimization; NUMERICAL solutions to boundary value problems; MATHEMATICAL analysis; CONVEX domains
- Abstract
The projected-gradient method is a powerful tool for solving constrained convex optimization problems and has extensively been studied. In the present paper, a projected-gradient method is presented for solving the minimization problem, and the strong convergence analysis of the suggested gradient projection method is given. [ABSTRACT FROM AUTHOR]
- Published
- 2012
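Result 45 concerns a projected-gradient method; the basic iteration it strengthens is x_{k+1} = P_C(x_k - γ∇f(x_k)). A minimal sketch with a box constraint set follows; the box is chosen purely for illustration, since its projection is a one-liner.

```python
import numpy as np

def project_box(x, lower, upper):
    # Euclidean projection onto the box {x : lower <= x <= upper}.
    return np.clip(x, lower, upper)

def projected_gradient(grad, x0, lower, upper, step=0.1, iters=100):
    x = project_box(np.asarray(x0, dtype=float), lower, upper)
    for _ in range(iters):
        x = project_box(x - step * grad(x), lower, upper)   # x_{k+1} = P_C(x_k - step * grad f(x_k))
    return x

if __name__ == "__main__":
    # Minimize f(x) = ||x - c||^2 / 2 over the box [0, 1]^2, with c outside the box.
    c = np.array([2.0, -1.0])
    grad = lambda x: x - c
    print(projected_gradient(grad, x0=[0.5, 0.5], lower=0.0, upper=1.0))  # -> approximately [1, 0]
```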
46. Convergence Study of Minimizing the Nonconvex Total Delay Using the Lane-Based Optimization Method for Signal-Controlled Junctions.
- Author
Wong, C. K. and Lee, Y. Y.
- Subjects
STOCHASTIC convergence; MATHEMATICAL optimization; NONLINEAR systems; INTEGER programming; ALGORITHM research
- Abstract
This paper presents a 2D convergence density criterion for minimizing the total junction delay at isolated junctions in the lane-based optimization framework. The lane-based method integrates the design of lane markings and signal settings for traffic movements in a unified framework. The problem of delay minimization is formulated as a Binary Mixed-Integer Nonlinear Program (BMINLP). A cutting plane algorithm can be applied to solve this difficult BMINLP problem by adding hyperplanes sequentially until sufficient numbers of planes are created, in the form of solution constraints, to replicate the original nonlinear surface in the solution space. A set of constraints is set up to ensure the feasibility and safety of the resultant optimized lane markings and signal settings. The main difficulty in solving this high-dimensional nonlinear nonconvex delay minimization problem using the cutting plane algorithm is the substantial computational effort required to reach a good-quality solution while approximating the nonlinear solution space. A new stopping criterion is proposed by monitoring a 2D convergence density to obtain a converged solution. A numerical example is given to demonstrate the effectiveness of the proposed methodology. The cutting-plane algorithm producing an effective signal design becomes more computationally attractive with the adoption of the proposed stopping criterion. [ABSTRACT FROM AUTHOR]
- Published
- 2012
47. Endogenous Reactivity in a Dynamic Model of Consumer's Choice.
- Author
Naimzada, Ahmad K. and Tramontana, Fabio
- Subjects
BOUNDARY value problems; DECISION making; STOCHASTIC convergence; MATHEMATICAL optimization; PROBABILITY theory
- Abstract
We start from a boundedly rational consumer model (Naimzada and Tramontana, 2008, 2010) characterized by a gradient-like decision process in which, under particular parameter conditions, asymptotic convergence to the optimal choice does not occur, although it does under a least squares learning mechanism. In the present paper, we prove that even a less sophisticated learning mechanism leads to convergence to the rational choice, and we also prove that convergence is ensured when both learning mechanisms are available. The stability results that we obtain give more strength to the rational behavior assumption of the original model; in fact, the less demanding the learning mechanism ensuring convergence to rational behavior is, the higher the probability that even quite naive consumers will learn the composition of their optimum consumption bundles. [ABSTRACT FROM AUTHOR]
- Published
- 2012
48. Stabilizing of Subspaces Based on DPGA and Chaos Genetic Algorithm for Optimizing State Feedback Controller.
- Author
Hosseinpour, M., Nikdel, P., Badamchizadeh, M. A., and Poor, M. A.
- Subjects
STATE feedback (Feedback control systems); ALGORITHMS; FEEDBACK control systems; MATHEMATICAL optimization; COMBINATORIAL optimization; STOCHASTIC convergence
- Abstract
The main purpose of this paper is to optimize state feedback parameters using an intelligent method combining GA, the Hermite-Biehler approach, and a chaos algorithm. GA is implemented for local search, but it has some deficiencies, such as trapping in a local minimum and slow convergence, so the combination of Hermite-Biehler and the chaos algorithm has been added to GA to avoid these deficiencies. Dividing the search space is usually done by a distributed population genetic algorithm (DPGA). Moreover, using the generalized Hermite-Biehler Theorem, the domain of the parameters can be found. In order to speed up convergence, at the first step the Hermite-Biehler method finds some intervals for the controller; in the next step the GA is added; and, finally, a chaos disturbance helps the algorithm reach a global minimum. Therefore, the proposed method can optimize the parameters of the state feedback controller. [ABSTRACT FROM AUTHOR]
- Published
- 2012
49. Strong Convergence Theorems for Equilibrium Problems and k-Strict Pseudocontractions in Hilbert Spaces.
- Author
Dao-Jun Wen
- Subjects
STOCHASTIC convergence; FIXED point theory; HILBERT space; MATHEMATICAL mappings; ITERATIVE methods (Mathematics); MATHEMATICAL optimization; MATHEMATICAL inequalities
- Abstract
We introduce a new iterative scheme for finding a common element of the set of solutions of an equilibrium problem and the set of common fixed point of a finite family of k-strictly pseudo-contractive nonself-mappings. Strong convergence theorems are established in a real Hilbert space under some suitable conditions. Our theorems presented in this paper improve and extend the corresponding results announced by many others. [ABSTRACT FROM AUTHOR]
- Published
- 2011
50. A Clustering Approach Using Cooperative Artificial Bee Colony Algorithm.
- Author
Wenping Zou, Yunlong Zhu, Hanning Chen, and Xin Sui
- Subjects
ALGORITHMS; DECISION support systems; MANAGEMENT information systems; MATHEMATICAL optimization; STOCHASTIC convergence; PARTICLE swarm optimization
- Abstract
Artificial Bee Colony (ABC) is one of the most recently introduced algorithms based on the intelligent foraging behavior of a honey bee swarm. This paper presents an extended ABC algorithm, namely, the Cooperative Artificial Bee Colony (CABC), which significantly improves the original ABC in solving complex optimization problems. Clustering is a popular data analysis and data mining technique; therefore, the CABC can be used for solving clustering problems. In this work, first the CABC algorithm is used for optimizing six widely used benchmark functions, and the comparative results produced by ABC, Particle Swarm Optimization (PSO), and its cooperative version (CPSO) are studied. Second, the CABC algorithm is used for data clustering on several benchmark data sets. The performance of the CABC algorithm is compared with the PSO, CPSO, and ABC algorithms on clustering problems. The simulation results show that the proposed CABC outperforms the other three algorithms in terms of accuracy, robustness, and convergence speed. [ABSTRACT FROM AUTHOR]
- Published
- 2010