251 results
Search Results
2. An area-type nonmonotone filter method for nonlinear constrained optimization.
- Author
- Ke Su, Wei Lu, and Shaohua Liu
- Subjects
- NONMONOTONIC logic, MATHEMATICAL optimization, ALGORITHMS, STOCHASTIC convergence, PARAMETERS (Statistics)
- Abstract
In this paper, we define a new area-type filter algorithm based on the trust-region method. A relaxed trust-region quadratic correction subproblem is proposed to compute the trial direction at the current point. The objective function value and the constraint violation at the current point are considered as a point pair. We divide the point pairs into different partitions by the dominant region of the filter and calculate their contributions to the area of the filter separately. Unlike the conventional filter, we use this contribution as the filter acceptance criterion for the trial point. A nonmonotone area-average form is also adopted in the filter mechanism. Both monotone and nonmonotone methods are proposed and compared numerically. Furthermore, the algorithm is proved to be convergent under some reasonable assumptions. Numerical experiments show the effectiveness of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
3. Subgradient-Based Neural Networks for Nonsmooth Nonconvex Optimization Problems.
- Author
- Wei Bian and Xiaoping Xue
- Subjects
- ARTIFICIAL neural networks, NONCONVEX programming, STOCHASTIC convergence, MATHEMATICAL optimization, ALGORITHMS, SIGNAL processing
- Abstract
This paper presents a subgradient-based neural network to solve a nonsmooth nonconvex optimization problem with a nonsmooth nonconvex objective function, a class of affine equality constraints, and a class of nonsmooth convex inequality constraints. The proposed neural network is modeled with a differential inclusion. Under a suitable assumption on the constraint set and a proper assumption on the objective function, it is proved that for a sufficiently large penalty parameter, there exists a unique global solution to the neural network and the trajectory of the network can reach the feasible region in finite time and stay there thereafter. It is proved that the trajectory of the neural network converges to the set which consists of the equilibrium points of the neural network, and coincides with the set which consists of the critical points of the objective function in the feasible region. A condition is given to ensure the convergence to the equilibrium point set in finite time. Moreover, under suitable assumptions, the coincidence between the solution to the differential inclusion and the "slow solution" of it is also proved. Furthermore, three typical examples are given to present the effectiveness of the theoretic results obtained in this paper and the good performance of the proposed neural network. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
4. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM.
- Author
- Giselsson, Pontus and Boyd, Stephen
- Subjects
- LINEAR systems, STOCHASTIC convergence, MULTIPLIERS (Mathematical analysis), MATHEMATICAL optimization, ALGORITHMS
- Abstract
Recently, several convergence rate results for Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) have been presented in the literature. In this paper, we show global linear convergence rate bounds for Douglas-Rachford splitting and ADMM under strong convexity and smoothness assumptions. We further show that the rate bounds are tight for the class of problems under consideration for all feasible algorithm parameters. For problems that satisfy the assumptions, we show how to select step-size and metric for the algorithm that optimize the derived convergence rate bounds. For problems with a similar structure that do not satisfy the assumptions, we present heuristic step-size and metric selection methods. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
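The paper above derives rate bounds rather than code, but the iteration it studies is compact enough to sketch. Below is a minimal Douglas-Rachford splitting loop for a toy problem min f(x) + g(x) with f strongly convex and smooth and g the l1 norm; the anchor vector a, step-size gamma, and relaxation alpha are illustrative assumptions, not the optimized choices derived in the paper.

```python
import numpy as np

def prox_f(v, gamma):
    # prox of f(x) = 0.5*||x - a||^2 (strongly convex, smooth): closed form
    a = np.array([1.0, 2.0])
    return (v + gamma * a) / (1.0 + gamma)

def prox_g(v, gamma):
    # prox of g(x) = ||x||_1: soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

def douglas_rachford(z, gamma=1.0, alpha=0.5, iters=200):
    """Standard Douglas-Rachford splitting for min f(x) + g(x)."""
    for _ in range(iters):
        x = prox_f(z, gamma)            # first resolvent step
        y = prox_g(2.0 * x - z, gamma)  # reflected second resolvent step
        z = z + 2.0 * alpha * (y - x)   # relaxed fixed-point update
    return prox_f(z, gamma)

print(douglas_rachford(np.zeros(2)))  # converges to soft-threshold(a, 1) = [0, 1]
```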
5. Efficient Design of Microstrip Antennas for SDR Applications Using Modified PSO Algorithm.
- Author
- Modiri, A. and Kiasaleh, K.
- Subjects
- MICROSTRIP antennas, ALGORITHMS, PARTICLE swarm optimization, SOFTWARE radio, STOCHASTIC convergence, MATHEMATICAL optimization, NUMERICAL analysis
- Abstract
In this paper, several irregularly shaped microstrip antenna structures are designed using a new version of the Binary Particle Swarm Optimization (BPSO) algorithm in which the velocity calculation is modified to yield a more accelerated yet still robust search procedure, aimed particularly at software-defined radio (SDR) applications. The optimization results obtained with the modified and the conventional BPSO algorithms are compared. Pros and cons are studied in terms of optimization length, convergence speed, and conformity of the final design to the desired objectives. It is shown that the modified BPSO meets the design criteria considerably faster than the conventional one, at the cost of slightly limiting particle exploration ability. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
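For orientation, here is a sketch of the conventional binary PSO baseline that papers like the one above modify. The sigmoid velocity-to-probability mapping is the standard Kennedy-Eberhart scheme; the toy fitness function and coefficient values are assumptions for illustration only, and the paper's modified velocity calculation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # toy objective (an assumption): maximize the number of ones
    return x.sum()

n_particles, n_bits, iters = 20, 16, 100
x = rng.integers(0, 2, (n_particles, n_bits)).astype(float)
v = np.zeros((n_particles, n_bits))
pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_val.argmax()].copy()

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration weights (generic choices)
for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    prob = 1.0 / (1.0 + np.exp(-v))  # sigmoid maps velocity to bit probability
    x = (rng.random(x.shape) < prob).astype(float)
    vals = np.array([fitness(p) for p in x])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(gbest, fitness(gbest))
```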
6. The robust constant and its applications in random global search for unconstrained global optimization.
- Author
- Peng, Zheng, Wu, Donghua, and Zhu, Wenxing
- Subjects
- ROBUST optimization, GLOBAL optimization, MATHEMATICAL optimization, MATHEMATICAL analysis, ALGORITHMS, STOCHASTIC convergence
- Abstract
Robustness analysis is important for designing and analyzing algorithms for global optimization. In this paper, we introduce a new concept, the robust constant, to quantitatively characterize the robustness of measurable sets and functions. The new concept is consistent with the theoretical robustness presented in the literature. This paper shows that, from the perspectives of convergence theory and numerical computational cost, the robust constant is significantly valuable for analyzing random global search methods for unconstrained global optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
7. On the Theoretical Analysis of the Plant Propagation Algorithms.
- Author
- Sulaiman, Muhammad, Salhi, Abdellah, Khan, Asfandyar, Muhammad, Shakoor, and Khan, Wali
- Subjects
- MATHEMATICAL optimization, ALGORITHMS, HEURISTIC algorithms, PROBLEM solving, STOCHASTIC convergence
- Abstract
Plant Propagation Algorithms (PPA) are powerful and flexible solvers for optimisation problems. They are nature-inspired heuristics which can be applied to any optimisation/search problem. There is a growing body of research, mainly experimental, on PPA in the literature. Little, however, has been done on the theoretical front. Given the prominence this algorithm is gaining in terms of performance on benchmark problems as well as practical ones, some theoretical insight into its convergence is needed. The current paper aims to fill this gap by providing a sketch of a global convergence analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
8. Injection moulding optimisation of multi-class design variables using a PSO algorithm.
- Author
- Deng, Y.-M., Zheng, D., and Lu, X.-J.
- Subjects
- MOLDING (Founding), ALGORITHMS, MATHEMATICAL optimization, STOCHASTIC convergence, COMPUTER software
- Abstract
Injection moulding optimisation seeks to achieve the highest possible moulding quality under the specified constraints. To this end, the factors (design variables) affecting the moulding quality should be adjusted, including process parameters, mould design, part geometry, etc. Past work in this area has primarily focused on tuning the process parameters and the mould design (e.g., gate location, runner and cooling channel layout), with less attention to the part geometry, and none to all of them together. To address this problem, this paper presents a PSO (particle swarm optimisation) algorithm for the optimisation of multi-class design variables, such as the part thickness, process parameters (melt temperature, mould temperature, injection time) and gate location. The optimisation is targeted at different aspects of moulding quality, including part warpage, weld lines, air traps, and so on. In applying the PSO algorithm, the paper proposes a modified elite archiving method, which can expedite the convergence speed, hence improving the efficiency of the algorithm. A computer program was developed that automates steps such as adjusting the part thickness, the injection moulding process parameters and the gate location, activating the CAE software to simulate the injection moulding process, retrieving the simulation results, and evaluating the objective functions. The whole procedure iterates over a number of generations following the search process of the algorithm. A case study is also presented to illustrate as well as to test the proposed methodology, which was demonstrated to be both effective and efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
9. Guaranteeing Practical Convergence in Algorithms for Sensor and Source Localization.
- Author
- Fidan, Bariş, Dasgupta, Soura, and Anderson, Brian D. O.
- Subjects
- STOCHASTIC convergence, DETECTORS, ALGORITHMS, SENSOR networks, MATHEMATICAL optimization, MATHEMATICAL analysis, SIMULATION methods & models, ENGINEERING instruments, MATHEMATICAL functions
- Abstract
This paper considers localization of a source or a sensor from distance measurements. We argue that linear algorithms proposed for this purpose are susceptible to poor noise performance. Instead, given a set of sensors/anchors of known positions and the measured distances from them to the source/sensor to be localized, we propose a potentially nonconvex weighted cost function whose global minimum estimates the location of the source/sensor one seeks. The contribution of this paper is to provide nontrivial ellipsoidal and polytopic regions surrounding these sensors/anchors of known positions, such that if the object to be localized is in this region, localization occurs by globally exponentially convergent gradient descent in the noise-free case. Exponential convergence in the noise-free case represents practical convergence as it ensures graceful performance degradation in the presence of noise. These results guide the deployment of sensors/anchors so that small subsets can be made responsible for practical localization in geographical areas determined by our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
10. On the Convergence of Multiplicative Update Algorithms for Nonnegative Matrix Factorization.
- Author
- Chih-Jen Lin
- Subjects
- STOCHASTIC convergence, NONNEGATIVE matrices, FACTORIZATION, ALGORITHMS, STATIONARY processes, MATHEMATICAL optimization
- Abstract
Nonnegative matrix factorization (NMF) is useful to find basis information of nonnegative data. Currently, multiplicative updates are a simple and popular way to find the factorization. However, for the common NMF approach of minimizing the Euclidean distance between approximate and true values, no proof has shown that multiplicative updates converge to a stationary point of the NMF optimization problem. Stationarity is important as it is a necessary condition of a local minimum. This paper discusses the difficulty of proving the convergence. We propose slight modifications of existing updates and prove their convergence. Techniques invented in this paper may be applied to prove the convergence for other bound-constrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
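The multiplicative updates discussed in the abstract above are the classical Lee-Seung rules for the Euclidean NMF objective; a minimal sketch follows. The small eps safeguard in the denominators is a naive stand-in, not Lin's more careful modification that yields the convergence guarantee.

```python
import numpy as np

def nmf_multiplicative(V, r, iters=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for min ||V - WH||_F^2.
    eps keeps denominators positive; Lin's convergent variant modifies
    the updates more carefully than this simple safeguard."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = nmf_multiplicative(V, r=5)
print(np.linalg.norm(V - W @ H))  # reconstruction error decreases monotonically
```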
11. Toward the Training of Feed-Forward Neural Networks With the D-Optimum Input Sequence.
- Author
- Witczak, Marcin
- Subjects
- STOCHASTIC convergence, EXPERIMENTAL design, MATHEMATICAL optimization, TRAINING, ARTIFICIAL neural networks, ALGORITHMS
- Abstract
The problem under consideration is to obtain a measurement schedule for training neural networks. This task is perceived as an experimental design in a given design space that is obtained in such a way as to minimize the difference between the neural network and the system being considered. This difference can be expressed in many different ways, and one of them, namely the D-optimality criterion, is used in this paper. In particular, the paper presents a unified and comprehensive treatment of this problem by discussing the existing and previously unpublished properties of the optimum experimental design (OED) for neural networks. The consequences of the above properties are discussed as well. A hybrid algorithm that can be used for both the training and data development of neural networks is another important contribution of this paper. A careful analysis of the algorithm is presented, and its comprehensive convergence analysis with the help of the Lyapunov method is given. The paper contains a number of numerical examples that justify the application of the OED theory to neural networks. Moreover, an industrial application example is given that deals with a valve actuator. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
12. Optimization over the Pareto outcome set associated with a convex bi-objective optimization problem: theoretical results, deterministic algorithm and application to the stochastic case.
- Author
- Bonnel, Henri and Collonge, Julien
- Subjects
- PARETO optimum, MATHEMATICAL optimization, ALGORITHMS, RANDOM functions (Mathematics), STOCHASTIC convergence
- Abstract
Our paper consists of two main parts. In the first one, we deal with the deterministic problem of minimizing a real valued function f over the Pareto outcome set associated with a deterministic convex bi-objective optimization problem (BOP), in the particular case where f depends on the objectives of (BOP), i.e. we optimize over the Pareto set in the outcome space. In general, the optimal value U of such a kind of problem cannot be computed directly, so we propose a deterministic outcome space algorithm whose principle is to give at every step a range (lower bound, upper bound) that contains U. Then we show that for any given error bound, the algorithm terminates in a finite number of steps. In the second part of our paper, in order to handle also the stochastic case, we consider the situation where the two objectives of (BOP) are given by expectations of random functions, and we deal with the stochastic problem (S) of minimizing a real valued function f over the Pareto outcome set associated with this Stochastic bi-objective Optimization Problem (SBOP). Because of the presence of random functions, the Pareto set associated with this type of problem cannot be explicitly given, and thus it is not possible to compute the optimal value V of problem (S). That is why we consider a sequence of Sample Average Approximation problems (SAA-N, where N is the sample size) whose optimal values converge almost surely to V as the sample size N goes to infinity. Assuming f nondecreasing, we show that the convergence rate is exponential, and we propose a confidence interval for V. Finally, some computational results are given to illustrate the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
13. A Derivative-Free Trust Region Algorithm with Nonmonotone Filter Technique for Bound Constrained Optimization.
- Author
- Gao, Jing, Cao, Jian, and Yang, Yueting
- Subjects
- NONDIFFERENTIABLE functions, MATHEMATICAL optimization, ALGORITHMS, MATHEMATICAL bounds, STOCHASTIC convergence
- Abstract
We propose a derivative-free trust region algorithm with a nonmonotone filter technique for bound constrained optimization. The derivative-free strategy is applied to minimization problems in which not all derivatives are available. The nonmonotone filter technique preserves the trust region feature and ensures global convergence under reasonable assumptions. Numerical experiments demonstrate that the new algorithm is effective for bound constrained optimization. Parameters that are locally optimal with respect to overall computational time on a set of test problems are identified. The performance of the best parameter values obtained by the presented algorithm, which differ from the traditionally used values, indicates that the proposed algorithm has a certain advantage for nondifferentiable optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
14. Center-based l1-clustering method.
- Author
- Sabo, Kristian
- Subjects
- MATHEMATICAL optimization, ALGORITHMS, OUTLIERS (Statistics), DATA editing, ITERATIVE methods (Mathematics), STOCHASTIC convergence
- Abstract
In this paper, we consider the l1-clustering problem for a finite data-point set which should be partitioned into k disjoint nonempty subsets. In that case, the objective function need be neither convex nor differentiable, and generally it may have many local or global minima. Therefore, it becomes a complex global optimization problem. A method of searching for a locally optimal solution is proposed in the paper, the convergence of the corresponding iterative process is proved, and the corresponding algorithm is given. The method is illustrated and compared with some other clustering methods, especially with the l2-clustering method, which is also known in the literature as the smooth k-means method, in a few typical situations, such as the presence of outliers among the data and the clustering of incomplete data. Numerical experiments show that in these cases the proposed l1-clustering algorithm is faster and gives significantly better results than the l2-clustering algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
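A center-based l1 clustering step can be sketched as the classical k-medians iteration: assign points by l1 distance, then update each center as the coordinatewise median, which is the l1-optimal center. This is a generic illustration under that assumption; the paper's specific iterative process and convergence proof are not reproduced.

```python
import numpy as np

def l1_kmedians(X, k, iters=50, seed=0):
    """Center-based l1 clustering sketch: assign points by l1 distance,
    then update each center as the coordinatewise median of its cluster."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)  # l1 distances
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = np.median(X[labels == j], axis=0)  # l1-optimal center
    return centers, labels

X = np.vstack([np.random.default_rng(2).normal(m, 0.5, (50, 2)) for m in (0.0, 5.0)])
centers, labels = l1_kmedians(X, k=2)
print(centers)
```

The coordinatewise median makes each center update robust to outliers, which is exactly the property the abstract contrasts against l2 (k-means) clustering.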
15. Restricted Normal Cones and Sparsity Optimization with Affine Constraints.
- Author
- Bauschke, Heinz, Luke, D., Phan, Hung, and Wang, Xianfu
- Subjects
- MATHEMATICAL optimization, PROBLEM solving, PARALLEL computers, STOCHASTIC convergence, ALGORITHMS, ESTIMATION theory
- Abstract
The problem of finding a vector with the fewest nonzero elements that satisfies an underdetermined system of linear equations is an NP-complete problem that is typically solved numerically via convex heuristics or nicely behaved nonconvex relaxations. In this paper we consider the elementary method of alternating projections (MAP) for solving the sparsity optimization problem without employing convex heuristics. In a parallel paper we recently introduced the restricted normal cone which generalizes the classical Mordukhovich normal cone and reconciles some fundamental gaps in the theory of sufficient conditions for local linear convergence of the MAP algorithm. We use the restricted normal cone together with the notion of superregularity, which is inherently satisfied for the affine sparse optimization problem, to obtain local linear convergence results with estimates for the radius of convergence of the MAP algorithm applied to sparsity optimization with an affine constraint. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
16. A Conjugate Gradient Algorithm under Yuan-Wei-Lu Line Search Technique for Large-Scale Minimization Optimization Models.
- Author
- Li, Xiangrong, Wang, Songhua, Jin, Zhongzhou, and Pham, Hongtruong
- Subjects
- CONJUGATE gradient methods, ALGORITHMS, STOCHASTIC convergence, CONVEX functions, MATHEMATICAL optimization
- Abstract
This paper gives a modified Hestenes and Stiefel (HS) conjugate gradient algorithm under the Yuan-Wei-Lu inexact line search technique for large-scale unconstrained optimization problems, where the proposed algorithm has the following properties: (1) the new search direction possesses not only a sufficient descent property but also a trust region feature; (2) the presented algorithm has global convergence for nonconvex functions; (3) numerical experiments show that the new algorithm is more effective than similar algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
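As context for the abstract above, here is a generic conjugate gradient loop with the Hestenes-Stiefel beta. The Armijo backtracking below is a common stand-in, not the Yuan-Wei-Lu line search of the paper, and the steepest-descent restart is a standard safeguard rather than the paper's mechanism.

```python
import numpy as np

def cg_hs(f, grad, x, iters=100, tol=1e-6):
    """Conjugate gradient with the Hestenes-Stiefel beta and plain Armijo
    backtracking (a stand-in for the paper's Yuan-Wei-Lu line search)."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha, rho, sigma = 1.0, 0.5, 1e-4
        while f(x + alpha * d) > f(x) + sigma * alpha * g @ d:  # Armijo condition
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        beta = (g_new @ y) / (d @ y + 1e-12)  # Hestenes-Stiefel formula
        d = -g_new + beta * d
        if g_new @ d >= 0:
            d = -g_new  # safeguard: restart with steepest descent
        x, g = x_new, g_new
    return x

f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
print(cg_hs(f, grad, np.zeros(2)))  # approaches (1, -2)
```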
17. Parallel distributed block coordinate descent methods based on pairwise comparison oracle.
- Author
- Matsui, Kota, Kumagai, Wataru, and Kanamori, Takafumi
- Subjects
- MATHEMATICAL optimization, ALGORITHMS, COORDINATES, STOCHASTIC convergence, ESTIMATION theory
- Abstract
This paper provides a block coordinate descent algorithm to solve unconstrained optimization problems. Our algorithm uses only pairwise comparison of function values, which tells us only the order of function values over two points, and does not require computation of a function value itself or a gradient. Our algorithm iterates two steps: the direction estimate step and the search step. In the direction estimate step, a Newton-type search direction is estimated through a block coordinate descent-based computation method with the pairwise comparison. In the search step, a numerical solution is updated along the estimated direction. The computation in the direction estimate step can be easily parallelized, and thus, the algorithm works efficiently to find the minimizer of the objective function. Also, we theoretically derive an upper bound of the convergence rate for our algorithm and show that our algorithm achieves the optimal query complexity for specific cases. In numerical experiments, we show that our method efficiently finds the optimal solution compared to some existing methods based on the pairwise comparison. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
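To illustrate the oracle model in the abstract above, the sketch below performs coordinate descent using only pairwise comparisons of function values, never the values themselves. The probe-and-shrink scheme is a deliberately simple assumption; the paper's Newton-type direction estimation and parallelization are not reproduced.

```python
import numpy as np

def compare(f, x, y):
    """Pairwise comparison oracle: reports only whether f(x) < f(y);
    callers never observe the function values themselves."""
    return f(x) < f(y)

def coordinate_descent_comparison(f, x, radius=1.0, sweeps=60, shrink=0.7):
    """Comparison-only coordinate descent sketch: along each coordinate,
    probe a forward and a backward point and keep whichever compares better."""
    x = x.copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            for step in (radius, -radius):
                y = x.copy()
                y[i] += step
                if compare(f, y, x):  # accept only if strictly better
                    x = y
        radius *= shrink  # anneal the probe radius
    return x

f = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2
print(coordinate_descent_comparison(f, np.zeros(2)))  # approaches (3, -1)
```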
18. Almost sure convergence of randomised‐difference descent algorithm for stochastic convex optimisation.
- Author
- Geng, Xiaoxue, Huang, Gao, and Zhao, Wenxiao
- Subjects
- STOCHASTIC convergence, CONVEX geometry, ALGORITHMS, STOCHASTIC control theory, MATHEMATICAL optimization, MACHINE learning
- Abstract
The stochastic gradient descent algorithm is a classical and useful method for stochastic optimisation. While stochastic gradient descent has been theoretically investigated for decades and successfully applied in machine learning, such as in the training of deep neural networks, it essentially relies on obtaining unbiased estimates of gradients/subgradients of the objective functions. In this paper, by constructing randomised differences of the objective function, a gradient-free algorithm, named the stochastic randomised-difference descent algorithm, is proposed for stochastic convex optimisation. Under the strong convexity assumption on the objective function, it is proved that the estimates generated by stochastic randomised-difference descent converge to the optimal value with probability one, and the convergence rates of both the mean square error of the estimates and the regret functions are established. Finally, some numerical examples are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
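The randomised-difference idea in the abstract above resembles classical two-point stochastic approximation. Below is a hedged SPSA-style sketch in which a randomised two-point difference of noisy function values replaces the gradient; the step-size and perturbation schedules are generic textbook choices, not the ones the paper analyzes.

```python
import numpy as np

def randomized_difference_descent(F, x, iters=2000, seed=0):
    """Gradient-free sketch: a two-point randomized difference of (noisy)
    function values stands in for the gradient. Schedules are generic
    SPSA-style choices, not the paper's."""
    rng = np.random.default_rng(seed)
    for k in range(1, iters + 1):
        a_k = 0.5 / k          # diminishing step size
        c_k = 0.1 / k ** 0.25  # diminishing perturbation size
        delta = rng.choice([-1.0, 1.0], size=x.shape)  # Rademacher directions
        g_hat = (F(x + c_k * delta) - F(x - c_k * delta)) / (2.0 * c_k) * delta
        x = x - a_k * g_hat
    return x

rng = np.random.default_rng(1)
F = lambda x: (x - 2.0) @ (x - 2.0) + 0.01 * rng.normal()  # noisy convex objective
print(randomized_difference_descent(F, np.zeros(3)))  # approaches (2, 2, 2)
```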
19. A Fast Algorithm for Nonnegative Matrix Factorization and Its Convergence.
- Author
- Li, Li-Xin, Wu, Lin, Zhang, Hui-Sheng, and Wu, Fang-Xiang
- Subjects
- ALGORITHMS, FACTORIZATION, STOCHASTIC convergence, MATHEMATICAL functions, MATHEMATICAL optimization, DIVERGENCE theorem
- Abstract
Nonnegative matrix factorization (NMF) has recently become a very popular unsupervised learning method because of its representational properties of factors and simple multiplicative update algorithms for solving the NMF. However, for the common NMF approach of minimizing the Euclidean distance between approximate and true values, the convergence of multiplicative update algorithms has not been well resolved. This paper first discusses the convergence of existing multiplicative update algorithms. We then propose a new multiplicative update algorithm for minimizing the Euclidean distance between approximate and true values. Based on the optimization principle and the auxiliary function method, we prove that our new algorithm not only converges to a stationary point, but also does so faster than existing ones. To verify our theoretical results, experiments on three data sets have been conducted by comparing our proposed algorithm with other existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
20. A nonmonotone line search method and its convergence for unconstrained optimization.
- Author
- Cui, Zhaocheng and Yang, Zhenqi
- Subjects
- MATHEMATICAL optimization, MATHEMATICAL functions, ITERATIVE methods (Mathematics), STOCHASTIC convergence, NEWTON-Raphson method, ALGORITHMS
- Abstract
In this paper, by using a forcing function and the reverse modulus of continuity of the gradient, we propose a generalization and development of the nonmonotone line search method for unconstrained optimization problems. The global convergence property of the new method is established under some reasonable conditions that are weaker than those of existing nonmonotone line search methods. Furthermore, the proof of convergence is simpler than for other methods. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
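The baseline that abstracts like the one above generalize is the Grippo-Lampariello-Lucidi nonmonotone Armijo test, in which a trial step is compared against the maximum of the last M function values rather than the current one. A minimal sketch with generic parameter choices:

```python
import numpy as np
from collections import deque

def nonmonotone_gradient_descent(f, grad, x, M=10, sigma=1e-4, rho=0.5, iters=200):
    """Gradient descent with the classical Grippo-Lampariello-Lucidi
    nonmonotone Armijo test (only the baseline the paper generalizes)."""
    history = deque([f(x)], maxlen=M)  # last M function values
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-8:
            break
        d = -g
        alpha = 1.0
        while f(x + alpha * d) > max(history) + sigma * alpha * g @ d:
            alpha *= rho  # backtrack until the nonmonotone test holds
        x = x + alpha * d
        history.append(f(x))
    return x

f = lambda x: x[0] ** 2 + 5 * x[1] ** 2
grad = lambda x: np.array([2 * x[0], 10 * x[1]])
print(nonmonotone_gradient_descent(f, grad, np.array([3.0, -2.0])))
```

Comparing against max(history) allows occasional increases in f, which is what lets nonmonotone methods traverse narrow valleys that a strict monotone Armijo rule would stall in.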
21. Single Directional SMO Algorithm for Least Squares Support Vector Machines.
- Author
- Xigao Shao, Kun Wu, and Bifeng Liao
- Subjects
- ALGORITHMS, LEAST squares, SUPPORT vector machines, MATHEMATICAL decomposition, STOCHASTIC convergence, PROOF theory, MATHEMATICAL optimization
- Abstract
Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for the selection of the working set in sequential minimal optimization- (SMO-) type decomposition methods is proposed. With the new method, we can select a single direction to achieve the convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the classification accuracy of the new method is not largely different from that of existing methods, but the training speed is faster. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
22. Trust, But Verify: Fast and Accurate Signal Recovery From 1-Bit Compressive Measurements.
- Author
- Laska, Jason N., Wen, Zaiwen, Yin, Wotao, and Baraniuk, Richard G.
- Subjects
- SIGNAL processing, DATA compression, ROBUST control, ALGORITHMS, COMPARATOR circuits, SIGNAL-to-noise ratio, MATHEMATICAL optimization, STOCHASTIC convergence
- Abstract
The recently emerged compressive sensing (CS) framework aims to acquire signals at reduced sample rates compared to the classical Shannon-Nyquist rate. To date, the CS theory has assumed primarily real-valued measurements; it has recently been demonstrated that accurate and stable signal acquisition is still possible even when each measurement is quantized to just a single bit. This property enables the design of simplified CS acquisition hardware based around a simple sign comparator rather than a more complex analog-to-digital converter; moreover, it ensures robustness to gross nonlinearities applied to the measurements. In this paper we introduce a new algorithm—restricted-step shrinkage (RSS)—to recover sparse signals from 1-bit CS measurements. In contrast to previous algorithms for 1-bit CS, RSS has provable convergence guarantees, is about an order of magnitude faster, and achieves higher average recovery signal-to-noise ratio. RSS is similar in spirit to trust-region methods for nonconvex optimization on the unit sphere, which are relatively unexplored in signal processing and hence of independent interest. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
23. Restoration of Poissonian Images Using Alternating Direction Optimization.
- Author
- Figueiredo, Mário A. T. and Bioucas-Dias, José M.
- Subjects
- IMAGE reconstruction, POISSON processes, MATHEMATICAL optimization, STOCHASTIC convergence, CONVEX functions, LAGRANGIAN functions, ALGORITHMS, POISSON'S equation, SPECTRUM analysis, DECONVOLUTION (Mathematics)
- Abstract
Much research has been devoted to the problem of restoring Poissonian images, namely for medical and astronomical applications. However, the restoration of these images using state-of-the-art regularizers (such as those based upon multiscale representations or total variation) is still an active research area, since the associated optimization problems are quite challenging. In this paper, we propose an approach to deconvolving Poissonian images, which is based upon an alternating direction optimization method. The standard regularization [or maximum a posteriori (MAP)] restoration criterion, which combines the Poisson log-likelihood with a (nonsmooth) convex regularizer (log-prior), leads to hard optimization problems: the log-likelihood is nonquadratic and nonseparable, the regularizer is nonsmooth, and there is a nonnegativity constraint. Using standard convex analysis tools, we present sufficient conditions for existence and uniqueness of solutions of these optimization problems, for several types of regularizers: total-variation, frame-based analysis, and frame-based synthesis. We attack these problems with an instance of the alternating direction method of multipliers (ADMM), which belongs to the family of augmented Lagrangian algorithms. We study sufficient conditions for convergence and show that these are satisfied, either under total-variation or frame-based (analysis and synthesis) regularization. The resulting algorithms are shown to outperform alternative state-of-the-art methods, both in terms of speed and restoration accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
24. Automatic Ground-Truth Validation With Genetic Algorithms for Multispectral Image Classification.
- Author
- Ghoggali, Noureddine and Melgani, Farid
- Subjects
- ALGORITHMS, NOISE, LEARNING, MATHEMATICAL optimization, STATISTICS, STOCHASTIC convergence
- Abstract
In this paper, we propose a novel method that aims at assisting the ground-truth expert through the automatic detection of potentially mislabeled learning samples. This method is based on viewing the mislabeled-sample detection issue as an optimization problem in which one looks for the best subset of learning samples in terms of statistical separability between classes. This problem is formulated within a genetic optimization framework, where each chromosome represents a candidate solution for validating/invalidating the learning samples collected by the ground-truth expert. The genetic optimization process is guided by the joint optimization of two different criteria, namely the maximization of a between-class statistical distance and the minimization of the number of invalidated samples. Experiments conducted on both simulated and real data sets show that the proposed ground-truth validation method succeeds in the following: 1) detecting the mislabeled samples with high accuracy, even when up to 30% of the learning samples are mislabeled, and 2) strongly limiting the negative impact of the mislabeling issue on the accuracy of the classification process. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
25. An Improved Particle Swarm Optimization Algorithm Mimicking Territorial Dispute Between Groups for Multimodal Function Optimization Problems.
- Author
- Jang-Ho Seo, Chang-Hwan Im, Sang-Yeop Kwak, Cheol-Gyun Lee, and Hyun-Kyo Jung
- Subjects
- MATHEMATICAL optimization, ALGORITHMS, MATHEMATICAL functions, STOCHASTIC convergence, ELECTROMAGNETISM
- Abstract
In the present paper, an improved particle swarm optimization (PSO) algorithm for multimodal function optimization is proposed. The new algorithm, named the auto-tuning multigrouped PSO (AT-MGPSO) algorithm, mimics natural phenomena in ecosystems, such as territorial disputes between different group members and the immigration of weak groups, resulting in automatic determination of the size of each group's territory and robust convergence. The usefulness of the proposed algorithm is verified by application to a specially designed test function and a practical electromagnetic optimization problem. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
26. CONVERGENCE ANALYSIS OF A DISCRETE HOPFIELD NEURAL NETWORK WITH DELAY AND ITS APPLICATION TO KNOWLEDGE REFINEMENT.
- Author
- TSANG, ERIC C. C., QIU, S. S., and YEUNG, DANIEL S.
- Subjects
- STOCHASTIC convergence, ARTIFICIAL neural networks, ALGORITHMS, SYMMETRIC matrices, MATHEMATICAL optimization
- Abstract
This paper investigates the convergence theorems that are associated with a Discrete Hopfield Neural Network (DHNN) with delay. We present two updating rules, one for serial mode and the other for parallel mode. The speed of convergence of these proposed updating rules is faster than that of all existing updating rules. It has been proved in this paper that a DHNN with delay will converge to a stable state when operating in a serial mode if the matrix of weights of the no-delay term is symmetric. In addition, it has been proved that they will converge to a stable state when operating in a parallel mode if the matrix of weights of the no-delay term is a symmetric and non-negative definite matrix. The condition for convergence of a DHNN without delay can be relaxed from the need to have a symmetric matrix to an even weaker condition of having a quasi-symmetric matrix. The results in this paper extend both the existing results concerning the convergence of a DHNN without delay and our previous findings. By means of the new network structure and its convergence theorems, we propose a local searching algorithm for combinatorial optimization. We also relate the maximum value of a bivariate energy function to the stable states of a DHNN with delay, which generalizes Hopfield's energy function. Moreover, for the serial model we give the relationship between the convergence of the energy function and the convergence of the corresponding network. One application is presented to demonstrate the higher rate of convergence and the accuracy of the classification using our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
27. A Multiobjective Memetic Algorithm Based on Particle Swarm Optimization.
- Author
- Dasheng Liu, Tan, K. C., Goh, C. K., and Ho, W. K.
- Subjects
- ALGORITHMS, MEMETICS, STOCHASTIC convergence, MATHEMATICAL optimization, OPERATIONS research
- Abstract
In this paper, a new memetic algorithm (MA) for multiobjective (MO) optimization is proposed, which combines the global search ability of particle swarm optimization with a synchronous local search heuristic for directed local fine-tuning. A new particle updating strategy is proposed based upon the concept of fuzzy global-best to deal with the problem of premature convergence and diversity maintenance within the swarm. The proposed features are examined to show their individual and combined effects in MO optimization. The comparative study shows the effectiveness of the proposed MA, which produces solution sets that are highly competitive in terms of convergence, diversity, and distribution. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
28. Resource Allocation in Multiantenna Systems Achieving Max-Min Fairness by Optimizing a Sum of Inverse SIR.
- Author
- Boche, Holger and Schubert, Martin
- Subjects
- ANTENNAS (Electronics), RESOURCE allocation, MATHEMATICAL optimization, STOCHASTIC convergence, ALGORITHMS, BROADBAND communication systems
- Abstract
We address the problem of jointly optimizing transmit powers and linear preequalizers (e.g., beamformers) for a multiuser downlink channel. All results can be transferred to the dual uplink channel. The link quality of each user is determined by the signal-to-interference ratio (SIR). Two common strategies for distributing the system resources by joint optimization of transmit powers and beamformers are 1) maximize the minimum SIR, to which we refer as max-min fairness, and 2) minimize the sum of weighted inverse SIRs. Both strategies are generally not equivalent. In this paper, it is shown how the weighting factors should be chosen so that the sum optimization approach achieves optimal max-min fairness. An efficient iterative algorithm is proposed for solving this nonlinear optimization problem. Monotonicity and convergence are proved. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
29. A NEW MEMORY GRADIENT METHOD UNDER EXACT LINE SEARCH.
- Author
- Zhen-Jun Shi, C.
- Subjects
- MATHEMATICAL optimization, STOCHASTIC convergence, NUMERICAL analysis, OPERATIONS research, ALGORITHMS
- Abstract
The paper presents a new memory gradient method for unconstrained optimization problems, and proves the convergence of the algorithm under exact line searches. The linear convergence rate is investigated when the objective function is uniformly convex. Numerical experiments show that the new algorithm is very effective in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2003
30. MSO: a framework for bound-constrained black-box global optimization algorithms.
- Author
- Al-Dujaili, Abdullah, Suresh, S., and Sundararajan, N.
- Subjects
- GLOBAL optimization, ALGORITHMS, MULTISCALE modeling, STOCHASTIC convergence, MATHEMATICAL optimization
- Abstract
This paper addresses a class of algorithms for solving bound-constrained black-box global optimization problems. These algorithms partition the objective function domain over multiple scales in search of the global optimum. For such algorithms, we provide a generic procedure, which we refer to as multi-scale optimization (MSO). Furthermore, we propose a theoretical methodology to study the convergence of MSO algorithms based on three basic assumptions: (a) local Hölder continuity of the objective function f, (b) partition boundedness, and (c) partition sphericity. Moreover, the worst-case finite-time performance and convergence rates of several leading MSO algorithms, namely Lipschitzian optimization methods, multi-level coordinate search, dividing rectangles, and optimistic optimization methods, are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
31. Optimum design of large-scale truss towers using cascade optimization.
- Author
- Kaveh, A. and Ilchi Ghazaan, M.
- Subjects
- OPTIMAL designs (Statistics), ALGORITHMS, STOCHASTIC convergence, MATHEMATICAL optimization, TRUSSES
- Abstract
A high number of design variables, a large search space, and the control of a great number of design constraints are major factors preventing the optimum design of real-world structures in a reasonable time. The present study examines the computational performance of cascading in the design optimization of truss towers with large numbers of design variables. The cascade optimization procedure utilized in this paper reduces the objective function value over a number of optimization stages by initially operating on a small number of design variables, which is gradually increased stage after stage. In fact, the early stages of optimization in the cascade procedure make use of the coarsest configurations with small numbers of design variables, and the later stages exploit finer configurations with larger numbers of design variables. The optimization algorithm utilized in each stage of the cascade process is enhanced colliding bodies optimization. The high solution accuracy and convergence speed of the proposed method are shown through three test examples. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
32. On the Convergence of an Efficient Algorithm for Kullback–Leibler Approximation of Spectral Densities.
- Author
- Ferrante, Augusto, Ramponi, Federico, and Ticozzi, Francesco
- Subjects
- SPECTRAL energy distribution, STOCHASTIC convergence, APPROXIMATION theory, MATHEMATICAL optimization, VARIATIONAL principles, DENSITY functionals, ROBUST control, ALGORITHMS, INTERPOLATION
- Abstract
This paper deals with a method for the approximation of a spectral density function among the solutions of a generalized moment problem à la Byrnes/Georgiou/Lindquist. The approximation is pursued with respect to the Kullback–Leibler pseudo-distance, which gives rise to a convex optimization problem. After developing the variational analysis, we discuss the properties of an efficient algorithm for the solution of the corresponding dual problem, based on the iteration of a nonlinear map in a bounded subset of the dual space. Our main result is the proof of local convergence of the latter, established as a consequence of the central manifold theorem. Supported by numerical evidence, we conjecture that, in the mentioned bounded set, the convergence is actually global. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
33. An Efficient Point Algorithm for a Linear Two-Stage Optimization Problem.
- Author
- Bard, Jonathan F.
- Subjects
- ALGORITHMS, SENSITIVITY theory (Mathematics), MATHEMATICAL optimization, LINEAR programming, STOCHASTIC convergence, OPERATIONS research
- Abstract
This paper presents an algorithm using sensitivity analysis to solve a linear two-stage optimization problem. The underlying theory rests on a set of first order optimality conditions that parallel the Kuhn-Tucker conditions associated with a one-dimensional parametric linear program. The solution to the original problem is uncovered by systematically varying the parameter over the unit interval and solving the corresponding linear program. Finite convergence is established under nondegenerate assumptions. The paper also discusses other solution techniques including branch and bound and vertex enumeration and gives an example highlighting their computational and storage requirements. By these measures, the algorithm presented here has an overall advantage. Finally, a comparison is drawn between bicriteria and bilevel programming, and underscored by way of an example. [ABSTRACT FROM AUTHOR]
- Published
- 1983
- Full Text
- View/download PDF
34. ON THE CONVERGENCE PROPERTY OF THE DFP ALGORITHM.
- Author
- Dingguo Pu and Wenci Yu
- Subjects
- ALGORITHMS, ALGEBRA, STOCHASTIC convergence, MATHEMATICAL optimization, MATHEMATICAL analysis, OPERATIONS research, SIMULATION methods & models
- Abstract
The DFP algorithm for unconstrained optimization possesses excellent convergence properties for convex functions. However, a convergence theory of the DFP algorithm without the convexity assumption has not yet been established. This paper gives the following result: if the objective function is suitably smooth, and if the DFP algorithm produces a convergent point sequence, then the limit point of the sequence is a critical point of the objective function. Also, some open questions are mentioned. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
35. Higher Order Sliding Mode Controllers With Optimal Reaching.
- Author
- Dinuzzo, Francesco and Ferrara, Antonella
- Subjects
- SLIDING mode control, ROBUST control, ALGORITHMS, STOCHASTIC convergence, SIMULATION methods & models, MATHEMATICAL optimization
- Abstract
Higher order sliding mode (HOSM) control design is considered for systems with a known permanent relative degree. In this paper, we introduce the Robust Fuller's Problem, a robust generalization of the Fuller's problem, a standard optimal control problem for a chain of integrators with bounded control. By solving the Robust Fuller's Problem it is possible to obtain feedback laws that are HOSM algorithms of generic order and, in addition, provide optimal finite-time reaching of the sliding manifold. A common difficulty in the use of existing HOSM algorithms is the tuning of design parameters: our methodology proves useful for the tuning of HOSM controller parameters in order to assure desired performances and prevent instabilities. The convergence and stability properties of the proposed family of controllers are theoretically analyzed. Simulation evidence demonstrates their effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
36. A CLASS OF INSIDE-OUT ALGORITHMS FOR GENERAL PROGRAMS.
- Author
- Gould, F. J.
- Subjects
- ALGORITHMS, STOCHASTIC convergence, NONLINEAR programming, MATHEMATICAL programming, MATHEMATICAL optimization, GEOMETRICAL constructions, MANAGEMENT science, MATHEMATICAL models, MATHEMATICAL analysis, OPERATIONS research, MATHEMATICAL functions, PROBLEM solving
- Abstract
In this paper the Fiacco-McCormick SUMT technique is embedded in a class of inside-out algorithms. Convergence is demonstrated for the nonlinear programming problem under fairly general conditions and the algorithms are interpreted in a geometric structure. [ABSTRACT FROM AUTHOR]
- Published
- 1970
- Full Text
- View/download PDF
37. Short-term unit consignment solution using real-coded grey wolf algorithm.
- Author
- Rameshkumar, J., Ganesan, S., Subramanian, S., and Abirami, M.
- Subjects
- ELECTRIC power systems, MATHEMATICAL optimization, OPERATING costs, ALGORITHMS, STOCHASTIC convergence, MATHEMATICAL variables
- Abstract
This paper presents a new real-coded grey wolf optimization (RCGWO) algorithm for power system unit commitment (UC). RCGWO is an extension of the grey wolf algorithm and appears to have potential for solving the UC problem. The fitness function is constructed from the total operating cost of the generating units. Here, the scheduling variables are coded as real variables; therefore, the minimum up/down time constraints can be handled directly. Different test cases containing 3 units, 10 units, 38 units, and multiples of 10 units (large-scale systems) are presented. The most important advantages of the proposed method are its high robustness and good convergence. The proposed algorithm shows performance competitive with the best of the other methods reported over the past two decades. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
38. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization.
- Author
- Li, Xiangrong, Zhao, Xupei, Duan, Xiabin, and Wang, Xiaoliang
- Subjects
- CONJUGATE gradient methods, ALGORITHMS, STOCHASTIC convergence, MATHEMATICAL optimization, MATHEMATICAL formulas, APPROXIMATION theory
- Abstract
It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence—with at most a linear convergence rate—because CG formulas are generated by linear approximations of the objective functions. The quadratically convergent results are very limited. We introduce a new PRP method in which the restart strategy is also used. Moreover, the method we developed includes not only n-step quadratic convergence but also both the function value information and gradient value information. In this paper, we will show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
39. Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization.
- Author
- Patrascu, Andrei and Necoara, Ion
- Subjects
- STOCHASTIC convergence, GLOBAL optimization, MATHEMATICAL optimization, ALGORITHMS, NUMERICAL analysis, MATHEMATICAL analysis
- Abstract
In this paper we analyze several new methods for solving nonconvex optimization problems with an objective function consisting of a sum of two terms: one is nonconvex and smooth, and the other is convex but simple and its structure is known. Further, we consider both cases: unconstrained and linearly constrained nonconvex problems. For optimization problems of the above structure, we propose random coordinate descent algorithms and analyze their convergence properties. For the general case, when the objective function is nonconvex and composite, we prove asymptotic convergence of the sequences generated by our algorithms to stationary points and a sublinear rate of convergence in expectation for some optimality measure. Additionally, if the objective function satisfies an error bound condition, we derive a local linear rate of convergence for the expected values of the objective function. We also present extensive numerical experiments evaluating the performance of our algorithms in comparison with state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
40. Natural Evolution Strategies.
- Author
- Wierstra, Daan, Schaul, Tom, Glasmachers, Tobias, Yi Sun, Peters, Jan, and Schmidhuber, Jürgen
- Subjects
- MATHEMATICAL optimization, ALGORITHMS, PARAMETERIZATION, DISTRIBUTION (Probability theory), STOCHASTIC convergence, COMPUTATIONAL complexity, STOCHASTIC processes
- Abstract
This paper presents Natural Evolution Strategies (NES), a recent family of black-box optimization algorithms that use the natural gradient to update a parameterized search distribution in the direction of higher expected fitness. We introduce a collection of techniques that address issues of convergence, robustness, sample complexity, computational complexity and sensitivity to hyperparameters. This paper explores a number of implementations of the NES family, such as general-purpose multi-variate normal distributions and separable distributions tailored towards search in high dimensional spaces. Experimental results show best published performance on various standard benchmarks, as well as competitive performance on others. [ABSTRACT FROM AUTHOR]
- Published
- 2014
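The core NES update can be sketched for a diagonal (separable) Gaussian search distribution: sample, rank, and move the mean and scale along the estimated natural gradient. The rank-based utilities and learning rates below are simplified assumptions, not the tuned values from the paper.

```python
import numpy as np

def snes_minimize(f, mu, sigma, pop=20, iters=300, eta_mu=1.0, eta_sigma=0.1, seed=0):
    """Separable NES sketch: natural-gradient updates of a diagonal Gaussian
    search distribution with simple rank-based utilities."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        s = rng.standard_normal((pop, mu.size))  # standard-normal samples
        z = mu + sigma * s                       # candidates in problem space
        fit = np.array([f(zi) for zi in z])
        order = fit.argsort()                    # best (smallest f) first
        u = np.zeros(pop)
        u[order] = np.linspace(1.0, -1.0, pop)   # rank-based utilities
        grad_mu = u @ s / pop                    # search gradient for the mean
        grad_sigma = u @ (s ** 2 - 1.0) / pop    # search gradient for the scale
        mu = mu + eta_mu * sigma * grad_mu
        sigma = sigma * np.exp(0.5 * eta_sigma * grad_sigma)
    return mu

f = lambda x: ((x - 1.5) ** 2).sum()  # toy sphere objective (an assumption)
print(snes_minimize(f, np.zeros(4), np.ones(4)))  # approaches (1.5, ..., 1.5)
```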
41. THREE OPTIMIZATION MODELS FOR MULTISPLITTING PRECONDITIONER.
- Author
- WEI, LIANG and WANG, CHUAN-LONG
- Subjects
- MATHEMATICAL optimization, STATISTICAL weighting, SPLITTING extrapolation method, ALGORITHMS, STOCHASTIC convergence, MATHEMATICAL programming
- Abstract
In this paper, we use matrix multisplitting with weighting parameters as the preconditioner of A. The optimal weighting parameters are determined by approximation theory, and the scale of approximation is defined by the F-norm, 2-norm, and ∞-norm, respectively. Based on these three minimization models, three algorithms are presented and their convergence theories are established. Finally, numerical examples show that the preconditioner with the optimal weighting parameters obtained from the F-norm and 2-norm minimization models can improve the condition number of A effectively. Besides, general weighting parameters are more effective than non-negative weighting parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
42. ALGORITHM FOR THE NUMERICAL SOLUTION OF SYSTEMS OF NONLINEAR EQUATIONS USING A GLOBAL OPTIMIZATION STRATEGY BASED ON INTERVAL ANALYSIS.
- Author
- GÓMEZ, LUIS ANTONIO, REYES, EDILBERTO JOSÉ, and CORREA, CARLOS RODRIGO
- Subjects
- NONLINEAR equations, GLOBAL optimization, ALGORITHMS, INTERVAL analysis, STOCHASTIC convergence, MATHEMATICAL optimization, STOCHASTIC processes
- Published
- 2012
43. A New CG-Algorithm with Self-Scaling VM-Update for Unconstrained Optimization.
- Author
- Al-Bayati, Abbas Y. and Latif, Ivan S.
- Subjects
- CONJUGATE gradient methods, MATHEMATICAL optimization, STOCHASTIC convergence, ALGORITHMS, MATHEMATICAL functions
- Abstract
In this paper, a new combined extended Conjugate-Gradient (CG) and Variable-Metric (VM) method is proposed for solving unconstrained large-scale numerical optimization problems. The basic idea is to choose a combination of the current gradient and some previous search directions as a new search direction, updated by Al-Bayati's SCVM-method, and to fit a new step-size parameter using Armijo Inexact Line Searches (ILS). The method is based on the ILS, and its numerical properties are discussed using different non-linear test functions with various dimensions. The global convergence property of the new algorithm is investigated under a few weak conditions. Numerical experiments show that the new algorithm converges faster and is superior to some other similar methods in many situations. [ABSTRACT FROM AUTHOR]
- Published
- 2012
44. HYBRID-SURROGATE-MODEL-BASED EFFICIENT GLOBAL OPTIMIZATION FOR HIGH-DIMENSIONAL ANTENNA DESIGN.
- Author
- Chen, L.-L., Liao, C., Lin, W.-B., Chang, L., and Zhong, X.-M.
- Subjects
- MATHEMATICAL optimization, ANTENNAS (Electronics), GENETIC algorithms, ALGORITHMS, STOCHASTIC convergence, MATHEMATICAL models
- Abstract
Efficient global optimization has been extensively used in problems with expensive cost functions. However, this method is not suitable for high-dimensional problems. In this paper, the radial basis function network is introduced into efficient global optimization to avoid local optima and achieve fast convergence for high-dimensional optimization. Our algorithm is applied to a 12-dimensional optimization of a transmitting antenna. Compared to genetic-algorithm-based efficient global optimization and the differential evolution strategy, our algorithm converges to the global optimal value more efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
45. Interior proximal methods for quasiconvex optimization.
- Author
- Langenberg, Nils and Tichatschke, Rainer
- Subjects
- STOCHASTIC convergence, ALGORITHMS, MULTIPLE integrals, INTEGRALS, MATHEMATICAL optimization
- Abstract
A generalized proximal point algorithm for the minimization of a nonconvex function on a feasible set is investigated. If the objective function of the given problem is (lower semicontinuous, proper and) convex, the method is known to be well-defined, and the generated iterates, which are the solutions of better-conditioned and uniquely solvable subproblems, are known to converge. The present paper contributes to the discussion of the method's behaviour when the objective is not convex. This gives rise to questions of, among others, well-definedness and convergence of the generated sequence. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
46. Chaos Immune Optimization Algorithm in Distribution Service Restoration after Faults.
- Author
- Lei, Shaolan, Yang, Jing, Yang, Jia, Jiang, Dongrong, and Li, Shan
- Subjects
- DISTRIBUTION (Probability theory), CHAOS theory, MATHEMATICAL optimization, ALGORITHMS, AUTOMATION, STOCHASTIC convergence, PRECISION (Information retrieval)
- Abstract
Abstract: A chaos immune optimization algorithm for distribution service restoration after faults is presented in this paper to improve the probability of finding the optimal solution; it integrates the characteristics of chaotic optimization with an artificial immune algorithm. In the proposed algorithm, the excellent global search ability of chaotic optimization serves as a coarse search to initialize the antibodies of the immune algorithm, and the local search of the artificial immune algorithm replaces the fine search of the chaotic optimization, which maintains population diversity and, to a certain degree, avoids getting stuck in local minima. A real example of a district distribution network in Chongqing is analyzed by the method presented in this paper. The result shows that the chaos immune optimization algorithm can improve the convergence speed and precision of restoration and avoid premature convergence. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
47. A feasible direction method for the semidefinite program with box constraints
- Author
- Xu, Yi, Sun, Wenyu, and Qi, Liqun
- Subjects
- FEASIBILITY studies, SEMIDEFINITE programming, LINEAR statistical models, NONLINEAR theories, STOCHASTIC convergence, ALGORITHMS, NUMERICAL analysis, MATHEMATICAL optimization
- Abstract
Abstract: In this paper, we solve the semidefinite program with box constraints. Since the traditional projection method for constrained optimization with box constraints is not suitable for semidefinite constraints, we present a new algorithm based on the feasible direction method. In the paper, we discuss two cases, in which the objective function of the semidefinite program is linear and nonlinear, respectively. We establish the convergence of our algorithm, and report numerical experiments which show the effectiveness of the algorithm. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
48. Fast Consensus by the Alternating Direction Multipliers Method.
- Author
- Erseghe, Tomaso, Zennaro, Davide, Dall'Anese, Emiliano, and Vangelista, Lorenzo
- Subjects
- MULTIPLIERS (Mathematical analysis), ALGORITHMS, MATHEMATICAL optimization, SENSOR networks, EIGENVALUES, EIGENFUNCTIONS, STOCHASTIC convergence, PERFORMANCE evaluation
- Abstract
The alternating direction multipliers method (ADMM) has recently been proposed as a practical and efficient algorithm for distributed computing. We discuss its applicability to the average consensus problem in this paper. By carefully relaxing the ADMM augmentation coefficients we are able to analytically investigate its properties, and to propose simple and strict analytical bounds. These provide a clear indication of how to choose system parameters for optimized performance. We prove both analytically and via simulations that the proposed approach exhibits convergence speed among the best in the literature (classical and optimized solutions), while providing the strongest resilience to noise. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
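The average consensus problem in the abstract above can be posed as min over z of sum_i 0.5*(x_i - a_i)^2 subject to x_i = z, which yields compact ADMM updates. The sketch below uses a centralized mean for the z-update, a stand-in for the neighbor exchanges of a real sensor network, and a generic rho rather than the optimized coefficients the paper derives.

```python
import numpy as np

def admm_average_consensus(a, rho=1.0, iters=100):
    """ADMM sketch for average consensus: each agent i holds a_i and solves
    min sum_i 0.5*(x_i - a_i)^2 subject to x_i = z."""
    n = len(a)
    z = a.mean()
    u = np.zeros(n)  # scaled dual variables
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)  # local primal updates
        z = (x + u).mean()                     # consensus variable
        u = u + x - z                          # dual ascent
    return z

a = np.array([1.0, 4.0, 7.0, 10.0])
print(admm_average_consensus(a))  # converges to mean(a) = 5.5
```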
49. Hybrid differential evolution with biogeography-based optimization algorithm for solution of economic emission load dispatch problems
- Author
- Bhattacharya, Aniruddha and Chattopadhyay, P.K.
- Subjects
- MATHEMATICAL optimization, BIOGEOGRAPHY, ALGORITHMS, LOAD dispatching in electric power systems, ELECTRIC generators, ELECTRIC power systems, OXIDES, CONSTRAINT satisfaction, EVOLUTIONARY computation, STOCHASTIC convergence
- Abstract
Abstract: This paper presents a combination of the differential evolution (DE) and biogeography-based optimization (BBO) algorithms to solve complex economic emission load dispatch (EELD) problems for the thermal generators of power systems. Emission substances such as NOx, SOx and COx are considered, together with the power demand equality constraint and the operating limit constraints. Differential evolution (DE) is a very fast, robust and accurate evolutionary algorithm for global optimization and the solution of EELD problems. Biogeography-based optimization (BBO) is another new biogeography-inspired algorithm. Biogeography deals with the geographical distribution of different biological species. This algorithm searches for the global optimum mainly through two steps: migration and mutation. In this paper the combination of DE and BBO (DE/BBO) is proposed to accelerate the convergence speed of both algorithms and to improve solution quality. To show the advantages of the proposed algorithm, it has been applied to solving multi-objective EELD problems in a 3-generator system with NOx and SOx emissions, in a 6-generator system considering NOx emissions, and in a 6-generator system addressing both valve-point loading and NOx emissions. The current proposal is found to be better in terms of the quality of the compromise and individual solutions obtained. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
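As background for the hybrid above, here is a plain DE/rand/1/bin loop, the DE half of DE/BBO. The BBO migration and mutation steps, and the EELD constraint handling, are not reproduced; the sphere objective and control parameters F and CR are illustrative assumptions.

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, iters=200, F=0.8, CR=0.9, seed=0):
    """Plain DE/rand/1/bin sketch: differential mutation, binomial crossover,
    and greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = lo + rng.random((pop, dim)) * (hi - lo)
    fit = np.array([f(xi) for xi in x])
    for _ in range(iters):
        for i in range(pop):
            r1, r2, r3 = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            v = x[r1] + F * (x[r2] - x[r3])  # differential mutation
            v = np.clip(v, lo, hi)
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True   # guarantee at least one mutated gene
            trial = np.where(mask, v, x[i])  # binomial crossover
            f_trial = f(trial)
            if f_trial <= fit[i]:            # greedy selection
                x[i], fit[i] = trial, f_trial
        # (a real EELD run would add power-balance constraint handling here)
    return x[fit.argmin()], fit.min()

f = lambda x: (x ** 2).sum()
best, val = differential_evolution(f, (np.full(5, -10.0), np.full(5, 10.0)))
print(best, val)
```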
50. COMBINATION ADAPTIVE TRUST REGION METHOD BY NON-MONOTONE STRATEGY FOR UNCONSTRAINED NONLINEAR PROGRAMMING.
- Author
- AMINI, KEYVAN and AHOOKHOSH, MASOUD
- Subjects
- NONLINEAR programming, MATHEMATICAL optimization, STOCHASTIC convergence, ALGORITHMS, NUMERICAL analysis, ITERATIVE methods (Mathematics), MATHEMATICAL functions
- Abstract
In this paper, we present a new trust region method for unconstrained nonlinear programming in which we blend an adaptive trust region algorithm with a non-monotone strategy to propose a new non-monotone trust region algorithm with an automatically adjusted radius. Both the non-monotone strategy and the adaptive technique help us introduce a new algorithm that reduces the number of iterations and function evaluations. The new algorithm preserves global convergence and has local superlinear and quadratic convergence under suitable conditions. Numerical experiments show that the new trust region algorithm is very efficient and robust for unconstrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF