46 results
Search Results
2. Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex.
- Author
-
Gasnikov, A., Lagunovskaya, A., Usmanova, I., and Fedorenko, F.
- Subjects
SIMPLEXES (Mathematics) ,MATHEMATICAL optimization ,CONVEX sets ,MATHEMATICAL functions ,STOCHASTIC convergence - Abstract
In this paper we propose a modification of the mirror descent method for non-smooth stochastic convex optimization problems on the unit simplex. The optimization problems considered differ from the classical ones in that only realizations of the function values are available. Our purpose is to derive the convergence rate of the proposed method and to determine the level of noise that does not significantly affect the convergence rate. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
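As a concrete reference point for the mirror descent iteration described above, here is a minimal entropic mirror descent sketch on the simplex, assuming an exact subgradient oracle (the paper's setting replaces this with noisy zeroth-order function evaluations; the function name and step-size choice are illustrative):

```python
import numpy as np

def entropic_mirror_descent(subgrad, n, T, L):
    # Entropic mirror descent on the probability simplex:
    # multiplicative-weights update followed by renormalization.
    x = np.full(n, 1.0 / n)
    avg = np.zeros(n)
    eta = np.sqrt(2.0 * np.log(n) / T) / L   # standard nonsmooth step size
    for _ in range(T):
        g = subgrad(x)
        x = x * np.exp(-eta * g)
        x /= x.sum()
        avg += x
    return avg / T                            # averaged iterate
```

Averaging the iterates, as above, gives the standard O(L sqrt(ln n / T)) guarantee for nonsmooth convex objectives.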
3. A One-Layer Recurrent Neural Network for Constrained Nonsmooth Optimization.
- Author
-
Liu, Qingshan and Wang, Jun
- Subjects
ARTIFICIAL neural networks ,MATHEMATICAL optimization ,COMPUTER simulation ,STOCHASTIC convergence ,MATHEMATICAL models ,LYAPUNOV functions ,DIFFERENTIAL inclusions ,MATHEMATICAL functions - Abstract
This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the proposed neural network is the same as the number of decision variables of optimization problems. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
4. Guaranteeing Practical Convergence in Algorithms for Sensor and Source Localization.
- Author
-
Fidan, Bariş, Dasgupta, Soura, and Anderson, Brian D. O.
- Subjects
STOCHASTIC convergence ,DETECTORS ,ALGORITHMS ,SENSOR networks ,MATHEMATICAL optimization ,MATHEMATICAL analysis ,SIMULATION methods & models ,ENGINEERING instruments ,MATHEMATICAL functions - Abstract
This paper considers localization of a source or a sensor from distance measurements. We argue that linear algorithms proposed for this purpose are susceptible to poor noise performance. Instead, given a set of sensors/anchors of known positions and measured distances from them to the source/sensor to be localized, we propose a potentially nonconvex weighted cost function whose global minimum estimates the location of the source/sensor one seeks. The contribution of this paper is to provide nontrivial ellipsoidal and polytopic regions surrounding these sensors/anchors of known positions, such that if the object to be localized is in this region, localization occurs by globally exponentially convergent gradient descent in the noise-free case. Exponential convergence in the noise-free case represents practical convergence, as it ensures graceful performance degradation in the presence of noise. These results guide the deployment of sensors/anchors so that small subsets can be made responsible for practical localization in geographical areas determined by our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
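The gradient-descent localization the abstract refers to can be illustrated with a squared-range misfit (a hedged sketch, not the authors' exact cost or region construction; the function name and step size are invented for illustration):

```python
import numpy as np

def localize(anchors, dists, x0, step=0.02, iters=20000):
    # Gradient descent on the (possibly nonconvex) squared-range cost
    #   f(x) = sum_i (||x - a_i||^2 - d_i^2)^2
    x = x0.astype(float).copy()
    for _ in range(iters):
        r = x - anchors                        # residual vectors, shape (m, 2)
        e = (r * r).sum(axis=1) - dists ** 2   # squared-distance errors
        x -= step * 4.0 * (e[:, None] * r).sum(axis=0)
    return x
```

In the noise-free case, starting inside a good region (here, the convex hull of the anchors), the iterates converge to the true source, matching the practical-convergence behavior the paper formalizes.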
5. Calculus of the Exponent of Kurdyka-Łojasiewicz Inequality and Its Applications to Linear Convergence of First-Order Methods.
- Author
-
Li, Guoyin and Pong, Ting Kei
- Subjects
EXPONENTS ,STOCHASTIC convergence ,MATHEMATICAL functions ,CALCULUS ,MATHEMATICAL optimization - Abstract
In this paper, we study the Kurdyka-Łojasiewicz (KL) exponent, an important quantity for analyzing the convergence rate of first-order methods. Specifically, we develop various calculus rules to deduce the KL exponent of new (possibly nonconvex and nonsmooth) functions formed from functions with known KL exponents. In addition, we show that the well-studied Luo-Tseng error bound together with a mild assumption on the separation of stationary values implies that the KL exponent is 1/2. The Luo-Tseng error bound is known to hold for a large class of concrete structured optimization problems, and thus we deduce the KL exponent of a large class of functions whose exponents were previously unknown. Building upon this and the calculus rules, we are then able to show that for many convex or nonconvex optimization models for applications such as sparse recovery, their objective function's KL exponent is 1/2. This includes the least squares problem with smoothly clipped absolute deviation regularization or minimax concave penalty regularization and the logistic regression problem with ℓ1 regularization. Since many existing local convergence rate analyses for first-order methods in the nonconvex scenario rely on the KL exponent, our results enable us to obtain explicit convergence rates for various first-order methods when they are applied to a large variety of practical optimization models. Finally, we further illustrate how our results can be applied to establishing local linear convergence of the proximal gradient algorithm and the inertial proximal algorithm with constant step sizes for some specific models that arise in sparse recovery. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
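The role the KL exponent plays can be made concrete. A standard statement (not quoted from the paper; the constants are schematic) is that an exponent of 1/2 turns the KL inequality into an error bound strong enough for Q-linear convergence of proximal-gradient-type methods:

```latex
% KL inequality with exponent 1/2 at a stationary point \bar{x}:
% there exist c, \epsilon > 0 such that
\operatorname{dist}\bigl(0, \partial f(x)\bigr) \;\ge\; c\,\bigl(f(x) - f(\bar{x})\bigr)^{1/2}
\qquad \text{whenever } \|x - \bar{x}\| \le \epsilon .
% For proximal gradient steps with step size 1/L this yields, locally,
f(x_{k+1}) - f(\bar{x}) \;\le\; \rho\,\bigl(f(x_k) - f(\bar{x})\bigr)
\quad \text{for some } \rho \in (0,1) \text{ depending only on } c \text{ and } L .
```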
6. Implementation of Cartesian grids to accelerate Delaunay-based derivative-free optimization.
- Author
-
Beyhaghi, Pooriya and Bewley, Thomas
- Subjects
DERIVATIVES (Mathematics) ,MATHEMATICAL optimization ,ANALYTIC geometry ,MATHEMATICAL functions ,STOCHASTIC convergence - Abstract
This paper introduces a modification of our original Delaunay-based optimization algorithm (developed in JOGO DOI:) that reduces the number of function evaluations on the boundary of feasibility as compared with the original algorithm. A weakness we have identified in the original algorithm is the sometimes faulty behavior of the generated uncertainty function near the boundary of feasibility, which leads to more function evaluations along the boundary than might otherwise be necessary. To address this issue, a second search function is introduced which has improved behavior near the boundary of the search domain. Additionally, the datapoints are quantized onto a Cartesian grid over the search domain, which is successively refined. These two modifications lead to a significant reduction of datapoints accumulating on the boundary of feasibility, and to faster overall convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
7. A Fast Algorithm for Nonnegative Matrix Factorization and Its Convergence.
- Author
-
Li, Li-Xin, Wu, Lin, Zhang, Hui-Sheng, and Wu, Fang-Xiang
- Subjects
ALGORITHMS ,FACTORIZATION ,STOCHASTIC convergence ,MATHEMATICAL functions ,MATHEMATICAL optimization ,DIVERGENCE theorem - Abstract
Nonnegative matrix factorization (NMF) has recently become a very popular unsupervised learning method because of its representational properties of factors and simple multiplicative update algorithms for solving the NMF. However, for the common NMF approach of minimizing the Euclidean distance between approximate and true values, the convergence of multiplicative update algorithms has not been well resolved. This paper first discusses the convergence of existing multiplicative update algorithms. We then propose a new multiplicative update algorithm for minimizing the Euclidean distance between approximate and true values. Based on the optimization principle and the auxiliary function method, we prove that our new algorithm not only converges to a stationary point, but also does so faster than existing ones. To verify our theoretical results, experiments on three data sets have been conducted comparing our proposed algorithm with other existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
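For context, the baseline the paper builds on is the classic multiplicative update of Lee and Seung for the Euclidean NMF objective; a minimal sketch (the paper's faster variant is not reproduced here, and the initialization is illustrative):

```python
import numpy as np

def nmf(V, r, iters=2000, eps=1e-9):
    # Classic multiplicative updates for min ||V - W H||_F^2 with W, H >= 0.
    # Each update preserves nonnegativity and does not increase the objective.
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
    return W, H
```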
8. A nonmonotone line search method and its convergence for unconstrained optimization.
- Author
-
Cui, Zhaocheng and Yang, Zhenqi
- Subjects
MATHEMATICAL optimization ,MATHEMATICAL functions ,ITERATIVE methods (Mathematics) ,STOCHASTIC convergence ,NEWTON-Raphson method ,ALGORITHMS - Abstract
In this paper, by using a forcing function and the reverse modulus of continuity of gradient, we propose a generalization and development of the nonmonotone line search method for unconstrained optimization problems. The global convergence property of the new method is established under some reasonable conditions that are weaker than those of the existing nonmonotone line search methods. Furthermore, the proof of convergence is simpler than other methods. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
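The nonmonotone idea being generalized can be sketched with the classic Grippo-Lampariello-Lucidi acceptance rule (an illustrative baseline with steepest descent, not the paper's forcing-function variant):

```python
import numpy as np

def nonmonotone_gd(f, grad, x0, M=5, c=1e-4, iters=200):
    # Steepest descent with the nonmonotone (GLL) Armijo rule: accept a step
    # if f(x + a d) <= max of the last M objective values + c * a * g.d,
    # so occasional objective increases are tolerated.
    x = np.asarray(x0, float)
    hist = [f(x)]
    for _ in range(iters):
        g = grad(x)
        d = -g
        gd = g @ d
        a = 1.0
        fref = max(hist[-M:])
        while f(x + a * d) > fref + c * a * gd:
            a *= 0.5
            if a < 1e-12:
                break
        x = x + a * d
        hist.append(f(x))
    return x
```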
9. An Improved Particle Swarm Optimization Algorithm Mimicking Territorial Dispute Between Groups for Multimodal Function Optimization Problems.
- Author
-
Jang-Ho Seo, Chang-Hwan Im, Sang-Yeop Kwak, Cheol-Gyun Lee, and Hyun-Kyo Jung
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,MATHEMATICAL functions ,STOCHASTIC convergence ,ELECTROMAGNETISM - Abstract
In the present paper, an improved particle swarm optimization (PSO) algorithm for multimodal function optimization is proposed. The new algorithm, named the auto-tuning multigrouped PSO (AT-MGPSO) algorithm, mimics natural phenomena in ecosystems, such as territorial disputes between different group members and the immigration of weak groups, resulting in automatic determination of the size of each group's territory and robust convergence. The usefulness of the proposed algorithm is verified by application to a specially designed test function and a practical electromagnetic optimization problem. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
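For reference, the standard global-best PSO that AT-MGPSO extends can be sketched as follows (the constriction coefficients are the common textbook choices; the multigroup territory mechanism is not reproduced):

```python
import numpy as np

def pso(f, dim, lo, hi, n=30, iters=200, seed=0):
    # Standard global-best PSO with Clerc-constriction parameters.
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.729, 1.494, 1.494
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)        # personal best values
    g = pbest[pval.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```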
10. A CLASS OF INSIDE-OUT ALGORITHMS FOR GENERAL PROGRAMS.
- Author
-
Gould, F. J.
- Subjects
ALGORITHMS ,STOCHASTIC convergence ,NONLINEAR programming ,MATHEMATICAL programming ,MATHEMATICAL optimization ,GEOMETRICAL constructions ,MANAGEMENT science ,MATHEMATICAL models ,MATHEMATICAL analysis ,OPERATIONS research ,MATHEMATICAL functions ,PROBLEM solving - Abstract
In this paper the Fiacco-McCormick SUMT technique is embedded in a class of inside-out algorithms. Convergence is demonstrated for the nonlinear programming problem under fairly general conditions and the algorithms are interpreted in a geometric structure. [ABSTRACT FROM AUTHOR]
- Published
- 1970
- Full Text
- View/download PDF
11. Sl1QP Based Algorithm with Trust Region Technique for Solving Nonlinear Second-Order Cone Programming Problems.
- Author
-
OKUNO, Takayuki, YASUDA, Kohei, and HAYASHI, Shunsuke
- Subjects
QUADRATIC programming ,NONLINEAR theories ,MATHEMATICAL optimization ,MATHEMATICAL functions ,NUMERICAL analysis ,STOCHASTIC convergence - Abstract
In this paper, we propose an algorithm based on Fletcher's Sl1QP method and the trust region technique for solving Nonlinear Second-Order Cone Programming (NSOCP) problems. The Sl1QP method was originally developed for nonlinear optimization problems with inequality constraints. It converts a constrained optimization problem into an unconstrained problem by using the l1 exact penalty function, and then finds an optimum by solving approximate quadratic programming subproblems successively. In order to apply the Sl1QP method to the NSOCP problem, we introduce an exact penalty function with respect to second-order cone constraints and reformulate the NSOCP problem as an unconstrained optimization problem. However, since each subproblem generated by the Sl1QP method is not differentiable, we reformulate it as a second-order cone programming problem whose objective function is quadratic and whose constraint functions are affine. We analyze the convergence property of the proposed algorithm and show that the generated sequence converges to a stationary point of the NSOCP problem under mild assumptions. We also confirm the efficiency of the algorithm by means of numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
12. Convergence of a data-driven time–frequency analysis method.
- Author
-
Hou, Thomas Y., Shi, Zuoqiang, and Tavallali, Peyman
- Subjects
- *
STOCHASTIC convergence , *TIME-frequency analysis , *NONLINEAR theories , *MATHEMATICAL optimization , *ITERATIVE methods (Mathematics) , *MATHEMATICAL functions - Abstract
In a recent paper [11], Hou and Shi introduced a new adaptive data analysis method to analyze nonlinear and non-stationary data. The main idea is to look for the sparsest representation of multiscale data within the largest possible dictionary consisting of intrinsic mode functions of the form a(t)cos θ(t), where the envelope a(t) is less oscillatory than cos θ(t). This problem was formulated as a nonlinear optimization problem, and an iterative nonlinear matching pursuit method was proposed to solve it. In this paper, we prove the convergence of this nonlinear matching pursuit method under some scale separation assumptions on the signal. We consider both well-resolved and poorly sampled signals, as well as signals with noise. In the case without noise, we prove that our method gives exact recovery of the original signal. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
13. A New CG-Algorithm with Self-Scaling VM-Update for Unconstraint Optimization.
- Author
-
Al-Bayati, Abbas Y. and Latif, Ivan S.
- Subjects
CONJUGATE gradient methods ,MATHEMATICAL optimization ,STOCHASTIC convergence ,ALGORITHMS ,MATHEMATICAL functions - Abstract
In this paper, a new combined extended Conjugate-Gradient (CG) and Variable-Metric (VM) method is proposed for solving unconstrained large-scale numerical optimization problems. The basic idea is to choose a combination of the current gradient and some previous search directions as a new search direction, updated by Al-Bayati's SCVM method, and to fit a new step-size parameter using Armijo Inexact Line Searches (ILS). The method is based on the ILS, and its numerical properties are discussed using different non-linear test functions with various dimensions. The global convergence property of the new algorithm is investigated under a few weak conditions. Numerical experiments show that the new algorithm seems to converge faster and is superior to some other similar methods in many situations. [ABSTRACT FROM AUTHOR]
- Published
- 2012
14. COMBINATION ADAPTIVE TRUST REGION METHOD BY NON-MONOTONE STRATEGY FOR UNCONSTRAINED NONLINEAR PROGRAMMING.
- Author
-
AMINI, KEYVAN and AHOOKHOSH, MASOUD
- Subjects
NONLINEAR programming ,MATHEMATICAL optimization ,STOCHASTIC convergence ,ALGORITHMS ,NUMERICAL analysis ,ITERATIVE methods (Mathematics) ,MATHEMATICAL functions - Abstract
In this paper, we present a new trust region method for unconstrained nonlinear programming in which we blend an adaptive trust region algorithm with a non-monotone strategy to propose a new non-monotone trust region algorithm with an automatically adjusted radius. Both the non-monotone strategy and the adaptive technique help us introduce a new algorithm that reduces the number of iterations and function evaluations. The new algorithm preserves global convergence and has local superlinear and quadratic convergence under suitable conditions. Numerical experiments show that the new trust region algorithm is very efficient and robust for unconstrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
15. A Nonfeasible Gradient Projection Recurrent Neural Network for Equality-Constrained Optimization Problems.
- Author
-
Barbarosou, Maria P. and Maratos, Nicholas G.
- Subjects
STOCHASTIC convergence ,MATHEMATICAL optimization ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,MATHEMATICAL functions ,NETS (Mathematics) - Abstract
In this paper, a recurrent neural network for both convex and nonconvex equality-constrained optimization problems is proposed, which makes use of a cost gradient projection onto the tangent space of the constraints. The proposed neural network constructs a generically nonfeasible trajectory, satisfying the constraints only as t → ∞. Local convergence results are given that do not assume convexity of the optimization problem to be solved. Global convergence results are established for convex optimization problems. An exponential convergence rate is shown to hold both for the convex case and the nonconvex case. Numerical results indicate that the proposed method is efficient and accurate. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
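The tangent-space projection idea can be illustrated by integrating a gradient-projection flow with forward Euler for linear equality constraints Ax = b (a hedged sketch under that linear assumption; the restoring term drives a nonfeasible trajectory toward the constraint set, mirroring the "feasible only as t → ∞" behavior described):

```python
import numpy as np

def projected_flow(grad, A, b, x0, dt=0.01, steps=5000):
    # Euler integration of  x' = -P grad f(x) - A^T (A A^T)^{-1} (A x - b),
    # where P projects onto the tangent space {d : A d = 0}. The second term
    # pulls a nonfeasible trajectory onto the constraint set as t grows.
    AAt_inv = np.linalg.inv(A @ A.T)
    P = np.eye(A.shape[1]) - A.T @ AAt_inv @ A
    x = np.asarray(x0, float)
    for _ in range(steps):
        x = x + dt * (-P @ grad(x) - A.T @ AAt_inv @ (A @ x - b))
    return x
```

At a fixed point, Ax = b and the projected gradient vanishes, which is exactly the first-order optimality condition for the equality-constrained problem.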
16. Partially linear hazard regression with varying coefficients for multivariate survival data.
- Author
-
Jianwen Cai, Fan, Jianqing, Jiancheng Jiang, and Haibo Zhou
- Subjects
REGRESSION analysis ,MATHEMATICAL models ,ESTIMATION theory ,MATHEMATICAL functions ,STOCHASTIC convergence ,MATHEMATICAL optimization - Abstract
The paper studies estimation of partially linear hazard regression models with varying coefficients for multivariate survival data. A profile pseudo-partial-likelihood estimation method is proposed. The estimation of the parameters of the linear part is accomplished via maximization of the profile pseudo-partial-likelihood, whereas the varying-coefficient functions are considered as nuisance parameters that are profiled out of the likelihood. It is shown that the estimators of the parameters are root n consistent and the estimators of the non-parametric coefficient functions achieve optimal convergence rates. Asymptotic normality is obtained for the estimators of the finite parameters and varying-coefficient functions. Consistent estimators of the asymptotic variances are derived and empirically tested, which facilitate inference for the model. We prove that the varying-coefficient functions can be estimated as well as if the parametric components were known and the failure times within each subject were independent. Simulations are conducted to demonstrate the performance of the estimators proposed. A real data set is analysed to illustrate the methodology proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
17. A New TwIST: Two-Step Iterative Shrinkage/Thresholding Algorithms for Image Restoration.
- Author
-
Bioucas-Dias, José M. and Figueiredo, Mário A. T.
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,IMAGE reconstruction ,IMAGE processing ,STOCHASTIC convergence ,MATHEMATICAL functions ,DIFFERENTIAL equations ,MATHEMATICAL analysis ,SYSTEM analysis - Abstract
Iterative shrinkage/thresholding (IST) algorithms have been recently proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). It happens that the convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step IST (TwIST) algorithms, exhibiting much faster convergence rate than IST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (lp norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function, for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
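The one-step IST iteration that TwIST accelerates can be sketched for the l1-regularized case (illustrative only; TwIST itself adds a two-step recursion on top of this basic iteration):

```python
import numpy as np

def soft(u, t):
    # Soft-thresholding: the proximal map of t * ||.||_1
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def ist(A, y, lam, iters=2000):
    # One-step iterative shrinkage/thresholding for
    #   min_x 0.5 ||y - A x||^2 + lam ||x||_1
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x
```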
18. DESCENT DIRECTIONS OF QUASI-NEWTON METHODS FOR SYMMETRIC NONLINEAR EQUATIONS.
- Author
-
Guang-Ze Gu, Dong-Hui Li, Liqun Qi, and Shu-Zi Zhou
- Subjects
NEWTON-Raphson method ,EQUATIONS ,MATHEMATICAL functions ,STOCHASTIC convergence ,MATHEMATICAL optimization - Abstract
In general, when a quasi-Newton method is applied to solve a system of nonlinear equations, the quasi-Newton direction is not necessarily a descent direction for the norm function. In this paper, we show that when applied to solve symmetric nonlinear equations, a quasi-Newton method with positive definite iterative matrices may generate descent directions for the norm function. On the basis of a Gauss-Newton based BFGS method [D. H. Li and M. Fukushima, SIAM J. Numer. Anal., 37 (1999), pp. 152-172], we develop a norm descent BFGS method for solving symmetric nonlinear equations. Under mild conditions, we establish the global and superlinear convergence of the method. The proposed method shares some favorable properties of the BFGS method for solving unconstrained optimization problems: (a) the generated sequence of the quasi-Newton matrices is positive definite; (b) the generated sequence of iterates is norm descent; (c) a global convergence theorem is established without nonsingularity assumption on the Jacobian. Preliminary numerical results are reported, which positively support the method. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
19. Seismogram registration via Markov chain Monte Carlo optimization and its applications in full waveform inversion.
- Author
-
Hejun Zhu
- Subjects
MONTE Carlo method ,SEISMOGRAMS ,WAVE analysis ,MATHEMATICAL functions ,MATHEMATICAL optimization ,STOCHASTIC convergence - Abstract
Cycle skipping is a serious issue in full waveform inversion (FWI) since it leads to local minima. To date, most FWI algorithms depend on local gradient-based optimization approaches, which cannot guarantee convergence towards the global minimum if the misfit function involves local minima and the starting model is far from the true solution. In this study, I propose a misfit function based on non-stationary time warping functions, which can be calculated by solving a seismogram registration problem. Considering the inherent cycle skipping and local minima issues of the registration problem, I use a Markov chain Monte Carlo (MCMC) method to solve it. With this global optimization approach, I am able to directly sample the global minimum and measure non-stationary traveltime differences between observed and predicted seismograms. The a priori constraint on the sparsity of the local warping functions is incorporated to eliminate unreasonable solutions. No window selections are required in this procedure. In comparison to other approaches for measuring traveltime differences, the proposed method enables us to align signals with different numbers of events. This property is a direct consequence of the usage of MCMC optimization and sparsity constraints. Several numerical examples demonstrate that the proposed misfit function allows us to tackle the cycle skipping problem and construct accurate long-wavelength velocity models even without low frequency data and good starting models. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
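The global-search ingredient can be illustrated with plain random-walk Metropolis on a toy multimodal misfit (a sketch of the MCMC idea only; the actual method samples warping functions under sparsity constraints):

```python
import numpy as np

def mcmc_minimize(misfit, x0, sigma=0.5, temp=0.5, steps=5000, seed=0):
    # Random-walk Metropolis sampling of exp(-misfit/temp). Because the chain
    # can accept uphill moves, it can escape the local minima that defeat
    # gradient descent; we keep the best sample seen as the estimate.
    rng = np.random.default_rng(seed)
    x, fx = float(x0), misfit(float(x0))
    best_x, best_f = x, fx
    for _ in range(steps):
        xp = x + sigma * rng.standard_normal()
        fp = misfit(xp)
        if rng.random() < np.exp(min(0.0, -(fp - fx) / temp)):
            x, fx = xp, fp
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f
```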
20. Active-set Methods for Submodular Minimization Problems.
- Author
-
Kumar, K. S. Sesh and Bach, Francis
- Subjects
- *
MATHEMATICAL functions , *MATHEMATICAL optimization , *ALGORITHMS , *STOCHASTIC convergence , *MACHINE learning - Abstract
We consider the submodular function minimization (SFM) and the quadratic minimization problems regularized by the Lovász extension of the submodular function. These optimization problems are intimately related; for example, in min-cut problems and total variation denoising problems, the cut function is submodular and its Lovász extension is given by the associated total variation. When a quadratic loss is regularized by the total variation of a cut function, it thus becomes a total variation denoising problem, and we use the same terminology in this paper for "general" submodular functions. We propose a new active-set algorithm for total variation denoising with the assumption of an oracle that solves the corresponding SFM problem. This can be seen as a local descent algorithm over ordered partitions with explicit convergence guarantees. It is more flexible than existing algorithms, with the ability to warm-start using the solution of a closely related problem. Further, we also consider the case when a submodular function can be decomposed into the sum of two submodular functions F1 and F2, and assume SFM oracles for these two functions. We propose a new active-set algorithm for total variation denoising (and hence SFM by thresholding the solution at zero). This algorithm also performs local descent over ordered partitions, and its ability to warm-start considerably improves its performance. In the experiments, we compare the performance of the proposed algorithms with state-of-the-art algorithms, showing that they reduce the calls to SFM oracles. [ABSTRACT FROM AUTHOR]
- Published
- 2017
21. ESTIMATION AND CONTROL IN MULTICHAIN PROCESSES.
- Author
-
Girlich, Hans-Joachim and Sokolichin, A. A.
- Subjects
MARKOV spectrum ,STOCHASTIC convergence ,MATHEMATICAL functions ,PARAMETER estimation ,MATHEMATICAL optimization ,FINITE, The - Abstract
This paper considers Markovian decision processes in discrete time with transition probabilities depending on an unknown parameter which may change step by step. In the case of the convergence of such a parameter sequence, a policy maximizing the average expected reward over an infinite future is looked for. Under continuity conditions, the uniform optimality of a policy based on ‘estimation and control’ for some multichain models is shown. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
22. Stochastic Approximation Approaches to the Stochastic Variational Inequality Problem.
- Author
-
Houyuan Jiang and Huifu Xu
- Subjects
STOCHASTIC approximation ,VARIATIONAL inequalities (Mathematics) ,STOCHASTIC inequalities ,MATHEMATICAL optimization ,SIMULATION methods & models ,APPROXIMATION theory ,MATHEMATICAL functions ,STOCHASTIC convergence - Abstract
Stochastic approximation methods have been extensively studied in the literature for solving systems of stochastic equations and stochastic optimization problems where function values and first order derivatives are not observable but can be approximated through simulation. In this paper, we investigate stochastic approximation methods for solving stochastic variational inequality problems (SVIP) where the underlying functions are the expected value of stochastic functions. Two types of methods are proposed: stochastic approximation methods based on projections and stochastic approximation methods based on reformulations of SVIP. Global convergence results of the proposed methods are obtained under appropriate conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
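The first type of method described, projection-based stochastic approximation, can be sketched for a one-dimensional strongly monotone VI (the step sizes and trailing-average choice are illustrative, not the paper's exact scheme):

```python
import numpy as np

def sa_projection(F_sample, proj, x0, iters=20000, seed=0):
    # Stochastic approximation with projection for a variational inequality:
    #   x_{k+1} = Proj_C( x_k - a_k * F(x_k, xi_k) ),   a_k = 1/(k+1),
    # with averaging over the trailing half of the iterates.
    rng = np.random.default_rng(seed)
    x = float(x0)
    tail = []
    for k in range(iters):
        x = proj(x - F_sample(x, rng) / (k + 1.0))
        if k >= iters // 2:
            tail.append(x)
    return sum(tail) / len(tail)
```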
23. Global convergence of BFGS and PRP methods under a modified weak Wolfe–Powell line search.
- Author
-
Yuan, Gonglin, Wei, Zengxin, and Lu, Xiwen
- Subjects
- *
STOCHASTIC convergence , *MATHEMATICAL optimization , *SEARCH algorithms , *MATHEMATICAL functions , *QUASI-Newton methods - Abstract
The BFGS method is one of the most effective quasi-Newton algorithms for optimization problems. However, its global convergence for general functions is still open. In this paper, under a new line search technique, this problem is solved, and it is shown that other methods in the Broyden class also possess this property. Moreover, the global convergence of the PRP method is established in the case of this new line search. Numerical results are reported to show that the new line search technique is competitive to that of the normal line search. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
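For context, a minimal BFGS with a bisection-style weak Wolfe line search, the classical setting the modified line search departs from, might look like this (the tolerances and safeguards are illustrative):

```python
import numpy as np

def wolfe_search(f, grad, x, d, c1=1e-4, c2=0.9):
    # Bisection-style search for the weak Wolfe conditions
    # (sufficient decrease plus a curvature condition).
    lo, hi, a = 0.0, np.inf, 1.0
    f0, g0 = f(x), grad(x) @ d
    for _ in range(60):
        if f(x + a * d) > f0 + c1 * a * g0:
            hi = a                       # decrease condition fails: shrink
        elif grad(x + a * d) @ d < c2 * g0:
            lo = a                       # curvature condition fails: grow
        else:
            return a
        a = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * a
    return a

def bfgs(f, grad, x0, tol=1e-8, iters=200):
    # Textbook BFGS update on the inverse Hessian approximation H.
    x = np.asarray(x0, float)
    n = len(x)
    H, g = np.eye(n), grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g
        a = wolfe_search(f, grad, x, d)
        s = a * d
        g_new = grad(x + s)
        y = g_new - g
        if s @ y > 1e-12:                # standard curvature safeguard
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x + s, g_new
    return x
```

The Wolfe conditions guarantee s·y > 0 at accepted steps, which is what keeps the inverse Hessian approximation positive definite.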
24. A feasible-side globally convergent modifier-adaptation scheme.
- Author
-
Marchetti, Alejandro G., Faulwasser, Timm, and Bonvin, Dominique
- Subjects
- *
REAL-time computing , *MATHEMATICAL optimization , *STOCHASTIC convergence , *MATHEMATICAL functions , *UNCERTAINTY - Abstract
In the context of static real-time optimization (RTO) of uncertain plants, the standard modifier-adaptation scheme consists in adding first-order correction terms to the cost and constraint functions of a model-based optimization problem. If the algorithm converges, the limit is guaranteed to be a KKT point of the plant. This paper presents a general RTO formulation, wherein the cost and constraint functions belong to a certain class of convex upper-bounding functions. It is demonstrated that this RTO formulation enforces feasible-side global convergence to a KKT point of the plant. Based on this result, a novel modifier-adaptation scheme with guaranteed feasible-side global convergence is proposed. In addition to the first-order correction terms, quadratic terms are added in order to convexify and upper bound the cost and constraint functions. The applicability of the approach is demonstrated on a constrained variant of the Williams–Otto reactor for which standard modifier adaptation fails to converge in the presence of plant-model mismatch. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
25. Modification of Nonlinear Conjugate Gradient Method with Weak Wolfe-Powell Line Search.
- Author
-
Alhawarat, Ahmad and Salleh, Zabidin
- Subjects
- *
CONJUGATE gradient methods , *MATHEMATICAL functions , *MATHEMATICAL optimization , *STOCHASTIC convergence , *NUMERICAL analysis - Abstract
The conjugate gradient (CG) method is used to find optimal solutions for large-scale unconstrained optimization problems. Owing to its simple algorithm, low memory requirement, and speed in obtaining a solution, this method is widely used in many fields, such as engineering, computer science, and medical science. In this paper, we modify the CG method to achieve global convergence with various line searches. In addition, it satisfies the sufficient descent condition without any line search. Numerical computations under the weak Wolfe-Powell line search show that the efficiency of the new method is superior to other conventional methods. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
26. Opposition and dimensional based modified firefly algorithm.
- Author
-
Verma, Om Prakash, Aggarwal, Deepti, and Patodi, Tejna
- Subjects
- *
DIMENSION theory (Topology) , *EVOLUTIONARY algorithms , *STOCHASTIC convergence , *COMPUTATIONAL complexity , *MATHEMATICAL functions , *MATHEMATICAL optimization - Abstract
This paper presents a modified Firefly Algorithm (FA), originally proposed by Yang. The Firefly Algorithm is based on the idealized flashing behavior of fireflies. Although the FA is powerful in local search, it searches poorly globally because it can become trapped in local optima; for this reason, its convergence is generally slow. The FA also does not give efficient solutions for high-dimensional problems. The proposed approach gives more efficient solutions with reduced time complexity in comparison to the original FA. Two modifications are made: (1) an opposition-based methodology is deployed, in which candidate solutions are initialized using opposition-based learning to improve the convergence rate of the original FA; this includes initializing the opposite positions of each firefly and ensures efficient searching of the whole search space. (2) A dimensional-based approach is employed in which the position of each firefly is updated along different dimensions, which yields more optimal solutions. The algorithm works well for high-dimensional problems, both in the accuracy of the best solution found and in convergence speed. Several complex multidimensional standard functions are employed for experimental verification. Experimental results include comparisons with other evolutionary algorithms, which show that the Opposition and Dimensional based FA (ODFA) gives more accurate optimal solutions with higher convergence speed than the original FA and existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2016
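The opposition-based initialization step described in the ODFA abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the sphere test function and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def opposition_init(pop_size, dim, lower, upper, fitness):
    """Opposition-based population initialization: for each random
    candidate x, also evaluate its opposite x_opp = lower + upper - x,
    then keep the fitter half of the combined set."""
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    opp = lower + upper - pop                 # opposite points, coordinate-wise
    both = np.vstack([pop, opp])
    scores = np.array([fitness(x) for x in both])
    best = np.argsort(scores)[:pop_size]      # keep the pop_size fittest
    return both[best]

# Illustrative fitness: sphere function (minimization)
sphere = lambda x: float(np.sum(x ** 2))
init = opposition_init(10, 5, -5.0, 5.0, sphere)
```

By construction the kept individuals are never worse than a plain random population of the same size, which is the convergence-rate argument for opposition-based learning.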
27. Ions motion algorithm for solving optimization problems.
- Author
-
Javidy, Behzad, Hatamlou, Abdolreza, and Mirjalili, Seyedali
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,PROBLEM solving ,STOCHASTIC convergence ,MATHEMATICAL functions - Abstract
This paper proposes a novel optimization algorithm inspired by the motion of ions in nature. The proposed algorithm mimics the attraction and repulsion of anions and cations to perform optimization. It is designed to have the fewest tuning parameters, low computational complexity, fast convergence, and strong local-optima avoidance. The performance of the algorithm is benchmarked on 10 standard test functions and compared to four well-known algorithms in the literature. The results demonstrate that the proposed algorithm shows very competitive results and has merit in solving challenging optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2015
28. Adaptive Randomness: A New Population Initialization Method.
- Author
-
Weifeng Pan, Kangshun Li, Muchou Wang, Jing Wang, and Bo Jiang
- Subjects
- *
MATHEMATICAL optimization , *INFORMATION theory , *RANDOM numbers , *MATHEMATICAL functions , *STOCHASTIC convergence , *GENERALIZATION - Abstract
Population initialization is a crucial task in population-based optimization methods; it can affect both the convergence speed and the quality of the final solutions. Generally, if no a priori information about the solutions is available, the initial population is selected randomly using random numbers. This paper presents a new initialization method that applies the concept of adaptive randomness (AR) to distribute the individuals as evenly as possible over the search space. To verify the performance of AR, a comprehensive set of 34 benchmark functions with a wide range of dimensions is utilized. The experiments conducted demonstrate that AR-based population initialization outperforms other population initialization methods (random, opposition-based, and generalized opposition-based population initialization) in convergence speed and the quality of the final solutions. Further, the influences of the problem dimensionality, the new control parameter, and the number of trial individuals are also investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2014
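The adaptive-randomness idea (choose each new individual from several trial candidates as the one farthest from the population placed so far) can be sketched like this. The selection rule and parameter values here are a plausible reading of the abstract, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def ar_init(pop_size, dim, lower, upper, n_trials=5):
    """Adaptive-randomness-style initialization sketch: each new
    individual is the trial candidate farthest from its nearest
    already-placed neighbour, spreading the population out."""
    pop = [rng.uniform(lower, upper, size=dim)]
    while len(pop) < pop_size:
        trials = rng.uniform(lower, upper, size=(n_trials, dim))
        placed = np.array(pop)
        # distance from each trial to its nearest existing individual
        d = np.min(np.linalg.norm(trials[:, None, :] - placed[None, :, :], axis=2), axis=1)
        pop.append(trials[np.argmax(d)])
    return np.array(pop)

pop = ar_init(20, 3, -1.0, 1.0)
```

The `n_trials` knob plays the role of the "number of trial individuals" control parameter the abstract says is investigated.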
29. Sufficient Conditions for Global Convergence of Differential Evolution Algorithm.
- Author
-
Zhongbo Hu, Shengwu Xiong, Qinghua Su, and Xiaowei Zhang
- Subjects
- *
EVOLUTIONARY algorithms , *STOCHASTIC convergence , *DIFFERENTIAL equations , *PARAMETERS (Statistics) , *MATHEMATICAL optimization , *MATHEMATICAL functions - Abstract
The differential evolution algorithm (DE) is one of the most powerful stochastic real-parameter optimization algorithms. The theoretical study of DE has gradually attracted the attention of more and more researchers. However, little theoretical research has dealt with the convergence conditions of DE. In this paper, a sufficient condition and a corollary for the convergence of DE to the global optima are derived using an infinite product. A DE algorithm framework satisfying the convergence conditions is then established, and it is proved that the two common mutation operators satisfy the algorithm framework. Numerical experiments are conducted in two parts. One visualizes the process by which five convergent DE variants, based on classical DE algorithms, escape from a local optimal set on two low-dimensional functions. The other tests the performance of a modified DE algorithm, inspired by the convergent algorithm framework, on the CEC2005 benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2013
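For reference, one of the common mutation operators the paper analyzes, DE/rand/1, looks like this in a classical DE generation (a textbook sketch with binomial crossover and greedy selection; the sphere function and all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def de_rand_1_step(pop, fitness, F=0.5, CR=0.9):
    """One generation of classical DE with rand/1 mutation,
    binomial crossover, and greedy (synchronous) selection."""
    n, dim = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1 mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True              # force at least one gene
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) <= fitness(pop[i]):        # greedy selection
            new_pop[i] = trial
    return new_pop

sphere = lambda x: float(np.sum(x ** 2))
pop = rng.uniform(-5, 5, size=(20, 4))
init_best = min(np.sum(pop ** 2, axis=1))
for _ in range(50):
    pop = de_rand_1_step(pop, sphere)
best = min(np.sum(pop ** 2, axis=1))
```

Greedy selection makes each individual's fitness non-increasing, so the population best never degrades across generations.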
30. Lévy-Flight Krill Herd Algorithm.
- Author
-
Gaige Wang, Lihong Guo, Gandomi, Amir Hossein, Lihua Cao, Alavi, Amir Hossein, Hong Duan, and Jiang Li
- Subjects
- *
MATHEMATICAL optimization , *STOCHASTIC convergence , *PROBLEM solving , *MATHEMATICAL functions , *METAHEURISTIC algorithms , *OPTIMAL control theory - Abstract
To improve the performance of the krill herd (KH) algorithm, this paper proposes a Lévy-flight krill herd (LKH) algorithm for solving optimization tasks within a limited computing time. The improvement is the addition of a new local Lévy-flight (LLF) operator during the krill-updating process, in order to improve the algorithm's efficiency and reliability in coping with global numerical optimization problems. The LLF operator encourages exploitation and makes the krill individuals search the space carefully at the end of the search. An elitism scheme is also applied to keep the best krill during the updating process. Fourteen standard benchmark functions are used to verify the effects of these improvements, and it is shown that, in most cases, the performance of this novel metaheuristic LKH method is superior to, or at least highly competitive with, the standard KH and other population-based optimization methods. In particular, the new method can accelerate global convergence toward the true global optimum while preserving the main features of the basic KH. [ABSTRACT FROM AUTHOR]
- Published
- 2013
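Lévy-flight steps in metaheuristics like LKH are usually drawn with Mantegna's algorithm; the paper does not reproduce its operator here, so the following is a generic sketch of that standard sampler:

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def levy_step(dim, beta=1.5):
    """One Levy-flight step via Mantegna's algorithm: the ratio of a
    scaled normal to a fractional power of a second normal yields
    heavy-tailed step lengths (mostly small steps, rare long jumps)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size=dim)
    v = rng.normal(0.0, 1.0, size=dim)
    return u / np.abs(v) ** (1 / beta)

steps = np.array([levy_step(1)[0] for _ in range(1000)])
```

The heavy tail is what lets a Lévy-flight operator mix careful local moves with occasional long escapes from local optima.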
31. A Simple and Efficient Artificial Bee Colony Algorithm.
- Author
-
Yunfeng Xu, Ping Fan, and Ling Yuan
- Subjects
- *
BEES algorithm , *MATHEMATICAL optimization , *STOCHASTIC analysis , *STOCHASTIC convergence , *PERFORMANCE evaluation , *MATHEMATICAL functions , *SIMULATION methods & models , *POPULATION research - Abstract
Artificial bee colony (ABC) is a new population-based stochastic algorithm which has shown good search abilities on many optimization problems. However, the original ABC shows slow convergence speed during the search process. In order to enhance the performance of ABC, this paper proposes a new artificial bee colony (NABC) algorithm, which modifies the search pattern of both employed and onlooker bees. A solution pool is constructed by storing some of the best solutions of the current swarm. New candidate solutions are generated by searching the neighborhood of solutions randomly chosen from the solution pool. Experiments are conducted on a set of twelve benchmark functions. Simulation results show that our approach is significantly better than, or at least comparable to, the original ABC and seven other stochastic algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2013
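The solution-pool search pattern the NABC abstract describes can be sketched as follows. The exact perturbation form, pool size, and test function are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def nabc_candidate(pop, scores, pool_size=5, phi_max=1.0):
    """Generate one candidate in the NABC spirit: pick a base solution
    from a pool of the current best solutions and perturb it relative
    to a random swarm member (ABC-style neighbourhood search)."""
    pool_idx = np.argsort(scores)[:pool_size]   # pool of best solutions
    base = pop[rng.choice(pool_idx)]            # random pool member
    partner = pop[rng.integers(len(pop))]       # random swarm member
    phi = rng.uniform(-phi_max, phi_max, size=base.shape)
    return base + phi * (base - partner)

sphere = lambda x: float(np.sum(x ** 2))
pop = rng.uniform(-5, 5, size=(30, 4))
scores = np.array([sphere(x) for x in pop])
cand = nabc_candidate(pop, scores)
```

Searching around pooled elite solutions, instead of around arbitrary members as in the original ABC, is the mechanism the paper credits for faster convergence.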
32. An augmented Lagrangian fish swarm based method for global optimization
- Author
-
Rocha, Ana Maria A.C., Martins, Tiago F.M.C., and Fernandes, Edite M.G.P.
- Subjects
- *
LAGRANGE equations , *MATHEMATICAL optimization , *STOCHASTIC analysis , *ALGORITHMS , *CONSTRAINED optimization , *PROBLEM solving , *STOCHASTIC convergence , *MATHEMATICAL functions - Abstract
Abstract: This paper presents an augmented Lagrangian methodology with a stochastic population based algorithm for solving nonlinear constrained global optimization problems. The method approximately solves a sequence of simple bound global optimization subproblems using a fish swarm intelligent algorithm. A stochastic convergence analysis of the fish swarm iterative process is included. Numerical results with a benchmark set of problems are shown, including a comparison with other stochastic-type algorithms. [Copyright Elsevier]
- Published
- 2011
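The outer augmented Lagrangian loop of such a method can be sketched as below. The paper solves each bound-constrained subproblem with a fish swarm algorithm; here a crude random search stands in for that inner solver, and the toy constrained problem is an illustration, not one of the paper's benchmarks:

```python
import numpy as np

rng = np.random.default_rng(5)

def augmented_lagrangian(f, g, x0, bounds, mu=1.0, outer=15):
    """Augmented Lagrangian outer loop for inequality constraints
    g(x) <= 0, with a naive random search as the inner bound-
    constrained solver (a stand-in for the fish swarm algorithm)."""
    x = np.asarray(x0, float)
    lam = np.zeros(len(g(x)))
    lo, hi = bounds
    for _ in range(outer):
        def L(z):  # Rockafellar form of the augmented Lagrangian
            gv = np.maximum(g(z), -lam / mu)
            return f(z) + lam.dot(gv) + 0.5 * mu * np.sum(gv ** 2)
        best = x
        for _ in range(2000):                    # stand-in inner solver
            cand = np.clip(best + rng.normal(0, 0.1, size=x.shape), lo, hi)
            if L(cand) < L(best):
                best = cand
        x = best
        lam = np.maximum(0.0, lam + mu * g(x))   # multiplier update
        mu *= 2.0                                # penalty update
    return x

# Toy problem: minimize x1 + x2 subject to x1^2 + x2^2 <= 1, -2 <= xi <= 2
f = lambda z: z[0] + z[1]
g = lambda z: np.array([z[0] ** 2 + z[1] ** 2 - 1.0])
x = augmented_lagrangian(f, g, np.zeros(2), (-2.0, 2.0))
```

Each outer iteration only asks the inner solver for a simple bound-constrained minimization, which is exactly what makes population methods like fish swarm usable as the subproblem solver.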
33. Process optimization via constraints adaptation
- Author
-
Chachuat, B., Marchetti, A., and Bonvin, D.
- Subjects
- *
MATHEMATICAL optimization , *STOCHASTIC convergence , *MATHEMATICAL functions , *ATMOSPHERIC temperature , *CONSTRAINTS (Physics) - Abstract
Abstract: In the framework of real-time optimization, measurement-based schemes have been developed to deal with plant-model mismatch and process variations. These schemes differ in how the feedback information from the plant is used to adapt the inputs. A recent idea therein is to use the feedback information to adapt the constraints of the optimization problem instead of updating the model parameters. These methods are based on the observation that, for many problems, most of the optimization potential arises from activating the correct set of constraints. In this paper, we provide a theoretical justification of these methods based on a variational analysis. Then, various aspects of the constraint-adaptation algorithm are discussed, including the detection of active constraints and convergence issues. Finally, the applicability and suitability of the constraint-adaptation algorithm is demonstrated with the case study of an isothermal stirred-tank reactor. [Copyright Elsevier]
- Published
- 2008
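The constraint-adaptation idea reduces to a simple recursion: filter the measured plant-model constraint mismatch into a bias term and re-solve the model optimization with the shifted constraint. The toy plant/model pair below is an assumption for illustration, not the paper's reactor case study:

```python
def constraint_adaptation(u0, iters=20, gain=0.7):
    """Constraint-adaptation sketch for maximizing u subject to a
    constraint g(u) <= 0, where the model bound (u <= 1.0) differs
    from the true plant bound (u <= 0.8)."""
    g_model = lambda u: u - 1.0    # model constraint   g_model(u) <= 0
    g_plant = lambda u: u - 0.8    # plant constraint   g_plant(u) <= 0
    eps, u = 0.0, u0
    for _ in range(iters):
        # filtered bias update from the measured plant-model mismatch
        eps = (1 - gain) * eps + gain * (g_plant(u) - g_model(u))
        # model optimum under the adapted constraint g_model(u) + eps <= 0:
        # for this toy problem that is simply u = 1 - eps
        u = 1.0 - eps
    return u, eps

u, eps = constraint_adaptation(0.5)
```

The bias converges geometrically to the true offset (0.2 here), so the scheme activates the correct plant constraint without ever updating the model parameters.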
34. An SQP feasible descent algorithm for nonlinear inequality constrained optimization without strict complementarity
- Author
-
Jian, Jin-Bao and Tang, Chun-Ming
- Subjects
- *
ALGORITHMS , *MATHEMATICAL optimization , *STOCHASTIC convergence , *MATHEMATICAL functions , *MATHEMATICAL analysis , *MATHEMATICS - Abstract
Abstract: In this paper, a class of nonlinear optimization problems with nonlinear inequality constraints is discussed, and a new SQP feasible descent algorithm for solving such problems is presented. At each iteration of the new algorithm, a convex quadratic program (QP) which always has a feasible solution is solved to obtain a master direction; an improved (feasible descent) direction is then yielded by updating the master direction with an explicit formula, and, in order to avoid the Maratos effect, a height-order correction direction is computed by another explicit formula involving the master and improved directions. The new algorithm is proved to be globally and superlinearly convergent under mild conditions without strict complementarity. Furthermore, a quadratic convergence rate is obtained when second derivatives of the objective and constraint functions are used. Finally, some numerical tests are reported. [Copyright Elsevier]
- Published
- 2005
35. ON THE CONVERGENCE OF ALGORITHMS WITH IMPLICATIONS FOR STOCHASTIC AND NONDIFFERENTIABLE OPTIMIZATION.
- Author
-
Higle, Julia L. and Sen, Suvrajeet
- Subjects
STOCHASTIC convergence ,ALGORITHMS ,MATHEMATICAL functions ,MATHEMATICAL analysis ,MATHEMATICAL optimization ,DIFFERENTIAL equations ,OPERATIONS research ,SIMULATION methods & models - Abstract
Studies of the convergence of algorithms often revolve around the existence of a function with respect to which monotonic descent is required. In this paper, we show that under relatively lenient conditions, "stage-dependent descent" (not necessarily monotonic) is sufficient to guarantee convergence. This development also provides the impetus to examine optimization algorithms. We show that one of the important avenues in the study of convergence, namely, the theory of epi-convergence imposes stronger conditions than are necessary to establish the convergence of an optimization algorithm. Working from a relaxation of epi-convergence, we introduce the notion of ∂-compatibility, and prove several results that permit relaxations of conditions imposed by previous approaches to algorithmic convergence. Finally, to illustrate the usefulness of the concepts, we combine stage-dependent descent with results derivable from ∂-compatibility to provide a basis for the convergence of a general algorithmic statement that might be used for stochastic and nondifferentiable optimization. [ABSTRACT FROM AUTHOR]
- Published
- 1992
36. Fast First-Order Methods for Composite Convex Optimization with Backtracking.
- Author
-
Scheinberg, Katya, Goldfarb, Donald, and Bai, Xi
- Subjects
MATHEMATICAL optimization ,COMPRESSED sensing ,ELECTRONIC linearization ,STOCHASTIC convergence ,MATHEMATICAL functions ,ALGORITHMS - Abstract
We propose new versions of accelerated first-order methods for convex composite optimization, in which the prox parameter is allowed to increase from one iteration to the next. In particular, we show that a full backtracking strategy can be used within the FISTA and FALM algorithms while preserving their worst-case iteration complexities of $$O(\sqrt{L(f)/\epsilon })$$. In the original versions of FISTA and FALM, the prox parameter value at each iteration must be bounded from above by its value at prior iterations, and the complexity of the algorithm then depends on the smallest prox parameter value encountered. The theoretical implications of using full backtracking in the framework of accelerated first-order and alternating linearization methods allow for better complexity estimates that depend on the 'average' prox parameter value. Moreover, we show that for the compressed sensing problem and Lasso, the additional cost of the new backtracking strategy is negligible compared to the cost of the original FISTA iteration. Our computational results show the benefit of the new algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2014
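For context, here is the baseline the paper improves on: FISTA with the standard Beck-Teboulle backtracking, in which the Lipschitz estimate L (the reciprocal of the prox parameter) can only grow. The paper's full backtracking, which also lets the prox parameter increase across iterations, is omitted; the Lasso instance is randomly generated for illustration:

```python
import numpy as np

def fista_backtracking(grad_f, f, prox, x0, L0=1.0, eta=2.0, iters=200):
    """FISTA with monotone backtracking on the Lipschitz estimate L:
    increase L until the quadratic upper model holds, then take the
    accelerated (momentum) step."""
    x = np.asarray(x0, float)
    y = x.copy()
    t, L = 1.0, L0
    for _ in range(iters):
        g = grad_f(y)
        while True:
            x_new = prox(y - g / L, 1.0 / L)
            diff = x_new - y
            if f(x_new) <= f(y) + g.dot(diff) + 0.5 * L * diff.dot(diff):
                break
            L *= eta                      # backtracking: grow L
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# Lasso: min 0.5*||Ax - b||^2 + lam*||x||_1 (random illustrative instance)
rng = np.random.default_rng(6)
A = rng.normal(size=(30, 10))
b = rng.normal(size=30)
lam = 0.5
f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)
grad_f = lambda z: A.T @ (A @ z - b)
soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)  # prox of lam*||.||_1
x = fista_backtracking(grad_f, f, soft, np.zeros(10))
```

In this monotone scheme the final complexity is governed by the largest L encountered; the paper's point is that full backtracking ties it to an "average" prox parameter instead.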
37. Belief Propagation for Min-Cost Network Flow: Convergence and Correctness.
- Author
-
Gamarnik, David, Shah, Devavrat, and Wei, Yehua
- Subjects
ALGORITHMS ,STOCHASTIC convergence ,MATHEMATICAL functions ,POLYNOMIAL approximation ,MATHEMATICAL optimization ,OPERATIONS research - Abstract
The article discusses a study of the correctness and convergence properties of the min-sum version of Belief Propagation (BP) when applied to minimum-cost network flow problems. It shows that an exact version of BP can be implemented for min-cost flow problems by encoding the messages in BP as piecewise-linear convex functions, proves that BP finds the optimal solution in pseudopolynomial time if the optimal solution is unique, and provides a simple modification of the BP algorithm that gives a fully polynomial-time randomized approximation scheme. It suggests that BP is a general-purpose distributed heuristic that is widely applicable and easy to formulate and implement for a wide class of constrained optimization problems.
- Published
- 2012
38. Global solution of bilevel programs with a nonconvex inner program.
- Author
-
Mitsos, Alexander, Lemonidis, Panayiotis, and Barton, Paul
- Subjects
STOCHASTIC convergence ,MATHEMATICAL functions ,NONCONVEX programming ,MATHEMATICAL programming ,HEURISTIC ,MATHEMATICAL optimization ,MATHEMATICS - Abstract
A bounding algorithm for the global solution of nonlinear bilevel programs involving nonconvex functions in both the inner and outer programs is presented. The algorithm is rigorous and terminates finitely to a point that satisfies ε-optimality in the inner and outer programs. For the lower bounding problem, a relaxed program, containing the constraints of the inner and outer programs augmented by a parametric upper bound to the parametric optimal solution function of the inner program, is solved to global optimality. The optional upper bounding problem is based on probing the solution obtained by the lower bounding procedure. For the case that the inner program satisfies a constraint qualification, an algorithmic heuristic for tighter lower bounds is presented based on the KKT necessary conditions of the inner program. The algorithm is extended to include branching, which is not required for convergence but has potential advantages. Two branching heuristics are described and analyzed. Convergence proofs are provided and numerical results for original test problems and for literature examples are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2008
39. Differentiated Meta-PSO Methods for Array Optimization.
- Author
-
Selleri, Stefano, Mussetta, Marco, Pirinoli, Paola, Zich, Riccardo E., and Matekovits, Ladislau
- Subjects
PARTICLES ,ANTENNA arrays ,ANTENNAS (Electronics) ,ELECTROMAGNETISM ,STOCHASTIC convergence ,MATHEMATICAL optimization ,ALGORITHMS ,MATHEMATICAL functions ,BENCHMARKING (Management) - Abstract
The particle swarm optimization (PSO) method has been successfully applied to a variety of electromagnetic optimization problems. Because of the complexity of such problems, the associated cost function is in general computationally expensive, so fast convergence of the optimization algorithm is required to attain results in a short time. Here, a few variations on the standard algorithm, referred to as differentiated meta-PSO, are introduced, aimed at enhancing the global search capability and improving the algorithm's convergence. To verify their effectiveness, the different techniques are first applied to benchmark test functions and then used for the optimization of a planar array. [ABSTRACT FROM AUTHOR]
- Published
- 2008
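The baseline these meta-PSO variants build on is the standard PSO update; the meta-level (differentiated swarms of swarms) is not reproduced here. A minimal sketch on an illustrative sphere function, with commonly used but assumed parameter values:

```python
import numpy as np

rng = np.random.default_rng(7)

def pso(f, dim, n, lower, upper, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO: inertia w, cognitive pull c1 toward
    each particle's personal best, social pull c2 toward the swarm best."""
    x = rng.uniform(lower, upper, size=(n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
        x = np.clip(x + v, lower, upper)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, float(np.min(pbest_val))

sphere = lambda p: float(np.sum(p ** 2))
best, best_val = pso(sphere, dim=4, n=20, lower=-5.0, upper=5.0)
```

In array-synthesis use, `f` would be the (expensive) cost function of the antenna pattern, which is why the paper focuses on convergence speed.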
40. INTERTEMPORAL COMPLEMENTARITY AND OPTIMALITY: A STUDY OF A TWO-DIMENSIONAL DYNAMICAL SYSTEM.
- Author
-
Mitra, Tapan and Nishimura, Kazuo
- Subjects
MATHEMATICAL optimization ,RAMSEY theory ,ECONOMICS ,STOCHASTIC convergence ,RANDOM dynamical systems ,MATHEMATICAL functions - Abstract
We study the underlying structure of the two-dimensional dynamical system generated by a class of dynamic optimization models that allow for intertemporal complementarity between adjacent periods, but preserve the time-additively separable framework of Ramsey models. Specifically, we identify conditions under which the results of the traditional Ramsey-type theory are preserved even when the intertemporal independence assumption is relaxed. Local analysis of this theme has been presented by Samuelson (Western Economic Journal9 (1971), 21–26). We establish global convergence results and relate them to the local analysis, by using the mathematical theory of two-dimensional dynamical systems. We also relate the local stability property of the stationary optimal stock to the differentiability of the optimal policy function near the stationary optimal stock, by using the Stable Manifold Theorem. [ABSTRACT FROM AUTHOR]
- Published
- 2005
41. On Convergence of a Stochastic Quasigradient Algorithm of Quantile Optimization.
- Author
-
Kan, Yu. S.
- Subjects
ALGORITHMS ,STOCHASTIC processes ,MATHEMATICAL optimization ,MATHEMATICAL functions ,GAME theory ,STOCHASTIC convergence - Abstract
For the nonantagonistic two-person game which is equivalent to the problem of minimizing the quantile function, a modification of the stochastic quasigradient algorithm for seeking the Nash point is proposed. The Nash point defines both the optimal strategy minimizing the quantile function and the minimum value of this function. Convergence of the algorithm with probability 1 is proved, and the question of choosing the starting point is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2003
42. NESTED PARTITIONS METHOD FOR GLOBAL OPTIMIZATION.
- Author
-
Shi, Leyuan and Ólafsson, Sigurdur
- Subjects
MATHEMATICAL optimization ,MATHEMATICAL programming ,ALGORITHMS ,STOCHASTIC convergence ,MATHEMATICAL functions ,MAXIMA & minima - Abstract
We propose a new randomized method for solving global optimization problems. This method, the Nested Partitions (NP) method, systematically partitions the feasible region and concentrates the search in regions that are the most promising. The most promising region is selected in each iteration based on information obtained from random sampling of the entire feasible region and local search. The method hence combines global and local search. We first develop the method for discrete problems and then show that the method can be extended to continuous global optimization. The method is shown to converge with probability one to a global optimum in finite time. In addition, we provide bounds on the expected number of iterations required for convergence, and we suggest two stopping criteria. Numerical examples are also presented to demonstrate the effectiveness of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2000
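The partition-sample-select-backtrack loop of the Nested Partitions method can be sketched in one dimension. This is a heavily simplified illustration (two subregions per split, uniform sampling for the surrounding region, an assumed test function), not the paper's general framework:

```python
import numpy as np

rng = np.random.default_rng(8)

def nested_partitions(f, lo, hi, iters=60, samples=15):
    """Simplified 1-D Nested Partitions: split the most promising region
    in two, sample both halves and the surrounding region, then move
    into the best half or backtrack to the full space if the
    surrounding region produced the best sample."""
    region = (lo, hi)
    best = rng.uniform(lo, hi)
    for _ in range(iters):
        a, b = region
        mid = (a + b) / 2
        halves = [(a, mid), (mid, b)]
        champs = [min(rng.uniform(ra, rb, samples), key=f) for ra, rb in halves]
        surround = min(rng.uniform(lo, hi, samples), key=f)
        best = min([best, champs[0], champs[1], surround], key=f)
        if f(surround) < min(f(champs[0]), f(champs[1])):
            region = (lo, hi)                               # backtrack
        else:
            region = halves[int(f(champs[1]) < f(champs[0]))]
    return best

quad = lambda x: (x - 1.3) ** 2
best = nested_partitions(quad, -4.0, 4.0)
```

Sampling the surrounding region is what keeps the method global: it can always recover from having concentrated on the wrong region, which underlies the with-probability-one convergence result.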
43. SENSITIVITY OF CONSTRAINED MARKOV DECISION PROCESSES.
- Author
-
Altman, Eitan and Shwartz, Adam
- Subjects
MATHEMATICAL optimization ,MATHEMATICAL analysis ,FINITE, The ,DISCOUNT prices ,COST ,MATRICES (Mathematics) ,STOCHASTIC convergence ,MATHEMATICAL functions - Abstract
We consider the optimization of finite-state, finite-action Markov decision processes under constraints. Costs and constraints are of the discounted or average type, and possibly finite-horizon. We investigate the sensitivity of the optimal cost and optimal policy to changes in various parameters. We relate several optimization problems to a generic linear program, through which we investigate sensitivity issues. We establish conditions for the continuity of the optimal value in the discount factor. In particular, the optimal value and optimal policy for the expected average cost are obtained as limits of the discounted case, as the discount factor goes to one. This generalizes a well-known result for the unconstrained case. We also establish the continuity in the discount factor for certain non-stationary policies. We then discuss the sensitivity of optimal policies and optimal values to small changes in the transition matrix and in the instantaneous cost functions. The importance of the last two results is related to the performance of adaptive policies for constrained MDP under various cost criteria [3,5]. Finally, we establish the convergence of the optimal value for the discounted constrained finite horizon problem to the optimal value of the corresponding infinite horizon problem. [ABSTRACT FROM AUTHOR]
- Published
- 1991
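The vanishing-discount limit the abstract mentions (the average-cost optimal value obtained from the discounted case as the discount factor goes to one) can be illustrated numerically on an unconstrained toy MDP; the two-state instance below is made up for this sketch:

```python
import numpy as np

def discounted_value(P, c, beta, iters=10000):
    """Optimal discounted cost of a finite MDP by value iteration.
    P[a] is the transition matrix under action a, c[a] the per-state
    cost vector under action a (minimization)."""
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        V = np.min(c + beta * P @ V, axis=0)   # Bellman update over actions
    return V

# Made-up 2-state, 2-action MDP
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
c = np.array([[1.0, 3.0],
              [2.0, 0.5]])
# (1 - beta) * V_beta should approach the average-optimal cost as beta -> 1;
# for this instance the average-optimal cost works out to 6.5/7.
vals = [(1 - b) * discounted_value(P, c, b)[0] for b in (0.9, 0.99, 0.999)]
```

The best stationary policy here plays action 0 in state 0 and action 1 in state 1, giving stationary distribution (6/7, 1/7) and average cost (6·1.0 + 1·0.5)/7 = 6.5/7, which the scaled discounted values approach.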
44. A SUCCESSIVE APPROXIMATIONS METHOD FOR A CELLULAR MANUFACTURING PROBLEM.
- Author
-
Stanfel, Larry E.
- Subjects
MATHEMATICAL optimization ,APPROXIMATION theory ,MATHEMATICS ,MATHEMATICAL functions ,STOCHASTIC convergence ,OPERATIONS research - Abstract
The problem of interest is to partition a collection of machines into production cells so that a given set of part-manufacturing requirements may be carried out optimally. In the present case, transitions of parts between different cells are the only measure of machine-partition goodness. The formulation and approximate solution of this optimization problem is best described as a method of successive approximations, or a one-at-a-time method. An initial cellular structure is taken and an easy part-assignment optimization routine is executed. With the part assignment fixed, a heuristic is employed to find an improved cell structure. These bipartite iterations continue until a convergence criterion is satisfied. Several small computational examples are provided, along with the straightforward requirements for adapting the method to large problems. [ABSTRACT FROM AUTHOR]
- Published
- 1989
45. H∞ Observer Design for Lipschitz Nonlinear Systems.
- Author
-
Pertew, A. M., Marquez, H. J., and Zhao, Q.
- Subjects
NONLINEAR systems ,SYSTEMS theory ,STOCHASTIC convergence ,ASYMPTOTIC expansions ,ESTIMATION theory ,MATHEMATICAL analysis ,MATHEMATICAL optimization ,ESTIMATES ,MATHEMATICAL functions - Abstract
The problem of observer design for Lipschitz nonlinear systems is considered. A new dynamic framework which is a generalization of previously used Lipschitz observers is introduced, and the generalized sufficient condition that ensures asymptotic convergence of the state estimates is presented. The equivalence between this condition and an H∞ optimal control problem which satisfies the standard regularity assumptions in H∞ optimization theory is shown, and a parameterization of all possible observers is also presented. A design procedure which is less restrictive than the existing design approaches is proposed, and a simulation example is given to illustrate the observer design. [ABSTRACT FROM AUTHOR]
- Published
- 2006
46. BRANCH-AND-BOUND METHODS: GENERAL FORMULATION AND PROPERTIES.
- Author
-
Mitten, L. G.
- Subjects
MATHEMATICAL optimization ,DYNAMIC programming ,NONLINEAR programming ,MATHEMATICAL analysis ,STOCHASTIC convergence ,MATHEMATICAL functions - Abstract
The branch-and-bound procedure is formulated in rather general terms, and necessary conditions for the branching and bounding functions are precisely specified. Results include the standard properties for finite procedures, plus several convergence conditions for infinite procedures. Discrete programming, which includes integer programming and combinatorial optimization problems, is discussed, and Fibonacci search is presented as an example of a nonfinite branch-and-bound procedure employing an optimal convergence rule. [ABSTRACT FROM AUTHOR]
- Published
- 1970
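A concrete instance of the branching and bounding functions the paper formalizes: branch by fixing the next item in or out, bound by the fractional (LP) relaxation. The 0/1 knapsack instance below is a standard textbook example, not from the paper:

```python
def knapsack_bb(values, weights, cap):
    """Branch-and-bound for 0/1 knapsack: depth-first branching over
    take/skip decisions, pruned by a fractional-relaxation upper bound."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0

    def bound(k, val, room):
        # bounding function: greedy fractional fill of the remaining items
        for i in range(k, len(v)):
            if w[i] <= room:
                room -= w[i]
                val += v[i]
            else:
                return val + v[i] * room / w[i]
        return val

    def branch(k, val, room):
        nonlocal best
        best = max(best, val)              # partial solutions are feasible
        if k == len(v) or bound(k, val, room) <= best:
            return                         # prune: bound cannot beat incumbent
        if w[k] <= room:
            branch(k + 1, val + v[k], room - w[k])   # branch: take item k
        branch(k + 1, val, room)                     # branch: skip item k

    branch(0, 0, cap)
    return best

best = knapsack_bb([60, 100, 120], [10, 20, 30], 50)
```

The correctness of the pruning step rests on exactly the property the paper requires of a bounding function: it never underestimates the best completion of a partial solution.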