22 results
Search Results
2. Guaranteeing Practical Convergence in Algorithms for Sensor and Source Localization.
- Author
-
Fidan, Bariş, Dasgupta, Soura, and Anderson, Brian D. O.
- Subjects
STOCHASTIC convergence ,DETECTORS ,ALGORITHMS ,SENSOR networks ,MATHEMATICAL optimization ,MATHEMATICAL analysis ,SIMULATION methods & models ,ENGINEERING instruments ,MATHEMATICAL functions - Abstract
This paper considers localization of a source or a sensor from distance measurements. We argue that linear algorithms proposed for this purpose are susceptible to poor noise performance. Instead, given a set of sensors/anchors of known positions and the measured distances from them to the source/sensor to be localized, we propose a potentially nonconvex weighted cost function whose global minimum estimates the location of the source/sensor one seeks. The contribution of this paper is to provide nontrivial ellipsoidal and polytopic regions surrounding these sensors/anchors of known positions, such that if the object to be localized is in this region, localization occurs by globally exponentially convergent gradient descent in the noise-free case. Exponential convergence in the noise-free case represents practical convergence, as it ensures graceful performance degradation in the presence of noise. These results guide the deployment of sensors/anchors so that small subsets can be made responsible for practical localization in geographical areas determined by our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
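Entry 2's cost-function approach is easy to prototype. The sketch below minimizes the (unit-weight) squared-range cost by plain gradient descent on a small 2-D instance; the anchor layout, source position, step size, and starting guess are all illustrative assumptions, not values from the paper.

```python
# Hypothetical 2-D anchors and source; noise-free squared-range measurements.
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_source = (1.0, 1.0)
d2 = [(true_source[0] - ax) ** 2 + (true_source[1] - ay) ** 2 for ax, ay in anchors]

def grad(x, y):
    """Gradient of J(x, y) = sum_i ((x-ax_i)^2 + (y-ay_i)^2 - d2_i)^2, unit weights."""
    gx = gy = 0.0
    for (ax, ay), m in zip(anchors, d2):
        err = (x - ax) ** 2 + (y - ay) ** 2 - m
        gx += 4.0 * err * (x - ax)
        gy += 4.0 * err * (y - ay)
    return gx, gy

x, y = 2.0, 2.0   # starting guess near the anchors
step = 0.001      # fixed step, assumed small enough for the local smoothness bound
for _ in range(5000):
    gx, gy = grad(x, y)
    x, y = x - step * gx, y - step * gy
```

In this noise-free run the iterates approach the true source (1, 1), illustrating the exponential convergence the paper establishes inside its regions.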
3. ON THE CONVERGENCE PROPERTY OF THE DFP ALGORITHM.
- Author
-
Dingguo Pu and Wenci Yu
- Subjects
ALGORITHMS ,ALGEBRA ,STOCHASTIC convergence ,MATHEMATICAL optimization ,MATHEMATICAL analysis ,OPERATIONS research ,SIMULATION methods & models - Abstract
The DFP algorithm of unconstrained optimization possesses excellent properties of convergence for convex functions. However, a convergence theory of the DFP algorithm without the convexity assumption has not yet been established. This paper gives the following result: If the objective function is suitably smooth, and if the DFP algorithm produces a convergent point sequence, then the limit point of the sequence is a critical point of the objective function. Also, some open questions are mentioned. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
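For readers unfamiliar with the method in entry 3, the DFP rank-two update of an inverse-Hessian approximation can be sketched as follows; the 2-D convex quadratic and closed-form exact line search are illustrative choices (the abstract's concern is precisely the harder case without convexity).

```python
# DFP quasi-Newton sketch on the convex quadratic f(x) = 0.5*(x1^2 + 10*x2^2),
# with the exact line search available in closed form for a quadratic.
# H approximates the inverse Hessian and is updated by the rank-two DFP formula.
A = [[1.0, 0.0], [0.0, 10.0]]    # Hessian of the quadratic

def mv(M, v):                    # 2x2 matrix-vector product
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

x = [10.0, 1.0]
H = [[1.0, 0.0], [0.0, 1.0]]     # initial inverse-Hessian approximation
for _ in range(2):               # exact line searches: at most n = 2 steps here
    g = mv(A, x)                 # gradient of the quadratic
    Hg = mv(H, g)
    p = [-Hg[0], -Hg[1]]         # quasi-Newton direction
    alpha = -dot(g, p) / dot(p, mv(A, p))    # exact step length along p
    s = [alpha * p[0], alpha * p[1]]
    x = [x[0] + s[0], x[1] + s[1]]
    y = [mv(A, x)[0] - g[0], mv(A, x)[1] - g[1]]   # gradient change
    Hy = mv(H, y)
    sy, yHy = dot(s, y), dot(y, Hy)
    # DFP update: H <- H + s s^T/(s^T y) - (H y)(H y)^T/(y^T H y)
    for i in range(2):
        for j in range(2):
            H[i][j] += s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy

gnorm = dot(mv(A, x), mv(A, x)) ** 0.5
```

With exact line searches on a quadratic, DFP terminates in at most n iterations; here the gradient is driven to (numerically) zero after two steps.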
4. A CLASS OF INSIDE-OUT ALGORITHMS FOR GENERAL PROGRAMS.
- Author
-
Gould, F. J.
- Subjects
ALGORITHMS ,STOCHASTIC convergence ,NONLINEAR programming ,MATHEMATICAL programming ,MATHEMATICAL optimization ,GEOMETRICAL constructions ,MANAGEMENT science ,MATHEMATICAL models ,MATHEMATICAL analysis ,OPERATIONS research ,MATHEMATICAL functions ,PROBLEM solving - Abstract
In this paper the Fiacco-McCormick SUMT technique is embedded in a class of inside-out algorithms. Convergence is demonstrated for the nonlinear programming problem under fairly general conditions and the algorithms are interpreted in a geometric structure. [ABSTRACT FROM AUTHOR]
- Published
- 1970
- Full Text
- View/download PDF
5. Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization.
- Author
-
Patrascu, Andrei and Necoara, Ion
- Subjects
STOCHASTIC convergence ,GLOBAL optimization ,MATHEMATICAL optimization ,ALGORITHMS ,NUMERICAL analysis ,MATHEMATICAL analysis - Abstract
In this paper we analyze several new methods for solving nonconvex optimization problems whose objective function is a sum of two terms: one nonconvex and smooth, the other convex but simple, with known structure. Further, we consider both unconstrained and linearly constrained nonconvex problems. For optimization problems with the above structure, we propose random coordinate descent algorithms and analyze their convergence properties. For the general case, when the objective function is nonconvex and composite, we prove asymptotic convergence of the sequences generated by our algorithms to stationary points and a sublinear rate of convergence in expectation for some optimality measure. Additionally, if the objective function satisfies an error bound condition, we derive a local linear rate of convergence for the expected values of the objective function. We also present extensive numerical experiments evaluating the performance of our algorithms in comparison with state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
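The basic iteration studied in entry 5 (pick a random coordinate, take a coordinate gradient step on the smooth term, then apply the proximal map of the simple term) can be sketched as follows. For checkability the smooth part here is a convex least-squares term, though the paper's analysis also covers nonconvex smooth parts; all problem data are illustrative.

```python
import random

# Random coordinate descent on f(x) + h(x), with f(x) = 0.5*||A x - b||^2
# (convex here for checkability) and h(x) = lam*||x||_1, whose proximal map
# is the soft-threshold. Data are illustrative.
A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, 0.9]
lam = 0.5

def soft(t, tau):
    return max(abs(t) - tau, 0.0) * (1.0 if t > 0 else -1.0)

def coord_grad(x, i):
    """i-th partial derivative of f."""
    return sum(A[j][i] * (sum(A[j][k] * x[k] for k in range(2)) - b[j])
               for j in range(2))

L = [sum(A[j][i] ** 2 for j in range(2)) for i in range(2)]  # coordinate Lipschitz constants

random.seed(0)
x = [0.0, 0.0]
for _ in range(200):
    i = random.randrange(2)                 # sample a coordinate uniformly
    t = x[i] - coord_grad(x, i) / L[i]      # coordinate gradient step on f
    x[i] = soft(t, lam / L[i])              # prox step on the l1 term
```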
6. An inexact proximal point method for solving generalized fractional programs.
- Author
-
Jean-Jacques Strodiot, Jean-Pierre Crouzeix, Jacques Ferland, and Van Nguyen
- Subjects
MATHEMATICAL optimization ,MATHEMATICAL models ,FRACTIONS ,STOCHASTIC convergence ,ALGORITHMS ,MATHEMATICAL analysis - Abstract
In this paper, we present several new implementable methods for solving a generalized fractional program with convex data. They are Dinkelbach-type methods where a prox-regularization term is added to avoid the numerical difficulties arising when the solution of the problem is not unique. In these methods, at each iteration a regularized parametric problem is solved inexactly to obtain an approximation of the optimal value of the problem. Since the parametric problem is nonsmooth and convex, we propose to solve it by using a classical bundle method where the parameter is updated after each 'serious step'. We mainly study two kinds of such steps, and we prove the convergence and the rate of convergence of each of the corresponding methods. Finally, we present some numerical experience to illustrate the behavior of the proposed algorithms, and we discuss the practical efficiency of each one. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
7. A New TwIST: Two-Step Iterative Shrinkage/Thresholding Algorithms for Image Restoration.
- Author
-
Bioucas-Dias, José M. and Figueiredo, Mário A. T.
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,IMAGE reconstruction ,IMAGE processing ,STOCHASTIC convergence ,MATHEMATICAL functions ,DIFFERENTIAL equations ,MATHEMATICAL analysis ,SYSTEM analysis - Abstract
Iterative shrinkage/thresholding (IST) algorithms have been recently proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). It happens that the convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step IST (TwIST) algorithms, exhibiting a much faster convergence rate than IST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (ℓp norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function, for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
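The two-step iteration in entry 7 combines the previous two iterates with a soft-thresholded gradient step. The sketch below applies it to a tiny diagonal problem; the operator, data, regularization weight, and the eigenvalue-based choice of the two-step weights are illustrative assumptions in the spirit of the TwIST recipe, not values from the paper.

```python
import math

# Two-step IST sketch on a tiny diagonal problem: minimize
# 0.5*||y - A x||^2 + lam*||x||_1 with A = diag(a).
a = [1.0, 0.5]      # diagonal observation operator (condition number 4 in A^T A)
y = [2.0, 2.0]
lam = 0.25

def soft(t, tau):
    return max(abs(t) - tau, 0.0) * (1.0 if t > 0 else -1.0)

def ist_step(x):
    """One IST step: soft-threshold a gradient step on the data term."""
    return [soft(x[i] + a[i] * (y[i] - a[i] * x[i]), lam) for i in range(2)]

# Two-step weights chosen from the eigenvalue bounds of A^T A (an assumption
# mimicking the TwIST parameter recipe).
lmax = max(ai * ai for ai in a)
lmin = min(ai * ai for ai in a)
rho = (1 - math.sqrt(lmin / lmax)) / (1 + math.sqrt(lmin / lmax))
alpha = 1 + rho * rho
beta = 2 * alpha / (lmin + lmax)

x_prev, x = [0.0, 0.0], [0.0, 0.0]
for _ in range(100):
    g = ist_step(x)
    x_prev, x = x, [(1 - alpha) * x_prev[i] + (alpha - beta) * x[i] + beta * g[i]
                    for i in range(2)]
# With alpha = beta = 1 the recursion reduces to plain IST.
```

For this diagonal A the minimizer is soft(a_i*y_i, lam)/a_i^2 componentwise, i.e. (1.75, 3.0), which the two-step iteration approaches geometrically.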
8. CONVERGENCE CONDITIONS FOR NONLINEAR PROGRAMMING ALGORITHMS.
- Author
-
Zangwill, Willard I.
- Subjects
NONLINEAR programming ,DYNAMIC programming ,STOCHASTIC convergence ,LINEAR programming ,MATHEMATICAL programming ,ALGORITHMS ,MATHEMATICAL optimization ,MATHEMATICAL analysis ,OPERATIONS research - Abstract
Conditions which are necessary and sufficient for convergence of a nonlinear programming algorithm are stated. It is also shown that the convergence conditions can be easily applied to most programming algorithms. As examples, algorithms by Arrow, Hurwicz and Uzawa; Cauchy; Frank and Wolfe; and Newton-Raphson are proven to converge by direct application of the convergence conditions. Also the Topkis-Veinott convergence conditions for feasible direction algorithms are shown to be a special case of the conditions stated in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
9. A new accelerated firefly algorithm for size optimization of truss structures.
- Author
-
Baghlani, A., Makiabadi, M. H., and Rahnema, H.
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,MATHEMATICAL analysis ,STOCHASTIC convergence ,TRUSS bridges - Abstract
An Accelerated Firefly Algorithm (AFA) for fast size optimization of truss structures is proposed in this paper. The metaheuristic Firefly Algorithm has been developed only recently, and its effectiveness in solving practical problems such as sizing optimization of truss structures has not been thoroughly explored. The numerical experiments show that although the standard Firefly Algorithm (FA) is a powerful approach for truss optimization, it suffers from a slow rate of convergence, and hence it should be modified to solve real-life problems. The proposed AFA improves the search procedure by both reducing randomness and scaling the random term in the fireflies' motion. The effectiveness and robustness of the algorithm are investigated by solving some benchmark problems. The results reveal that the proposed AFA remarkably enhances the rate of convergence and the stability of the standard Firefly Algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2013
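The underlying firefly movement rule in entry 9 (each firefly is attracted to brighter ones, plus a random term) is simple to sketch. The decaying noise scale below only stands in for the paper's randomness-reduction idea, and every parameter value is an illustrative assumption.

```python
import math
import random

# Firefly-style search sketch on the sphere function f(x) = sum(x_i^2).
# Parameters (beta0, gamma, alpha) and the decay schedule are illustrative.
random.seed(1)
dim, n, gens = 2, 8, 100
beta0, gamma, alpha = 1.0, 1.0, 0.2

def f(x):
    return sum(v * v for v in x)

pop = [[random.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n)]
best0 = min(f(x) for x in pop)     # best fitness in the initial population
best = best0
for g in range(gens):
    scale = alpha * 0.97 ** g      # shrink the random term as the search matures
    for i in range(n):
        for j in range(n):
            if f(pop[j]) < f(pop[i]):          # firefly j is brighter: attract i
                r2 = sum((pop[i][k] - pop[j][k]) ** 2 for k in range(dim))
                beta = beta0 * math.exp(-gamma * r2)
                for k in range(dim):
                    pop[i][k] += (beta * (pop[j][k] - pop[i][k])
                                  + scale * (random.random() - 0.5))
    best = min(best, min(f(x) for x in pop))
```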
10. Directed searching optimization algorithm for constrained optimization problems
- Author
-
Zou, Dexuan, Liu, Haikuan, Gao, Liqun, and Li, Steven
- Subjects
MATHEMATICAL optimization ,CONSTRAINED optimization ,STOCHASTIC convergence ,FUNCTIONAL analysis ,ALGORITHMS ,GENETIC mutation ,VECTOR analysis ,BIODIVERSITY ,MATHEMATICAL analysis - Abstract
A directed searching optimization algorithm (DSO) is proposed to solve constrained optimization problems in this paper. The proposed algorithm includes two important operations: position updating and genetic mutation. Position updating enables the non-best solution vectors to mimic the best one, which is beneficial to the convergence of the DSO; genetic mutation can increase the diversity of individuals, which is beneficial to preventing the premature convergence of the DSO. In addition, we adopt the penalty function method to balance objective and constraint violations. We can obtain satisfactory solutions for constrained optimization problems by combining the DSO and the penalty function method. Experimental results indicate that the proposed algorithm can be an efficient alternative for solving constrained optimization problems. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
11. High-dimensional objective optimizer: An evolutionary algorithm and its nonlinear analysis
- Author
-
Huang, Jun, Huang, Xiaohong, Ma, Yan, and Liu, Yanbing
- Subjects
MATHEMATICAL optimization ,ARTIFICIAL intelligence ,NONLINEAR statistical models ,MARTINGALES (Mathematics) ,STOCHASTIC convergence ,MATHEMATICAL analysis ,ALGORITHMS ,COMPUTER science - Abstract
The last few years have witnessed the development of various multi-objective evolutionary algorithms, since they allow the generation of the overall Pareto front for multi-objective optimizations. With problems in the real world becoming more and more complex, however, no reported work in the literature focuses on high-dimensional objective optimizations (HOPs). In this paper, we propose an evolutionary algorithm named HOEA (high-dimensional objective evolutionary algorithm) for HOPs. By adopting the concept of a nonlinear definition for the optimizing object, HOPs can be solved by HOEA, while the well-known multi-objective evolutionary algorithms work poorly on HOPs. We further analyze the nonlinear dynamic properties of HOEA on the basis of a martingale-theoretic framework. The theoretical results indicate that this new algorithm is indeed capable of achieving convergence. We also conduct experiments on HOEA with two representative test instances. The experimental results confirm our theoretical results and show that the proposed algorithm is efficient and effective for HOPs. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
12. Chaotic-Search-Based Cultural Algorithm for Solving Unconstrained Optimization Problem.
- Author
-
Jianjia He and Fuyuan Xu
- Subjects
STOCHASTIC convergence ,ALGORITHMS ,SIMULATION methods & models ,CHAOS theory ,MATHEMATICAL optimization ,MATHEMATICAL analysis - Abstract
To address the premature convergence and instability of the cultural algorithm in solving function optimization problems, a chaos cultural algorithm (CCA) based on the cultural algorithm and chaos search optimization is proposed. The algorithm model consists of a chaos-based population space and a knowledge-storing belief space; it uses normative knowledge and situational knowledge for chaos search and chaos perturbation, respectively, effectively avoids the premature convergence of the cultural algorithm, and overcomes chaos search optimization's sensitivity to initial values and its poor efficiency. Test results show that the algorithm is strong in global search and performs well in searching efficiency, precision, and stability, especially in solving high-dimensional optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
13. A superlinearly convergent strongly sub-feasible SSLE-type algorithm with working set for nonlinearly constrained optimization
- Author
-
Jian, Jin-bao and Cheng, Wei-xin
- Subjects
STOCHASTIC convergence ,ALGORITHMS ,MATHEMATICAL optimization ,MATHEMATICAL inequalities ,LINEAR systems ,MATHEMATICAL analysis - Abstract
In this paper, by means of a new efficient identification technique for active constraints and the method of strongly sub-feasible directions, we propose a new sequential system of linear equations (SSLE) algorithm for solving inequality constrained optimization problems, in which the initial point is arbitrary. At each iteration, we first obtain the working set by a pivoting operation and a generalized projection; then, three or four reduced linear equations with the same coefficient matrix are solved to obtain the search direction. After a finite number of iterations, the algorithm produces a feasible iteration point and becomes a method of feasible directions. Moreover, after finitely many iterations, the working set becomes independent of the iterates and is essentially the same as the active set of the KKT point. Under some mild conditions, the proposed algorithm is proved to be globally, strongly, and superlinearly convergent. Finally, some preliminary numerical experiments are reported to show that the algorithm is practicable and effective. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
14. A nonmonotone conic trust region method based on line search for solving unconstrained optimization
- Author
-
Qu, Shao-Jian, Zhang, Qing-Pu, and Yang, Yue-Ting
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,ITERATIVE methods (Mathematics) ,STOCHASTIC convergence ,MATHEMATICAL analysis ,LINEAR programming - Abstract
In this paper, we present a nonmonotone conic trust region method based on a line search technique for unconstrained optimization. The new algorithm can be regarded as a combination of the nonmonotone technique, the line search technique, and the conic trust region method. When a trial step is not accepted, the method does not re-solve the trust region subproblem but generates an iterate whose steplength satisfies some line search condition. The function value is allowed to increase only when trial steps are not accepted in close succession of iterations. The local and global convergence properties are proved under reasonable assumptions. Numerical experiments are conducted to compare this method with existing methods. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
15. An SQP feasible descent algorithm for nonlinear inequality constrained optimization without strict complementarity
- Author
-
Jian, Jin-Bao and Tang, Chun-Ming
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,STOCHASTIC convergence ,MATHEMATICAL functions ,MATHEMATICAL analysis ,MATHEMATICS - Abstract
In this paper, a class of nonlinear optimization problems with nonlinear inequality constraints is discussed, and a new SQP feasible descent algorithm for solving these problems is presented. At each iteration of the new algorithm, a convex quadratic program (QP) which always has a feasible solution is solved and a master direction is obtained; then an improved (feasible descent) direction is yielded by updating the master direction with an explicit formula, and, in order to avoid the Maratos effect, a higher-order correction direction is computed by another explicit formula involving the master direction and the improved direction. The new algorithm is proved to be globally convergent and superlinearly convergent under mild conditions without strict complementarity. Furthermore, a quadratic convergence rate is obtained when second derivatives of the objective and constraint functions are used. Finally, some numerical tests are reported. [Copyright Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
16. FRACTIONAL PROGRAMMING. II, ON DINKELBACH'S ALGORITHM.
- Author
-
Schaible, Siegfried
- Subjects
MATHEMATICAL programming ,ALGORITHMS ,DUALITY theory (Mathematics) ,MANAGEMENT science ,STOCHASTIC convergence ,MATHEMATICAL optimization ,MATHEMATICAL analysis ,CONVEX programming ,OPERATIONS research - Abstract
Dinkelbach's algorithm [2] solving the parametric equivalent of a fractional program is investigated. It is shown that the algorithm converges superlinearly and often (locally) quadratically. A priori and a posteriori error estimates are derived. Using those estimates and duality as introduced in Part I, a revised version of the algorithm is proposed. In addition, a similar algorithm is presented where, in contrast to Dinkelbach's procedure, the rate of convergence is still controllable. Error estimates are derived also for this algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 1976
- Full Text
- View/download PDF
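Dinkelbach's iteration itself, the subject of entry 16, is short: solve the parametric problem for the current ratio estimate, then update the ratio. Below is a one-dimensional sketch in which the parametric subproblem is solvable in closed form; the instance is illustrative.

```python
# Dinkelbach's parametric iteration for min f(x)/g(x): given the current ratio
# estimate lam, solve min_x f(x) - lam*g(x), then update lam = f(x*)/g(x*).
# Illustrative instance: minimize (x^2 + 1)/x over [0.5, 4] (optimum 2 at x = 1).

def f(x):
    return x * x + 1.0

def g(x):
    return x

def solve_parametric(lam):
    """Closed-form minimizer of f(x) - lam*g(x) = x^2 - lam*x + 1 over [0.5, 4]."""
    return min(max(lam / 2.0, 0.5), 4.0)

x = 0.5
lam = f(x) / g(x)
for _ in range(20):
    x = solve_parametric(lam)
    lam = f(x) / g(x)
```

The ratio sequence 2.5, 2.05, 2.0006, ... shrinks roughly quadratically, in line with the local rate discussed in the abstract.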
17. ON THE CONVERGENCE OF ALGORITHMS WITH IMPLICATIONS FOR STOCHASTIC AND NONDIFFERENTIABLE OPTIMIZATION.
- Author
-
Higle, Julia L. and Sen, Suvrajeet
- Subjects
STOCHASTIC convergence ,ALGORITHMS ,MATHEMATICAL functions ,MATHEMATICAL analysis ,MATHEMATICAL optimization ,DIFFERENTIAL equations ,OPERATIONS research ,SIMULATION methods & models - Abstract
Studies of the convergence of algorithms often revolve around the existence of a function with respect to which monotonic descent is required. In this paper, we show that under relatively lenient conditions, "stage-dependent descent" (not necessarily monotonic) is sufficient to guarantee convergence. This development also provides the impetus to examine optimization algorithms. We show that one of the important avenues in the study of convergence, namely, the theory of epi-convergence imposes stronger conditions than are necessary to establish the convergence of an optimization algorithm. Working from a relaxation of epi-convergence, we introduce the notion of ∂-compatibility, and prove several results that permit relaxations of conditions imposed by previous approaches to algorithmic convergence. Finally, to illustrate the usefulness of the concepts, we combine stage-dependent descent with results derivable from ∂-compatibility to provide a basis for the convergence of a general algorithmic statement that might be used for stochastic and nondifferentiable optimization. [ABSTRACT FROM AUTHOR]
- Published
- 1992
- Full Text
- View/download PDF
18. A CONTINUATION APPROACH USING NCP FUNCTION FOR SOLVING MAX-CUT PROBLEM.
- Author
-
Fengmin Xu, Chengxian Xu, and Jiuquan Ren
- Subjects
MATHEMATICAL optimization ,MATHEMATICAL analysis ,ALGORITHMS ,STOCHASTIC convergence ,APPROXIMATION theory - Abstract
A continuous approach using an NCP function for approximating the solution of the max-cut problem is proposed. The max-cut problem is relaxed into an equivalent nonlinearly constrained continuous optimization problem, and a feasible direction method without line searches is presented for generating an optimal solution of the relaxed continuous optimization problem. The convergence of the algorithm is proved. Numerical experiments and comparisons on some max-cut test problems show that we can get satisfactory solutions of max-cut problems with less computation time. Furthermore, this is the first time that the feasible direction method has been combined with an NCP function for solving the max-cut problem, and a similar idea can be generalized to other combinatorial optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
19. Nonsmooth Optimization Techniques for Semisupervised Classification.
- Author
-
Astorino, Annabella and Fuduli, Antonio
- Subjects
MATHEMATICAL optimization ,MATHEMATICAL analysis ,ALGORITHMS ,GENE expression ,MATHEMATICAL programming ,BIOINFORMATICS ,DIFFERENTIAL equations ,PRINCIPAL components analysis ,STOCHASTIC convergence - Abstract
We apply nonsmooth optimization techniques to classification problems, with particular reference to the Transductive Support Vector Machine (TSVM) approach, where the considered decision function is nonconvex and nondifferentiable, hence difficult to minimize. We present some numerical results obtained by running the proposed method on some standard test problems drawn from the binary classification literature. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
20. DOUBLE PENALTY METHOD FOR BILEVEL OPTIMIZATION PROBLEMS.
- Author
-
Ishizuka, Yo and Aiyoshi, Eitaro
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,STOCHASTIC convergence ,MATHEMATICAL programming ,MATHEMATICAL analysis ,THEORY of constraints ,OPERATIONS research - Abstract
A penalty function method approach for solving a constrained bilevel optimization problem is proposed. In the algorithm, both the upper level and the lower level problems are approximated by minimization problems of augmented objective functions. A convergence theorem is presented. The method is applicable to the non-singleton lower-level reaction set case. Constraint qualifications which imply the assumptions of the general convergence theorem are given. [ABSTRACT FROM AUTHOR]
- Published
- 1992
- Full Text
- View/download PDF
21. LINE SEARCH TECHNIQUES BASED ON INTERPOLATING POLYNOMIALS USING FUNCTION VALUES ONLY.
- Author
-
Tamir, Arie
- Subjects
INTERPOLATION ,NUMERICAL analysis ,POLYNOMIALS ,STOCHASTIC convergence ,GOLDEN ratio ,ALGORITHMS ,MATHEMATICAL analysis ,MATHEMATICAL optimization ,INDUSTRIAL efficiency - Abstract
In this study we derive the order of convergence of some line search techniques based on fitting polynomials, using function values only. It is shown that the order of convergence increases with the degree of the polynomial. If viewed as a sequence, the orders approach the Golden Section Ratio when the degree of the polynomial tends to infinity. [ABSTRACT FROM AUTHOR]
- Published
- 1976
- Full Text
- View/download PDF
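As a companion to entry 21, the sketch below implements golden-section search, the classical function-values-only line search whose ratio the interpolation orders approach; the test function and interval are illustrative.

```python
import math

# Golden-section search: a derivative-free line search using function values
# only. Illustrative objective: cos(x) on [2, 4], minimized at pi.
invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi = 0.618..., the golden section ratio

def golden_section(fun, a, b, tol=1e-10):
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = fun(c), fun(d)
    while b - a > tol:
        if fc < fd:                     # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = fun(c)
        else:                           # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = fun(d)
    return 0.5 * (a + b)

xmin = golden_section(math.cos, 2.0, 4.0)
```

Reusing one of the two interior evaluations per step is what makes the 0.618 interval-reduction ratio possible with a single new function value per iteration.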
22. A Note on the Cyclic Coordinate Ascent Method.
- Author
-
Zadeh, Norman
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,MATHEMATICAL analysis ,CONCAVE functions ,STOCHASTIC convergence ,MAXIMAL functions ,MAXIMA & minima ,OPERATIONS research ,INDUSTRIAL efficiency - Abstract
The cyclic coordinate ascent method is a frequently used algorithm in optimization problems. It requires no derivatives and indicates in one iteration if a given point is optimal. It is proved that the cyclic coordinate ascent method will converge for pseudo concave functions, as well as for strictly concave functions as was previously known. [ABSTRACT FROM AUTHOR]
- Published
- 1970
- Full Text
- View/download PDF
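A minimal sketch of the cyclic coordinate ascent method from entry 22, on an illustrative strictly concave quadratic whose one-dimensional maximizers are available in closed form:

```python
# Cyclic coordinate ascent on the strictly concave quadratic
# f(x, y) = -(2*x**2 + 2*y**2 + 2*x*y - 4*x - 6*y). Each sweep maximizes over
# one coordinate with the other held fixed; the instance is illustrative.

def f(x, y):
    return -(2 * x ** 2 + 2 * y ** 2 + 2 * x * y - 4 * x - 6 * y)

x, y = 0.0, 0.0
for _ in range(60):
    x = 1.0 - y / 2.0   # argmax over x: solves df/dx = -(4x + 2y - 4) = 0
    y = 1.5 - x / 2.0   # argmax over y: solves df/dy = -(4y + 2x - 6) = 0
```

The sweeps converge linearly (contraction factor 1/4 per cycle here) to the maximizer (1/3, 4/3).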
Discovery Service for Jio Institute Digital Library