112 results
Search Results
2. DECOMPOSITION METHODS FOR COMPUTING DIRECTIONAL STATIONARY SOLUTIONS OF A CLASS OF NONSMOOTH NONCONVEX OPTIMIZATION PROBLEMS.
- Author
-
JONG-SHI PANG and MIN TAO
- Subjects
- *
DECOMPOSITION method , *NONSMOOTH optimization , *ALGORITHMS , *STOCHASTIC convergence , *LIPSCHITZ spaces , *GROUP theory - Abstract
Motivated by block partitioned problems arising from group sparsity representation and generalized noncooperative potential games, this paper presents a basic decomposition method for a broad class of multiblock nonsmooth optimization problems subject to coupled linear constraints on the variables that may additionally be individually constrained. The objective of such an optimization problem is given by the sum of two nonseparable functions minus a sum of separable, pointwise maxima of finitely many convex differentiable functions. One of the former two nonseparable functions is of the class LC1, i.e., differentiable with a Lipschitz gradient, while the other summand is multiconvex. The subtraction of the separable, pointwise maxima of convex functions induces a partial difference-of-convex (DC) structure in the overall objective; yet with all three terms together, the objective is nonsmooth and non-DC, but is blockwise directionally differentiable. By taking advantage of the (negative) pointwise maximum structure in the objective, the developed algorithm and its convergence result are aimed at the computation of a blockwise directional stationary solution, which arguably is the sharpest kind of stationary solution for this class of nonsmooth problems. This aim is accomplished by combining the alternating direction method of multipliers (ADMM) with a semilinearized Gauss–Seidel scheme, resulting in a decomposition of the overall problem into subproblems each involving the individual blocks. To arrive at a stationary solution of the desired kind, our algorithm solves multiple convex subprograms at each iteration, one per convex function in each pointwise maximum. In order to lessen the potential computational burden in each iteration, a probabilistic version of the algorithm is presented and its almost sure convergence is established. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
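The ADMM splitting at the heart of the abstract above can be illustrated on a much simpler two-block problem. The sketch below applies a generic two-block ADMM to a small lasso instance; the problem data, penalty parameter `rho`, and iteration count are illustrative choices, not the paper's multiblock scheme.

```python
import numpy as np

# Generic two-block ADMM sketch for min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.
# All data below are synthetic, for illustration only.
rng = np.random.default_rng(3)
A = rng.standard_normal((15, 30))
b = rng.standard_normal(15)
lam, rho = 0.1, 1.0
AtA_rho = A.T @ A + rho * np.eye(30)   # x-update system matrix
Atb = A.T @ b

x = np.zeros(30); z = np.zeros(30); u = np.zeros(30)
for _ in range(500):
    # x-update: minimize the smooth term plus the quadratic penalty
    x = np.linalg.solve(AtA_rho, Atb + rho * (z - u))
    # z-update: proximal step on lam*||.||_1, i.e., soft-thresholding
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
    # dual update on the scaled multiplier
    u = u + x - z
```

At convergence the primal residual x - z vanishes, so either block can be read off as the solution.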
3. NONSMOOTH VARIANTS OF POWELL'S BFGS CONVERGENCE THEOREM.
- Author
-
JIAYI GUO and LEWIS, A. S.
- Subjects
- *
NONSMOOTH optimization , *STOCHASTIC convergence , *EUCLIDEAN geometry , *QUASI-Newton methods , *ALGORITHMS - Abstract
The popular BFGS quasi-Newton minimization algorithm under reasonable conditions converges globally on smooth convex functions. This result was proved by Powell in a landmark 1976 paper: we consider its implications for functions that are not smooth. In particular, an analogous convergence result holds for functions (like the Euclidean norm) whose minimizers are isolated nonsmooth points. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
4. TOWARD PARALLEL COARSE GRID CORRECTION FOR THE PARAREAL ALGORITHM.
- Author
-
SHU-LIN WU
- Subjects
- *
ALGORITHMS , *STOCHASTIC convergence - Abstract
In this paper, we present an idea toward parallel coarse grid correction (CGC) for the parareal algorithm. It is well known that such a CGC procedure is often the bottleneck of speedup of the parareal algorithm. For an ODE system with initial-value condition u(0) = u0, the idea can be explained as follows. First, we apply the G-propagator to the same ODE system but with a special condition u(0) = αu(T), where α ∈ ℝ is a key parameter. Second, in each iteration of the parareal algorithm the CGC procedure will be carried out by the so-called diagonalization technique established recently. The parameter α controls both the roundoff error arising from such a diagonalization technique and the convergence rate of the resulting parareal algorithm. We show that there exists some threshold α* such that the parareal algorithm with diagonalization-based CGC possesses the same convergence rate as that of the parareal algorithm with classical CGC if |α| ≤ α*. With |α| = α*, we show that the condition number associated with the diagonalization technique is a moderate quantity of order O(1) (and therefore the roundoff error is small) and is independent of the length of the time interval. Numerical results are given to support our findings. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
5. GLOBALLY CONVERGENT JACOBI-TYPE ALGORITHMS FOR SIMULTANEOUS ORTHOGONAL SYMMETRIC TENSOR DIAGONALIZATION.
- Author
-
JIANZE LI, USEVICH, KONSTANTIN, and COMON, PIERRE
- Subjects
- *
STOCHASTIC convergence , *JACOBI operators , *SYMMETRY (Physics) , *ORTHOGONAL systems , *ALGORITHMS - Abstract
In this paper, we consider a family of Jacobi-type algorithms for a simultaneous orthogonal diagonalization problem of symmetric tensors. For the Jacobi-based algorithm of [M. Ishteva, P.-A. Absil, and P. Van Dooren, SIAM J. Matrix Anal. Appl., 34 (2013), pp. 651–672], we prove its global convergence for simultaneous orthogonal diagonalization of symmetric matrices and 3rd-order tensors. We also propose a new Jacobi-based algorithm in the general setting and prove its global convergence for sufficiently smooth functions. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
6. CONVERGENCE AND CYCLING IN WALKER-TYPE SADDLE SEARCH ALGORITHMS.
- Author
-
LEVITT, ANTOINE and ORTNER, CHRISTOPH
- Subjects
- *
ALGORITHMS , *MAXIMA & minima , *MATHEMATICAL optimization , *STOCHASTIC convergence , *METHODOLOGY - Abstract
Algorithms for computing local minima of smooth objective functions enjoy a mature theory as well as robust and efficient implementations. By comparison, the theory and practice of saddle search is destitute. In this paper, we present results for idealized versions of the dimer and gentlest ascent (GAD) saddle search algorithms that showcase the limitations of what is theoretically achievable within the current class of saddle search algorithms: (1) we present an improved estimate on the region of attraction of saddles, (2) we give explicit examples of potential energy wells from which GAD-type dynamics are unable to escape, and (3) we present a local analysis of "singular points" around which the dynamics gets trapped and prove the existence of quasi-periodic solutions. These results indicate that it is impossible to obtain globally convergent variants of dimer- and GAD-type algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
7. CONVOLUTION KERNELS AND STABILITY OF THRESHOLD DYNAMICS METHODS.
- Author
-
ESEDOḠLU, SELIM and JACOBS, MATT
- Subjects
- *
STOCHASTIC convergence , *KERNEL (Mathematics) , *ALGORITHMS , *MACHINE learning , *NUMERICAL analysis - Abstract
Threshold dynamics and its extensions have proven useful in computing interfacial motions in applications as diverse as materials science and machine learning. Certain desirable properties of the algorithm, such as unconditional monotonicity in two-phase flows and gradient stability more generally, hinge on positivity properties of the convolution kernel and its Fourier transform. Recent developments in the analysis of this class of algorithms indicate that sometimes, as in the case of certain anisotropic curvature flows arising in materials science, these properties of the convolution kernel cannot be expected. Other applications, such as machine learning, would benefit from as great a level of flexibility in choosing the convolution kernel as possible. In this paper, we establish certain desirable properties of threshold dynamics, such as gamma convergence of its associated energy, for a substantially wider class of kernels than has been hitherto possible. We also present variants of the algorithm that extend some of these properties to even wider classes of convolution kernels. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
8. ANALYTIC REGULARITY AND GPC APPROXIMATION FOR CONTROL PROBLEMS CONSTRAINED BY LINEAR PARAMETRIC ELLIPTIC AND PARABOLIC PDEs.
- Author
-
KUNOTH, ANGELA and SCHWAB, CHRISTOPH
- Subjects
- *
OPTIMAL control theory , *PARABOLIC differential equations , *POLYNOMIAL chaos , *STOCHASTIC convergence , *ALGORITHMS , *MONTE Carlo method , *GALERKIN methods , *WAVELETS (Mathematics) - Abstract
This paper deals with linear-quadratic optimal control problems constrained by a parametric or stochastic elliptic or parabolic partial differential equation (PDE). We address the (difficult) case that the state equation depends on a countable number of parameters, i.e., on σj with j ∈ N, and that the PDE operator may depend nonaffinely on the parameters. We consider tracking-type functionals and distributed as well as boundary controls. Building on recent results in [A. Cohen, R. DeVore, and Ch. Schwab, Found. Comput. Math., 10 (2010), pp. 615-646; Anal. Appl., 9 (2011), pp. 1-37], we show that the state and the control are analytic as functions depending on these parameters σj. We establish sparsity of generalized polynomial chaos (gpc) expansions of both state and control in terms of the stochastic coordinate sequence σ = (σj)j≥1 of the random inputs, and we prove convergence rates of best N-term truncations of these expansions. Such truncations are the key for subsequent computations since they do not assume that the stochastic input data has a finite expansion. In a follow-up paper [A. Kunoth and Ch. Schwab, Sparse adaptive tensor Galerkin approximations of stochastic PDE-constrained control problems, in preparation], we explain two methods for how such best N-term truncations can be computed practically: by greedy-type algorithms as in [Ch. Schwab and C. J. Gittelson, Acta Numer., 20 (2011), pp. 291-467; C. J. Gittelson, Report 2011-12, Seminar for Applied Mathematics, ETH Zürich, Zürich, Switzerland, 2011], or by multilevel Monte Carlo methods as in [F. Y. Kuo, Ch. Schwab, and I. H. Sloan, SIAM J. Numer. Anal., 50 (2012), pp. 3351-3374]. The sparsity result allows, in conjunction with adaptive wavelet Galerkin schemes, for sparse, adaptive tensor discretizations of control problems constrained by linear elliptic and parabolic PDEs developed in [W. Dahmen and A. Kunoth, SIAM J. Control Optim., 43 (2005), pp. 1640-1675; M. D. Gunzburger and A. Kunoth, SIAM J. Control Optim., 49 (2011), pp. 1150-1170; A. Kunoth, Numer. Algorithms, 39 (2005), pp. 199-220]; see [A. Kunoth and Ch. Schwab, Sparse adaptive tensor Galerkin approximations of stochastic PDE-constrained control problems, in preparation]. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
9. A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR DUAL-DEGENERATE PARAMETRIC OPTIMIZATION PROBLEMS.
- Author
-
VYACHESLAV KUNGURTSEV and JÄSCHKE, JOHANNES
- Subjects
- *
MATHEMATICAL optimization , *NONLINEAR theories , *ALGORITHMS , *PARAMETRIC modeling , *MATHEMATICAL functions , *STOCHASTIC convergence - Abstract
Most path-following algorithms for tracing a solution path of a parametric nonlinear optimization problem are only certifiably convergent under strong regularity assumptions about the problem functions. In particular, linear independence of the constraint gradients at the solutions is typically assumed, which implies unique multipliers. In this paper we propose a procedure designed to solve problems satisfying a weaker set of conditions, allowing for nonunique (but bounded) multipliers. Each iteration along the path consists of three parts: (1) a Newton corrector step for the primal and dual variables, which is obtained by solving a linear system of equations, (2) a predictor step for the primal and dual variables, which is found as the solution of a quadratic programming problem, and (3) a jump step for the dual variables, which is found as the solution of a linear programming problem. We present a convergence proof and demonstrate the successful solution tracking of the algorithm numerically on a couple of illustrative examples. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
10. ON THE FINITE CONVERGENCE OF THE DOUGLAS-RACHFORD ALGORITHM FOR SOLVING (NOT NECESSARILY CONVEX) FEASIBILITY PROBLEMS IN EUCLIDEAN SPACES.
- Author
-
BAUSCHKE, HEINZ H. and DAO, MINH N.
- Subjects
- *
FEASIBILITY problem (Mathematical optimization) , *STOCHASTIC convergence , *ALGORITHMS , *PROBLEM solving , *EUCLIDEAN metric , *NUMERICAL analysis - Abstract
Solving feasibility problems is a central task in mathematics and the applied sciences. One particularly successful method is the Douglas–Rachford algorithm. In this paper, we provide many new conditions sufficient for finite convergence. Numerous examples illustrate our results. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
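A minimal sketch of the Douglas–Rachford iteration for a two-set feasibility problem. The two sets (a line and a closed ball in R²), the starting point, and the iteration count are illustrative choices, not examples from the paper.

```python
import numpy as np

def proj_line(x):
    # projection onto the line A = {x : x[1] = 0}
    return np.array([x[0], 0.0])

def proj_ball(x, c=np.array([0.0, 0.5]), r=1.0):
    # projection onto the closed ball B of center c and radius r
    d = x - c
    n = np.linalg.norm(d)
    return x.copy() if n <= r else c + r * d / n

def reflect(proj, x):
    # reflection through a set: R = 2P - I
    return 2.0 * proj(x) - x

x = np.array([3.0, 2.0])
for _ in range(500):
    # Douglas-Rachford operator: T = (I + R_B R_A) / 2
    x = 0.5 * (x + reflect(proj_ball, reflect(proj_line, x)))

shadow = proj_line(x)   # the shadow P_A x lies in A ∩ B in the limit
```

The governing sequence x itself need not land in either set; it is the shadow sequence P_A x that solves the feasibility problem.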
11. LINEAR CONVERGENCE OF PROXIMAL GRADIENT ALGORITHM WITH EXTRAPOLATION FOR A CLASS OF NONCONVEX NONSMOOTH MINIMIZATION PROBLEMS.
- Author
-
BO WEN, XIAOJUN CHEN, and TING KEI PONG
- Subjects
- *
STOCHASTIC convergence , *EXTRAPOLATION , *ALGORITHMS , *LIPSCHITZ spaces , *CONVEX functions , *NONCONVEX programming , *NONSMOOTH optimization - Abstract
In this paper, we study the proximal gradient algorithm with extrapolation for minimizing the sum of a Lipschitz differentiable function and a proper closed convex function. Under the error bound condition used in [Ann. Oper. Res., 46 (1993), pp. 157--178] for analyzing the convergence of the proximal gradient algorithm, we show that there exists a threshold such that if the extrapolation coefficients are chosen below this threshold, then the sequence generated converges R-linearly to a stationary point of the problem. Moreover, the corresponding sequence of objective values is also R-linearly convergent. In addition, the threshold reduces to 1 for convex problems, and as a consequence we obtain the R-linear convergence of the sequence generated by FISTA with fixed restart. Finally, we present some numerical experiments to illustrate our results. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
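With the FISTA choice of extrapolation coefficients, the proximal gradient method with extrapolation discussed in the abstract above takes a familiar form. The sketch below runs it on a small lasso instance; the data, regularization weight, and iteration count are assumptions for illustration, not the paper's experiments.

```python
import numpy as np

# min f(x) + g(x) with f(x) = 0.5*||Ax - b||^2 (Lipschitz gradient)
# and g = lam*||.||_1 (proper closed convex). Synthetic data for illustration.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad f

def prox_l1(v, t):
    # proximal map of t*||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = x_prev = np.zeros(50)
t = t_prev = 1.0
for _ in range(2000):
    beta = (t_prev - 1.0) / t              # FISTA extrapolation coefficient
    y = x + beta * (x - x_prev)            # extrapolation step
    x_prev, x = x, prox_l1(y - A.T @ (A @ y - b) / L, lam / L)
    t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
```

A stationary point is a fixed point of the (unextrapolated) proximal gradient map, which gives a simple termination check.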
12. PROXIMAL POINT ALGORITHMS ON HADAMARD MANIFOLDS: LINEAR CONVERGENCE AND FINITE TERMINATION.
- Author
-
JINHUA WANG, CHONG LI, LOPEZ, GENARO, and JEN-CHIH YAO
- Subjects
- *
STOCHASTIC convergence , *ALGORITHMS , *PARAMETERS (Statistics) , *VECTOR fields , *MATHEMATICAL singularities , *MANIFOLDS (Mathematics) - Abstract
In the present paper, we consider inexact proximal point algorithms for finding singular points of multivalued vector fields on Hadamard manifolds. The rate of convergence is shown to be linear under the mild assumption of metric subregularity. Furthermore, if the sequence of parameters associated with the iterative scheme converges to 0, then the convergence rate is superlinear. At the same time, the finite termination of the inexact proximal point algorithm is also provided under a weak sharp minima-like condition. Applications to optimization problems are provided. Some of our results are new even in Euclidean spaces, while others improve and/or extend some known results in Euclidean spaces. As a matter of fact, in the case of exact proximal point algorithm, our results improve the corresponding results in [G. C. Bento and J. X. Cruz Neto, Optim., 63 (2014), pp. 1281-1288]. Finally, several examples are provided to illustrate that our results are applicable while the corresponding results in the Hilbert space setting are not. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
13. ERGODIC CONVERGENCE OF A STOCHASTIC PROXIMAL POINT ALGORITHM.
- Author
-
BIANCHI, PASCAL
- Subjects
- *
ERGODIC theory , *STOCHASTIC convergence , *STOCHASTIC processes , *MATHEMATICAL sequences , *ALGORITHMS - Abstract
The purpose of this paper is to establish the almost sure weak ergodic convergence of a sequence of iterates (xn) given by x_{n+1} = (I + λn A(ξ_{n+1}, ·))^{-1}(xn), where (A(s, ·) : s ∈ E) is a collection of maximal monotone operators on a separable Hilbert space, (ξn) is an independent identically distributed sequence of random variables on E, and (λn) is a positive sequence in ℓ² ∖ ℓ¹. The weighted averaged sequence of iterates is shown to converge weakly to a zero (assumed to exist) of the Aumann expectation E(A(ξ1, ·)) under the assumption that the latter is maximal. We consider applications to stochastic optimization problems of the form min E(f(ξ1, x)) w.r.t. x ∈ ∩mi=1 Xi, where f is a normal convex integrand and (Xi) is a collection of closed convex sets. In this case, the iterations are closely related to a stochastic proximal algorithm recently proposed by Wang and Bertsekas [Incremental Constraint Projection-Proximal Methods for Nonsmooth Convex Optimization, Tech. report, MIT, Cambridge, MA, 2013]. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
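The stochastic proximal point iteration above has a closed-form resolvent when the integrand is a scalar quadratic. The sketch below uses f(s, x) = (x − s)²/2, so A(s, ·) = ∂f(s, ·) and the resolvent step is explicit; the sample distribution, step sizes, and horizon are illustrative assumptions.

```python
import numpy as np

# x_{n+1} = (I + λ_n A(ξ_{n+1}, ·))^{-1}(x_n) with A(s, x) = x - s,
# whose resolvent is x -> (x + λ s) / (1 + λ).
rng = np.random.default_rng(2)
x = 10.0                                  # arbitrary starting point
avg, wsum = 0.0, 0.0
for n in range(1, 50001):
    lam = n ** -0.6                       # step sizes (λ_n) in ℓ² \ ℓ¹
    xi = rng.normal(3.0, 1.0)             # i.i.d. samples ξ_n with mean 3
    x = (x + lam * xi) / (1.0 + lam)      # closed-form resolvent step
    avg += lam * x                        # weighted (ergodic) average
    wsum += lam
avg /= wsum
# avg approximates the zero of E[A(ξ, ·)], here the mean of ξ
```

It is the weighted average, not the last iterate, that the ergodic convergence result concerns.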
14. PROJECTION ONTO A POLYHEDRON THAT EXPLOITS SPARSITY.
- Author
-
HAGER, WILLIAM W. and HONGCHAO ZHANG
- Subjects
- *
NONLINEAR theories , *POLYHEDRA , *ALGORITHMS , *STOCHASTIC convergence , *MATHEMATICAL optimization - Abstract
An algorithm is developed for projecting a point onto a polyhedron. The algorithm solves a dual version of the projection problem and then uses the relationship between the primal and dual to recover the projection. The techniques in the paper exploit sparsity. Sparse reconstruction by separable approximation (SpaRSA) is used to approximately identify active constraints in the polyhedron, and the dual active set algorithm (DASA) is used to compute a high precision solution. A linear convergence result is established for SpaRSA that does not require the strong concavity of the dual to the projection problem, and an earlier R-linear convergence rate is strengthened to a Q-linear convergence property. An algorithmic framework is developed for combining SpaRSA with an asymptotically preferred algorithm such as DASA. It is shown that only the preferred algorithm is executed asymptotically. Numerical results are given using the polyhedra associated with the Netlib LP test set. A comparison is made to the interior point method contained in the general purpose open source software package IPOPT for nonlinear optimization, and to the commercial package CPLEX, which contains an implementation of the barrier method that is targeted to problems with the structure of the polyhedral projection problem. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
15. ON CONVERGENCE RATES OF LINEARIZED PROXIMAL ALGORITHMS FOR CONVEX COMPOSITE OPTIMIZATION WITH APPLICATIONS.
- Author
-
YAOHUA HU, CHONG LI, and XIAOQI YANG
- Subjects
- *
GAUSS-Newton method , *STOCHASTIC convergence , *CONVEX functions , *ALGORITHMS , *ITERATIVE methods (Mathematics) - Abstract
In the present paper, we investigate a linearized proximal algorithm (LPA) for solving a convex composite optimization problem. Each iteration of the LPA is a proximal minimization of the convex composite function with the inner function being linearized at the current iterate. The LPA has the attractive computational advantage that the solution of each subproblem is a singleton, which avoids the difficulty as in the Gauss-Newton method (GNM) of finding a solution with minimum norm among the set of minima of its subproblem, while still maintaining the same local convergence rate as that of the GNM. Under the assumptions of local weak sharp minima of order p (p ∈ [1, 2]) and a quasi-regularity condition, we establish a local superlinear convergence rate for the LPA. We also propose a globalization strategy for the LPA based on a backtracking line-search and an inexact version of the LPA. We further apply the LPA to solve a (possibly nonconvex) feasibility problem, as well as a sensor network localization problem. Our numerical results illustrate that the LPA meets the demand for an efficient and robust algorithm for the sensor network localization problem. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
16. THE DOUGLAS-RACHFORD ALGORITHM FOR TWO (NOT NECESSARILY INTERSECTING) AFFINE SUBSPACES.
- Author
-
BAUSCHKE, HEINZ H. and MOURSI, WALAA M.
- Subjects
- *
STOCHASTIC convergence , *MATHEMATICAL sequences , *ALGORITHMS , *SUBSPACES (Mathematics) , *MONOTONE operators - Abstract
The Douglas-Rachford algorithm is a classical and very successful splitting method for finding the zeros of the sums of monotone operators. When the underlying operators are normal cone operators, the algorithm solves a convex feasibility problem. In this paper, we provide a detailed study of the Douglas-Rachford iterates and the corresponding shadow sequence when the sets are affine subspaces that do not necessarily intersect. We prove strong convergence of the shadows to the nearest generalized solution. Our results extend recent work from the consistent case to the inconsistent case. Various examples are provided to illustrate the results. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
17. A SINGULAR VALUE THRESHOLDING ALGORITHM FOR MATRIX COMPLETION.
- Author
-
CAI, JIAN-FENG, CANDÈS, EMMANUEL J., and SHEN, ZUOWEI
- Subjects
- *
ALGORITHMS , *APPROXIMATION theory , *MATRICES (Mathematics) , *CONVEX functions , *STOCHASTIC convergence - Abstract
This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices {Xk,Yk}, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Yk. There are two remarkable features making this algorithm attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {Xk} is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which 1,000 × 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for ℓ1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
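The core SVT step, soft-thresholding of singular values followed by a step on the observed entries, fits in a few lines. The matrix size, rank, sampling ratio, and the parameters tau and delta below are illustrative choices, not the tuned values from the paper.

```python
import numpy as np

# SVT sketch: X_k = shrink(Y_k, tau);  Y_{k+1} = Y_k + delta * P_Omega(M - X_k).
# Synthetic rank-2 target with ~50% of entries observed.
rng = np.random.default_rng(1)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 target
mask = rng.random((n, n)) < 0.5                                # observed pattern

def shrink(Y, tau):
    # soft-threshold the singular values of Y at level tau
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

tau, delta = 5.0, 1.2
Y = np.zeros((n, n))
for _ in range(500):
    X = shrink(Y, tau)
    Y += delta * mask * (M - X)   # step on the observed-entry residual
```

After the loop, X fits the observed entries closely while staying low rank, which is the behavior the abstract describes.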
18. ADAPTIVE BARRIER UPDATE STRATEGIES FOR NONLINEAR INTERIOR METHODS.
- Author
-
NOCEDAL, JORGE, WÄCHTER, ANDREAS, and WALTZ, RICHARD A.
- Subjects
- *
ITERATIVE methods (Mathematics) , *INTERIOR-point methods , *NONLINEAR programming , *HEURISTIC programming , *MATHEMATICAL functions , *STOCHASTIC convergence , *CONSTRAINED optimization , *ALGORITHMS - Abstract
This paper considers strategies for selecting the barrier parameter at every iteration of an interior-point method for nonlinear programming. Numerical experiments suggest that heuristic adaptive choices, such as Mehrotra's probing procedure, outperform monotone strategies that hold the barrier parameter fixed until a barrier optimality test is satisfied. A new adaptive strategy is proposed based on the minimization of a quality function. The paper also proposes a globalization framework that ensures the convergence of adaptive interior methods, and examines convergence failures of the Mehrotra predictor-corrector algorithm. The barrier update strategies proposed in this paper are applicable to a wide class of interior methods and are tested in the two distinct algorithmic frameworks provided by the IPOPT and KNITRO software packages. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
19. JACOBI CORRECTION EQUATION, LINE SEARCH, AND CONJUGATE GRADIENTS IN HERMITIAN EIGENVALUE COMPUTATION I: COMPUTING AN EXTREME EIGENVALUE.
- Author
-
Ovtchinnikov, E. E.
- Subjects
- *
STOCHASTIC convergence , *ALGORITHMS , *CONJUGATE gradient methods , *ITERATIVE methods (Mathematics) , *HERMITIAN operators , *JACOBI method , *LINEAR operators , *RAYLEIGH quotient , *MATRICES (Mathematics) - Abstract
This paper is concerned with the convergence properties of iterative algorithms of conjugate gradient type applied to the computation of an extreme eigenvalue of a Hermitian operator via the optimization of the Rayleigh quotient functional. The algorithms in focus employ the line search for the extremum of the Rayleigh quotient functional in the direction that is a linear combination of the gradient and the previous search direction. An asymptotically explicit equation for the reduction in the eigenvalue error after the line search step as a function of the search direction is derived and applied to the analysis of the local convergence properties (i.e., the error reduction after an iteration) of the algorithms in question. The local (stepwise) asymptotic equivalence of various conjugate gradient algorithms and the generalized Davidson method is proved, and a new convergence estimate for conjugate gradient iterations is derived, showing the reduction in the eigenvalue error after any two consecutive iterations. The paper's analysis extensively employs remarkable properties of the operator of the so-called Jacobi orthogonal complement correction equation. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
20. NONDEGENERACY AND WEAK GLOBAL CONVERGENCE OF THE LLOYD ALGORITHM IN RD.
- Author
-
Emelianenko, Maria, Lili Ju, and Rand, Alexander
- Subjects
- *
MATHEMATICS , *GEOMETRIC quantization , *ALGORITHMS , *ALGEBRAIC geometry , *VORONOI polygons , *STOCHASTIC convergence - Abstract
The Lloyd algorithm originated in the context of optimal quantization and represents a fixed point iteration for computing an optimal quantizer. Reducing average distortion at every step, it constructs a Voronoi partition of the domain and replaces each generator with the centroid of the corresponding Voronoi cell. Optimal quantization is obtained in the case of a centroidal Voronoi tessellation (CVT), which is a special Voronoi tessellation of a domain Ω ⊂ ℝd having the property that the generators of the Voronoi diagram are also the centers of mass, with respect to a given density function ρ ⩾ 0, of the corresponding Voronoi cells. The Lloyd iteration is currently the most popular and elegant algorithm for computing CVTs and optimal quantizers, but many questions remain about its convergence, especially in d-dimensional spaces (d > 1). In this paper, we prove that any limit point of the Lloyd iteration in any dimension is nondegenerate provided that Ω is a convex and bounded set and ρ belongs to L¹(Ω) and is positive almost everywhere. This ensures that the fixed point map remains closed and hence the standard theory of descent methods guarantees weak global convergence of the Lloyd iteration to the set of nondegenerate fixed point quantizers. While previously only conjectured, the convergence properties of the Lloyd iteration are rigorously justified under such minimal regularity assumptions on the density function. The results presented in this paper go beyond existing convergence theories for CVTs and optimal quantization related algorithms and should be of interest to both the mathematical and engineering communities. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
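In one dimension with uniform density, the Lloyd iteration described above has a simple closed form: the Voronoi cells of sorted generators are intervals bounded by midpoints of neighboring generators, and each centroid is the midpoint of its interval. The generator count and starting configuration below are illustrative.

```python
import numpy as np

def lloyd_step(z):
    # One Lloyd iteration on the domain [0, 1] with uniform density.
    z = np.sort(z)
    # Voronoi cell boundaries: domain endpoints plus midpoints of neighbors
    bounds = np.concatenate(([0.0], (z[:-1] + z[1:]) / 2.0, [1.0]))
    # Uniform density: the centroid of each interval cell is its midpoint
    return (bounds[:-1] + bounds[1:]) / 2.0

z = np.array([0.05, 0.10, 0.60, 0.62])   # clustered initial generators
for _ in range(2000):
    z = lloyd_step(z)
# The limit is the uniform CVT with generators at (2i - 1)/8, i = 1..4
```

Each step reduces the quantization energy, and the iteration converges here to the evenly spaced centroidal configuration.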
21. FA-SART: A FREQUENCY-ADAPTIVE ALGORITHM IN PINHOLE SPECT TOMOGRAPHY.
- Author
-
Israel-Jost, Vincent
- Subjects
- *
ALGORITHMS , *TOMOGRAPHY , *CROSS-sectional imaging , *ITERATIVE methods (Mathematics) , *MATHEMATICAL analysis , *STOCHASTIC convergence , *NUMERICAL analysis - Abstract
As a solution to the frequently reported problem of the high frequencies reconstructed sometimes many iterations after the low frequencies in tomography, we proposed in a recent paper [V. Israel-Jost, Ph. Choquet, and A. Constantinesco, Int. J. Biomed. Imaging, 2006, paper 34043.] a general adaptation that can apply to many algorithms: the incomplete backprojection. We stressed the fact that this type of frequency adapted (FA) algorithm works only when the linear problem of tomography to be solved involves deblurring, i.e., when the point spread functions (PSFs) associated with each voxel are large. Thus, depending on the desired level of detail, a parameter is set to achieve the reconstruction within a small number of iterations. The aim of this paper is to give a mathematical analysis of both the acceleration obtained and the convergence of an FA algorithm derived from the simultaneous algebraic reconstruction technique (SART) algorithm of Andersen and Kak, the so-called FA-SART algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
22. CONVERGENCE OF AUTONOMOUS MOBILE ROBOTS WITH INACCURATE SENSORS AND MOVEMENTS.
- Author
-
Cohen, Reuven and Peleg, David
- Subjects
- *
MOBILE robots , *AUTOMATION , *DETECTORS , *ALGORITHMS , *STOCHASTIC convergence , *MATHEMATICAL functions , *FINITE groups , *ROBOTICS - Abstract
A number of recent studies concern algorithms for distributed control and coordination in systems of autonomous mobile robots. The common theoretical model adopted in these studies assumes that the positional input of the robots is obtained by perfectly accurate visual sensors, that robot movements are accurate, and that internal calculations performed by the robots on (real) coordinates are perfectly accurate as well. The current paper concentrates on the effect of weakening this rather strong set of assumptions and replacing it with the more realistic assumption that the robot sensors, movement, and internal calculations may have slight inaccuracies. Specifically, the paper concentrates on the ability of robot systems with inaccurate sensors, movements, and calculations to carry out the task of convergence. The paper presents several impossibility theorems, limiting the inaccuracy levels that still allow convergence, and prohibiting a general algorithm for gathering, namely, meeting at a point, in a finite number of steps. The main positive result is an algorithm for convergence under bounded measurement, movement, and calculation errors. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
23. ANALYSIS OF GENERALIZED PATTERN SEARCHES.
- Author
-
Audet, Charles and Dennis Jr., J. E.
- Subjects
- *
ALGORITHMS , *MATHEMATICAL optimization , *MATHEMATICAL analysis , *STOCHASTIC convergence , *MATHEMATICS - Abstract
This paper contains a new convergence analysis for the Lewis and Torczon generalized pattern search (GPS) class of methods for unconstrained and linearly constrained optimization. This analysis is motivated by a desire to understand the successful behavior of the algorithm under hypotheses that are satisfied by many practical problems. Specifically, even if the objective function is discontinuous or extended-valued, the methods find a limit point with some minimizing properties. Simple examples show that the strength of the optimality conditions at a limit point depends not only on the algorithm, but also on the directions it uses and on the smoothness of the objective at the limit point in question. The contribution of this paper is to provide a simple convergence analysis that supplies detail about the relation of optimality conditions to objective smoothness properties and to the defining directions for the algorithm, and it gives previous results as corollaries. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
24. ON THE NONLINEAR INEXACT UZAWA ALGORITHM FOR SADDLE-POINT PROBLEMS.
- Author
-
Xiao-Liang Cheng
- Subjects
- *
ALGORITHMS , *NUMERICAL analysis , *STOCHASTIC convergence , *FOUNDATIONS of arithmetic , *ALGEBRA - Abstract
This paper discusses the convergence behavior of the nonlinear inexact Uzawa algorithm for solving saddle-point problems presented in a recent paper of Bramble, Pasciak, and Vassilev (SIAM J. Numer. Anal., 34 (1997), pp. 1072–1092). It is shown that this algorithm converges under a condition weaker than that stated in their paper. Key words: saddle-point problems; nonlinear inexact Uzawa algorithm; convergence behavior [ABSTRACT FROM AUTHOR]
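For intuition, a toy version of the classical Uzawa iteration on a small saddle-point system [A B; Bᵀ 0][x; p] = [f; g]; the data are illustrative, the inner x-solve is exact here, and the nonlinear inexact variant analyzed in the paper replaces it with an approximate one:

```python
# Toy Uzawa iteration; illustrative data, not from the paper.
A_diag = [2.0, 2.0]      # A = 2I, so the x-solve below is exact
B = [1.0, 1.0]           # single-column B
f = [1.0, 1.0]
g = 0.0
alpha = 0.8              # relaxation parameter

p = 0.0
for _ in range(80):
    # x-update: solve A x = f - B p
    x = [(f[i] - B[i] * p) / A_diag[i] for i in range(2)]
    # p-update: ascent on the constraint residual B^T x - g
    p += alpha * (sum(B[i] * x[i] for i in range(2)) - g)
# converges to p = 1, x = (0, 0) for this system
```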
- Published
- 2000
- Full Text
- View/download PDF
25. NEW DOUGLAS-RACHFORD ALGORITHMIC STRUCTURES AND THEIR CONVERGENCE ANALYSES.
- Author
-
CENSOR, YAIR and MANSOUR, RAFIQ
- Subjects
- *
OPERATOR theory , *STOCHASTIC convergence , *CONVEX functions , *HILBERT space , *ALGORITHMS , *FIXED point theory - Abstract
In this paper we study new algorithmic structures with Douglas-Rachford (DR) operators to solve convex feasibility problems. We propose to embed the basic two-set-DR algorithmic operator into the string-averaging projections and into the block-iterative projection algorithmic structures, thereby creating new DR algorithmic schemes that include the recently proposed cyclic DR algorithm and the averaged DR algorithm as special cases. We further propose and investigate a new multiple-set-DR algorithmic operator. Convergence of all these algorithmic schemes is studied by using properties of strongly quasi-nonexpansive operators and firmly nonexpansive operators. [ABSTRACT FROM AUTHOR]
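The basic two-set DR operator embedded above is T(z) = z + P_B(2P_A(z) − z) − P_A(z), whose "shadow" sequence P_A(z_k) tends to a point of A ∩ B. A small sketch with two illustrative convex sets (a half-plane and a disk), chosen for this example rather than taken from the paper:

```python
import math

def proj_A(v):                 # A = {v : v[0] >= 1}
    return (max(v[0], 1.0), v[1])

def proj_B(v, r=2.0):          # B = closed disk of radius r at the origin
    n = math.hypot(v[0], v[1])
    return v if n <= r else (v[0] * r / n, v[1] * r / n)

z = (-3.0, 3.0)
for _ in range(2000):
    a = proj_A(z)
    refl = (2 * a[0] - z[0], 2 * a[1] - z[1])   # reflect through A
    b = proj_B(refl)
    z = (z[0] + b[0] - a[0], z[1] + b[1] - a[1])  # DR update

shadow = proj_A(z)             # approaches a point of A ∩ B
```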
- Published
- 2016
- Full Text
- View/download PDF
26. CONVERGENCE ANALYSIS OF ALTERNATING DIRECTION METHOD OF MULTIPLIERS FOR A FAMILY OF NONCONVEX PROBLEMS.
- Author
-
MINGYI HONG, ZHI-QUAN LUO, and RAZAVIYAYN, MEISAM
- Subjects
- *
CONVEX functions , *CONSTRAINED optimization , *STOCHASTIC convergence , *MULTIPLIERS (Mathematical analysis) , *LAGRANGIAN functions , *NONSMOOTH optimization , *ALGORITHMS - Abstract
The alternating direction method of multipliers (ADMM) is widely used to solve large-scale linearly constrained optimization problems, convex or nonconvex, in many engineering fields. However, there is a general lack of theoretical understanding of the algorithm when the objective function is nonconvex. In this paper we analyze the convergence of the ADMM for solving certain nonconvex consensus and sharing problems. We show that the classical ADMM converges to the set of stationary solutions, provided that the penalty parameter in the augmented Lagrangian is chosen to be sufficiently large. For the sharing problems, we show that the ADMM is convergent regardless of the number of variable blocks. Our analysis does not impose any assumptions on the iterates generated by the algorithm and is broadly applicable to many ADMM variants involving proximal update rules and various flexible block selection rules. [ABSTRACT FROM AUTHOR]
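As a concrete (convex) instance of the consensus setting above, a minimal scaled-form consensus ADMM sketch with quadratic local costs; the data and penalty are illustrative, and the paper's contribution is that the same scheme still reaches stationary solutions for suitable nonconvex local costs once the penalty is large enough:

```python
# Consensus ADMM: minimize sum_i 0.5*(x - a_i)^2 over a shared x,
# via local copies x_i, a global variable z, and scaled duals u_i.
a = [1.0, 4.0, 7.0]          # illustrative data; optimum is mean(a) = 4
rho = 1.0                    # penalty parameter
n = len(a)
x = [0.0] * n
u = [0.0] * n
z = 0.0
for _ in range(200):
    # local updates: argmin_x 0.5*(x - a_i)^2 + (rho/2)*(x - z + u_i)^2
    x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
    # gather, then dual ascent on the consensus gap
    z = sum(x[i] + u[i] for i in range(n)) / n
    u = [u[i] + x[i] - z for i in range(n)]
```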
- Published
- 2016
- Full Text
- View/download PDF
27. ANALYSIS AND DESIGN OF OPTIMIZATION ALGORITHMS VIA INTEGRAL QUADRATIC CONSTRAINTS.
- Author
-
LESSARD, LAURENT, RECHT, BENJAMIN, and PACKARD, ANDREW
- Subjects
- *
INTEGRAL quadratic constraints , *MATHEMATICAL optimization , *ROBUST control , *SEMIDEFINITE programming , *STOCHASTIC convergence , *CONVEX functions , *ALGORITHMS , *MATHEMATICAL inequalities - Abstract
This paper develops a new framework to analyze and design iterative optimization algorithms built on the notion of integral quadratic constraints (IQCs) from robust control theory. IQCs provide sufficient conditions for the stability of complicated interconnected systems, and these conditions can be checked by semidefinite programming. We discuss how to adapt IQC theory to study optimization algorithms, proving new inequalities about convex functions and providing a version of IQC theory adapted for use by optimization researchers. Using these inequalities, we derive numerical upper bounds on convergence rates for the Gradient method, the Heavy-ball method, Nesterov's accelerated method, and related variants by solving small, simple semidefinite programming problems. We also briefly show how these techniques can be used to search for optimization algorithms with desired performance characteristics, establishing a new methodology for algorithm design. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
28. MODULATING FUNCTIONS BASED ALGORITHM FOR THE ESTIMATION OF THE COEFFICIENTS AND DIFFERENTIATION ORDER FOR A SPACE-FRACTIONAL ADVECTION-DISPERSION EQUATION.
- Author
-
ALDOGHAITHER, ABEER, DA-YAN LIU, and LALEG-KIRATI, TAOUS-MERIEM
- Subjects
- *
MATHEMATICAL functions , *ALGORITHMS , *COEFFICIENTS (Statistics) , *STOCHASTIC convergence , *PARTIAL differential equations , *APPLIED mathematics - Abstract
In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
29. A GENERAL INERTIAL PROXIMAL POINT ALGORITHM FOR MIXED VARIATIONAL INEQUALITY PROBLEM.
- Author
-
CAIHUA CHEN, SHIQIAN MA, and JUNFENG YANG
- Subjects
- *
VARIATIONAL inequalities (Mathematics) , *PROBLEM solving , *STOCHASTIC convergence , *CONVEX programming , *ALGORITHMS - Abstract
In this paper, we first propose a general inertial proximal point algorithm (PPA) for the mixed variational inequality (VI) problem. To the best of our knowledge, without stronger assumptions, no convergence rate result is known in the literature for inertial-type PPAs. Under certain conditions, we are able to establish the global convergence and a nonasymptotic O(1/k) convergence rate result (under a certain measure) for the proposed general inertial PPA. We then show that both the linearized augmented Lagrangian method (ALM) and the linearized alternating direction method of multipliers (ADMM) for structured convex optimization are applications of a general PPA, provided that the algorithmic parameters are properly chosen. Consequently, global convergence and convergence rate results of the linearized ALM and ADMM follow directly from results existing in the literature. In particular, by applying the proposed inertial PPA for mixed VI to structured convex optimization, we obtain inertial versions of the linearized ALM and ADMM whose global convergence is guaranteed. We also demonstrate the effect of the inertial extrapolation step via experimental results on the compressive principal component pursuit problem. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
30. STABILITY OF OVER-RELAXATIONS FOR THE FORWARD-BACKWARD ALGORITHM, APPLICATION TO FISTA.
- Author
-
AUJOL, J.-F. and DOSSAL, C. H.
- Subjects
- *
STOCHASTIC convergence , *ALGORITHMS , *STABILITY theory , *RELAXATION methods (Mathematics) , *ITERATIVE methods (Mathematics) , *OPERATOR theory - Abstract
This paper is concerned with the convergence of over-relaxations of the forward-backward algorithm (FB) (in particular the fast iterative soft thresholding algorithm (FISTA)) in the case when proximal maps and/or gradients are computed with a possible error. We show that, provided these errors are small enough, the algorithm still converges to a minimizer of the functional, and with a speed of convergence (in terms of values of the functional) that remains the same as in the noise-free case. We also show that larger errors can be allowed, using a lower over-relaxation than FISTA. This still leads to the convergence of iterates and with ergodic convergence speed faster than the classical FB and FISTA. [ABSTRACT FROM AUTHOR]
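For reference, the error-free FISTA recursion that the paper perturbs (proximal maps and gradients computed inexactly) looks as follows on an illustrative separable problem, min_x 0.5·Σ dᵢ(xᵢ − bᵢ)² + λ‖x‖₁; the data are made up for this sketch:

```python
import math

def soft(v, t):
    """Soft-thresholding, the proximal map of t * ||.||_1."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in v]

d = [1.0, 2.0]              # diagonal of the quadratic; L = max(d)
b = [3.0, -0.2]
lam, L = 0.5, 2.0           # solution is (2.5, 0.0) for these values

x = [0.0, 0.0]
y, t = x[:], 1.0
for _ in range(1000):
    grad = [di * (yi - bi) for di, yi, bi in zip(d, y, b)]
    x_new = soft([yi - gi / L for yi, gi in zip(y, grad)], lam / L)
    t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0   # FISTA momentum
    y = [xn + (t - 1.0) / t_new * (xn - xo) for xn, xo in zip(x_new, x)]
    x, t = x_new, t_new
```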
- Published
- 2015
- Full Text
- View/download PDF
31. THE CYCLIC BLOCK CONDITIONAL GRADIENT METHOD FOR CONVEX OPTIMIZATION PROBLEMS.
- Author
-
BECK, AMIR, PAUWELS, EDOUARD, and SABACH, SHOHAM
- Subjects
- *
MATHEMATICAL optimization , *CONVEX domains , *PROBLEM solving , *SMOOTHNESS of functions , *ALGORITHMS , *STOCHASTIC convergence - Abstract
In this paper we study the convex problem of optimizing the sum of a smooth function and a compactly supported nonsmooth term with a specific separable form. We analyze the block version of the generalized conditional gradient method when the blocks are chosen in a cyclic order. A global sublinear rate of convergence is established for two different stepsize strategies commonly used in this class of methods. Numerical comparisons of the proposed method to both the classical conditional gradient algorithm and its random block version demonstrate the effectiveness of the cyclic block update rule. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
32. CONVERGENCE RATE OF STOCHASTIC APPROXIMATION ALGORITHMS IN THE DEGENERATE CASE.
- Author
-
Han-Fu Chen
- Subjects
- *
MATHEMATICAL functions , *STOCHASTIC approximation , *STOCHASTIC convergence , *ASYMPTOTIC expansions , *ALGORITHMS - Abstract
Let f(·) be an unknown function whose root x⁰ is sought by stochastic approximation (SA). Convergence rate and asymptotic normality are usually established for the nondegenerate case f′(x⁰) ≠ 0. This paper demonstrates the convergence rate of SA algorithms for the degenerate case f′(x⁰) = 0. In comparison with the previous work, in this paper no growth rate restriction is imposed on f(·), no statistical property is required for the measurement noise, the general step size is considered, and the result is obtained for the multidimensional case, which is not a straightforward extension of the one-dimensional result. Although the observation noise may be either deterministic or random, the analysis is purely deterministic and elementary. [ABSTRACT FROM AUTHOR]
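A tiny illustration of the degenerate/nondegenerate gap (my own toy recursion, not the paper's general algorithm): with steps a_k = 1/(k+1) and zero noise (a deterministic noise, as the setting allows), the nondegenerate iteration converges like 1/k, while the degenerate f(x) = x³ creeps in only logarithmically:

```python
x_deg, x_non = 1.0, 1.0
for k in range(1, 100001):
    a = 1.0 / (k + 1)          # Robbins-Monro step sizes
    x_deg -= a * x_deg ** 3    # f(x) = x**3: f'(0) = 0, degenerate root
    x_non -= a * x_non         # f(x) = x:    f'(0) = 1, nondegenerate
# x_non is about 1e-5 while x_deg is still roughly 0.2 after 1e5 steps
```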
- Published
- 1998
- Full Text
- View/download PDF
33. CONVERGENCE RESULTS FOR PROJECTED LINE-SEARCH METHODS ON VARIETIES OF LOW-RANK MATRICES VIA ŁOJASIEWICZ INEQUALITY.
- Author
-
SCHNEIDER, REINHOLD and USCHMAJEW, ANDRÉ
- Subjects
- *
STOCHASTIC convergence , *MATRICES (Mathematics) , *MATHEMATICAL inequalities , *METHOD of steepest descent (Numerical analysis) , *ALGORITHMS , *MATHEMATICAL optimization - Abstract
The aim of this paper is to derive convergence results for projected line-search methods on the real-algebraic variety M≤k of real m×n matrices of rank at most k. Such methods extend Riemannian optimization methods, which are successfully used on the smooth manifold Mk of rank-k matrices, to its closure by taking steps along gradient-related directions in the tangent cone, and afterwards projecting back to M≤k. Considering such a method circumvents the difficulties which arise from the nonclosedness and the unbounded curvature of Mk. The pointwise convergence is obtained for real-analytic functions on the basis of a Łojasiewicz inequality for the projection of the antigradient to the tangent cone. If the derived limit point lies on the smooth part of M≤k, i.e., in Mk, this boils down to more or less known results, but with the benefit that asymptotic convergence rate estimates (for specific step-sizes) can be obtained without an a priori curvature bound, simply from the fact that the limit lies on a smooth manifold. At the same time, one can give a convincing justification for assuming critical points to lie in Mk: if X is a critical point of f on M≤k, then either X has rank k, or ∇f(X) = 0. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
34. PROJECTED REFLECTED GRADIENT METHODS FOR MONOTONE VARIATIONAL INEQUALITIES.
- Author
-
MALITSKY, YU.
- Subjects
- *
VARIATIONAL inequalities (Mathematics) , *HILBERT space , *ALGORITHMS , *STOCHASTIC convergence , *INNER product , *NONLINEAR analysis - Abstract
This paper is concerned with some new projection methods for solving variational inequality problems with monotone and Lipschitz-continuous mapping in Hilbert space. First, we propose the projected reflected gradient algorithm with a constant stepsize. It is similar to the projected gradient method, namely, the method requires only one projection onto the feasible set and only one value of the mapping per iteration. This distinguishes our method from most other projection-type methods for variational inequalities with monotone mapping. Also we prove that it has R-linear rate of convergence under the strong monotonicity assumption. The usual drawback of algorithms with constant stepsize is the requirement to know the Lipschitz constant of the mapping. To avoid this, we modify our first algorithm so that the algorithm needs at most two projections per iteration. In fact, our computational experience shows that such cases with two projections are very rare. This scheme, at least theoretically, seems to be very effective. All methods are shown to be globally convergent to a solution of the variational inequality. Preliminary results from numerical experiments are quite promising. [ABSTRACT FROM AUTHOR]
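The constant-stepsize update described above is x_{k+1} = P_C(x_k − λF(2x_k − x_{k−1})) with λ < (√2 − 1)/L. A one-dimensional sketch on an illustrative strongly monotone VI (F(x) = x − 2 over C = [0, 1], so the solution is x* = 1; the instance is mine, not the paper's):

```python
F = lambda x: x - 2.0                      # monotone, Lipschitz with L = 1
proj = lambda x: min(max(x, 0.0), 1.0)     # projection onto C = [0, 1]

lam = 0.4                                  # < (sqrt(2) - 1) / L
x_prev, x = 0.0, 0.0
for _ in range(200):
    y = 2.0 * x - x_prev                   # reflected point
    x_prev, x = x, proj(x - lam * F(y))    # one projection, one F-value
```

Note that each iteration uses exactly one projection and one evaluation of F, the feature the abstract highlights.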
- Published
- 2015
- Full Text
- View/download PDF
35. CONSTRAINED BEST EUCLIDEAN DISTANCE EMBEDDING ON A SPHERE: A MATRIX OPTIMIZATION APPROACH.
- Author
-
SHUANGHUA BAI, HOU-DUO QI, and NAIHUA XIU
- Subjects
- *
EUCLIDEAN distance , *MATHEMATICAL optimization , *ALGORITHMS , *STOCHASTIC convergence , *NEWTON-Raphson method , *MULTIDIMENSIONAL scaling - Abstract
The problem of data representation on a sphere of unknown radius arises from various disciplines such as statistics (spatial data representation), psychology (constrained multidimensional scaling), and computer science (machine learning and pattern recognition). The best representation often needs to minimize a distance function of the data on a sphere as well as to satisfy some Euclidean distance constraints. It is those spherical and Euclidean distance constraints that present an enormous challenge to the existing algorithms. In this paper, we reformulate the problem as a Euclidean distance matrix optimization problem with a low rank constraint. We then propose an iterative algorithm that uses a quadratically convergent Newton-CG method at each step. We study fundamental issues including constraint nondegeneracy and the nonsingularity of generalized Jacobian that ensure the quadratic convergence of the Newton method. We use some classic examples from the spherical multidimensional scaling to demonstrate the flexibility of the algorithm in incorporating various constraints. We also present an interesting application to the circle fitting problem. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
36. ON ATTACK-RESILIENT DISTRIBUTED FORMATION CONTROL IN OPERATOR-VEHICLE NETWORKS.
- Author
-
MINGHUI ZHU and MARTÍNEZ, SONIA
- Subjects
- *
CYBER physical systems , *A priori , *MATRICES (Mathematics) , *STOCHASTIC convergence , *ALGORITHMS - Abstract
This paper tackles a distributed formation control problem where a group of vehicles is remotely controlled by a network of operators. Each operator-vehicle pair is attacked by an adversary, who corrupts the commands sent from the operator to the vehicle. From the point of view of operators, each adversary follows an attacking strategy linearly parameterized by some (potentially time-varying) matrix which is unknown a priori. In particular, we consider two scenarios depending upon whether adversaries can adapt their attacking tactics online. To assure mission completion in such a hostile environment, we propose two novel attack-resilient distributed control algorithms that allow operators to adjust their policies on the fly by exploiting the latest collected information about adversaries. Both algorithms enable vehicles to asymptotically achieve the desired formation from any initial configuration and initial estimate of the adversaries' strategies. It is further shown that the sequence of the distances to the desired formation is square summable for each proposed algorithm. In numerical examples, the convergence rates of our algorithms are exponential, outperforming the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
37. ON STOCHASTIC SUBGRADIENT MIRROR-DESCENT ALGORITHM WITH WEIGHTED AVERAGING.
- Author
-
NEDIĆ, ANGELIA and SOOMIN LEE
- Subjects
- *
SUBGRADIENT methods , *STOCHASTIC convergence , *AVERAGING method (Differential equations) , *ALGORITHMS , *CONVEX functions - Abstract
This paper considers the stochastic subgradient mirror-descent method for solving constrained convex minimization problems. In particular, a stochastic subgradient mirror-descent method with weighted iterate-averaging is investigated and its per-iterate convergence rate is analyzed. The novel part of the approach is in the choice of weights that are used to construct the averages. Through the use of these weighted averages, we show that the known optimal rates can be obtained with simpler algorithms than those currently existing in the literature. Specifically, by suitably choosing the stepsize values, one can obtain the rate of the order 1/k for strongly convex functions, and the rate 1/√k for general convex functions (not necessarily differentiable). Furthermore, for the latter case, it is shown that a stochastic subgradient mirror-descent with iterate averaging converges (along a subsequence) to an optimal solution, almost surely, even with the stepsize of the form 1/√(1+k), which was not previously known. The stepsize choices that achieve the best rates are those proposed by Tseng for acceleration of proximal gradient methods [P. Tseng, SIAM J. Optim., submitted]. [ABSTRACT FROM AUTHOR]
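In the Euclidean case, mirror descent reduces to the stochastic subgradient method, and the weighted-averaging idea can be sketched as follows on an illustrative strongly convex instance (weights proportional to k, step sizes 1/k; the objective and noise model are made up for this sketch):

```python
import random

random.seed(0)
x = 0.0                   # minimize f(x) = 0.5*(x - 3)^2, 1-strongly convex
num = den = 0.0
for k in range(1, 20001):
    g = (x - 3.0) + random.gauss(0.0, 1.0)   # noisy subgradient
    x -= g / k                                # step size a_k = 1/k
    num += k * x                              # weighted average with w_k = k
    den += k
x_bar = num / den         # weighted-averaged iterate, close to 3
```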
- Published
- 2014
- Full Text
- View/download PDF
38. CONVERGENCE ANALYSIS OF MARKOV CHAIN MONTE CARLO LINEAR SOLVERS USING ULAM VON NEUMANN ALGORITHM.
- Author
-
HAO JI, MASCAGNI, MICHAEL, and YAOHANG LI
- Subjects
- *
STOCHASTIC convergence , *MARKOV chain Monte Carlo , *VON Neumann algebras , *LINEAR systems , *ALGORITHMS - Abstract
The convergence of Markov chain-based Monte Carlo linear solvers using the Ulam–von Neumann algorithm for a linear system of the form x = Hx + b is investigated in this paper. We analyze the convergence of the Monte Carlo solver based on the original Ulam–von Neumann algorithm under the conditions that ||H|| < 1 as well as ρ(H) < 1, where ρ(H) is the spectral radius of H. We find that although the Monte Carlo solver is based on sampling the Neumann series, the convergence of the Neumann series is not a sufficient condition for the convergence of the Monte Carlo solver. Actually, properties of H are not the only factors determining the convergence of the Monte Carlo solver; the underlying transition probability matrix plays an important role. An improper selection of the transition matrix may result in divergence even though the condition ||H|| < 1 holds. However, if the condition ||H|| < 1 is satisfied, we show that there always exist certain transition matrices that guarantee convergence of the Monte Carlo solver. On the other hand, if ρ(H) < 1 but ||H|| > 1, the Monte Carlo linear solver may or may not converge. In particular, if the row sum ... > 1 for every row in H or, more generally, ρ(H⁺) > 1, where H⁺ is the nonnegative matrix with (H⁺)ij = |Hij|, we show that transition matrices leading to convergence of the Monte Carlo solver do not exist. Finally, given H and a transition matrix P, denoting the matrix H* via (H*)ij = (Hij)²/Pij, we find that ρ(H*) < 1 is a necessary and sufficient condition for convergence of the Markov chain-based Monte Carlo linear solvers using the Ulam–von Neumann algorithm. [ABSTRACT FROM AUTHOR]
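A sketch of the estimator on a toy 2×2 system with ||H|| < 1 (data chosen so the exact solution is x = (2, 2)); the kill probability and uniform transition matrix are illustrative choices for which ρ(H*) < 1 holds, so the walk weights stay well behaved:

```python
import random

random.seed(0)
H = [[0.3, 0.2], [0.1, 0.4]]
b = [1.0, 1.0]                 # exact solution of x = Hx + b is (2, 2)
kill = 0.2                     # termination probability per step
p_move = (1.0 - kill) * 0.5    # prob of each of the 2 moves from a state

def estimate(i, walks=40000):
    """Ulam-von Neumann collision estimator of the i-th solution entry."""
    total = 0.0
    for _ in range(walks):
        s, w, score = i, 1.0, b[i]
        while random.random() >= kill:       # continue the walk
            t = random.randrange(2)
            w *= H[s][t] / p_move            # importance weight
            score += w * b[t]
            s = t
        total += score
    return total / walks

x0 = estimate(0)                # Monte Carlo estimate of x_0 = 2
```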
- Published
- 2013
- Full Text
- View/download PDF
39. MULTISCALE ASYMPTOTIC ANALYSIS AND COMPUTATION OF OPTIMAL CONTROL FOR ELLIPTIC SYSTEMS WITH CONSTRAINTS.
- Author
-
JIANJUN LIU, LIQUN CAO, and NINGNING YAN
- Subjects
- *
ALGORITHMS , *ELLIPTIC differential equations , *ASYMPTOTIC expansions , *NUMERICAL analysis , *STOCHASTIC convergence , *OPTIMAL control theory - Abstract
This paper reports the multiscale analysis and numerical algorithms for a class of distributed elliptic optimal control problems with constraints. Multiscale asymptotic expansions are presented, and an explicit rate of convergence is derived. A multiscale numerical method is developed. Numerical experiments are carried out to validate the predicted convergence results. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
40. CONVERGENT LEARNING ALGORITHMS FOR UNKNOWN REWARD GAMES.
- Author
-
CHAPMAN, ARCHIE C., LESLIE, DAVID S., ROGERS, ALEX, and JENNINGS, NICHOLAS R.
- Subjects
- *
NASH equilibrium , *MACHINE learning , *ALGORITHMS , *MATHEMATICAL optimization , *REINFORCEMENT learning , *STOCHASTIC convergence - Abstract
In this paper, we address the problem of convergence to Nash equilibria in games with rewards that are initially unknown and must be estimated over time from noisy observations. These games arise in many real-world applications, whenever rewards for actions cannot be prespecified and must be learned online, but standard results in game theory do not consider such settings. For this problem, we derive a multiagent version of Q-learning to estimate the reward functions using novel forms of the ε-greedy learning policy. Using these Q-learning schemes to estimate reward functions, we then provide conditions guaranteeing the convergence of adaptive play and the better-reply processes to Nash equilibria in potential games and games with more general forms of acyclicity, and of regret matching to the set of correlated equilibria in generic games. A secondary result is that we prove the strong ergodicity of stochastic adaptive play and stochastic better-reply processes in the case of vanishing perturbations. Finally, we illustrate the efficacy of the algorithms in a set of randomly generated three-player coordination games and show the practical necessity of our results by demonstrating that violations of the derived learning parameter conditions can cause the algorithms to fail to converge. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
41. BISECTION SEARCH WITH NOISY RESPONSES.
- Author
-
WAEBER, ROLF, FRAZIER, PETER I., and HENDERSON, SHANE G.
- Subjects
- *
BISECTORS (Geometry) , *ALGORITHMS , *PROBABILITY theory , *STOCHASTIC convergence , *SEQUENTIAL analysis , *BAYESIAN analysis - Abstract
Bisection search is the most efficient algorithm for locating a unique point X* ∈ [0, 1] when we are able to query an oracle only about whether X* lies to the left or right of a point x of our choosing. We study a noisy version of this classic problem, where the oracle's response is correct only with probability p. The probabilistic bisection algorithm (PBA) introduced by Horstein [IEEE Trans. Inform. Theory, 9 (1963), pp. 136-143] can be used to locate X* in this setting. While the method works extremely well in practice, very little is known about its theoretical properties. In this paper, we provide several key findings about the PBA, which lead to the main conclusion that the expected absolute residuals of successive search results, i.e., E[|X* - Xn|], converge to 0 at a geometric rate. [ABSTRACT FROM AUTHOR]
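The PBA update itself is short: keep a posterior over a grid on [0, 1], query its median, and multiply the mass on the side the noisy oracle indicates by p and the other side by 1 − p. A sketch with illustrative values X* = 0.7 and p = 0.8 (the grid discretization is a convenience of this sketch, not of the PBA itself):

```python
import random

random.seed(1)
N, p, x_star = 1000, 0.8, 0.7
grid = [(i + 0.5) / N for i in range(N)]
post = [1.0 / N] * N                     # uniform prior on the grid

def post_median(post, grid):
    acc = 0.0
    for g, mass in zip(grid, post):
        acc += mass
        if acc >= 0.5:
            return g

for _ in range(300):
    m = post_median(post, grid)
    truth = x_star > m                   # true direction
    answer = truth if random.random() < p else (not truth)   # noisy oracle
    post = [mass * (p if (g > m) == answer else 1.0 - p)
            for g, mass in zip(grid, post)]
    z = sum(post)
    post = [mass / z for mass in post]   # renormalize the posterior

est = post_median(post, grid)            # posterior median, close to x_star
```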
- Published
- 2013
- Full Text
- View/download PDF
42. REGULARIZED PERIMETER FOR TOPOLOGY OPTIMIZATION.
- Author
-
AMSTUTZ, SAMUEL
- Subjects
- *
PERIMETERS (Geometry) , *TOPOLOGY , *MATHEMATICAL optimization , *STOCHASTIC convergence , *ALGORITHMS - Abstract
The perimeter functional is known to pose serious difficulties when it has to be handled within a topology optimization procedure. In this paper, a regularized perimeter functional Perε, defined for two- and three-dimensional domains, is introduced. On one hand, the convergence of Perε to the exact perimeter when ε tends to zero is proved. On the other hand, the topological differentiability of Perε for ε > 0 is analyzed. These features lead to the design of a topology optimization algorithm suitable for perimeter-dependent objective functionals. Several numerical results illustrate the method. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
43. ACCELERATING OPTIMIZATION OF PARAMETRIC LINEAR SYSTEMS BY MODEL ORDER REDUCTION.
- Author
-
YAO YUE and MEERBERGEN, KARL
- Subjects
- *
MATHEMATICAL optimization , *LINEAR systems , *ALGORITHMS , *STOCHASTIC convergence , *CIVIL engineering - Abstract
Design optimization problems are often formulated as an optimization problem whose objective is a function of the output of a large-scale parametric linear system, obtained from the discretization of a PDE. To reduce the high computational cost of the objective and its gradient, model order reduction techniques can be used. This paper uses interpolatory reduced models as surrogate models in an optimization procedure. We replace the standard first-order condition by the relaxed first-order condition, which is more suitable when algebraic reduced models are used as surrogate models. The relaxed first-order condition imposes that the approximation quality of the surrogate model at the interpolation point can be measured and refined and that the surrogate model is equipped with an error bound on the entire parameter space. We propose two optimization algorithms: one uses the error bound to define a trust region and the other penalizes the objective with the error bound. We prove convergence of both methods under mild conditions in a unified framework. These methods are efficient when surrogate models are cheap to build and evaluate, since they only need these operations to achieve convergence. Numerical experiments from civil engineering show good performance of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
44. MINIMIZATION PRINCIPLES FOR THE LINEAR RESPONSE EIGENVALUE PROBLEM II: COMPUTATION.
- Author
-
ZHAOJUN BAI and REN-CANG LI
- Subjects
- *
EIGENVALUES , *STOCHASTIC convergence , *EIGENVECTORS , *APPROXIMATION theory , *ALGORITHMS - Abstract
In Part I of this paper we presented minimization principles and related theoretical results for the linear response eigenvalue problem. Here we develop best approximations for the few smallest eigenvalues with the positive sign via a structure-preserving subspace projection. Then we present four-dimensional subspace search conjugate gradient-like algorithms for simultaneously computing these eigenvalues and their associated eigenvectors. Finally, we present numerical examples to illustrate convergence behaviors of the proposed methods with and without preconditioning. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
45. ANISOTROPIC "GOAL-ORIENTED" MESH ADAPTIVITY FOR ELLIPTIC PROBLEMS.
- Author
-
CARPIO, JAIME, PRIETO, JUAN LUIS, and BERMEJO, RODOLFO
- Subjects
- *
ALGORITHMS , *FINITE element method , *ADVECTION-diffusion equations , *ANISOTROPY , *ERROR analysis in mathematics , *INTERPOLATION , *STOCHASTIC convergence - Abstract
We propose in this paper an anisotropic, adaptive, finite element algorithm for steady, linear advection-diffusion-reaction problems with strong anisotropic features. The error analysis is based on the dual weighted residual methodology, allowing us to perform "goal-oriented" adaptation of a certain functional J(u) of the solution and derive an "optimal" metric tensor for local mesh adaptation with linear and quadratic finite elements. As a novelty, and to evaluate the weights of the error estimator on unstructured meshes composed of anisotropic triangles, we make use of a patchwise, higher-order interpolation recovery readily extendable to finite elements of arbitrary order. We carry out a number of numerical experiments in two dimensions so as to prove the capabilities of the goal-oriented adaptive method. We compute the convergence rate and the effectivity index for a series of output functionals of the solution. The results show the good performance of the algorithm with linear as well as quadratic finite elements. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
46. A RIDGE AND CORNER PRESERVING MODEL FOR SURFACE RESTORATION.
- Author
-
RONGJIE LAI, XUE-CHENG TAI, and CHAN, TONY F.
- Subjects
- *
GEOMETRIC surfaces , *DERIVATIVES (Mathematics) , *ALGORITHMS , *LAGRANGE multiplier , *STOCHASTIC convergence - Abstract
One challenge in surface restoration is to design a surface diffusion that preserves ridges and sharp corners. In this paper, we propose a new surface restoration model based on the observation that surfaces' implicit representations are continuous functions whose first order derivatives have discontinuities at ridges and sharp corners. Regularized by the vectorial total variation of the derivatives of surfaces' implicit representation functions, the proposed model has ridge and corner preserving properties validated by numerical experiments. To solve the proposed fourth order and convex problem efficiently, we further design a numerical algorithm based on the augmented Lagrangian method. Moreover, a theoretical convergence analysis of the proposed algorithm is also provided. To demonstrate the efficiency and robustness of the proposed method, we show restoration results on several different surfaces and also conduct comparisons with the mean curvature flow method and the nonlocal mean method. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
47. ERROR BOUNDS FOR THE LANCZOS METHODS FOR APPROXIMATING MATRIX EXPONENTIALS.
- Author
-
QIANG YE
- Subjects
- *
MATRIX exponential , *KRYLOV subspace , *LANCZOS method , *ALGORITHMS , *ITERATIVE methods (Mathematics) , *STOCHASTIC convergence - Abstract
In this paper, we present new error bounds for the Lanczos method and the shift-and-invert Lanczos method for computing exp(−τA)v for a large sparse symmetric positive semidefinite matrix A. Compared with the existing error analysis for these methods, our bounds relate the convergence to the condition numbers of the matrix that generates the Krylov subspace. In particular, we show that the Lanczos method will converge rapidly if the matrix A is well-conditioned, regardless of what the norm of τA is. Numerical examples are given to demonstrate the theoretical bounds. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
48. OPTIMAL QUANTUM CONTROL BY AN ADAPTED COORDINATE ASCENT ALGORITHM.
- Author
-
DEGANI, ILAN and ZANNA, ANTONELLA
- Subjects
- *
OPTIMAL control theory , *ALGORITHMS , *STOCHASTIC convergence , *NONLINEAR programming , *NANOSTRUCTURED materials - Abstract
This paper explores an adaption of the coordinate ascent approach to quantum control problems. It was motivated by the observation that several of the existing monotone-converging schemes for quantum control may be viewed as approximations of the well-known coordinate ascent method. Our implementation employs line searches in coordinate directions in control space using only evaluations of the performance functional, without invoking its derivatives. It is based on recasting the performance functional as a local tracking function which gives the "future" quality of a control at each moment in time. Back propagation of a basis of the target space enables these performance functional (i.e., tracking function) evaluations to be done efficiently. The performance functional may include, or not include, regularization terms as needed. Convergence of the resulting algorithm and its relation to previous schemes are discussed, and numerical examples are provided. In our tests, the coordinate ascent algorithm exhibited rapid convergence to high yield controls. Moreover, it is readily adapted to different restrictions on the controls imposed by the physics of the system under study, e.g., controls with spectrum restrictions in laser control problems, and discrete controls on a specified time grid in electronically controlled nanodevices, or in NMR. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
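The core idea the abstract describes — line searches along coordinate directions using only objective evaluations, no derivatives — can be sketched generically. This toy `coordinate_ascent` is an assumed illustration of that idea on an ordinary finite-dimensional function, not the authors' quantum-control algorithm with tracking functions and back propagation; the bracket width `span` is likewise an assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def coordinate_ascent(f, x0, n_sweeps=20, span=1.0):
    """Maximize f by cycling through coordinates, doing a derivative-free
    1-D line search (function evaluations only) in each coordinate
    direction. Generic sketch of the coordinate ascent idea."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_sweeps):
        for i in range(x.size):
            def neg_along(t, i=i):
                y = x.copy()
                y[i] = t
                return -f(y)                      # minimize the negative
            res = minimize_scalar(neg_along,
                                  bounds=(x[i] - span, x[i] + span),
                                  method="bounded")
            if -res.fun >= f(x):                  # accept only ascent steps
                x[i] = res.x
    return x
```

The accept-only-ascent guard mirrors the monotone-convergence property the abstract emphasizes: each coordinate update can only improve (or keep) the objective value.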
49. A REGULARIZED EXPLICIT EXCHANGE METHOD FOR SEMI-INFINITE PROGRAMS WITH AN INFINITE NUMBER OF CONIC CONSTRAINTS.
- Author
-
OKUNO, TAKAYUKI, HAYASHI, SHUNSUKE, and FUKUSHIMA, MASAO
- Subjects
- *
MATHEMATICAL programming , *APPROXIMATION theory , *CHEMICAL engineering approximation methods , *ALGORITHMS , *STOCHASTIC convergence - Abstract
The semi-infinite program (SIP) is normally represented with infinitely many inequality constraints and has been studied extensively. However, there have been very few studies on the SIP involving conic constraints, even though it has important applications such as Chebyshev-like approximation, filter design, and so on. In this paper, we focus on the SIP with infinitely many conic constraints, called an SICP for short. We show that under the Robinson constraint qualification a local optimum of the SICP satisfies the KKT conditions, which can be represented with only a finite subset of the conic constraints. We also introduce two exchange-type algorithms for solving the convex SICP. We first provide an explicit exchange method and show that it has global convergence under the strict convexity assumption on the objective function. We then propose an algorithm combining a regularization method with the explicit exchange method and establish global convergence of the hybrid algorithm without the strict convexity assumption. We report some numerical results to examine the effectiveness of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
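The explicit exchange idea — solve the problem over a finite working set of constraint points, find the most violated constraint in the infinite family, add it, and repeat — can be illustrated on the classic Chebyshev-like approximation SIP the abstract mentions: min-max linear approximation of a function on [0, 1]. The helper below is a hypothetical sketch using ordinary inequality constraints and `scipy.optimize.linprog`, not the paper's conic setting or its regularized variant.

```python
import numpy as np
from scipy.optimize import linprog

def exchange_chebyshev(f, ts_init, grid, tol=1e-7, max_iter=30):
    """Explicit exchange method for min_{a,b,z} z subject to
    |f(t) - (a + b*t)| <= z for all t in [0, 1] (discretized on `grid`).
    Keeps a growing finite working set T of constraint points."""
    T = list(ts_init)
    a = b = z = None
    for _ in range(max_iter):
        # LP over the current working set; variables x = (a, b, z).
        A_ub, b_ub = [], []
        for t in T:
            A_ub.append([-1.0, -t, -1.0]); b_ub.append(-f(t))  # f(t)-(a+bt) <= z
            A_ub.append([1.0, t, -1.0]);   b_ub.append(f(t))   # (a+bt)-f(t) <= z
        res = linprog(c=[0.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None), (None, None), (0.0, None)])
        a, b, z = res.x
        # Exchange step: add the most violated point from the fine grid.
        viol = np.abs(f(grid) - (a + b * grid)) - z
        if viol.max() <= tol:
            break
        T.append(grid[int(np.argmax(viol))])
    return a, b, z
```

For f(t) = e^t the best linear min-max approximation on [0, 1] is known in closed form: slope e - 1 and max error (1 - a) ≈ 0.10595, reached by equioscillation at 0, ln(e - 1), and 1 — a convenient check that the working set stabilizes at the right points.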
50. STABILITY OF DISCRETE-TIME SWITCHED HOMOGENEOUS SYSTEMS ON CONES AND CONEWISE HOMOGENEOUS INCLUSIONS.
- Author
-
Jinglai Shen and Jianghai Hu
- Subjects
- *
STABILITY (Mechanics) , *RADIUS (Geometry) , *ALGORITHMS , *STOCHASTIC convergence , *APPROXIMATION theory , *LYAPUNOV stability - Abstract
This paper presents a stability analysis of switched homogeneous systems on cones under arbitrary and optimal switching rules with extensions to conewise homogeneous or linear inclusions. Several interrelated approaches, such as the joint spectral radius approach and the generating function approach, are exploited to derive necessary and sufficient stability conditions and to develop suitable algorithms for stability tests. Specifically, the generalized joint spectral radius and the generalized joint lower spectral radius are introduced to characterize the radii of domains of strong and weak attraction. Furthermore, strong and weak generating functions and their radii of convergence are employed to derive stability conditions; their analytic properties, numerical approximations, and convergence analysis are established. Extensions to conewise homogeneous or linear inclusions are made to address state-dependent switching dynamics. Relations between different stability notions in the strong or weak sense are studied; Lyapunov techniques are used for stability analysis of the conewise linear inclusions. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
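The generalized joint spectral radius quantities in this abstract extend a classical object: for a finite matrix set, the joint spectral radius governs the worst-case growth of arbitrary switching products. A crude brute-force bracketing of the classical quantity is sketched below — exhaustive enumeration over short products, usable only for tiny sets, and not the paper's generating-function machinery or its cone-restricted generalizations.

```python
import itertools
import numpy as np

def jsr_bounds(mats, k_max=6):
    """Bracket the joint spectral radius of a finite set of matrices.
    For each product length k: the max spectral radius^(1/k) over all
    length-k products is a lower bound, and the max operator norm^(1/k)
    is an upper bound. Exhaustive sketch, exponential in k."""
    lo, hi = 0.0, np.inf
    for k in range(1, k_max + 1):
        best_rho, best_norm = 0.0, 0.0
        for combo in itertools.product(mats, repeat=k):
            P = np.linalg.multi_dot(combo) if k > 1 else combo[0]
            best_rho = max(best_rho, np.abs(np.linalg.eigvals(P)).max())
            best_norm = max(best_norm, np.linalg.norm(P, 2))
        lo = max(lo, best_rho ** (1.0 / k))
        hi = min(hi, best_norm ** (1.0 / k))
    return lo, hi
```

When the two bounds meet below 1, every switching sequence is asymptotically stable; the gap between them is what the more sophisticated approaches surveyed in the paper work to close.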