33 results
Search Results
2. A nonmonotone scaled conjugate gradient algorithm for large-scale unconstrained optimization.
- Author
Ou, Yigui and Zhou, Xin
- Subjects
CONJUGATE gradient methods, ALGORITHMS, MATHEMATICAL optimization, STOCHASTIC convergence, SMOOTHNESS of functions
- Abstract
This paper proposes a nonmonotone scaled conjugate gradient algorithm for solving large-scale unconstrained optimization problems, which combines the idea of the scaled memoryless Broyden-Fletcher-Goldfarb-Shanno preconditioned conjugate gradient method with the nonmonotone technique. An attractive property of the proposed method is that the search direction always provides a sufficient descent step at each iteration; this property is independent of the line search used. Under appropriate assumptions, the method is proven to possess global convergence for nonconvex smooth functions, and R-linear convergence for strongly convex functions. Preliminary numerical results and related comparisons show the efficiency of the proposed method in practical computation. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
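The nonmonotone technique mentioned in the abstract above (accepting a step if the objective is sufficiently below the maximum of the last few function values, in the style of Grippo, Lampariello, and Lucidi) can be sketched as follows. This is an illustrative steepest-descent version, not the authors' scaled conjugate gradient method; all names and parameter values here are our own choices.

```python
import numpy as np

def nonmonotone_gradient_descent(f, grad, x0, M=5, c=1e-4, max_iter=2000, tol=1e-8):
    """Gradient descent with a nonmonotone Armijo line search: a step is accepted
    when f(x + a*d) <= max of the last M function values + c * a * g'd."""
    x = x0.copy()
    hist = [f(x)]
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                          # steepest-descent direction
        a = 1.0
        fmax = max(hist[-M:])           # nonmonotone reference: max over recent values
        while f(x + a * d) > fmax + c * a * g.dot(d):
            a *= 0.5                    # backtracking
        x = x + a * d
        hist.append(f(x))
    return x

# Strongly convex quadratic with minimizer x* = A^{-1} b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x.dot(A @ x) - b.dot(x)
grad = lambda x: A @ x - b
x_star = nonmonotone_gradient_descent(f, grad, np.array([5.0, -5.0]))
```

Because the reference value is a running maximum rather than the previous function value, occasional increases of f are tolerated, which is what permits the larger steps the nonmonotone literature advertises.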
3. Minibatch stochastic subgradient-based projection algorithms for feasibility problems with convex inequalities.
- Author
Necoara, Ion and Nedić, Angelia
- Subjects
CONVEX sets, ALGORITHMS, PARALLEL programming, STOCHASTIC convergence
- Abstract
In this paper we consider convex feasibility problems where the feasible set is given as the intersection of a collection of closed convex sets. We assume that each set is specified algebraically as a convex inequality, where the associated convex function is general (possibly non-differentiable). For finding a point satisfying all the convex inequalities we design and analyze random projection algorithms using special subgradient iterations and extrapolated stepsizes. Moreover, the iterate updates are performed based on parallel random observations of several constraint components. For these minibatch stochastic subgradient-based projection methods we prove sublinear convergence results and, under some linear regularity condition for the functional constraints, we prove linear convergence rates. We also derive sufficient conditions under which these rates depend explicitly on the minibatch size. To the best of our knowledge, this work is the first to derive conditions showing theoretically when minibatch stochastic subgradient-based projection updates have a better complexity than their single-sample variants when parallel computing is used to implement the minibatch. Numerical results also show a better performance of our minibatch scheme over its non-minibatch counterpart. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
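The subgradient-projection step at the core of methods like the one above can be illustrated on affine constraints, where the Polyak step x ← x − max(fᵢ(x), 0)/‖gᵢ‖² · gᵢ reduces to an exact projection onto a halfspace. This is a minimal single-sample, cyclic sketch (no minibatching and no extrapolated stepsizes, which are the paper's contributions); the names are our own.

```python
import numpy as np

def feasibility_projection(Amat, bvec, x0, sweeps=200):
    """Find a point in the intersection of halfspaces {x : a_i'x <= b_i} by cyclic
    subgradient projections. For the affine constraint f_i(x) = a_i'x - b_i the
    subgradient is a_i, and the Polyak step is the exact halfspace projection."""
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for a, b in zip(Amat, bvec):
            viol = a.dot(x) - b          # constraint violation f_i(x)
            if viol > 0:
                x -= (viol / a.dot(a)) * a   # project onto {x : a'x <= b}
    return x

# Simplex-like feasible set: x + y <= 1, x >= 0, y >= 0
Amat = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
bvec = np.array([1.0, 0.0, 0.0])
x_feas = feasibility_projection(Amat, bvec, np.array([3.0, 3.0]))
```

For general (non-affine) convex fᵢ the same step with any subgradient gᵢ only moves toward the set rather than landing on it, which is why the paper's analysis is in terms of convergence rates rather than finite termination.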
4. Convergence results of a matrix splitting algorithm for solving weakly nonlinear complementarity problems.
- Author
Luo, Mei-Ju, Wang, Ya-Yi, and Liu, Hong-Ling
- Subjects
STOCHASTIC convergence, MATRICES (Mathematics), ALGORITHMS, LINEAR complementarity problem, NONLINEAR theories
- Abstract
In this paper, we consider a class of weakly nonlinear complementarity problems (WNCP) with large sparse matrices. We present an accelerated modulus-based matrix splitting algorithm by reformulating the WNCP as implicit fixed-point equations based on two splittings of the system matrix. We show that, if the system matrix is a P-matrix, then under some mild conditions the sequence generated by the algorithm converges to the solution of the WNCP. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
5. Submission to the DTA2012 Special Issue: Convergence of Time Discretization Schemes for Continuous-Time Dynamic Network Loading Models.
- Author
Ma, Rui, Ban, Xuegang, Pang, Jong-Shi, and Liu, Henry
- Subjects
TRAFFIC assignment, STOCHASTIC convergence, DISCRETIZATION methods, CONTINUOUS time systems, DYNAMIC models, ALGORITHMS
- Abstract
Dynamic Network Loading (DNL) is an essential component of Dynamic Traffic Assignment (DTA) and Dynamic User Equilibrium (DUE). Most DNL models are formulated in continuous time but solved in discrete time to obtain numerical solutions. This paper discusses the importance of choosing proper discretization schemes to numerically solve continuous-time DNL models and further to obtain convergence and other desirable properties of the discretization schemes. We use the recently developed α point-queue model as an example. We first develop theoretical results to prove the consistency, stability and convergence of the implicit and explicit discretization schemes for solving the α point-queue model. We then conduct numerical experiments to show such results accordingly. We also discuss the implications of the implicit and explicit discretization schemes to the developments of DNL and DTA/DUE solution algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
6. Instrumental variables for nonlinearity recovering in block-oriented systems driven by correlated signals.
- Author
Mzyk, Grzegorz
- Subjects
HAMMERSTEIN equations, NONLINEAR systems, LINEAR dynamical systems, LEAST squares, ALGORITHMS, STOCHASTIC convergence
- Abstract
The goal of the paper is to identify the Hammerstein-type systems excited and disturbed by correlated random processes. The problem is semi-parametric in the sense that the nonlinear static characteristic is recovered without prior knowledge about the linear dynamic block, i.e. when its order is unknown. The method is based on the instrumental variables technique, with the instruments generated by input inverse filtering. It is proved that, in contrast to the least-squares-based approach, the proposed algorithm leads to an asymptotically unbiased, strongly consistent estimate. Constructive procedures of instrumental variables generation are given for some popular cases. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
7. TOWARD PARALLEL COARSE GRID CORRECTION FOR THE PARAREAL ALGORITHM.
- Author
SHU-LIN WU
- Subjects
ALGORITHMS, STOCHASTIC convergence
- Abstract
In this paper, we present an idea toward parallel coarse grid correction (CGC) for the parareal algorithm. It is well known that such a CGC procedure is often the bottleneck of speedup of the parareal algorithm. For an ODE system with initial-value condition u(0) = u₀, the idea can be explained as follows. First, we apply the G-propagator to the same ODE system but with a special condition u(0) = αu(T), where α ∈ ℝ is a crux parameter. Second, in each iteration of the parareal algorithm the CGC procedure will be carried out by the so-called diagonalization technique established recently. The parameter α controls both the roundoff error arising from such a diagonalization technique and the convergence rate of the resulting parareal algorithm. We show that there exists some threshold α* such that the parareal algorithm with diagonalization-based CGC possesses the same convergence rate as that of the parareal algorithm with classical CGC if |α| ≤ α*. With |α| = α*, we show that the condition number associated with the diagonalization technique is a moderate quantity of order O(1) (and therefore the roundoff error is small) and is independent of the length of the time interval. Numerical results are given to support our findings. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
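For reference, the classical parareal iteration with a sequential coarse grid correction (the baseline whose CGC bottleneck the paper above addresses; the diagonalization-based parallel CGC itself is not reproduced here) can be sketched on the scalar test equation u′ = λu. The propagator choices and parameters below are our own illustrative ones.

```python
import numpy as np

def parareal(lam, u0, T, N, n_fine, iters):
    """Classical parareal for u' = lam*u on [0, T] with N coarse time slices.
    G: one forward-Euler step per slice; F: n_fine forward-Euler substeps."""
    dT = T / N
    G = lambda u: u * (1 + lam * dT)                # cheap coarse propagator
    def F(u):                                       # accurate fine propagator
        dt = dT / n_fine
        for _ in range(n_fine):
            u = u * (1 + lam * dt)
        return u
    # initial sequential coarse sweep
    U = np.zeros(N + 1)
    U[0] = u0
    for n in range(N):
        U[n + 1] = G(U[n])
    # parareal corrections: U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k)
    for _ in range(iters):
        Fk = np.array([F(U[n]) for n in range(N)])  # parallelizable fine solves
        Gk = np.array([G(U[n]) for n in range(N)])
        for n in range(N):                          # sequential coarse grid correction
            U[n + 1] = G(U[n]) + Fk[n] - Gk[n]
    return U

# After N iterations parareal reproduces the sequential fine solution exactly.
U = parareal(lam=-1.0, u0=1.0, T=1.0, N=8, n_fine=50, iters=8)
```

The inner loop over n is the sequential CGC sweep: each slice must wait for its predecessor, which is exactly the speedup bottleneck the abstract refers to.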
8. Stability and convergence analysis for different harmonic control algorithm implementations.
- Author
Zazas, Ilias and Daley, Steve
- Subjects
ENGINEERING systems, VIBRATION (Mechanics), MOTORS, STOCHASTIC convergence, ALGORITHMS
- Abstract
In many engineering systems there is a common requirement to isolate the supporting foundation from low frequency periodic machinery vibration sources. In such cases the vibration is mainly transmitted at the fundamental excitation frequency and its multiple harmonics. It is well known that passive approaches have poor performance at low frequencies and for this reason a number of active control technologies have been developed. For discrete-frequency disturbance rejection, Harmonic Control (HC) techniques provide excellent performance. In the general case of variable speed engines or motors, the disturbance frequency changes with time, following the rotational speed of the engine or motor. For such applications, an important requirement for the control system is to converge to the optimal solution as rapidly as possible for all variations without altering the system's stability. For a variety of applications this may be difficult to achieve, especially when the disturbance frequency is close to a resonance peak and a small value of convergence gain is usually preferred to ensure closed-loop stability. This can lead to poor vibration isolation performance and long convergence times. In this paper, the performance of two recently developed HC algorithms is compared (in terms of both closed-loop stability and speed of convergence) in a vibration control application and for the case when the disturbance frequency is close to a resonant frequency. In earlier work it has been shown that both frequency domain HC algorithms can be represented by Linear Time Invariant (LTI) feedback compensators each designed to operate at the disturbance frequency. As a result, the convergence and stability analysis can be performed using the LTI representations with any suitable method from the LTI framework.
For the example mentioned above, the speed of convergence provided by each algorithm is compared by determining the locations of the dominant closed-loop poles and stability analysis is performed using the open-loop frequency responses and the Nyquist criterion. The theoretical findings are validated through simulations and experimental analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
9. MSO: a framework for bound-constrained black-box global optimization algorithms.
- Author
Al-Dujaili, Abdullah, Suresh, S., and Sundararajan, N.
- Subjects
GLOBAL optimization, ALGORITHMS, MULTISCALE modeling, STOCHASTIC convergence, MATHEMATICAL optimization
- Abstract
This paper addresses a class of algorithms for solving bound-constrained black-box global optimization problems. These algorithms partition the objective function domain over multiple scales in search for the global optimum. For such algorithms, we provide a generic procedure and refer to it as multi-scale optimization (MSO). Furthermore, we propose a theoretical methodology to study the convergence of MSO algorithms based on three basic assumptions: (a) local Hölder continuity of the objective function f, (b) partitions boundedness, and (c) partitions sphericity. Moreover, the worst-case finite-time performance and convergence rates of several leading MSO algorithms, namely Lipschitzian optimization methods, multi-level coordinate search, dividing rectangles, and optimistic optimization methods, are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
10. The modulus-based matrix splitting algorithms for a class of weakly nonlinear complementarity problems.
- Author
Huang, Na and Ma, Changfeng
- Subjects
MATRICES (Mathematics), ALGORITHMS, SET theory, NONLINEAR theories, LINEAR complementarity problem, BOUNDARY value problems, FIXED point theory, STOCHASTIC convergence
- Abstract
In this paper, we study a class of weakly nonlinear complementarity problems arising from the discretization of free boundary problems. By reformulating the complementarity problems as implicit fixed-point equations based on splitting of the system matrices, we propose a class of modulus-based matrix splitting algorithms. We show their convergence by assuming that the system matrix is positive definite. Moreover, we give several kinds of typical practical choices of the modulus-based matrix splitting iteration methods based on the different splitting of the system matrix. Numerical experiments on two model problems are presented to illustrate the theoretical results and examine the numerical effectiveness of our modulus-based matrix splitting algorithms. Copyright © 2016 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
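The modulus-based fixed-point reformulation described in the abstract above can be illustrated in its simplest linear form, without any matrix splitting (Ω = I and γ = 1 in the usual notation of this literature); the weakly nonlinear case adds a nonlinear term to q. The example data are our own.

```python
import numpy as np

def modulus_lcp(M, q, iters=200):
    """Basic modulus iteration for LCP(q, M): find z >= 0 with w = M z + q >= 0
    and z'w = 0. Substituting z = |x| + x and w = |x| - x (so z, w >= 0 and
    z'w = 0 hold automatically) turns the LCP into the fixed-point equation
    (I + M) x = (I - M)|x| - q, which is iterated here."""
    n = len(q)
    IpM, ImM = np.eye(n) + M, np.eye(n) - M
    x = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(IpM, ImM @ np.abs(x) - q)
    return np.abs(x) + x    # recover z from the modulus variable x

M = np.array([[4.0, -1.0], [-1.0, 4.0]])   # symmetric positive definite, hence a P-matrix
q = np.array([-1.0, 2.0])
z = modulus_lcp(M, q)      # solution: z = [0.25, 0], with w = M z + q = [0, 1.75]
```

For this M the iteration matrix (I + M)⁻¹(I − M) has spectral norm 2/3, so the scheme contracts; the papers above obtain convergence under much weaker splitting conditions.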
11. Convergence analysis of two-node CMFD method for two-group neutron diffusion eigenvalue problem.
- Author
Jeong, Yongjin, Park, Jinsu, Lee, Hyun Chul, and Lee, Deokjung
- Subjects
STOCHASTIC convergence, FINITE difference method, CORRECTION factors, ALGORITHMS, NEUTRON diffusion, EIGENVALUES
- Abstract
In this paper, the nonlinear coarse-mesh finite difference method with two-node local problem (CMFD2N) is proven to be unconditionally stable for neutron diffusion eigenvalue problems. The explicit current correction factor (CCF) is derived based on the two-node analytic nodal method (ANM2N), and a Fourier stability analysis is applied to the linearized algorithm. It is shown that the analytic convergence rate obtained by the Fourier analysis compares very well with the numerically measured convergence rate. It is also shown that the theoretical convergence rate is only governed by the converged second harmonic buckling and the mesh size. It is also noted that the convergence rate of the CCF of the CMFD2N algorithm is dependent on the mesh size, but not on the total problem size. This is contrary to expectation for an eigenvalue problem. The novel points of this paper are the analytical derivation of the convergence rate of the CMFD2N algorithm for the eigenvalue problem, and the convergence analysis based on the analytic derivations. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
12. Split hierarchical variational inequality problems and fixed point problems for nonexpansive mappings.
- Author
Ansari, Qamrul, Rehan, Aisha, and Wen, Ching-Feng
- Subjects
VARIATIONAL inequalities (Mathematics), NONEXPANSIVE mappings, STOCHASTIC convergence, NONLINEAR operators, ALGORITHMS
- Abstract
The present paper deals with a common-solution method for finding a fixed point of a nonexpansive mapping and a solution of a split hierarchical Minty variational inequality problem. We discuss the weak convergence of the sequences generated by the proposed method to a common solution of a fixed point problem and a split hierarchical Minty variational inequality problem. An example is presented to illustrate the proposed algorithm and result. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
13. An iterative method for split hierarchical monotone variational inclusions.
- Author
Ansari, Qamrul and Rehan, Aisha
- Subjects
ITERATIVE methods (Mathematics), MATHEMATICAL inequalities, STOCHASTIC convergence, ALGORITHMS, MATHEMATICAL sequences
- Abstract
In this paper, we introduce a split hierarchical monotone variational inclusion problem (SHMVIP) which includes split variational inequality problems, split monotone variational inclusion problems, split hierarchical variational inequality problems, etc., as special cases. An iterative algorithm is proposed to compute the approximate solutions of an SHMVIP. The weak convergence of the sequence generated by the proposed algorithm is studied. We present an example to illustrate our algorithm and convergence result. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
14. CONVERGENCE RESULTS FOR PROJECTED LINE-SEARCH METHODS ON VARIETIES OF LOW-RANK MATRICES VIA ŁOJASIEWICZ INEQUALITY.
- Author
SCHNEIDER, REINHOLD and USCHMAJEW, ANDRÉ
- Subjects
STOCHASTIC convergence, MATRICES (Mathematics), MATHEMATICAL inequalities, METHOD of steepest descent (Numerical analysis), ALGORITHMS, MATHEMATICAL optimization
- Abstract
The aim of this paper is to derive convergence results for projected line-search methods on the real-algebraic variety M_{≤k} of real m×n matrices of rank at most k. Such methods extend Riemannian optimization methods, which are successfully used on the smooth manifold M_k of rank-k matrices, to its closure by taking steps along gradient-related directions in the tangent cone, and afterwards projecting back to M_{≤k}. Considering such a method circumvents the difficulties which arise from the nonclosedness and the unbounded curvature of M_k. The pointwise convergence is obtained for real-analytic functions on the basis of a Łojasiewicz inequality for the projection of the antigradient to the tangent cone. If the derived limit point lies on the smooth part of M_{≤k}, i.e., in M_k, this boils down to more or less known results, but with the benefit that asymptotic convergence rate estimates (for specific step-sizes) can be obtained without an a priori curvature bound, simply from the fact that the limit lies on a smooth manifold. At the same time, one can give a convincing justification for assuming critical points to lie in M_k: if X is a critical point of f on M_{≤k}, then either X has rank k, or ∇f(X) = 0. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
15. Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization.
- Author
Patrascu, Andrei and Necoara, Ion
- Subjects
STOCHASTIC convergence, GLOBAL optimization, MATHEMATICAL optimization, ALGORITHMS, NUMERICAL analysis, MATHEMATICAL analysis
- Abstract
In this paper we analyze several new methods for solving nonconvex optimization problems with the objective function consisting of a sum of two terms: one is nonconvex and smooth, and the other is convex but simple and its structure is known. Further, we consider both cases: unconstrained and linearly constrained nonconvex problems. For optimization problems of the above structure, we propose random coordinate descent algorithms and analyze their convergence properties. For the general case, when the objective function is nonconvex and composite, we prove asymptotic convergence of the sequences generated by our algorithms to stationary points and a sublinear rate of convergence in expectation for some optimality measure. Additionally, if the objective function satisfies an error bound condition, we derive a local linear rate of convergence for the expected values of the objective function. We also present extensive numerical experiments evaluating the performance of our algorithms in comparison with state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
16. Global optimization in Hilbert space.
- Author
Houska, Boris and Chachuat, Benoît
- Subjects
ALGORITHMS, HILBERT space, MATHEMATICAL optimization, FINITE element method, STOCHASTIC convergence
- Abstract
We propose a complete-search algorithm for solving a class of non-convex, possibly infinite-dimensional, optimization problems to global optimality. We assume that the optimization variables are in a bounded subset of a Hilbert space, and we determine worst-case run-time bounds for the algorithm under certain regularity conditions of the cost functional and the constraint set. Because these run-time bounds are independent of the number of optimization variables and, in particular, are valid for optimization problems with infinitely many optimization variables, we prove that the algorithm converges to an ε-suboptimal global solution within finite run-time for any given termination tolerance ε>0. Finally, we illustrate these results for a problem of calculus of variations. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
17. An efficient numerical algorithm for the fractional Drinfeld–Sokolov–Wilson equation.
- Author
Singh, Jagdev, Kumar, Devendra, Baleanu, Dumitru, and Rathore, Sushila
- Subjects
DRINFELD modular varieties, ALGORITHMS, HOMOTOPY equivalences, STOCHASTIC convergence, POLYNOMIALS
- Abstract
The fundamental purpose of the present paper is to apply an effective numerical algorithm based on the mixture of the homotopy analysis technique, the Sumudu transform approach and homotopy polynomials to obtain the approximate solution of a nonlinear fractional Drinfeld–Sokolov–Wilson equation. The nonlinear Drinfeld–Sokolov–Wilson equation naturally occurs in dispersive water waves. The uniqueness and convergence analysis are shown for the suggested technique. The convergence of the solution is controlled by the auxiliary parameter ℏ. The numerical results are shown graphically. Results obtained by the application of the technique disclose that the suggested scheme is very accurate, flexible, effective and simple to use. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
18. DECOMPOSITION METHODS FOR COMPUTING DIRECTIONAL STATIONARY SOLUTIONS OF A CLASS OF NONSMOOTH NONCONVEX OPTIMIZATION PROBLEMS.
- Author
JONG-SHI PANG and MIN TAO
- Subjects
DECOMPOSITION method, NONSMOOTH optimization, ALGORITHMS, STOCHASTIC convergence, LIPSCHITZ spaces, GROUP theory
- Abstract
Motivated by block partitioned problems arising from group sparsity representation and generalized noncooperative potential games, this paper presents a basic decomposition method for a broad class of multiblock nonsmooth optimization problems subject to coupled linear constraints on the variables that may additionally be individually constrained. The objective of such an optimization problem is given by the sum of two nonseparable functions minus a sum of separable, pointwise maxima of finitely many convex differentiable functions. One of the former two nonseparable functions is of the class LC^1, i.e., differentiable with a Lipschitz gradient, while the other summand is multiconvex. The subtraction of the separable, pointwise maxima of convex functions induces a partial difference-of-convex (DC) structure in the overall objective; yet with all three terms together, the objective is nonsmooth and non-DC, but is blockwise directionally differentiable. By taking advantage of the (negative) pointwise maximum structure in the objective, the developed algorithm and its convergence result are aimed at the computation of a blockwise directional stationary solution, which arguably is the sharpest kind of stationary solutions for this class of nonsmooth problems. This aim is accomplished by combining the alternating direction method of multipliers (ADMM) with a semilinearized Gauss–Seidel scheme, resulting in a decomposition of the overall problem into subproblems each involving the individual blocks. To arrive at a stationary solution of the desired kind, our algorithm solves multiple convex subprograms at each iteration, one per convex function in each pointwise maximum. In order to lessen the potential computational burden in each iteration, a probabilistic version of the algorithm is presented and its almost sure convergence is established. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
19. A MEMORY GRADIENT METHOD BASED ON THE NONMONOTONE TECHNIQUE.
- Author
YIGUI OU and Yuanwen Liu
- Subjects
MATHEMATICAL optimization, STOCHASTIC convergence, ALGORITHMS, PROBLEM solving, NUMERICAL analysis
- Abstract
In this paper, we present a new nonmonotone memory gradient algorithm for unconstrained optimization problems. An attractive property of the proposed method is that the search direction always provides a sufficient descent step at each iteration; this property is independent of the line search used. Under mild assumptions, the global and local convergence results of the proposed algorithm are established, respectively. Numerical results are also reported to show that the proposed method is suitable for solving large-scale optimization problems and is more stable than other similar methods in practical computation. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
20. SABRINA: A Stochastic Subspace Majorization-Minimization Algorithm.
- Author
Chouzenoux, Emilie and Fest, Jean-Baptiste
- Subjects
ALGORITHMS, IMAGE processing, STOCHASTIC processes, MACHINE learning, IMAGE reconstruction, STOCHASTIC convergence, DIFFERENTIABLE functions
- Abstract
A wide class of problems involves the minimization of a coercive and differentiable function F on ℝ^N whose gradient cannot be evaluated in an exact manner. In such context, many existing convergence results from standard gradient-based optimization literature cannot be directly applied and robustness to errors in the gradient is not necessarily guaranteed. This work is dedicated to investigating the convergence of Majorization-Minimization (MM) schemes when stochastic errors affect the gradient terms. We introduce a general stochastic optimization framework, called StochAstic suBspace majoRIzation-miNimization Algorithm (SABRINA), that encompasses MM quadratic schemes possibly enhanced with a subspace acceleration strategy. New asymptotic results are established for the stochastic process generated by SABRINA. Two sets of numerical experiments in the field of machine learning and image processing are presented to support our theoretical results and illustrate the good performance of SABRINA with respect to state-of-the-art gradient-based stochastic optimization methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
21. Convergence analysis for recursive Hammerstein identification.
- Author
Mattsson, Per and Wigren, Torbjörn
- Subjects
STOCHASTIC convergence, HAMMERSTEIN equations, RECURSIVE sequences (Mathematics), DIFFERENTIAL equations, ALGORITHMS
- Abstract
This paper derives a recursive prediction error identification method based on the Hammerstein model structure. The convergence properties of the algorithm are analysed by application of Ljung’s associated differential equation method. It is proved that the algorithm can only converge to stable stationary points of the associated ordinary differential equation. General conditions for local convergence to the true parameter vector are given, and the cases with piecewise linear and polynomial nonlinearities are treated in detail. The derived identification method is illustrated in a numerical study that treats identification of a subsystem of a cement mill. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
22. A class of triangular splitting methods for saddle point problems.
- Author
Zheng, Qing-Qing and Ma, Chang-Feng
- Subjects
SET theory, TRIANGULARIZATION (Mathematics), SADDLEPOINT approximations, ITERATIVE methods (Mathematics), ALGORITHMS, MATHEMATICAL singularities, EIGENVALUES, STOCHASTIC convergence
- Abstract
In this paper, we study a class of efficient iterative algorithms for the large sparse nonsingular saddle point problems based on the upper and lower triangular (ULT) splitting of the coefficient matrix. We call these algorithms ULT methods. First, the ULT algorithm is established and the characteristic of eigenvalues of the iteration matrix of these new methods is analyzed. Then we give the sufficient and necessary conditions for the convergence of these ULT methods. Moreover, the optimal iteration parameters and the corresponding convergence factors for some special cases of the ULT methods are presented. Numerical experiments on a few model problems are presented to support the theoretical results and examine the numerical effectiveness of these new methods. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
23. A proof of uniform convergence over time for a distributed particle filter.
- Author
Míguez, Joaquín and Vázquez, Manuel A.
- Subjects
MATHEMATICAL proofs, STOCHASTIC convergence, SIGNAL processing, MONTE Carlo method, ALGORITHMS
- Abstract
Distributed signal processing algorithms have become a hot topic during the past years. One class of algorithms that has received special attention is particle filters (PFs). However, most distributed PFs involve various heuristic or simplifying approximations and, as a consequence, classical convergence theorems for standard PFs do not hold for their distributed counterparts. In this paper, we analyze a distributed PF based on the non-proportional weight-allocation scheme of Bolic et al. (2005) and prove rigorously that, under certain stability assumptions, its asymptotic convergence is guaranteed uniformly over time, in such a way that approximation errors can be kept bounded with a fixed computational budget. To illustrate the theoretical findings, we carry out computer simulations for a target tracking problem. The numerical results show that the distributed PF has a negligible performance loss (compared to a centralized filter) for this problem and enable us to empirically validate the key assumptions of the analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
24. A class of differential quadratic programming problems.
- Author
Wang, Xing, Tao, Chang-qi, and Tang, Guo-ji
- Subjects
DIFFERENTIAL quadrature method, COMPUTER programming, SET theory, EXISTENCE theorems, STOCHASTIC convergence, ALGORITHMS
- Abstract
A class of differential quadratic programming problems in finite dimensional spaces is introduced in this paper. First, the existence of solutions for the differential quadratic programming problem is established under some suitable assumptions. Second, an algorithm for solving the differential quadratic programming problem is given and the convergence analysis for the algorithm is shown. Finally, some numerical experiments are reported to verify the validity of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
25. Optimizing cluster structures with inner product induced norm based dissimilarity measures: Theoretical development and convergence analysis.
- Author
Saha, Arkajyoti and Das, Swagatam
- Subjects
K-means clustering, INNER product, ALGORITHMS, EUCLIDEAN distance, STOCHASTIC convergence
- Abstract
Dissimilarity measures play a key role in exploring the inherent cluster structure of the data for any partitional clustering algorithm. Commonly used dissimilarity functions for clustering purposes are so far confined to the Euclidean, exponential and Mahalanobis distances. In this article we develop generalized algorithms to solve the partitional clustering problems formulated with a general class of Inner Product Induced Norm (IPIN) based dissimilarity measures. We provide an in-depth mathematical analysis of the underlying optimization framework and analytically address the issue of existence of a solution and its uniqueness. In absence of a closed form solution, we develop a fast stochastic gradient descent algorithm and the Minimization by Incremental Surrogate Optimization (MISO) algorithm (in case of constrained optimization) with exponential convergence rate to obtain the solution. We carry out a convergence analysis of the fuzzy and k-means clustering algorithms with the IPIN based dissimilarity measures and also establish how these algorithms guarantee convergence to a stationary point. In addition, we investigate the nature of the stationary point. Novelty of the paper lies in the introduction of a generalized class of divergence measures, development of fuzzy and k-means clustering algorithms with the general class of divergence measures and a thorough convergence analysis of the developed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
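The idea of clustering under an inner product induced norm d(x, c) = (x − c)ᵀW(x − c), with W symmetric positive definite, can be sketched with a plain Lloyd-style k-means loop. This is an illustrative version with our own names, data, and a fixed initialization, not the authors' stochastic gradient or MISO algorithms. Note that for any SPD W the minimizer of the within-cluster W-distortion is still the arithmetic mean, which is why the centroid update below is unchanged from ordinary k-means.

```python
import numpy as np

def kmeans_ipin(X, centers, W, iters=20):
    """Lloyd-style k-means under the IPIN dissimilarity d(x, c) = (x-c)' W (x-c).
    Assignment uses the W-distance; the centroid update is the ordinary mean."""
    for _ in range(iters):
        diffs = X[:, None, :] - centers[None, :, :]       # shape (n, k, d)
        d2 = np.einsum('nkd,de,nke->nk', diffs, W, diffs) # squared W-distances
        labels = d2.argmin(axis=1)                        # nearest center in W-norm
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
              [10.0, 10.0], [10.0, 11.0], [11.0, 10.0]])  # two well-separated clusters
W = np.array([[2.0, 0.5], [0.5, 1.0]])                    # SPD weighting matrix
labels, centers = kmeans_ipin(X, X[[0, 3]].copy(), W)
```

Choosing W = I recovers standard k-means, and W = Σ⁻¹ for a covariance Σ recovers Mahalanobis-distance clustering, two special cases of the general IPIN family the paper analyzes.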
26. Compressive hard thresholding pursuit algorithm for sparse signal recovery.
- Author
Geng, Liping, Zhou, Jinchuan, Sun, Zhongfeng, and Tang, Jingyong
- Subjects
RESTRICTED isometry property, STOCHASTIC convergence, ALGORITHMS, OSCILLATIONS, NUMERICAL analysis
- Abstract
Hard Thresholding Pursuit (HTP) is one of the important and efficient algorithms for reconstructing sparse signals. Unfortunately, the hard thresholding operator is independent of the objective function and hence leads to numerical oscillation in the course of iterations. To alleviate this drawback, the hard thresholding operator should be applied to a compressible vector. Motivated by this idea, we propose a new algorithm called Compressive Hard Thresholding Pursuit (CHTP) by introducing a compressive step first to the standard HTP. Convergence analysis and stability of CHTP are established in terms of the restricted isometry property of a sensing matrix. Numerical experiments show that CHTP is competitive with other mainstream algorithms such as the HTP, Orthogonal Matching Pursuit (OMP) and Subspace Pursuit (SP) algorithms both in the sparse signal reconstruction ability and average recovery runtime. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. CONVERGENCE ANALYSIS OF THE GLOBAL FOM AND GMRES METHODS FOR SOLVING MATRIX EQUATIONS AXB = C WITH SPD COEFFICIENTS.
- Author
-
MOHSENI MOGHADAM, M., RIVAZ, A., TAJADDINI, A., and SABERI MOVAHED, F.
- Subjects
STOCHASTIC convergence, ALGORITHMS, MATRICES (Mathematics), EQUATIONS, COEFFICIENTS (Statistics) - Abstract
In this paper, we study the convergence behavior of the global FOM (Gl-FOM) and global GMRES (Gl-GMRES) methods for solving the matrix equation AXB = C, where A and B are symmetric positive definite (SPD). We present new theoretical results for these methods, such as computable exact expressions and upper bounds for the norms of the error and the residual. In particular, the upper bounds obtained for the Gl-FOM method help us predict the behavior of the Frobenius norm of the Gl-FOM residual. We also explore the worst-case convergence behavior of these methods. Finally, some numerical experiments are given to illustrate the theoretical results. [ABSTRACT FROM AUTHOR]
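For small dense SPD coefficients the equation has a direct solution X = A^{-1} C B^{-1}, which is useful as a reference when experimenting with the global Krylov methods the paper analyzes. A minimal sketch (the Gl-FOM/Gl-GMRES machinery itself is the iterative alternative for the large sparse case and is not reproduced here):

```python
import numpy as np

def solve_axb(A, B, C):
    """Direct solution of A X B = C: X = A^{-1} C B^{-1} via two linear solves.
    solve(B.T, C.T).T computes C B^{-1}; the outer solve applies A^{-1}."""
    return np.linalg.solve(A, np.linalg.solve(B.T, C.T).T)
```

For SPD A and B one would use Cholesky factorizations in practice; `np.linalg.solve` keeps the sketch short.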
- Published
- 2015
28. A class of accelerated Uzawa algorithms for saddle point problems.
- Author
-
Zheng, Qingqing and Ma, Changfeng
- Subjects
SADDLEPOINT approximations, ALGORITHMS, PROBLEM solving, EXTRAPOLATION, PARAMETER estimation, ITERATIVE methods (Mathematics), STOCHASTIC convergence - Abstract
In this paper, we establish a class of accelerated Uzawa (AU) algorithms for solving large sparse nonsingular saddle point problems by making use of an extrapolation technique based on the eigenvalues of the iteration matrix. These AU algorithms involve two iteration parameters whose special choices recover the known classical Uzawa method as well as yield new methods. Firstly, the accelerated model for the Uzawa algorithm is established and a detailed description of the AU method is presented. Then the convergence analysis of the AU method is given. Moreover, theoretical analyses show that the AU algorithm converges faster than some Uzawa-type methods (including the Uzawa method itself) when the eigenvalues of the iteration matrix and the parameter τ satisfy certain conditions. Numerical experiments on a few model problems are presented to illustrate the theoretical results and examine the numerical effectiveness of the AU method. [ABSTRACT FROM AUTHOR]
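The classical Uzawa method that the AU algorithms accelerate can be sketched in a few lines. This shows the unaccelerated baseline only; the paper's contribution is the eigenvalue-based extrapolation layered on top of this iteration.

```python
import numpy as np

def uzawa(A, B, f, g, tau, iters=5000, tol=1e-10):
    """Classical Uzawa iteration for [A B; B^T 0][x; y] = [f; g] with A SPD."""
    y = np.zeros(B.shape[1])
    x = np.zeros(A.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(A, f - B @ y)   # primal update: A x = f - B y
        r = B.T @ x - g                     # residual of the constraint block
        y = y + tau * r                     # dual (gradient ascent) update
        if np.linalg.norm(r) < tol:
            break
    return x, y
```

Convergence requires 0 < τ < 2/λ_max(B^T A^{-1} B); it is exactly this dependence on the spectrum of the iteration matrix that the extrapolated parameter choices in the paper exploit.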
- Published
- 2014
- Full Text
- View/download PDF
29. THE ROTHE-NEWTON APPROACH TO SIMULATE THE VARIABLE COEFFICIENT CONVECTION-DIFFUSION EQUATIONS.
- Author
-
SRIVASTAVA, H. M. and IZADI, M.
- Subjects
COLLOCATION methods, STOCHASTIC convergence, MATHEMATICAL variables, MATHEMATICAL functions, ALGORITHMS - Abstract
The current article presents a novel hybrid approach based on the Rothe time-marching algorithm and a spectral matrix collocation approach using the well-known Newton bases to handle the spatial variable. The Rothe approach converts the underlying convection-diffusion equation into a sequence of boundary value problems, and the Newton collocation method then solves the time-discretized equation at each time step. The error analysis of the newly employed basis functions is established. Three numerical simulations are presented to show the accuracy and utility of the proposed hybrid strategy. The current approach can be applied straightforwardly to other linear and nonlinear PDEs, including high-order PDEs. [ABSTRACT FROM AUTHOR]
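The Rothe half of the method can be illustrated on u_t = a u_xx - b u_x with homogeneous Dirichlet data: implicit Euler in time turns each time step into a stationary boundary value problem. In this sketch that stationary problem is solved with central finite differences rather than the paper's Newton-basis spectral collocation, so only the time-marching structure is faithful to the abstract.

```python
import numpy as np

def rothe_convection_diffusion(u0, a, b, L=1.0, T=0.1, nx=101, nt=100):
    """Rothe (implicit Euler) time marching for u_t = a u_xx - b u_x on [0, L],
    u = 0 at both ends. Each step solves (I - dt*(a*D2 - b*D1)) u^{n+1} = u^n."""
    dx, dt = L / (nx - 1), T / nt
    x = np.linspace(0.0, L, nx)
    u = u0(x)
    main = 1.0 + dt * 2.0 * a / dx**2
    lower = -dt * (a / dx**2 + b / (2.0 * dx))   # coefficient of u_{j-1}
    upper = -dt * (a / dx**2 - b / (2.0 * dx))   # coefficient of u_{j+1}
    M = (np.diag(np.full(nx - 2, main))
         + np.diag(np.full(nx - 3, lower), -1)
         + np.diag(np.full(nx - 3, upper), 1))
    for _ in range(nt):
        u[1:-1] = np.linalg.solve(M, u[1:-1])    # one stationary BVP per time step
        u[0] = u[-1] = 0.0
    return x, u
```

Swapping the interior solver for a spectral collocation scheme changes only the matrix M, which is what makes the Rothe framework a convenient host for the Newton-basis discretization.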
- Published
- 2022
- Full Text
- View/download PDF
30. Transient and steady-state MSE analysis of the IMPNLMS algorithm.
- Author
-
Haddad, Diego B. and Petraglia, Mariane R.
- Subjects
ALGORITHMS, ESTIMATION theory, SIGNAL processing, GAUSSIAN processes, STOCHASTIC convergence - Abstract
Several techniques have been proposed in the literature to accelerate the convergence of adaptive algorithms for the identification of sparse impulse responses (i.e., with energy concentrated in a few coefficients). Among these techniques, the improved µ-law proportionate normalized least mean squares (IMPNLMS) algorithm is one of the most effective. This paper presents an accurate transient analysis of this algorithm and derives an estimate of its steady-state MSE, without requiring the assumption of white Gaussian input signals. [ABSTRACT FROM AUTHOR]
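As background for the analysis, the update that IMPNLMS builds on is the normalized LMS recursion. The sketch below is plain NLMS for FIR system identification; IMPNLMS additionally scales each tap's stepsize by a µ-law proportionate gain so that the large coefficients of a sparse response adapt faster, a refinement omitted here.

```python
import numpy as np

def nlms_identify(x, d, n_taps, mu=0.5, eps=1e-8):
    """Normalized LMS identification of an FIR system from input x, desired d."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]     # regressor, most recent sample first
        e = d[n] - w @ u                      # a priori error
        w = w + mu * e * u / (u @ u + eps)    # normalized stochastic-gradient step
    return w
```

The transient and steady-state MSE analyses in the paper characterize how e and w behave through this recursion (with the proportionate gain matrix inserted) without assuming white Gaussian input.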
- Published
- 2014
- Full Text
- View/download PDF
31. A Modified Matricial PSO Algorithm Applied to System Identification with Convergence Analysis.
- Author
-
Azevedo Dantas, Andre, Maitelli, Andre, Silva Linhares, Leandro, and Araujo, Fabio
- Subjects
PARTICLE swarm optimization, PARAMETER estimation, ALGORITHMS, NONLINEAR dynamical systems, EVOLUTIONARY computation, SYSTEM identification, STOCHASTIC convergence - Abstract
Recently, several evolutionary computation techniques have been used in research areas such as parameter estimation of linear and nonlinear dynamic processes, which motivates the use of algorithms such as particle swarm optimization (PSO) in these fields. However, little is known about the convergence of this algorithm, and analyses and studies have mainly focused on experimental results. The objective of this work is therefore to propose a structure for the PSO that allows the convergence of the algorithm to be analyzed analytically. To this end, the PSO is restructured into a matrix form and reformulated as a piecewise linear system. A convergence analysis of the algorithm as a whole is carried out using an almost sure convergence criterion applicable to switched systems. Subsequently, traditional parameter identification algorithms are combined with the matricial PSO (MPSO), so as to make the identification results as good as or better than those obtained using only the PSO or only the traditional algorithms. The functions obtained after identification with the MPSO algorithm combined with the conventional identification algorithms exhibited better generalization and proper identification. The conclusion reached is that the hybridization guarantees a minimum level of performance and also improves the results obtained with the traditional algorithms, allowing the system to be represented over a wider range of frequencies. [ABSTRACT FROM AUTHOR]
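The iteration being recast in matrix form is the textbook PSO update, sketched below in its usual per-particle vector form. The parameter values (w, c1, c2) are common defaults, not the paper's; the paper's contribution is rewriting exactly this recursion as a switched linear system to analyze convergence.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO minimizing f over R^dim."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(f, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel                        # inertia + cognitive + social terms
        vals = np.apply_along_axis(f, 1, pos)
        better = vals < pbest_val              # update personal bests
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest
```

In an identification setting, f would be the prediction-error cost of a candidate parameter vector, which is how the MPSO is coupled with the traditional identification algorithms.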
- Published
- 2015
- Full Text
- View/download PDF
32. Convergence analysis of inexact proximal point algorithms on Hadamard manifolds.
- Author
-
Wang, Jinhua, Yao, Jen-Chih, Li, Chong, and Lopez, Genaro
- Subjects
HADAMARD matrices, VECTOR fields, MONOTONE operators, ALGORITHMS, STOCHASTIC convergence - Abstract
Inexact proximal point methods are extended to find singular points of multivalued vector fields on Hadamard manifolds. Convergence criteria are established under some mild conditions. In particular, in the case of the exact proximal point algorithm, that is, $\varepsilon_n = 0$ for each $n$, our results sharply improve the corresponding results in Li et al. (). Applications to optimization problems, variational inequality problems and gradient methods are also given. [ABSTRACT FROM AUTHOR]
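The Euclidean special case gives a feel for the iteration being generalized. For a convex quadratic f(x) = ½ x^T Q x - b^T x the proximal step x_{k+1} = argmin_x f(x) + ‖x - x_k‖²/(2λ) has the closed form below; the paper's setting replaces R^n by a Hadamard manifold, allows multivalued vector fields, and permits the resolvent to be evaluated inexactly (ε_n > 0), none of which this sketch attempts.

```python
import numpy as np

def proximal_point_quadratic(Q, b, lam=1.0, iters=200):
    """Exact Euclidean proximal point iteration for f(x) = 0.5 x^T Q x - b^T x."""
    n = len(b)
    x = np.zeros(n)
    M = Q + np.eye(n) / lam            # resolvent system: (Q + I/lam) x_{k+1} = b + x_k/lam
    for _ in range(iters):
        x = np.linalg.solve(M, b + x / lam)
    return x
```

Each step contracts the error by a factor 1/(1 + λ·λ_min(Q)), which is the linear convergence the manifold results generalize.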
- Published
- 2015
- Full Text
- View/download PDF
33. A new parareal waveform relaxation algorithm for time-periodic problems.
- Author
-
Song, Bo and Jiang, Yao-Lin
- Subjects
WAVE analysis, PROBLEM solving, ALGORITHMS, MATHEMATICAL bounds, STOCHASTIC convergence, NUMERICAL analysis - Abstract
We present a new parareal waveform relaxation algorithm for time-periodic problems which exploits parallelism both across sub-systems and in time. The new algorithm only needs to solve an initial-value coarse problem in each iteration, instead of the periodic coarse problem required by the classical parareal waveform relaxation algorithm. A convergence result for the new algorithm is then proved, with at most a linear bound on the convergence factor. Finally, numerical experiments, including some parallel experiments, illustrate our analysis and the effectiveness of the new algorithm. [ABSTRACT FROM PUBLISHER]
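The parareal half of the method rests on the correction U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k), combining a cheap sequential coarse propagator G with fine propagators F that can run in parallel. The sketch below shows only this plain initial-value parareal for the scalar test equation y' = λy (coarse: one backward Euler step per window; fine: several backward Euler substeps); the paper's algorithm couples this idea with waveform relaxation for time-periodic problems.

```python
import numpy as np

def parareal(lam, y0, T=1.0, n_coarse=10, fine_steps=20, iters=5):
    """Initial-value parareal for y' = lam*y with backward-Euler propagators."""
    dT = T / n_coarse
    G = lambda u: u / (1.0 - lam * dT)                             # one coarse step
    F = lambda u: u / (1.0 - lam * dT / fine_steps) ** fine_steps  # fine substeps
    U = np.empty(n_coarse + 1)
    U[0] = y0
    for n in range(n_coarse):                  # initial coarse sweep
        U[n + 1] = G(U[n])
    for _ in range(iters):
        Fk = F(U[:-1])                         # fine solves: parallel across windows
        Gk = G(U[:-1])
        for n in range(n_coarse):              # cheap sequential correction sweep
            U[n + 1] = G(U[n]) + Fk[n] - Gk[n]
    return U
```

After at most n_coarse iterations the nodal values coincide with the sequential fine solution; the point of the method is that far fewer iterations already suffice.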
- Published
- 2015
- Full Text
- View/download PDF