174 results
Search Results
2. A regularized limited memory BFGS method for nonconvex unconstrained minimization.
- Author
-
Liu, Tao-Wen
- Subjects
ALGORITHMS ,NONCONVEX programming ,MATHEMATICAL optimization ,MATHEMATICAL regularization ,APPROXIMATION theory ,NUMERICAL analysis - Abstract
The limited memory BFGS (L-BFGS) method is an adaptation of the BFGS method for large-scale unconstrained optimization. However, the L-BFGS method need not converge for nonconvex objective functions, and it is inefficient on highly ill-conditioned problems. In this paper, we propose a regularization strategy for the L-BFGS method, in which the regularization parameter plays a compensating role, in some sense, when the Hessian approximation tends to become ill-conditioned. We then propose a regularized L-BFGS method and establish its global convergence even when the objective function is nonconvex. Numerical results show that the proposed method is efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
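The compensating role described in this abstract can be illustrated with a minimal sketch: a standard L-BFGS two-loop recursion whose initial Hessian scaling is damped by a parameter `mu`. The variable names and the exact placement of the regularization are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist, mu=0.0):
    """Two-loop recursion for an L-BFGS direction -H_k g.

    mu > 0 damps the initial scaling gamma, a stand-in for the paper's
    regularization parameter (illustrative sketch only)."""
    q = g.astype(float).copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_hist, y_hist)]
    alphas = []
    for s, y, rho in zip(reversed(s_hist), reversed(y_hist), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    if s_hist:
        s, y = s_hist[-1], y_hist[-1]
        gamma = (s @ y) / (y @ y)          # Barzilai-Borwein-type scaling
    else:
        gamma = 1.0
    r = (gamma / (1.0 + mu * gamma)) * q   # regularized initial scaling
    for (s, y, rho), a in zip(zip(s_hist, y_hist, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r = r + (a - b) * s
    return -r
```

With empty memory and `mu = 0` this reduces to steepest descent; increasing `mu` shrinks the step, mimicking the compensation when the approximation becomes ill-conditioned.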
3. A modified ODE-based algorithm for unconstrained optimization problems.
- Author
-
Ou, Yi-gui and Ma, Wei
- Subjects
MATHEMATICAL optimization ,ORDINARY differential equations ,ALGORITHMS ,ITERATIVE methods (Mathematics) ,SYSTEM analysis ,NUMERICAL analysis - Abstract
This paper presents a modified ODE-based algorithm for unconstrained optimization problems. It combines the idea of the IMPBOT algorithm with nonmonotone and subspace techniques. The main feature of this method is that at each iteration a lower-dimensional system of linear equations is solved to obtain a trial step. Under some standard assumptions, the method is proven to be globally convergent. Numerical results show the efficiency of the proposed method in practical computation. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
4. A subspace conjugate gradient algorithm for large-scale unconstrained optimization.
- Author
-
Yang, Yueting, Chen, Yuting, and Lu, Yunlong
- Subjects
SUBSPACES (Mathematics) ,CONJUGATE gradient methods ,MATHEMATICAL optimization ,STOCHASTIC convergence ,NUMERICAL analysis - Abstract
In this paper, a subspace three-term conjugate gradient method is proposed. The search directions are generated by minimizing a quadratic approximation of the objective function over a subspace, and they satisfy both the descent condition and the Dai-Liao conjugacy condition. At each iteration, the subspace is spanned by the current negative gradient and the latest two search directions, so its dimension is 2 or 3. Under appropriate assumptions, the global convergence of the proposed method is established. Numerical experiments on a set of 80 unconstrained optimization test problems show that the proposed method is competitive. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
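The subspace step in this abstract amounts to solving a small reduced system. A sketch, assuming access to a model Hessian `B` and a basis such as the negative gradient and the two previous directions (names and interface are illustrative, not the authors'):

```python
import numpy as np

def subspace_direction(g, B, basis):
    """Minimize the quadratic model m(d) = g.d + 0.5 d.B.d over span(basis).

    basis: list of n-vectors, e.g. [-g, d_prev, d_prev2], so the reduced
    problem is at most 3-dimensional (illustrative sketch)."""
    V = np.column_stack(basis)      # n x m basis matrix, m <= 3
    B_red = V.T @ B @ V             # reduced Hessian (m x m)
    g_red = V.T @ g                 # reduced gradient
    c = np.linalg.solve(B_red, -g_red)  # stationary point of reduced model
    return V @ c                    # lift back to R^n
```

When the basis spans the whole space, the reduced minimizer coincides with the Newton step -B⁻¹g.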
5. A new integral filter algorithm for unconstrained global optimization.
- Author
-
Yue, Lina and Yang, Yongjian
- Subjects
INTEGRALS ,ALGORITHMS ,MATHEMATICAL optimization ,NUMERICAL analysis ,DIGITAL filters (Mathematics) ,MATHEMATICAL bounds - Abstract
In this paper, making use of an exponential integral filter, a new algorithm for unconstrained global optimization is proposed. Compared with Yang's absolute-value-type integral filter method (Yang et al., Appl Math Comput 18:173-180, ), this algorithm is more effective and more sensitive. Numerical results for some typical examples show that in most cases the algorithm works effectively and reliably. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
6. A nonmonotone trust region method with new inexact line search for unconstrained optimization.
- Author
-
Liu, Jinghui and Ma, Changfeng
- Subjects
MATHEMATICAL optimization ,STOCHASTIC convergence ,ALGORITHMS ,APPROXIMATION theory ,NUMERICAL analysis - Abstract
In this paper, a new nonmonotone inexact line search rule is proposed and applied to the trust region method for unconstrained optimization problems. In our line search rule, the nonmonotone term is a convex combination of the previous nonmonotone term and the current objective function value, rather than the current objective function value alone. We can obtain a larger stepsize in each line search procedure and retain nonmonotonicity when incorporating the nonmonotone term into the trust region method. Unlike the traditional trust region method, the algorithm avoids re-solving the subproblem if a trial step is not accepted. Under suitable conditions, global convergence is established. Numerical results show that the new method is effective for solving unconstrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
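The convex-combination nonmonotone term described here can be sketched in the Zhang-Hager style, C_k = (η Q_{k-1} C_{k-1} + f_k) / Q_k with Q_k = η Q_{k-1} + 1. This specific recursion is an assumed form consistent with the abstract, not necessarily the authors' exact rule:

```python
def nonmonotone_update(C_prev, Q_prev, f_new, eta=0.85):
    """Zhang-Hager-style convex-combination nonmonotone term (assumed form).

    Q_k = eta*Q_{k-1} + 1
    C_k = (eta*Q_{k-1}*C_{k-1} + f_k) / Q_k
    eta = 0 recovers the monotone rule C_k = f_k; eta = 1 gives a
    running average of all function values seen so far."""
    Q = eta * Q_prev + 1.0
    C = (eta * Q_prev * C_prev + f_new) / Q
    return C, Q
```

The acceptance test in the line search then compares the trial value against C_k rather than f_k, which is what permits the larger stepsizes mentioned in the abstract.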
7. Accelerated hybrid conjugate gradient algorithm with modified secant condition for unconstrained optimization.
- Author
-
Andrei, Neculai
- Subjects
CONJUGATE gradient methods ,ALGORITHMS ,CONVEX functions ,MATHEMATICAL optimization ,NUMERICAL analysis - Abstract
An accelerated hybrid conjugate gradient algorithm represents the subject of this paper. The parameter $\beta_k$ is computed as a convex combination of $\beta_k^{HS}$ (Hestenes and Stiefel, J Res Nat Bur Stand 49:409–436, ) and $\beta_k^{DY}$ (Dai and Yuan, SIAM J Optim 10:177–182, ), i.e. $\beta_k^C = \left(1-\theta_k\right)\beta_k^{HS} + \theta_k \beta_k^{DY}$. The parameter $\theta_k$ in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the best direction we know, i.e. the Newton direction, while the pair $(s_k, y_k)$ satisfies the modified secant condition given by Li et al. (J Comput Appl Math 202:523–539, ): $B_{k+1} s_k = z_k$, where $z_k = y_k + \left({\eta_k}/{\left\| s_k \right\|^2}\right)s_k$, $\eta_k = 2\left(f_k - f_{k+1}\right) + \left(g_k + g_{k+1}\right)^T s_k$, $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$. It is shown that both for uniformly convex functions and for general nonlinear functions the algorithm with strong Wolfe line search is globally convergent. The algorithm uses an acceleration scheme that modifies the steplength $\alpha_k$ to improve the reduction of the function values along the iterations. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms a variant of the hybrid conjugate gradient algorithm given by Andrei (Numer Algorithms 47:143–156, ), in which the pair $(s_k, y_k)$ satisfies the classical secant condition $B_{k+1} s_k = y_k$, as well as some other conjugate gradient algorithms including Hestenes-Stiefel, Dai-Yuan, Polak-Ribière-Polyak, Liu-Storey, hybrid Dai-Yuan, Gilbert-Nocedal, etc. A set of 75 unconstrained optimization problems with 10 different dimensions is used (Andrei, Adv Model Optim 10:147–161, ). [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
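The convex combination in this abstract uses the textbook Hestenes-Stiefel and Dai-Yuan formulas, $\beta^{HS} = g_{k+1}^T y_k / d_k^T y_k$ and $\beta^{DY} = \|g_{k+1}\|^2 / d_k^T y_k$. A minimal sketch; here $\theta$ is simply a free parameter, whereas choosing it via the Newton direction and the modified secant condition is the paper's actual contribution:

```python
import numpy as np

def beta_hybrid(g_new, g_old, d_old, theta):
    """beta^C = (1 - theta)*beta^HS + theta*beta^DY.

    Uses the standard Hestenes-Stiefel and Dai-Yuan formulas;
    theta in [0, 1] interpolates between the two."""
    y = g_new - g_old           # gradient difference y_k
    denom = d_old @ y           # shared denominator d_k^T y_k
    beta_hs = (g_new @ y) / denom
    beta_dy = (g_new @ g_new) / denom
    return (1.0 - theta) * beta_hs + theta * beta_dy
```

The new direction is then d_{k+1} = -g_{k+1} + beta * d_k, as in any conjugate gradient scheme.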
8. On the best achievable quality of limit points of augmented Lagrangian schemes
- Author
-
Gabriel Haeser, Roberto Andreani, Leonardo D. Secchin, Alberto Ramos, and Leonardo M. Mito
- Subjects
Constraint ,Mathematical optimization ,Augmented Lagrangian method ,Applied Mathematics ,Numerical analysis ,Theory of computation ,Convergence ,Limit point ,Quality ,Algebra over a field ,Mathematics - Abstract
The optimization literature is vast in papers dealing with improvements on the global convergence of augmented Lagrangian schemes. Usually, the results are based on weak constraint qualifications, or, more recently, on sequential optimality conditions obtained via penalization techniques. In this paper, we propose a somewhat different approach, in the sense that the algorithm itself is used in order to formulate a new optimality condition satisfied by its feasible limit points. With this tool at hand, we present several new properties and insights on limit points of augmented Lagrangian schemes, in particular, characterizing the strongest possible global convergence result for the safeguarded augmented Lagrangian method.
- Published
- 2021
9. Convergence analysis of a new algorithm for strongly pseudomonotone equilibrium problems
- Author
-
Dang Van Hieu
- Subjects
Sequence ,Mathematical optimization ,Iterative method ,Applied Mathematics ,Numerical analysis ,Hilbert space ,Theory of computation ,Convergence ,Equilibrium problem ,Algebra over a field ,Algorithm ,Mathematics - Abstract
The paper introduces and analyzes the convergence of a new iterative algorithm for approximating solutions of equilibrium problems involving strongly pseudomonotone and Lipschitz-type bifunctions in Hilbert spaces. The algorithm uses a stepsize sequence which is non-increasing, diminishing, and non-summable. This leads to the main advantage of the algorithm, namely that the construction of solution approximations and the proof of convergence are done without prior knowledge of the modulus of strong pseudomonotonicity or the Lipschitz-type constants of the bifunctions. A strong convergence theorem is established under suitable assumptions. The paper also discusses the assumptions used in the formulation of the convergence theorem. Several numerical results are reported to illustrate the behavior of the algorithm with different stepsize sequences and to compare it with other algorithms.
- Published
- 2017
10. A feasible and effective technique in constructing ERKN methods for multi-frequency multidimensional oscillators in scientific computation
- Author
-
Xianyang Zeng and Hongli Yang
- Subjects
Mathematical optimization ,Applied Mathematics ,Ordinary differential equation ,Integrator ,Numerical analysis ,Theory of computation ,Algebra over a field ,Computational science ,Mathematics - Abstract
In the last few years, many ERKN methods have been investigated for solving multi-frequency multidimensional second-order ordinary differential equations, and their numerical efficiency has been convincingly demonstrated in scientific computation. In the construction of new (especially high-order) ERKN methods, however, considerable time and effort are spent first deriving the practical order conditions and then adding reasonable assumptions to finally obtain the coefficient functions. In this paper, a feasible and effective technique is given that completes the construction of ERKN methods in a few seconds or minutes, even for high-order integrators. Moreover, this technique requires no information or knowledge beyond the classical RKN method. The paper also gives a theoretical explanation guaranteeing that the ERKN method obtained from this technique has the same order and the same properties as the underlying RKN method.
- Published
- 2017
11. Comments on direct transcription solution of DAE constrained optimal control problems with two discretization approaches
- Author
-
John T. Betts and Stephen L. Campbell
- Subjects
Mathematical optimization ,Discretization ,Applied Mathematics ,Numerical analysis ,Optimal control ,Regularization (mathematics) ,Theory of computation ,Transcription ,Differential algebraic equation ,Mathematics - Abstract
There have been a number of results in the literature showing that a direct transcription numerical approach to optimal control problems could have a number of surprising and desirable behaviors that were not always predicted by the existing theory especially when differential algebraic equations are involved. Most of these results were developed with an implicit Runge-Kutta (IRK) discretization being used. It is important to know which of these observations hold for direct transcription software using different types of discretizations and which are discretization specific. This paper reexamines several of these questions but using an hp-pseudospectral code. It is seen that while philosophically the results are often similar to the IRK results, there are some differences that should be understood by users solving constrained optimal control problems. This paper also discusses a type of regularization for higher index problems that does not reduce the index.
- Published
- 2016
12. An algorithm for automatically selecting a suitable verification method for linear systems
- Author
-
Katsuhisa Ozaki, Takeshi Ogita, and Shin'ichi Oishi
- Subjects
Mathematical optimization ,Range (mathematics) ,Dimension (vector space) ,Applied Mathematics ,Numerical analysis ,Runtime verification ,Linear system ,Theory of computation ,Coefficient matrix ,Algorithm ,Condition number ,Mathematics - Abstract
Several methods have been proposed to calculate a rigorous error bound of an approximate solution of a linear system using floating-point arithmetic. These methods are called 'verification methods'. Their applicable ranges differ: whether such methods succeed depends mainly on the condition number and the dimension of the coefficient matrix. In general, however, the condition number is not known in advance. If the dimension or the condition number is large to some extent, then Oishi-Rump's method, which is known as the fastest verification method for this purpose, may fail. There are more robust verification methods whose computational cost is larger than that of Oishi-Rump's method, but it is not efficient to apply such robust methods to well-conditioned problems. The aim of this paper is to choose a suitable verification method whose computational cost is the minimum needed to succeed. First, four fast verification methods for linear systems are briefly reviewed. Next, a compromise method between Oishi-Rump's and Ogita-Oishi's is developed. Then, an algorithm is proposed which automatically and efficiently chooses an appropriate verification method from the five. The proposed algorithm does only as much work as necessary to calculate error bounds of approximate solutions of linear systems. Finally, numerical results are presented.
- Published
- 2010
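The Oishi-Rump-style bound underlying these verification methods can be sketched as follows: with an approximate inverse R, if ||RA − I|| < 1, then ||x − x̃|| ≤ ||Rr|| / (1 − ||RA − I||) for the residual r = b − Ax̃. This plain floating-point version is illustrative only; a true verification method evaluates these quantities with directed rounding or interval arithmetic so the bound is rigorous:

```python
import numpy as np

def crude_error_bound(A, b, x_approx):
    """A posteriori error bound sketch (no directed rounding, so NOT rigorous).

    Returns an inf-norm bound on ||x - x_approx||, or None if the
    contraction test ||R A - I|| < 1 fails (verification fails)."""
    R = np.linalg.inv(A)                 # approximate inverse
    r = b - A @ x_approx                 # residual
    G = np.eye(len(b)) - R @ A
    alpha = np.linalg.norm(G, np.inf)
    if alpha >= 1.0:
        return None                      # cannot verify with this R
    return np.linalg.norm(R @ r, np.inf) / (1.0 - alpha)
```

The test ||RA − I|| < 1 is exactly what becomes hard to satisfy for ill-conditioned A, which is why the paper's algorithm escalates to more robust (and more expensive) methods.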
13. [Untitled]
- Author
-
Mathias Brieu and Jocelyne Erhel
- Subjects
Mathematical optimization ,Applied Mathematics ,Numerical analysis ,Homogenization ,Nonlinear system ,Theory of computation ,Composite material ,Mathematics - Abstract
In order to simulate the nonlinear behaviour of elastomer composite materials, we use a homogenization technique which induces nonlinear problems. This paper presents a non-incremental solving method which allows the reduction of computational costs. The equivalence between the proposed solving method and a Newton-type method is established, which allows us to prove convergence under realistic assumptions. Numerical results on a composite illustrate the performance of this method.
- Published
- 2003
14. [Untitled]
- Author
-
Jean Della Dora, Evelyne Tournier, and Mihaela Mirica-Ruse
- Subjects
Mathematical optimization ,Computer engineering ,Applied Mathematics ,Hybrid system ,Numerical analysis ,Theory of computation ,Algebra over a field ,Hybrid computation ,Field (computer science) ,Mathematics - Abstract
In the first part of this paper we will give a short historical survey of the field of hybrid systems, a precise definition of a hybrid system and some comments on the definition. In a second paper (“Hybrid systems and hybrid computation – 2nd part: Hybrid computation”) we will concentrate on a particular aspect of the theory closely related to scientific computation, that we have called hybrid computation.
- Published
- 2003
15. [Untitled]
- Author
-
John C. Butcher and T. M. H. Chan
- Subjects
Runge–Kutta methods ,Mathematical optimization ,Applied Mathematics ,Numerical analysis ,Theory of computation ,Order ,Applied mathematics ,Algebraic number ,Composition methods ,Variable (mathematics) ,Symplectic geometry ,Mathematics - Abstract
The interest in the concept of “effective order” has been revived by its rediscovery in applications to symplectic problems. In this paper we revert to the original application, the construction of explicit Runge–Kutta methods. Changing stepsize is a characteristic difficulty with effective order methods and we propose a way of overcoming this difficulty. We also consider the possible cancellation of local truncation errors of two methods over two successive steps. Using the algebraic approach for deriving these results gives us further insight into these methods and compositions of methods. A particular sixth stage Runge–Kutta pair is derived in the paper and is shown to be competitive.
- Published
- 2001
16. [Untitled]
- Author
-
Teo Roldán and Inmaculada Higueras
- Subjects
Nonlinear system ,Mathematical optimization ,Differential equation ,Applied Mathematics ,Numerical analysis ,Theory of computation ,Numerical methods for ordinary differential equations ,Algebra over a field ,Algorithm ,Mathematics - Abstract
When differential equations are solved with implicit Runge-Kutta methods, the computational effort is dominated by the cost of solving the nonlinear systems. That is why it is important to have good starting values to begin the iterations. In this paper we construct starting algorithms for some DIRK methods. Numerical experiments have been carried out in order to compare the initializers studied in this paper with others used in the literature.
- Published
- 2000
17. Two accelerated nonmonotone adaptive trust region line search methods.
- Author
-
Babaie-Kafaki, Saman and Rezaee, Saeed
- Subjects
MONOTONIC functions ,RADIUS (Geometry) ,ALGORITHMS ,MATHEMATICAL optimization ,NUMERICAL analysis - Abstract
Hybridizing monotone and nonmonotone approaches, we employ a modified trust region ratio in which more information is provided about the agreement between the exact and the approximate models. Also, we use an adaptive trust region radius as well as two accelerated Armijo-type line search strategies to avoid resolving the trust region subproblem whenever a trial step is rejected. We show that the proposed algorithm is globally and locally superlinearly convergent. Comparative numerical experiments show practical efficiency of the proposed accelerated adaptive trust region algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
18. A wide neighborhood primal-dual predictor-corrector interior-point method for symmetric cone optimization.
- Author
-
Sayadi Shahraki, M., Mansouri, H., Zangiabadi, M., and Mahdavi-Amiri, N.
- Subjects
MATHEMATICAL optimization ,ITERATIVE methods (Mathematics) ,MATHEMATICAL bounds ,NUMERICAL analysis ,JORDAN algebras - Abstract
We present a primal-dual predictor-corrector interior-point method for symmetric cone optimization. The proposed algorithm is based on the Nesterov-Todd search directions and a wide neighborhood, even wider than a given negative-infinity neighborhood. At each iteration, the method computes two corrector directions in addition to the Ai and Zhang directions (SIAM J. Optim. 16, 400-417, 2005), in order to improve performance. Moreover, we derive the complexity bound of the wide-neighborhood predictor-corrector interior-point method for symmetric cone optimization, which coincides with the currently best known theoretical complexity bound for the short-step algorithm. Finally, some numerical experiments are provided to reveal the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
19. An efficient alternating direction method of multipliers for optimal control problems constrained by random Helmholtz equations.
- Author
-
Li, Jingshi, Wang, Xiaoshen, and Zhang, Kai
- Subjects
ALGORITHMS ,ALGEBRA ,NUMERICAL analysis ,MATHEMATICAL models ,MATHEMATICAL optimization - Abstract
Based on the alternating direction method of multipliers (ADMM), we develop three numerical algorithms incrementally for solving the optimal control problems constrained by random Helmholtz equations. First, we apply the standard Monte Carlo technique and finite element method for the random and spatial discretization, respectively, and then ADMM is used to solve the resulting system. Next, combining the multi-modes expansion, Monte Carlo technique, finite element method, and ADMM, we propose the second algorithm. In the third algorithm, we preprocess certain quantities before the ADMM iteration, so that nearly no random variable is in the inner iteration. This algorithm is the most efficient one and is easy to implement. The error estimates of these three algorithms are established. The numerical experiments verify the efficiency of our algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
20. A neurodynamic approach to zero-one quadratic programming
- Author
-
Haichan Lin and Yigui Ou
- Subjects
State variable ,Mathematical optimization ,Computer simulation ,Applied Mathematics ,Computation ,Numerical analysis ,Stability ,Convergence ,Theory of computation ,Quadratic programming ,Mathematics - Abstract
This paper proposes a neurodynamic approach to zero-one quadratic programming with linear constraints. Compared with one existing neurodynamic approach to such problems, the proposed one has lower dimensions of state variables and less computational cost, which makes the implementation easier. Under some suitable conditions, the stability and convergence properties of the proposed approach are established. Numerical simulation results and related comparisons show the efficiency of this proposed method in practical computation.
- Published
- 2021
21. Mesh selection for stiff two-point boundary value problems
- Author
-
G. Moore, Jeff Cash, and R. W. Wright
- Subjects
Mathematical optimization ,Collocation ,Applied Mathematics ,Numerical analysis ,Theory of computation ,Applied mathematics ,Boundary value problem ,Selection algorithm ,Mathematics - Abstract
This paper is concerned with the mesh selection algorithm of COLSYS, a well known collocation code for solving systems of boundary value problems. COLSYS was originally designed to solve non-stiff and mildly stiff problems only. In this paper we show that its performance for solving extremely stiff problems can be considerably improved by modifying its error estimation and mesh selection algorithms. Numerical examples indicate the superiority of the modified algorithm.
- Published
- 1994
22. Reliable determination of interpolating polynomials
- Author
-
Maurice G Cox
- Subjects
Mathematical optimization ,Polynomial ,Applied Mathematics ,Numerical analysis ,Lagrange polynomial ,Function (mathematics) ,Polynomial interpolation ,Iterative refinement ,Hermite interpolation ,Curve fitting ,Algorithm ,Mathematics - Abstract
This paper addresses a fundamental problem in mathematics and numerical analysis, that of determining a polynomial interpolant to specified data. The data is taken as consisting of a set of points (abscissae), at each of which is specified a function value. Additionally, at each point, any number of leading derivative values of the function may be given. Mathematically, this problem is solved. The classical Lagrangian interpolation formula applies in the derivative-free case, and the Newton form of the interpolating polynomial in general. Numerically, few reliable algorithms are available; most published algorithms concentrate on speed of computation. This paper describes an algorithm that delivers the required polynomial in Chebyshev form. It is based on the repeated use of the Newton representation, with a data ordering strategy and iterative refinement to improve accuracy, using a carefully devised merit function to measure success. The algorithm attempts to provide a polynomial that is stable in the sense of backward error analysis, i.e. that is exact for slightly perturbed data. Implementations of the algorithm have been in use since the early 1980s in the NAG Library and NPL's Data Approximation Subroutine Library (DASL). In addition to providing polynomial interpolants in their own right, these implementations are used as computational modules in the NAG and DASL routines for constrained least-squares polynomial data fitting. This paper constitutes the first detailed presentation of the algorithm.
- Published
- 1993
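The Newton representation this abstract builds on can be sketched in a few lines: divided-difference coefficients plus Horner-style evaluation, for the derivative-free case. The described algorithm additionally reorders the data, iteratively refines, and converts to Chebyshev form; none of that is shown here.

```python
def newton_interp(xs, ys):
    """Divided-difference coefficients of the Newton-form interpolant
    (derivative-free case, distinct abscissae assumed)."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, x):
    """Horner-style evaluation of the Newton form at x."""
    p = coef[-1]
    for c, xk in zip(reversed(coef[:-1]), reversed(xs[:-1])):
        p = p * (x - xk) + c
    return p
```

For example, interpolating y = x² at the points 0, 1, 2 gives coefficients [0, 1, 1] and reproduces the parabola exactly at any x.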
23. Flattened aggregate function method for nonlinear programming with many complicated constraints
- Author
-
Yueting Yang, Mingyuan Cao, Yunlong Lu, and Xiaowei Jiang
- Subjects
Mathematical optimization ,Applied Mathematics ,Numerical analysis ,Aggregate function ,Function (mathematics) ,Nonlinear programming ,Theory of computation ,Convergence ,Newton's method ,Mathematics - Abstract
In this paper, efforts are made to solve nonlinear programming problems with many complicated constraints more efficiently. The constrained optimization problem is first converted to a minimax problem, where the max-value function is approximately smoothed by the so-called flattened aggregate function or its modified version. For carefully updated aggregate parameters, the smooth unconstrained optimization problem is solved by an inexact Newton method. Because the flattened aggregate function usually greatly reduces the amount of computation for gradients and Hessians, the method is more efficient. Convergence of the proposed method is proven, and some numerical results are given to show its efficiency.
- Published
- 2020
24. An augmented Lagrangian proximal alternating method for sparse discrete optimization problems
- Author
-
Bo Yu, Li Yang, Xiaoliang Song, and Yue Teng
- Subjects
Mathematical optimization ,Sequence ,Augmented Lagrangian method ,Applied Mathematics ,Numerical analysis ,Projection (linear algebra) ,Discrete optimization problem ,Theory of computation ,Limit point ,Minimization ,Mathematics - Abstract
In this paper, we propose an augmented Lagrangian proximal alternating (ALPA) method for solving two classes of large-scale sparse discrete constrained optimization problems. Specifically, the ALPA method generates a sequence of augmented Lagrangian (AL) subproblems in the outer iterations and utilizes a proximal alternating linearized minimization method and sparse projection techniques to solve these AL subproblems. We also study the first-order optimality conditions for these two classes of problems. Under some suitable assumptions, we show that any accumulation point of the sequence generated by the ALPA method satisfies the necessary first-order optimality conditions of these problems or is a local minimizer of these problems. The computational results with practical problems demonstrate that our method can find suboptimal solutions of the problems efficiently and is competitive with some other local solution methods.
- Published
- 2019
25. A new three-term conjugate gradient algorithm for unconstrained optimization.
- Author
-
Andrei, Neculai
- Subjects
CONJUGATE gradient methods ,MATHEMATICAL optimization ,QUASI-Newton methods ,CONJUGACY classes ,CONVEX functions ,NUMERICAL analysis - Abstract
A new three-term conjugate gradient algorithm which satisfies both the descent condition and the conjugacy condition is presented. The algorithm is obtained by minimizing the one-parameter quadratic model of the objective function in which the symmetric approximation of the Hessian matrix satisfies the general quasi-Newton equation. The search direction is obtained by symmetrization of the iteration matrix corresponding to the solution of the quadratic model minimization. Through the general quasi-Newton equation, the search direction includes a parameter which is determined by minimizing the condition number of the iteration matrix. It is proved that this direction satisfies both the conjugacy and the descent conditions. The new approximation of the minimum is obtained by the general Wolfe line search, using a now-standard acceleration technique. Under standard assumptions, the global convergence of the algorithm is proved for uniformly convex functions. Numerical experiments using 800 large-scale unconstrained optimization test problems show that minimizing the condition number of the iteration matrix leads to a value of the parameter in the search direction that defines a competitive three-term conjugate gradient algorithm. Numerical comparisons of this variant of the algorithm against the known conjugate gradient algorithms ASCALCG, CONMIN, TTCG and THREECG, as well as the limited-memory quasi-Newton algorithm LBFGS (m = 5) and the truncated Newton method TN, show that our algorithm is indeed more efficient and more robust. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
26. Some new error bounds for linear complementarity problems of H-matrices.
- Author
-
Li, Wen and Zheng, Hua
- Subjects
ERROR analysis in mathematics ,MATHEMATICAL statistics ,NUMERICAL analysis ,MATHEMATICAL optimization ,LINEAR complementarity problem ,COMPLEMENTARITY constraints (Mathematics) - Abstract
Some new error bounds for linear complementarity problems of H-matrices are presented based on the preconditioned technique. Numerical examples show that these bounds are better than some existing ones. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
27. A radial basis function method for solving PDE-constrained optimization problems.
- Author
-
Pearson, John
- Subjects
RADIAL basis functions ,MATHEMATICAL optimization ,PROBLEM solving ,NUMERICAL analysis ,FINITE element method ,NEUMANN boundary conditions - Abstract
In this article, we apply the theory of meshfree methods to the problem of PDE-constrained optimization. We derive new collocation-type methods to solve the distributed control problem with Dirichlet boundary conditions and also discuss the Neumann boundary control problem, both involving Poisson's equation. We prove results concerning invertibility of the matrix systems we generate, and discuss a modification to guarantee invertibility. We implement these methods using Matlab, and produce numerical results to demonstrate the methods' capability. We also comment on the methods' effectiveness in comparison to the widely-used finite element formulation of the problem, and make some recommendations as to how this work may be extended. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
28. Parallel computing investigations for the projection method applied to the interface transport scheme of a two-phase flow by the method of characteristics
- Author
-
Toni Sayah, Frédéric Hecht, Mireille Haddad, and Pierre Henri Tournier
- Subjects
Mathematical optimization ,Discretization ,Applied Mathematics ,Numerical analysis ,Domain decomposition methods ,010103 numerical & computational mathematics ,01 natural sciences ,Finite element method ,010101 applied mathematics ,Method of characteristics ,Projection method ,0101 mathematics ,Schwarz alternating method ,Algorithm ,Mortar methods ,Mathematics - Abstract
This paper deals with the discretization of the problem of mould filling in an iron foundry and its numerical solution using a Schwarz domain decomposition method. An adapted technique for domain decomposition methods that suits the discretization in time by the method of characteristics is introduced. Furthermore, the projection method is used to reduce the computation time. Finally, numerical experiments validate the effectiveness of the proposed scheme.
- Published
- 2018
29. Multiple reduced gradient method for multiobjective optimization problems
- Author
-
M. El Moudden and A. El Ghali
- Subjects
Sequence ,Mathematical optimization ,021103 operations research ,Applied Mathematics ,Numerical analysis ,010102 general mathematics ,MathematicsofComputing_NUMERICALANALYSIS ,0211 other engineering and technologies ,02 engineering and technology ,01 natural sciences ,Multi-objective optimization ,Multiobjective optimization problem ,Quadratic equation ,Theory of computation ,0101 mathematics ,Descent direction ,Gradient method ,Mathematics - Abstract
Multiobjective optimization has a large number of real-life applications. Under this motivation, in this paper, we present a new method for solving multiobjective optimization problems with both linear constraints and bound constraints on the variables. This method extends, to the multiobjective setting, the classical reduced gradient method for scalar-valued optimization. The proposed algorithm generates a feasible descent direction by solving an appropriate quadratic subproblem, without the use of any scalarization approaches. We prove that the sequence generated by the algorithm converges to Pareto-critical points of the problem. We also present some numerical results to show the efficiency of the proposed method.
- Published
- 2018
30. Implicit Milstein method for stochastic differential equations via the Wong-Zakai approximation
- Author
-
Nahid Jamshidi and Minoo Kamrani
- Subjects
Mathematical optimization ,Applied Mathematics ,Numerical analysis ,Stability (learning theory) ,Milstein method ,010103 numerical & computational mathematics ,01 natural sciences ,Mathematics::Numerical Analysis ,010101 applied mathematics ,Stochastic partial differential equation ,Stochastic differential equation ,Mathematics::Probability ,Rate of convergence ,Convergence (routing) ,Applied mathematics ,0101 mathematics ,Brownian motion ,Mathematics - Abstract
The aim of this paper is to derive a numerical scheme for solving stochastic differential equations (SDEs) via the Wong-Zakai approximation. One of the most important methods for solving SDEs is the Milstein method, but it is not widely used because the cost of simulating the double stochastic integrals is high. To overcome this complexity, we present an implicit Milstein scheme based on the Wong-Zakai approximation, approximating the Brownian motion by its truncated Haar expansion. The main advantage of this method lies in the fact that it preserves the convergence order and the stability region of the Milstein method while being much easier to simulate than the Milstein scheme. We illustrate the convergence rate of the method with some numerical examples.
- Published
- 2017
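The Milstein correction the entry above refers to is easiest to see on scalar geometric Brownian motion, where the double stochastic integral has the closed form 0.5·b'(x)b(x)·(ΔW² − h). The sketch below shows the classical explicit Milstein step only, not the paper's implicit Wong-Zakai variant; all names and parameters are illustrative.

```python
import math
import random

def milstein_gbm(x0, a, b, T, n, rng):
    """Milstein scheme for dX = a*X dt + b*X dW (geometric Brownian motion)."""
    h = T / n
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(h))
        # Euler-Maruyama terms plus the Milstein correction
        # 0.5 * b'(x) * b(x) * (dW^2 - h); here b(x) = b*x, so b'(x)*b(x) = b^2 * x.
        x += a * x * h + b * x * dw + 0.5 * b * b * x * (dw * dw - h)
    return x

# With b = 0 the scheme degenerates to explicit Euler for dX = a*X dt,
# so the endpoint should be close to x0 * exp(a*T).
rng = random.Random(0)
approx = milstein_gbm(1.0, 1.0, 0.0, 1.0, 1000, rng)
```

With a nonzero diffusion coefficient, one seeds `rng` and averages many paths; the deterministic case above is just a cheap correctness check.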
31. Novel analytical and numerical techniques for fractional temporal SEIR measles model
- Author
-
Farah Aini Abdullah, Kevin Burrage, Pamela Burrage, Fawang Liu, and Tong Li
- Subjects
Predictor–corrector method ,Mathematical optimization ,Applied Mathematics ,Numerical analysis ,Fractional model ,010103 numerical & computational mathematics ,Derivative ,Time step ,01 natural sciences ,010101 applied mathematics ,Error analysis ,Ordinary differential equation ,Theory of computation ,Applied mathematics ,0101 mathematics ,Mathematics - Abstract
In this paper, a fractional temporal SEIR measles model is considered. The model consists of four coupled time-fractional ordinary differential equations (TFODE), with the time-fractional derivative defined in the Caputo sense. Firstly, we solve this model by solving an approximate model that linearizes the four TFODE at each time step. Secondly, we derive an analytical solution of the single TFODE, from which we obtain analytical solutions of the four coupled TFODE at each time step. Thirdly, a computationally effective fractional Predictor-Corrector method (FPCM) is proposed for simulating the single TFODE, and an error analysis for the method is given. The fractional model provides an interesting technique for describing measles spreading dynamics, and we conclude that the analytical and Predictor-Corrector schemes derived are easy to implement and can be extended to other fractional models. Fourthly, to demonstrate the accuracy of the analytical solution for the fractional decoupled measles model, we apply the GMMP (Gorenflo-Mainardi-Moretti-Paradisi) scheme to the original fractional equations. The comparison of the numerical simulations indicates that the solution of the decoupled and linearized system is close enough to that of the original system, and that the linearization technique is correct and effective.
- Published
- 2017
32. Weak and strong convergence theorems for variational inequality problems
- Author
-
Duong Viet Thong and Dang Van Hieu
- Subjects
Mathematical optimization ,021103 operations research ,Applied Mathematics ,Numerical analysis ,0211 other engineering and technologies ,Monotonic function ,010103 numerical & computational mathematics ,02 engineering and technology ,Lipschitz continuity ,01 natural sciences ,Monotone polygon ,Variational inequality ,Convergence (routing) ,Theory of computation ,0101 mathematics ,Constant (mathematics) ,Mathematics - Abstract
In this paper, we study the weak and strong convergence of two algorithms for solving Lipschitz continuous and monotone variational inequalities. The algorithms are inspired by Tseng's extragradient method and the viscosity method with Armijo-like step size rule. The main advantages of our algorithms are that the construction of solution approximations and the proof of convergence of the algorithms are performed without the prior knowledge of the Lipschitz constant of cost operators. Finally, we provide numerical experiments to show the efficiency and advantage of the proposed algorithms.
- Published
- 2017
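Tseng's extragradient step, which the algorithms above build on, takes a forward step, re-evaluates the operator, and applies a correction. In the unconstrained case the variational inequality reduces to solving F(x) = 0, which gives a compact sketch. Note this fixed-stepsize version is the textbook baseline, not the paper's algorithms (whose point is to avoid knowing the Lipschitz constant); all names here are ours.

```python
def tseng_step(F, x, lam):
    """One iteration of Tseng's forward-backward-forward (extragradient-type)
    step for an unconstrained variational inequality, i.e. solving F(x) = 0."""
    Fx = F(x)
    y = [xi - lam * fi for xi, fi in zip(x, Fx)]          # forward step
    Fy = F(y)
    # correction step: x+ = y - lam * (F(y) - F(x))
    return [yi - lam * (fyi - fxi) for yi, fyi, fxi in zip(y, Fy, Fx)]

# Monotone but non-symmetric linear operator F(x) = A x + b with
# A = [[1, 2], [-2, 1]]; its symmetric part is the identity, so F is monotone.
def F(x):
    return [x[0] + 2.0 * x[1] + 1.0, -2.0 * x[0] + x[1] + 1.0]

x = [0.0, 0.0]
for _ in range(300):
    x = tseng_step(F, x, 0.3)   # 0.3 < 1/||A|| = 1/sqrt(5), the classical bound
# The unique zero of F is x* = (0.2, -0.6).
```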
33. An inexact alternating direction method of multipliers for the solution of linear complementarity problems arising from free boundary problems
- Author
-
Jian-Li Zhang, Jian-Jun Zhang, and Wan-Zhou Ye
- Subjects
Mathematical optimization ,021103 operations research ,Augmented Lagrangian method ,Iterative method ,Applied Mathematics ,Numerical analysis ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Complementarity (physics) ,Linear complementarity problem ,Alternating direction implicit method ,Theory of computation ,0101 mathematics ,Mathematics - Abstract
A large number of free boundary problems can be formulated as linear complementarity problems. In this paper, we propose an inexact alternating direction method of multipliers for solving linear complementarity problems arising from free boundary problems by exploiting the special structure of these problems. The convergence of the proposed method is proved. Numerical results show that the method is feasible and effective, and that it is significantly faster than the modified alternating direction implicit algorithm and many other methods, especially when the dimension of the problem is large.
- Published
- 2017
34. Speckle noise removal in ultrasound images by first- and second-order total variation
- Author
-
Jin-Jin Mei, Ting-Zhu Huang, Xi-Le Zhao, Si Wang, and Jie Huang
- Subjects
Mathematical optimization ,business.industry ,Applied Mathematics ,Numerical analysis ,Ultrasound ,020206 networking & telecommunications ,Speckle noise ,02 engineering and technology ,Total variation denoising ,Peak signal-to-noise ratio ,Regularization (mathematics) ,Theory of computation ,0202 electrical engineering, electronic engineering, information engineering ,Piecewise ,020201 artificial intelligence & image processing ,business ,Algorithm ,Mathematics - Abstract
Speckle noise contamination is a common issue in ultrasound imaging systems. Due to its edge-preserving feature, total variation (TV) regularization-based techniques have been extensively utilized for speckle noise removal. However, TV regularization sometimes causes staircase artifacts, as it favors solutions that are piecewise constant. In this paper, we propose a new model to overcome this deficiency. In this model, the regularization term is a combination of total variation and high-order total variation, while the data fidelity term is a generalized Kullback-Leibler divergence. The proposed model can be efficiently solved by the alternating direction method of multipliers (ADMM). Compared with some state-of-the-art methods, the proposed method achieves higher quality in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Numerical experiments demonstrate that our method can remove speckle noise efficiently while suppressing staircase effects, on both synthetic images and real ultrasound images.
- Published
- 2017
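A full TV-plus-high-order-TV model is lengthy, but the ADMM splitting the entry names can be sketched on the separable one-dimensional analogue min_x 0.5·||x − v||² + λ||z||₁ subject to x = z, whose exact minimizer is componentwise soft-thresholding. This is our illustrative stand-in, not the paper's model; all names are assumptions.

```python
def soft(v, t):
    """Soft-thresholding, the proximal operator of t * |.|."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def admm_l1_denoise(v, lam, rho=1.0, iters=500):
    """ADMM for min_x 0.5*||x - v||^2 + lam*||z||_1  s.t.  x = z.
    The exact minimizer is soft(v_i, lam) componentwise, which lets us
    check that the splitting converges to the right answer."""
    n = len(v)
    z = [0.0] * n
    u = [0.0] * n   # scaled dual variable
    for _ in range(iters):
        # x-update: minimize 0.5*(x - v)^2 + (rho/2)*(x - z + u)^2 per component
        x = [(v[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: proximal step on the l1 term
        z = [soft(x[i] + u[i], lam / rho) for i in range(n)]
        # dual ascent on the constraint x = z
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return x

x = admm_l1_denoise([3.0, -0.5, 1.2], lam=1.0)
# Converges to the soft-thresholded values [2.0, 0.0, 0.2].
```

In the actual speckle model, the x- and z-updates are replaced by the Kullback-Leibler fidelity step and the (high-order) TV proximal steps, but the alternating structure is the same.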
35. An efficient nonmonotone trust-region method for unconstrained optimization.
- Author
-
Ahookhosh, Masoud and Amini, Keyvan
- Subjects
MATHEMATICAL optimization ,OPERATOR theory ,STOCHASTIC convergence ,ASYMPTOTIC expansions ,NUMERICAL analysis - Abstract
Monotone trust-region methods are well-known techniques for solving unconstrained optimization problems. While nonmonotone strategies not only can improve the likelihood of finding the global optimum but also can improve the numerical performance of these approaches, the traditional nonmonotone strategy has some disadvantages. To overcome these drawbacks, we introduce a variant nonmonotone strategy and incorporate it into the trust-region framework to construct a more reliable approach. The new nonmonotone strategy is a convex combination of the maximum of the function values at some prior successful iterates and the current function value. It is proved that the proposed algorithm possesses global convergence to first-order and second-order stationary points under some classical assumptions. Preliminary numerical experiments indicate that the new approach is considerably promising for solving unconstrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
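The nonmonotone term the abstract describes, a convex combination of the maximum over recent successful function values and the current value, reduces to a few lines. This is a minimal sketch of the reference value and an acceptance test of the usual trust-region form; the function names and the parameters `eta` and `mu` are ours, not the paper's.

```python
def nonmonotone_reference(f_history, f_current, eta=0.7):
    """Reference value R_k = eta * max(recent successful f-values)
    + (1 - eta) * f_current, a convex combination for 0 <= eta <= 1."""
    return eta * max(f_history) + (1.0 - eta) * f_current

def accept_trial(f_history, f_current, f_trial, pred_reduction, mu=0.1, eta=0.7):
    """Trust-region acceptance: the trial point is accepted when the actual
    reduction, measured against R_k, covers a fraction mu of the predicted one."""
    r_k = nonmonotone_reference(f_history, f_current, eta)
    return r_k - f_trial >= mu * pred_reduction

# A trial value worse than f_current can still be accepted when it lies
# sufficiently below the (larger) nonmonotone reference value.
```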
36. Iterative methods for solving proximal split minimization problems
- Author
-
Qamrul Hasan Ansari, Olaniyi S. Iyiola, Mujahid Abbas, Yekini Shehu, and Mohammad M. Alshahrani
- Subjects
Mathematical optimization ,Iterative method ,Applied Mathematics ,Numerical analysis ,010102 general mathematics ,Minimization problem ,01 natural sciences ,010101 applied mathematics ,Convergence (routing) ,Theory of computation ,Minification ,0101 mathematics ,Operator norm ,Selection (genetic algorithm) ,Mathematics - Abstract
In this paper, we propose two iterative algorithms for finding the minimum-norm solution of a split minimization problem. We prove strong convergence of the sequences generated by the proposed algorithms. The iterative schemes are proposed in such a way that the selection of the step-sizes does not need any prior information about the operator norm. We further give some examples to numerically verify the efficiency and implementation of our new methods and compare the two algorithms presented. Our results act as supplements to several recent important results in this area.
- Published
- 2017
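Iterative schemes for proximal split minimization are driven by proximal operators, which have closed forms for common objectives. Two standard examples, sketched below for scalars, are illustrative only and are not the operators used in the paper.

```python
def prox_l1(v, lam):
    """prox of lam*|.| : soft-thresholding."""
    return max(abs(v) - lam, 0.0) * (1.0 if v >= 0 else -1.0)

def prox_sq(v, lam):
    """prox of (lam/2)*x^2 : simple shrinkage toward zero."""
    return v / (1.0 + lam)

# prox_l1 shrinks toward zero by lam and clips small values to exactly zero;
# prox_sq only scales, which is why l1 terms promote sparsity and l2 terms do not.
```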
37. Implementation of reduced gradient with bisection algorithms for non-convex optimization problem via stochastic perturbation
- Author
-
Abdelkrim El Mouatasim
- Subjects
Mathematical optimization ,021103 operations research ,Line search ,Applied Mathematics ,Numerical analysis ,MathematicsofComputing_NUMERICALANALYSIS ,0211 other engineering and technologies ,Perturbation (astronomy) ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Theory of computation ,Bisection method ,Differentiable function ,0101 mathematics ,Gradient method ,Algorithm ,Lehmer–Schur algorithm ,Mathematics - Abstract
In this paper, we propose an implementation of a stochastic perturbation of reduced gradient and bisection (SPRGB) method for optimizing a non-convex differentiable function subject to linear equality constraints and non-negativity bounds on the variables. In particular, at each iteration, we compute a search direction by the reduced gradient, and an optimal line search by the bisection algorithm along this direction yields a decrease in the objective value. The stochastic perturbation allows the global convergence of the algorithm to be established. An implementation and tests of the SPRGB algorithm are given, and some numerical results on large-scale problems are presented, which show the efficiency of this approach.
- Published
- 2017
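The bisection part of such a step can be sketched as root-finding on the directional derivative: along a descent direction, the slope of phi(alpha) = f(x + alpha·d) starts negative, and the exact minimizer is where it crosses zero. A minimal sketch, assuming the slope does change sign along the ray; the names and bracket-expansion heuristic are ours.

```python
def bisection_line_search(dphi, lo=0.0, hi=1.0, tol=1e-10):
    """Minimize phi along a direction by bisecting on the sign of its
    derivative dphi(alpha); assumes dphi(lo) < 0 and that the slope
    eventually turns positive."""
    while dphi(hi) < 0:          # expand the bracket until the slope changes sign
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# phi(alpha) = (alpha - 3)^2 along the direction, so dphi(alpha) = 2*(alpha - 3)
alpha = bisection_line_search(lambda a: 2.0 * (a - 3.0))
```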
38. An efficient gradient method with approximate optimal stepsize for large-scale unconstrained optimization
- Author
-
Hongwei Liu and Zexian Liu
- Subjects
Mathematical optimization ,021103 operations research ,Scale (ratio) ,Applied Mathematics ,Numerical analysis ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,Adaptive stepsize ,01 natural sciences ,Quadratic equation ,Line segment ,Theory of computation ,Convergence (routing) ,0101 mathematics ,Gradient method ,Mathematics - Abstract
In this paper, we introduce a new concept of approximate optimal stepsize for the gradient method, use it to interpret the Barzilai-Borwein (BB) method, and present an efficient gradient method with approximate optimal stepsize for large-scale unconstrained optimization. If the objective function f is not close to a quadratic on the line segment between the current iterate x_k and the previous iterate x_{k-1}, we construct a conic model to generate the approximate optimal stepsize, provided the conic model is suitable to be used. Otherwise, we construct a new quadratic model or two other new approximation models to generate the approximate optimal stepsize. We analyze the convergence of the proposed method under some suitable conditions. Numerical results show the proposed method is very promising.
- Published
- 2017
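The BB method the entry reinterprets chooses the stepsize alpha_k = sᵀs / sᵀy from the last displacement s and gradient change y. Below is the classical BB1 iteration on a strictly convex quadratic, not the paper's conic-model variant; the safeguard and all names are ours.

```python
def bb_gradient(grad, x0, alpha0=0.05, iters=200):
    """Gradient descent with the Barzilai-Borwein (BB1) stepsize
    alpha_k = s^T s / s^T y, where s = x_k - x_{k-1} and y = g_k - g_{k-1}."""
    x = list(x0)
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(a * b for a, b in zip(s, y))
        if sy > 1e-16:                      # safeguard; positive for convex f
            alpha = sum(a * a for a in s) / sy
        x, g = x_new, g_new
    return x

# Strictly convex quadratic f(x) = 0.5*(x1^2 + 10*x2^2), grad = (x1, 10*x2);
# the minimizer is the origin.
x = bb_gradient(lambda v: [v[0], 10.0 * v[1]], [1.0, 1.0])
```

Note the BB iteration is typically nonmonotone in f, which is exactly why it pairs naturally with the nonmonotone globalization strategies appearing elsewhere in this list.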
39. Polyak’s gradient method for split feasibility problem constrained by level sets
- Author
-
Fenghui Wang
- Subjects
Sequence ,Mathematical optimization ,Signal processing ,Applied Mathematics ,Numerical analysis ,010102 general mathematics ,Regular polygon ,010103 numerical & computational mathematics ,01 natural sciences ,Theory of computation ,Convergence (routing) ,Applied mathematics ,0101 mathematics ,Gradient method ,SIMPLE algorithm ,Mathematics - Abstract
In this paper, we are concerned with the split feasibility problem (SFP) whenever the convex sets involved are composed of level sets. By applying Polyak’s gradient method, we get a new and simple algorithm for such a problem. Under standard assumptions, we prove that the whole sequence generated by the algorithm weakly converges to a solution. We also modify the proposed algorithm and state the strong convergence without regularity conditions on the sets involved. Numerical experiments are included to illustrate its applications in signal processing.
- Published
- 2017
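Polyak's stepsize rule, which the entry above adapts to the split feasibility problem, uses a known optimal value f*: x⁺ = x − ((f(x) − f*)/||g||²)·g. The sketch below shows only this basic unconstrained rule, not the level-set SFP algorithm; all names are illustrative.

```python
def polyak_gradient(f, grad, f_star, x0, iters=100):
    """Polyak's stepsize rule for a convex f with known optimal value f*."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        gg = sum(gi * gi for gi in g)
        if gg == 0.0:
            break                      # already stationary
        t = (f(x) - f_star) / gg
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# f(x) = 0.5*||x - c||^2 with c = (1, -2): here f* = 0 and each Polyak
# step exactly halves the distance to the minimizer.
c = [1.0, -2.0]
f = lambda v: 0.5 * sum((vi - ci) ** 2 for vi, ci in zip(v, c))
grad = lambda v: [vi - ci for vi, ci in zip(v, c)]
x = polyak_gradient(f, grad, 0.0, [0.0, 0.0])
```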
40. Parallel extragradient algorithms for multiple set split equilibrium problems in Hilbert spaces
- Author
-
Do Sang Kim and Bui Van Dinh
- Subjects
Mathematical optimization ,021103 operations research ,Applied Mathematics ,Numerical analysis ,0211 other engineering and technologies ,Hilbert space ,Parallel algorithm ,Solution set ,02 engineering and technology ,01 natural sciences ,010101 applied mathematics ,Set (abstract data type) ,symbols.namesake ,Variational inequality ,Theory of computation ,symbols ,Product topology ,0101 mathematics ,Algorithm ,Mathematics - Abstract
In this paper, we introduce an extension of the multiple set split variational inequality problem (Censor et al., Numer. Algor. 59, 301–323, 2012) to the multiple set split equilibrium problem (MSSEP) and propose two new parallel extragradient algorithms for solving the MSSEP when the equilibrium bifunctions are Lipschitz-type continuous and pseudo-monotone with respect to their solution sets. By using the extragradient method combined with cutting techniques, we obtain algorithms for these problems without using any product space. Under certain conditions on the parameters, the iteration sequences generated by the proposed algorithms are proved to be weakly and strongly convergent to a solution of the MSSEP. An application to multiple set split variational inequality problems, a numerical example, and preliminary computational results are also provided.
- Published
- 2017
41. Iterative approaches to solving convex minimization problems and fixed point problems in complete CAT(0) spaces
- Author
-
Kritsada Lerkchaiyaphum and Withun Phuengrattana
- Subjects
Mathematical optimization ,Applied Mathematics ,Numerical analysis ,010102 general mathematics ,Regular polygon ,Fixed point ,01 natural sciences ,010101 applied mathematics ,Set (abstract data type) ,Proximal point ,Theory of computation ,Convex optimization ,Convergence (routing) ,0101 mathematics ,Mathematics - Abstract
In this paper, we propose a new modified proximal point algorithm for finding a common element of the set of common minimizers of a finite family of convex and lower semi-continuous functions and the set of common fixed points of a finite family of nonexpansive mappings in complete CAT(0) spaces, and prove some convergence theorems of the proposed algorithm under suitable conditions. A numerical example is presented to illustrate the proposed method and convergence result. Our results improve and extend the corresponding results existing in the literature.
- Published
- 2017
42. A generalized inexact Uzawa method for stable principal component pursuit problem with nonnegative constraints
- Author
-
Zhanke Yu, Kaizhan Huai, Mingfang Ni, Feng Ma, and Xiang Zheng
- Subjects
Mathematical optimization ,021103 operations research ,Applied Mathematics ,Numerical analysis ,0211 other engineering and technologies ,Image processing ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Matrix (mathematics) ,Compressed sensing ,Ranking ,Convergence (routing) ,Variational inequality ,Theory of computation ,0101 mathematics ,Mathematics - Abstract
The problem of recovering the low-rank and sparse components of a matrix is known as the stable principal component pursuit (SPCP) problem. It has found many applications in compressed sensing, image processing, and web data ranking. This paper proposes a generalized inexact Uzawa method for SPCP with nonnegative constraints. The main advantage of our method is that the resulting subproblems all have closed-form solutions and can be executed in a distributed manner. Global convergence of the method is proved from a variational inequality perspective. Numerical experiments show that our algorithm converges to the optimal solution, like other distributed methods, but with better performance.
- Published
- 2017
43. Updating preconditioners for modified least squares problems
- Author
-
Daniel Guerrero, José Marín, José Luis Verdú Más, and Ken Hayami
- Subjects
Mathematical optimization ,Iterative methods ,Iterative method ,Preconditioner ,Applied Mathematics ,Numerical analysis ,MathematicsofComputing_NUMERICALANALYSIS ,010103 numerical & computational mathematics ,Preconditioners ,01 natural sciences ,Least squares ,010101 applied mathematics ,Set (abstract data type) ,Sparse matrices ,Theory of computation ,Low-rank updates ,Least squares problems ,0101 mathematics ,MATEMATICA APLICADA ,Mathematics ,Cholesky decomposition ,Sparse matrix - Abstract
In this paper, we analyze how to update incomplete Cholesky preconditioners to solve least squares problems using iterative methods when the set of linear relations is updated with some new information, a new variable is added or, contrarily, some information or variable is removed from the set. Our proposed method computes a low-rank update of the preconditioner using a bordering method which is inexpensive compared with the cost of computing a new preconditioner. Moreover, the numerical experiments presented show that this strategy gives, in many cases, a better preconditioner than other choices, including the computation of a new preconditioner from scratch or reusing an existing one. Partially supported by Spanish Grants MTM2014-58159-P and MTM2015-68805-REDT.
- Published
- 2017
44. Adaptive radial basis function interpolation using an error indicator
- Author
-
Yangzhang Zhao, Qi Zhang, and Jeremy Levesley
- Subjects
Mathematical optimization ,Radial basis function network ,Adaptive algorithm ,Applied Mathematics ,Numerical analysis ,010103 numerical & computational mathematics ,Function (mathematics) ,01 natural sciences ,010101 applied mathematics ,Approximation error ,Radial basis function ,0101 mathematics ,Condition number ,Mathematics ,Interpolation - Abstract
In some approximation problems, sampling from the target function can be both expensive and time-consuming. It would be convenient to have a method for indicating where approximation quality is poor, so that generation of new data provides the user with greater accuracy where needed. In this paper, we propose a new adaptive algorithm for radial basis function (RBF) interpolation which aims to assess the local approximation quality, and add or remove points as required to improve the error in the specified region. For Gaussian and multiquadric approximation, we have the flexibility of a shape parameter which we can use to keep the condition number of interpolation matrix at a moderate size. Numerical results for test functions which appear in the literature are given for dimensions 1 and 2, to show that our method performs well. We also give a three-dimensional example from the finance world, since we would like to advertise RBF techniques as useful tools for approximation in the high-dimensional settings one often meets in finance.
- Published
- 2017
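The underlying interpolation step of the entry above, before any adaptivity, is: build the kernel matrix A with A_ij = phi(|x_i − x_j|) for a Gaussian phi with shape parameter eps, solve A c = f, and evaluate the resulting expansion. A minimal one-dimensional sketch with a tiny dense solver (not the paper's adaptive algorithm; all names and the test function are ours):

```python
import math

def solve(A, f):
    """Tiny Gaussian elimination with partial pivoting (dense, small n)."""
    n = len(f)
    M = [row[:] + [fi] for row, fi in zip(A, f)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_interpolant(nodes, values, eps):
    """Gaussian RBF interpolation: solve A c = f with A_ij = phi(|x_i - x_j|),
    phi(r) = exp(-(eps*r)^2); eps is the shape parameter mentioned above."""
    phi = lambda r: math.exp(-((eps * r) ** 2))
    A = [[phi(abs(xi - xj)) for xj in nodes] for xi in nodes]
    coeffs = solve(A, values)
    return lambda x: sum(c * phi(abs(x - xi)) for c, xi in zip(coeffs, nodes))

nodes = [0.0, 0.25, 0.5, 0.75, 1.0]
values = [t * t for t in nodes]       # sample f(x) = x^2
s = rbf_interpolant(nodes, values, eps=2.0)
# By construction, s reproduces the data at the nodes (up to roundoff).
```

The shape parameter trade-off mentioned in the abstract is visible here: shrinking `eps` flattens the Gaussians and inflates the condition number of A, which is why the authors keep it at a moderate size.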
45. An asynchronous direct solver for banded linear systems
- Author
-
Anthony A. Ruffa, James Baglama, and Michael Jandron
- Subjects
Mathematical optimization ,Floating point ,Tridiagonal matrix ,Applied Mathematics ,Numerical analysis ,Linear system ,020206 networking & telecommunications ,010103 numerical & computational mathematics ,02 engineering and technology ,Parallel computing ,Solver ,01 natural sciences ,LU decomposition ,law.invention ,Superposition principle ,law ,0202 electrical engineering, electronic engineering, information engineering ,0101 mathematics ,Row echelon form ,Mathematics - Abstract
Banded linear systems occur frequently in mathematics and physics. However, direct solvers for large systems cannot be performed in parallel without communication. The aim of this paper is to develop a general asymmetric banded solver with a direct approach that scales across many processors efficiently. The key mechanism behind this is that reduction to row-echelon form is not required by the solver. The method requires more floating point calculations than a standard solver such as LU decomposition, but by leveraging multiple processors the overall solution time is reduced. We present a solver using a superposition approach that decomposes the original linear system into q subsystems, where q is the number of superdiagonals. These methods show optimal computational cost when q processors are available, because each subsystem can be solved in parallel asynchronously. This is followed by a q×q dense constraint matrix problem that is solved before a final vectorized superposition is performed. Since reduction to row-echelon form is not required, the method also avoids fill-in. The algorithm is first developed for tridiagonal systems and then extended to arbitrary banded systems. Accuracy and performance are compared with those of existing solvers, and software is provided in the supplementary material.
- Published
- 2017
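For context, the sequential baseline that such parallel banded solvers are measured against is the classical Thomas algorithm for tridiagonal systems: forward elimination followed by back substitution, each step depending on the previous one, which is precisely the serial dependency the paper's superposition approach removes. A minimal sketch of the baseline (not the paper's solver):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with the Thomas algorithm.
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]      # forward elimination
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# -x_{i-1} + 2*x_i - x_{i+1} system whose exact solution is [1, 2, 3, 4, 5]
n = 5
x = thomas([0.0] + [-1.0] * (n - 1), [2.0] * n, [-1.0] * (n - 1) + [0.0],
           [0.0, 0.0, 0.0, 0.0, 6.0])
```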
46. Splitting extragradient-like algorithms for strongly pseudomonotone equilibrium problems
- Author
-
Ngoc Hai Trinh and Ky Anh Pham
- Subjects
Mathematical optimization ,021103 operations research ,Quantitative Biology::Molecular Networks ,Applied Mathematics ,Numerical analysis ,010102 general mathematics ,0211 other engineering and technologies ,02 engineering and technology ,01 natural sciences ,Rate of convergence ,Theory of computation ,Convergence (routing) ,Equilibrium problem ,0101 mathematics ,Algebra over a field ,Noisy data ,Algorithm ,Mathematics - Abstract
In this paper, two splitting extragradient-like algorithms for solving strongly pseudomonotone equilibrium problems given by a sum of two bifunctions are proposed. The convergence of the proposed methods is analyzed and the R-linear rate of convergence under suitable assumptions on bifunctions is established. Moreover, a noisy data case, when a part of the bifunction is contaminated by errors, is studied. Finally, some numerical experiments are given to demonstrate the efficiency of our algorithms.
- Published
- 2016
47. Numerical simulation of a Finite Moment Log Stable model for a European call option
- Author
-
Qianqian Yang, Fawang Liu, Ian Turner, Shanzhen Chen, and Hongmei Zhang
- Subjects
Mathematical optimization ,Partial differential equation ,Computer simulation ,Applied Mathematics ,Numerical analysis ,Fast Fourier transform ,Numerical technique ,010103 numerical & computational mathematics ,01 natural sciences ,010101 applied mathematics ,Rate of convergence ,Theory of computation ,Call option ,0101 mathematics ,Mathematics - Abstract
Compared to the classical Black-Scholes model for pricing options, the Finite Moment Log Stable (FMLS) model can more accurately capture the dynamics of the stock prices including large movements or jumps over small time steps. In this paper, the FMLS model is written as a fractional partial differential equation and we will present a new numerical scheme for solving this model. We construct an implicit numerical scheme with second order accuracy for the FMLS and consider the stability and convergence of the scheme. In order to reduce the storage space and computational cost, we use a fast bi-conjugate gradient stabilized method (FBi-CGSTAB) to solve the discrete scheme. A numerical example is presented to show the efficiency of the numerical method and to demonstrate the order of convergence of the implicit numerical scheme. Finally, as an application, we use the above numerical technique to price a European call option. Furthermore, by comparing the FMLS model with the classical B-S model, the characteristics of the FMLS model are also analyzed.
- Published
- 2016
48. Total variation reconstruction from quadratic measurements
- Author
-
Anastasia Zakharova, Laboratoire de Mathématiques de l'INSA de Rouen Normandie (LMI), Institut national des sciences appliquées Rouen Normandie (INSA Rouen Normandie)
- Subjects
Total variation ,Mathematical optimization ,Applied Mathematics ,Numerical analysis ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020206 networking & telecommunications ,02 engineering and technology ,Variation (game tree) ,Image (mathematics) ,Image recovery ,Nonlinear system ,Quadratic equation ,Theory of computation ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,[MATH]Mathematics [math] ,Phase retrieval ,Algorithm ,Mathematics - Abstract
In this paper, we consider a problem of reconstructing an image from incomplete quadratic measurements by minimizing its total variation. The problem of reconstructing an object from incomplete nonlinear acquisitions arises in many applications, such as astronomical imaging or depth reconstruction. Placing ourselves in a discrete setting, we provide theoretical guarantees for stable and robust image recovery from incomplete noisy quadratic measurements.
- Published
- 2016
49. Variants of the groupwise update strategy for short-recurrence Krylov subspace methods
- Author
-
Kensuke Aihara
- Subjects
Mathematical optimization ,Applied Mathematics ,Rounding ,Numerical analysis ,Linear system ,Recursion (computer science) ,010103 numerical & computational mathematics ,Krylov subspace ,Residual ,01 natural sciences ,010101 applied mathematics ,Convergence (routing) ,Theory of computation ,0101 mathematics ,Algorithm ,Mathematics - Abstract
Krylov subspace methods often use short-recurrences for updating approximations and the corresponding residuals. In the bi-conjugate gradient (Bi-CG) type methods, rounding errors arising from the matrix-vector multiplications used in the recursion formulas influence the convergence speed and the maximum attainable accuracy of the approximate solutions. The strategy of a groupwise update has been proposed for improving the convergence of the Bi-CG type methods in finite-precision arithmetic. In the present paper, we analyze the influence of rounding errors on the convergence properties when using alternative recursion formulas, such as those used in the bi-conjugate residual (Bi-CR) method, which are different from those used in the Bi-CG type methods. We also propose variants of a groupwise update strategy for improving the convergence speed and the accuracy of the approximate solutions. Numerical experiments demonstrate the effectiveness of the proposed method.
- Published
- 2016
50. Small sample statistical condition estimation for the total least squares problem
- Author
-
Yimin Wei, Huaian Diao, and Pengpeng Xie
- Subjects
Mathematical optimization ,Applied Mathematics ,Numerical analysis ,Reliability (computer networking) ,010103 numerical & computational mathematics ,Solver ,01 natural sciences ,Augmented matrix ,010101 applied mathematics ,Singular value decomposition ,Theory of computation ,Applied mathematics ,0101 mathematics ,Total least squares ,Condition number ,Computer Science::Cryptography and Security ,Mathematics - Abstract
In this paper, under the genericity condition, we study the condition estimation of the total least squares (TLS) problem based on small sample condition estimation (SCE), which can be incorporated into the direct solver for the TLS problem via the singular value decomposition (SVD) of the augmented matrix [A, b]. Our proposed condition estimation algorithms are efficient for the small and medium size TLS problem because they utilize the computed SVD of [A, b] during the numerical solution to the TLS problem. Numerical examples illustrate the reliability of the algorithms. Both normwise and componentwise perturbations are considered. Moreover, structured condition estimations are investigated for the structured TLS problem.
- Published
- 2016