195 results for "Derivative-free"
Search Results
2. A modified PRP-type derivative-free projection algorithm for constrained nonlinear equations with applications.
- Author
-
Li, Dandan, Li, Yong, Li, Yuanfei, and Wang, Songhua
- Subjects
IMAGE denoising, SIGNAL denoising, NONLINEAR equations, SEARCH algorithms, ALGORITHMS, ORTHOGONAL matching pursuit - Abstract
In this paper, we propose a novel derivative-free projection algorithm based on the classical Polak–Ribière–Polyak (PRP) method. This algorithm designs a search direction with sufficient descent and trust region properties, integrating an efficient line search approach and projection techniques. Under standard assumptions, the global convergence of the proposed algorithm is established. Numerical experiments demonstrate that the proposed algorithm outperforms two existing algorithms in terms of efficiency and robustness, and is successfully applied to sparse signal recovery and image denoising problems. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
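The ingredients named in result 2 (a PRP-type direction, a derivative-free line search, and a hyperplane projection for monotone equations) can be made concrete with a short sketch. The Python below is an illustrative outline under our own choices of constants and safeguards, not the authors' algorithm; `df_projection_solve` and its restart rule are hypothetical names and simplifications.

```python
import numpy as np

def df_projection_solve(F, x0, sigma=1e-4, rho=0.5, tol=1e-8, max_iter=2000):
    """Sketch of a PRP-type derivative-free hyperplane-projection method
    for monotone F(x) = 0. Constants and the restart safeguard are
    illustrative choices, not taken from the paper."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx                                   # initial, steepest-descent-like direction
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        # derivative-free backtracking: find t with -F(x + t d)^T d >= sigma t ||d||^2
        t = 1.0
        while True:
            z = x + t * d
            Fz = F(z)
            if -(Fz @ d) >= sigma * t * (d @ d) or t < 1e-14:
                break
            t *= rho
        if np.linalg.norm(Fz) <= tol:
            x, Fx = z, Fz
            break
        # project x onto the hyperplane {y : F(z)^T (y - z) = 0}, which
        # separates x from the solution set when F is monotone
        x_new = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
        F_new = F(x_new)
        beta = (F_new @ (F_new - Fx)) / (Fx @ Fx)   # PRP-type parameter
        d = -F_new + beta * d
        if (F_new @ d) > -1e-12 * (d @ d):          # safeguard: keep a descent-like direction
            d = -F_new
        x, Fx = x_new, F_new
    return x
```

For a monotone test system such as F(x) = x^3 + x (componentwise), the iteration drives ||F(x)|| toward zero without any Jacobian information.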
3. A new derivative-free algorithm for convex constrained systems of equations and its application in signal reconstruction.
- Author
-
Xia, Yan, Li, Yuanfei, Wang, Songhua, and Li, Dandan
- Subjects
SIGNAL reconstruction, NONLINEAR systems, COMPUTER simulation, ALGORITHMS - Abstract
Copyright of Journal of Jilin University (Science Edition) / Jilin Daxue Xuebao (Lixue Ban) is the property of Zhongguo Xue shu qi Kan (Guang Pan Ban) Dian zi Za zhi She and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
4. EXTENDED CONVERGENCE OF A DERIVATIVE FREE FOURTH ORDER METHOD FOR EQUATIONS OR SYSTEMS.
- Author
-
Argyros, Ioannis K., Jayaraman, Jayakumar, and John, Jinny Ann
- Subjects
NONLINEAR equations, LINEAR operators, BANACH spaces, EQUATIONS, MATRICES (Mathematics) - Abstract
In many areas of science and engineering, solving equations or systems of equations is really important. Instead of finding exact solutions, which can be very hard or impossible in some cases, people often use iterative methods to attain the desired solutions. This article is dedicated to introducing a highly efficient derivative-free fourth order iterative technique renowned for its exceptional convergence properties. The analysis within this article deals with scrutinizing both local and semi-local convergence characteristics, taking into account the φ, ψ-continuity constraints imposed on the operators that are present in these methods. It's worth noting that the innovative methodology proposed herein isn't confined to specific techniques but has broader applicability, encompassing a wide spectrum of approaches involving the utilization of inverses of linear operators or matrices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
5. An interior-point derivative-free algorithm for nonlinear complementarity problems
- Author
-
Wang, Jueyu, Gu, Chao, and Zhu, Detong
- Published
- 2024
- Full Text
- View/download PDF
6. STOCHASTIC TRUST-REGION AND DIRECT-SEARCH METHODS: A WEAK TAIL BOUND CONDITION AND REDUCED SAMPLE SIZING.
- Author
-
RINALDI, F., VICENTE, L. N., and ZEFFIRO, D.
- Subjects
- *
SAMPLE size (Statistics), *STOCHASTIC convergence, *NONSMOOTH optimization - Abstract
Using tail bounds, we introduce a new probabilistic condition for function estimation in stochastic derivative-free optimization (SDFO) which leads to a reduction in the number of samples and eases algorithmic analyses. Moreover, we develop simple stochastic direct-search and trust-region methods for the optimization of a potentially nonsmooth function whose values can only be estimated via stochastic observations. For trial points to be accepted, these algorithms require the estimated function values to yield a sufficient decrease measured in terms of a power larger than 1 of the algorithmic stepsize. Our new tail bound condition is precisely imposed on the reduction estimate used to achieve such a sufficient decrease. This condition allows us to select the stepsize power used for sufficient decrease in such a way that the number of samples needed per iteration is reduced. In previous works, the number of samples necessary for global convergence at every iteration k of this type of algorithm was O(Δ_k^{-4}), where Δ_k is the stepsize or trust-region radius. However, using the new tail bound condition, and under mild assumptions on the noise, one can prove that such a number of samples is only O(Δ_k^{-2-ε}), where ε > 0 can be made arbitrarily small by selecting the power of the stepsize in the sufficient decrease test arbitrarily close to 1. In the common random number generator setting, a further improvement by a factor of Δ_k^2 can be obtained. The global convergence properties of the stochastic direct-search and trust-region algorithms are established under the new tail bound condition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
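The mechanism described in result 6, accepting a trial point only if the estimated decrease exceeds a power p > 1 of the step size, while the per-iteration sample size grows as the step shrinks, can be illustrated with a toy stochastic direct search. All names and constants below (the delta^-(2+eps) sample-size rule included) are our own illustrative choices on a simplified variant, not the paper's algorithm.

```python
import math
import numpy as np

def stochastic_direct_search(f_noisy, x0, delta0=1.0, p=1.5, c=1e-2,
                             eps=0.5, theta=0.5, delta_min=0.05,
                             max_iter=200, seed=0):
    """Toy stochastic direct search: function values are averaged over a
    sample size tied to the step size delta (roughly delta^-(2+eps));
    a trial step is accepted only on sufficient decrease c * delta**p."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    delta = delta0
    n = x.size
    D = np.vstack([np.eye(n), -np.eye(n)])    # positive spanning set of poll directions

    def estimate(y, N):
        return np.mean([f_noisy(y, rng) for _ in range(N)])

    while delta >= delta_min and max_iter > 0:
        max_iter -= 1
        N = max(1, math.ceil(delta ** -(2 + eps)))   # more samples as the step shrinks
        fx = estimate(x, N)
        improved = False
        for d in D:
            y = x + delta * d
            # sufficient decrease measured by a power p > 1 of the step size
            if fx - estimate(y, N) >= c * delta ** p:
                x, improved = y, True
                break
        if not improved:
            delta *= theta                    # shrink on an unsuccessful poll
    return x
```

On a noisy quadratic, the iterates settle near the minimizer while the sample size only grows once the step size is already small.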
7. Hybrid derivative-free methods for solving system of nonlinear equations.
- Author
-
Waziri, Mohammed Yusuf, Muhammad, Hadiza Usman, Halilu, Abubakar Sani, Ahmed, Kabiru, and Murtala, Salisu
- Subjects
NONLINEAR equations, JACOBIAN matrices, HYBRID systems - Abstract
This work presents a predictor-corrector iterative approach for solving systems of nonlinear equations. The methods are derivative-free, with correction and acceleration parameters obtained by approximating the Jacobian matrix. Using an inexact line search procedure and under appropriate conditions, we prove that the proposed method is globally convergent. We additionally present some numerical results to show the efficiency and effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. A Novel Variation of 16th-Order Iterative Scheme for Models in Blood Stream, Chemical Reactor, and Its Dynamics.
- Author
-
Akram, Saima, Khalid, Hareem, Rasulov, Tulkin, Khalid, Maira, and Rehman, Mutti-Ur
- Abstract
As science and engineering continue to advance in numerous fields, it has become increasingly difficult to find fast and accurate solutions to non-linear problems in the real world. To resolve these issues, more effective and efficient iterative techniques are required. In this paper, we present a weighted four-point sixteenth-order iterative scheme of Steffensen–Ostrowski type. Maple 18, a computer algebra system, was employed to verify that the proposed scheme satisfies the Kung–Traub conjecture and is therefore optimal. The theoretical convergence order of our newly devised scheme was determined to be sixteen. The scheme has applications in numerous fields, such as blood stream dynamics, chemical reactor dynamics, and Kepler's equation, among others. In addition, numerical testing demonstrates that the developed iterative schemes are more effective than the existing optimal four-point iterative schemes in the domain. Thus, the fourth-step Steffensen–Ostrowski-type weighted family is a more efficient addition to the literature and a superior alternative. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
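Steffensen-type families like the one in result 8 build on the classic single-point Steffensen step, which replaces the derivative in Newton's method with a divided difference computed from f itself. A minimal sketch of that base step (the multi-point sixteenth-order construction of the paper is far more elaborate):

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Classic Steffensen iteration: derivative-free and quadratically
    convergent near a simple root; the building block of Steffensen-type
    multi-point families."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        # divided difference f[x, x + f(x)] approximates f'(x)
        g = (f(x + fx) - fx) / fx
        x = x - fx / g
    return x
```

Two function evaluations per step yield second-order convergence, which is optimal in the Kung–Traub sense (order 2^(n-1) for n evaluations).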
9. Hermite least squares optimization: a modification of BOBYQA for optimization with limited derivative information.
- Author
-
Fuhrländer, Mona and Schöps, Sebastian
- Abstract
Derivative-free optimization tackles problems where the derivatives of the objective function are unknown. In practical optimization problems, however, the derivatives of the objective function are often available with respect to some optimization variables and not others. In this work we propose the Hermite least squares optimization method: an optimization method specialized for the case that some partial derivatives of the objective function are available and others are not. The main goal is to reduce the number of objective function calls compared to state-of-the-art derivative-free solvers, while the convergence properties are maintained. The Hermite least squares method is a modification of Powell's derivative-free BOBYQA algorithm. Instead of (underdetermined) interpolation for building the quadratic subproblem in each iteration, the training data is enriched with first and (if possible) second order derivatives, and least squares regression is used. Proofs of global convergence are discussed and numerical results are presented. Further, the applicability is verified for a realistic test case in the context of yield optimization. Numerical tests show that the Hermite least squares approach outperforms classic BOBYQA if half or more of the partial derivatives are available. In addition, it achieves more robustness and thus better performance in the case of noisy objective functions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
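The core idea of result 9, enriching the quadratic-model fit with whichever partial derivatives happen to be available and then solving a least squares problem instead of an interpolation system, can be sketched in 2D. The interface below (`hermite_lsq_quadratic`, with only the x1-partial assumed known) is our own illustration of the fitting step, not the BOBYQA modification itself.

```python
import numpy as np

def hermite_lsq_quadratic(points, fvals, d1vals):
    """Fit m(x) = c + g.x + 0.5 x^T H x in 2D by least squares, using
    function values at all points plus the partial derivative w.r.t. x1
    (assumed here to be the only available derivative)."""
    rows, rhs = [], []
    for (x1, x2), fv, d1 in zip(points, fvals, d1vals):
        # unknowns: [c, g1, g2, h11, h12, h22]
        rows.append([1.0, x1, x2, 0.5 * x1 * x1, x1 * x2, 0.5 * x2 * x2])
        rhs.append(fv)
        # Hermite row for the known partial: dm/dx1 = g1 + h11*x1 + h12*x2
        rows.append([0.0, 1.0, 0.0, x1, x2, 0.0])
        rhs.append(d1)
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return coef
```

With a poised set of sample points, the regression recovers a true quadratic exactly; the derivative rows add information without costing extra objective evaluations.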
10. Constrained multiobjective optimization of expensive black-box functions using a heuristic branch-and-bound approach
- Author
-
Jones, Donald R. and Lovison, Alberto
- Published
- 2024
- Full Text
- View/download PDF
11. A Derivative-Free Optimization Algorithm Combining Line-Search and Trust-Region Techniques.
- Author
-
Xie, Pengcheng and Yuan, Ya-xiang
- Subjects
- *
OPTIMIZATION algorithms, *MATHEMATICAL optimization, *COVARIANCE matrices, *INTERPOLATION algorithms, *INTERPOLATION, *ALGORITHMS - Abstract
The speeding-up and slowing-down (SUSD) direction is a novel direction which is proved to converge to the gradient descent direction under some conditions. The authors propose the derivative-free optimization algorithm SUSD-TR, which combines the SUSD direction based on the covariance matrix of interpolation points with the solution of the trust-region subproblem of the interpolation model function at the current iteration step. They analyze the optimization dynamics and convergence of the algorithm SUSD-TR. Details of the trial step and structure step are given. Numerical results show the algorithm's efficiency, and a comparison indicates that SUSD-TR greatly improves performance over the method that only moves along the SUSD direction. Their algorithm is competitive with state-of-the-art mathematical derivative-free optimization algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. SUPERIORIZATION: THE ASYMMETRIC ROLES OF FEASIBILITY-SEEKING AND OBJECTIVE FUNCTION REDUCTION.
- Author
-
CENSOR, YAIR
- Subjects
METHODOLOGY, ALGORITHMS - Abstract
The superiorization methodology can be thought of as lying conceptually between feasibility-seeking and constrained minimization. It is not trying to solve the full-fledged constrained minimization problem composed from the modeling constraints and the chosen objective function. Rather, the task is to find a feasible point which is "superior" (in a well-defined manner) with respect to the objective function to one returned by a feasibility-seeking only algorithm. We telegraphically review the superiorization methodology and where it stands today and propose a rigorous formulation of its, as yet only partially resolved, guarantee problem. The real-world situation in an application field is commonly represented by constraints defined by the modeling process and the data, obtained from measurements or otherwise dictated by the model-user. The feasibility-seeking problem requires finding a point in the intersection of all constraints without using any objective function to aim at any specific feasible point. At the heart of the superiorization methodology lies the modeler's desire to use an objective function, exogenous to the constraints, in order to seek a feasible solution that will have a lower (not necessarily minimal) objective function value. This aim is less demanding than full-fledged constrained minimization but more demanding than plain feasibility-seeking. Putting emphasis on the need to satisfy the constraints, because they represent the real-world situation, one recognizes the "asymmetric roles of feasibility-seeking and objective function reduction", namely, that fulfilling the constraints is the main task while reduction of the exogenous objective function plays only a secondary role. Two research directions in the superiorization methodology draw on this same general principle: weak superiorization and strong superiorization.
Since its inception in 2007, the superiorization methodology has evolved and gained ground, as can be seen from the, compiled and continuously updated, bibliography at: http://math.haifa.ac.il/yair/bib-superiorization-censor.html. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Reasons for stability in the construction of derivative‐free multistep iterative methods.
- Author
-
Cordero, Alicia, Neta, Beny, and Torregrosa, Juan R.
- Abstract
In this paper, a deep dynamical analysis is made, using tools from multidimensional real discrete dynamics, of some derivative-free iterative methods with memory. All of them have good qualitative properties, but one of them (due to Traub) is shown to have the same behavior as Newton's method on quadratic polynomials. The same techniques are then employed to analyze the performance of several multipoint schemes with memory whose first step is Traub's method but whose construction followed different procedures. Their stability is analyzed, showing which is the best in terms of wideness of basins of convergence or the existence of free critical points that would lead to convergence toward elements other than the desired zeros of the nonlinear function. The best stability properties are thus linked with the best estimations made in the iterative expressions, rather than with their simplicity. These results have been checked by numerical and graphical comparison with many other known methods, with and without memory and with different orders of convergence, showing excellent performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. A NEURAL NETWORK APPROACH FOR HOMOGENIZATION OF MULTISCALE PROBLEMS.
- Author
-
JIHUN HAN and YOONSANG LEE
- Subjects
- *
RANDOM fields, *NONLINEAR equations - Abstract
We propose a neural network-based approach to the homogenization of multiscale problems. The proposed method uses a derivative-free formulation of a training loss, which incorporates Brownian walkers to find the macroscopic description of a multiscale PDE solution. Compared with other network-based approaches for multiscale problems, the proposed method is free from the design of hand-crafted neural network architecture and the cell problem to calculate the homogenization coefficient. The exploration neighborhood of the Brownian walkers affects the overall learning trajectory. We determine the bounds of micro- and macro-time steps that capture the local heterogeneous and global homogeneous solution behaviors, respectively, through a neural network. The bounds imply that the computational cost of the proposed method is independent of the microscale periodic structure for the standard periodic problems. We validate the efficiency and robustness of the proposed method through a suite of linear and nonlinear multiscale problems with periodic and random field coefficients. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Exploration of the Global Minimum and Conical Intersection with Bayesian Optimization.
- Author
-
Somaki R, Inagaki T, and Hatanaka M
- Subjects
- Algorithms, Bayes Theorem, Quantum Theory
- Abstract
Conventional molecular geometry searches on a potential energy surface (PES) utilize energy gradients from quantum chemical calculations. However, replacing energy calculations with noisy quantum computer measurements generates errors in the energies, which makes geometry optimization using the energy gradient difficult. One gradient-free optimization method that can potentially solve this problem is Bayesian optimization (BO). To use BO in geometry search, an acquisition function (AF), which involves an objective variable, must be defined suitably. In this study, we propose a strategy for geometry searches using BO and examine the appropriate AFs to explore two critical structures: the global minimum (GM) on the singlet ground state (S0) and the most stable conical intersection (CI) point between S0 and the singlet excited state. We applied our strategy to two molecules and located the GM and the most stable CI geometries with high accuracy for both molecules. We also succeeded in the geometry searches even when artificial random noise was added to the energies to simulate geometry optimization using noisy quantum computer measurements. (© 2025 The Author(s). Molecular Informatics published by Wiley-VCH GmbH.)
- Published
- 2025
- Full Text
- View/download PDF
16. Extended Multi-Step Jarratt-like Schemes of High Order for Equations and Systems.
- Author
-
Argyros, Ioannis K., Argyros, Chirstopher, Argyros, Michael, Ceballos, Johan, and González, Daniel
- Subjects
- *
BANACH spaces, *EQUATIONS, *MULTIAGENT systems - Abstract
The local convergence analysis of multi-step, high-order Jarratt-like schemes is extended for solving Banach space valued systems of equations using only the first derivative, instead of derivatives up to the ninth as in previous works. Our idea expands the usage of the scheme to cases not considered earlier and can also be utilized in other schemes. Experiments test the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
17. Benchmarking Derivative-Free Global Optimization Methods on Variable Dimension Robotics Problems
- Author
-
Küdela, J., Juříček, M., Parǎk, R., Tzanetos, Alexandros, and Matoušek, R.
- Abstract
Several real-world applications introduce derivative-free optimization problems, called variable dimension problems, where the problem's dimension is not known in advance. Despite their importance, no unified framework for developing, comparing, and benchmarking variable dimension problems exists. The robot arm controlling problem is a variable dimension problem where the number of joints to optimize defines the problem's dimension. For a holistic study of global optimization methods, we studied 14 representative methods from 4 different categories, i.e., (i) local search optimization techniques with random restarts, (ii) state-of-the-art DIRECT-type methods, (iii) established Evolutionary Computation approaches, and (iv) state-of-the-art Evolutionary Computation approaches. To investigate the effect of the problem's dimensionality on the solution, we generated 20 instances of various combinations among the number of predefined and open decision variables, and we performed experiments for various computational budgets. The results attest that the robot arm controlling problem provides a proper benchmark for variable dimension problems. Furthermore, methods incorporating local search techniques have dominant performance for higher dimensionalities of the problem, while state-of-the-art EC methods dominate in the lower dimensionalities.
- Published
- 2024
- Full Text
- View/download PDF
18. Simulation-based inference of single-molecule force spectroscopy
- Author
-
Lars Dingeldein, Pilar Cossio, and Roberto Covino
- Subjects
simulation-based inference, likelihood-free, derivative-free, neural posterior estimation, single-molecule biophysics, force spectroscopy, Computer engineering. Computer hardware, TK7885-7895, Electronic computers. Computer science, QA75.5-76.95 - Abstract
Single-molecule force spectroscopy (smFS) is a powerful approach to studying molecular self-organization. However, the coupling of the molecule with the ever-present experimental device introduces artifacts that complicate the interpretation of these experiments. Performing statistical inference to learn hidden molecular properties is challenging because these measurements produce non-Markovian time series, and even minimal models lead to intractable likelihoods. To overcome these challenges, we developed a computational framework built on novel statistical methods called simulation-based inference (SBI). SBI enabled us to directly estimate the Bayesian posterior, and extract reduced quantitative models from smFS, by encoding a mechanistic model into a simulator in combination with probabilistic deep learning. Using synthetic data, we could systematically disentangle the measurement of hidden molecular properties from experimental artifacts. The integration of physical models with machine-learning density estimation is general, transparent, easy to use, and broadly applicable to other types of biophysical experiments.
- Published
- 2023
- Full Text
- View/download PDF
19. Observability Analysis of a Power System Stochastic Dynamic Model Using a Derivative-Free Approach.
- Author
-
Zheng, Zongsheng, Xu, Yijun, Mili, Lamine, Liu, Zhigang, Korkali, Mert, and Wang, Yuhong
- Subjects
- *
STOCHASTIC systems, *OBSERVABILITY (Control theory), *DYNAMIC models, *DYNAMICAL systems, *STOCHASTIC models, *SYNCHRONOUS generators, *POLYNOMIAL chaos - Abstract
Serving as a prerequisite to power system dynamic state estimation, the observability analysis of a power system dynamic model has recently attracted the attention of many power engineers. However, because this model is typically nonlinear and large-scale, the analysis of its observability is a challenge to the traditional derivative-based methods. Indeed, the linear-approximation-based approach may provide unreliable results while the nonlinear-technique-based approach inevitably faces extremely complicated derivations. Furthermore, because power systems are intrinsically stochastic, the traditional deterministic approaches may lead to inaccurate observability analyses. Facing these challenges, we propose a novel polynomial-chaos-based derivative-free observability analysis approach that not only is free of any linear approximations, but also accounts for the stochasticity of the dynamic model while bringing a low implementation complexity. Furthermore, this approach enables us to quantify the degree of observability of a stochastic model, what conventional deterministic methods cannot do. The excellent performance of the proposed method has been demonstrated by performing extensive simulations using a synchronous generator model with IEEE-DC1A exciter and the TGOV1 turbine governor. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
20. Modified matrix-free methods for solving system of nonlinear equations.
- Author
-
Waziri, Mohammed Yusuf, Muhammad, Hadiza Usman, Halilu, Abubakar Sani, and Ahmed, Kabiru
- Subjects
- *
NONLINEAR equations - Abstract
Some hybrid derivative-free methods for systems of nonlinear equations via acceleration and correction parameters are presented. These methods are based on applying a three-term hybrid iterative scheme to an enhanced matrix-free method. The step length is determined using an inexact line search procedure. Moreover, the proposed methods are proved to be globally convergent under mild conditions. The comprehensive numerical results presented in this work show the efficiency of the proposed methods by comparing them with some existing methods from the previous literature. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
21. OPTIMIZATION OF STOCHASTIC BLACKBOXES WITH ADAPTIVE PRECISION.
- Author
-
ALARIE, STÉPHANE, AUDET, CHARLES, BOUCHET, PIERRE-YVES, and LE DIGABEL, SÉBASTIEN
- Subjects
- *
COMPUTER software execution, *DETERMINISTIC algorithms, *STANDARD deviations - Abstract
In derivative-free and blackbox optimization, the objective function is often evaluated through the execution of a computer program seen as a blackbox. It can be stochastic, in the sense that an additive centered Gaussian random noise is present in the output. Sometimes, the distribution of the noise is tunable, and its standard deviation can be chosen at any execution of the blackbox. A common strategy to deal with such a situation is to define a sequence of standard deviation values monotonically decreasing to zero with the iterations to ensure convergence of algorithms because the noise is asymptotically dismantled. However, in practice a monotonic reduction of the standard deviation monotonically increases the computation time and makes the optimization process long. There is another strategy, which does not force the standard deviation per iteration to monotonically diminish. This work proposes an algorithmic framework adapted from the deterministic Mads algorithm, on which these two strategies can be expressed. Although these strategies are proved to be theoretically equivalent, tests on analytical problems and on an industrial blackbox with non-Gaussian noise distribution are presented to illustrate practical differences. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
22. Derivative-free superiorization: principle and algorithm.
- Author
-
Censor, Yair, Garduño, Edgar, Helou, Elias S., and Herman, Gabor T.
- Subjects
- *
ALGORITHMS, *IMAGE reconstruction algorithms, *PROBLEM solving, *IMAGE reconstruction, *SET functions, *CONSTRAINT algorithms - Abstract
The superiorization methodology is intended to work with input data of constrained minimization problems, that is, a target function and a set of constraints. However, it is based on an antipodal way of thinking to what leads to constrained minimization methods. Instead of adapting unconstrained minimization algorithms to handling constraints, it adapts feasibility-seeking algorithms to reduce (not necessarily minimize) target function values. This is done by inserting target-function-reducing perturbations into a feasibility-seeking algorithm while retaining its feasibility-seeking ability and without paying a high computational price. A superiorized algorithm that employs component-wise target function reduction steps is presented. This enables derivative-free superiorization (DFS), meaning that superiorization can be applied to target functions that have no calculable partial derivatives or subgradients. The numerical behavior of our derivative-free superiorization algorithm is illustrated on a data set generated by simulating a problem of image reconstruction from projections. We present a tool (we call it a proximity-target curve) for deciding which of two iterative methods is "better" for solving a particular problem. The plots of proximity-target curves of our experiments demonstrate the advantage of the proposed derivative-free superiorization algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
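The pattern described in result 22, inserting derivative-free, target-function-reducing perturbations into a feasibility-seeking sweep, can be sketched with componentwise trial steps and convex-set projections. The code below is a minimal illustration under our own choices (the `superiorize` name, the shrinking step schedule, and the accept-if-lower rule are hypothetical simplifications, not the authors' algorithm).

```python
import numpy as np

def superiorize(f, projections, x0, n_iter=100, beta0=1.0, q=0.9):
    """Derivative-free superiorization sketch: perturb componentwise to
    reduce f (no gradients or subgradients needed), then run one
    feasibility-seeking sweep of projections onto the constraint sets."""
    x = np.array(x0, dtype=float)
    beta = beta0
    for _ in range(n_iter):
        # component-wise, derivative-free target function reduction
        for i in range(x.size):
            for step in (beta, -beta):
                trial = x.copy()
                trial[i] += step
                if f(trial) < f(x):       # accept any reducing perturbation
                    x = trial
                    break
        beta *= q                         # summable perturbation sizes
        for P in projections:             # feasibility-seeking sweep
            x = P(x)
    return x
```

Because the projections are applied last, the returned point satisfies the constraints; the perturbations merely steer the sweep toward a feasible point with a lower (not necessarily minimal) target value, matching the asymmetric roles discussed in result 12.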
23. A parameter-free unconstrained reformulation for nonsmooth problems with convex constraints.
- Author
-
Galvan, Giulio, Sciandrone, Marco, and Lucidi, Stefano
- Subjects
NONSMOOTH optimization, CONSTRAINED optimization, MULTICASTING (Computer networks) - Abstract
In the present paper we propose to rewrite a nonsmooth problem subject to convex constraints as an unconstrained problem. We show that this novel formulation shares the same global and local minima with the original constrained problem. Moreover, the reformulation can be solved with standard nonsmooth optimization methods if we are able to make projections onto the feasible sets. Numerical evidence shows that the proposed formulation compares favorably against state-of-the-art approaches. Code can be found at https://github.com/jth3galv/dfppm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
24. A modified Perry-type derivative-free projection method for solving large-scale nonlinear monotone equations.
- Author
-
Koorapetse, M., Kaelo, P., and Kooepile-Reikeletseng, S.
- Subjects
NONLINEAR equations, CONJUGATE gradient methods - Abstract
In this paper, a new modified Perry-type derivative-free projection method for solving large-scale nonlinear monotone equations is presented. The method is developed by combining a modified Perry's conjugate gradient method with the hyperplane projection technique. Global convergence and numerical results of the proposed method are established. Preliminary numerical results show that the proposed method is promising and efficient compared to some existing methods in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
25. FIVE-POINT THIRTY-TWO OPTIMAL ORDER ITERATIVE METHOD FOR SOLVING NON-LINEAR EQUATIONS.
- Author
-
ULLAH, Malik Zaka and AHMAD, Fayyaz
- Subjects
- *
NONLINEAR equations , *LOGICAL prediction , *POLYNOMIALS - Abstract
A five-point derivative-free iterative method of convergence order thirty-two to find simple roots of non-linear equations is constructed. Six function evaluations are performed to achieve the optimal convergence order 2^(6-1) = 32 conjectured by Kung and Traub [1]. A secant approximation to the derivative is computed around the initial guess. High order convergence is attained by constructing polynomials of quotients for functional values. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
26. A derivative-free RMIL conjugate gradient projection method for convex constrained nonlinear monotone equations with applications in compressive sensing.
- Author
-
Koorapetse, M., Kaelo, P., Lekoko, S., and Diphofu, T.
- Subjects
- *
NONLINEAR equations, *CONJUGATE gradient methods, *ORTHOGONAL matching pursuit - Abstract
In this paper, a derivative-free RMIL conjugate gradient projection method for solving large-scale nonlinear monotone equations with convex constraints is proposed. The proposed method is a modification of an RMIL conjugate gradient method combined with projection techniques. We establish its global convergence under appropriate conditions. The method is then compared with other existing methods in the literature and the numerical results indicate that the method is efficient. Furthermore, the proposed method is used to recover a sparse signal from incomplete and contaminated sampling measurements and the results are promising. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
27. A globally convergent projection method for a system of nonlinear monotone equations.
- Author
-
Kaelo, P. and Koorapetse, M.
- Subjects
- *
NONLINEAR equations, *CONJUGATE gradient methods, *MATHEMATICS - Abstract
In this paper, a new conjugate gradient-based projection method for solving a system of nonlinear monotone equations is proposed. The method can be viewed as an extension of a family of conjugate gradient methods for unconstrained optimization by Li et al. [A new family of conjugate gradient methods for unconstrained optimization, J. Appl. Math. Comput. 58 (2018), pp. 219–234]. The proposed method is derivative-free which makes it suitable for large-scale nonlinear monotone equations. We show that the method satisfies the descent condition independent of line searches and that the method is globally convergent. Numerical results indicate that the proposed method is efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
28. A splitting method to solve a single nonlinear equation with derivative-free iterative schemes.
- Author
-
Liu, Chein-Shan, Hong, Hong-Ki, and Lee, Tsung-Lin
- Subjects
- *
AFFINE transformations, *CAHN-Hilliard-Cook equation - Abstract
In the paper, we convert a single nonlinear equation to a system consisting of two equations. While a quasi-linear term is added to the first equation, the nonlinear term in the second equation is decomposed between the two sides through a weight parameter. After performing a linearization, an iterative scheme is derived, which is proven to have third-order convergence for certain parameters. An affine quasi-linear transformation in the plane is established, and the condition on the spectral radius being smaller than one for the convergence of the iterative scheme is derived. By using the splitting method, we can further identify a sufficient condition for the convergence of the iterative scheme. Then, we develop a step-wise quasi-linear transformation technique to solve nonlinear equations. Proper values of the parameters are qualified by the derived inequalities for both iterative schemes, which accelerate the convergence speed. The performances of the proposed iterative schemes are assessed by numerical tests, whose advantages are fast convergence, fewer function evaluations per iteration, and no need for the derivative of the given function. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
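The spectral-radius condition mentioned in the abstract is the standard local convergence criterion for any fixed-point iteration. A generic scalar illustration of that criterion (not the paper's quasi-linear scheme) is:

```python
import math

def fixed_point_iterate(g, x0, tol=1e-12, max_iter=200):
    """Fixed-point iteration x_{k+1} = g(x_k).  It converges locally when
    |g'(x*)| < 1 (for a map in the plane: spectral radius of the Jacobian
    below one), which is the kind of condition the paper derives."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# x = cos(x): at the fixed point |g'(x*)| = |sin(x*)| ~ 0.674 < 1, so it converges
root = fixed_point_iterate(math.cos, 1.0)
```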
29. The DIRECT algorithm: 25 years Later.
- Author
-
Jones, Donald R. and Martins, Joaquim R. R. A.
- Subjects
GLOBAL optimization ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
Introduced in 1993, the DIRECT global optimization algorithm provided a fresh approach to minimizing a black-box function subject to lower and upper bounds on the variables. In contrast to the plethora of nature-inspired heuristics, DIRECT was deterministic and had only one hyperparameter (the desired accuracy). Moreover, the algorithm was simple, easy to implement, and usually performed well on low-dimensional problems (up to six variables). Most importantly, DIRECT balanced local and global search (exploitation vs. exploration) in a unique way: in each iteration, several points were sampled, some for global and some for local search. This approach eliminated the need for "tuning parameters" that set the balance between local and global search. However, the very same features that made DIRECT simple and conceptually attractive also created weaknesses. For example, it was commonly observed that, while DIRECT is often fast to find the basin of the global optimum, it can be slow to fine-tune the solution to high accuracy. In this paper, we identify several such weaknesses and survey the work of various researchers to extend DIRECT so that it performs better. All of the extensions show substantial improvement over DIRECT on various test functions. An outstanding challenge is to improve performance robustly across problems of different degrees of difficulty, ranging from simple (unimodal, few variables) to very hard (multimodal, sharply peaked, many variables). Opportunities for further improvement may lie in combining the best features of the different extensions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
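A toy one-dimensional caricature of DIRECT's exploitation/exploration balance can be written in a few lines. Note the real algorithm selects all potentially optimal hyperrectangles via a lower convex hull in the (size, value) plane; this sketch merely trisects the best cell (local search) and the widest cell (global search) in each iteration.

```python
def mini_direct_1d(f, lo, hi, iters=30):
    """Toy 1-D caricature of DIRECT: keep a list of intervals sampled at
    their centers; each iteration trisect both the interval with the best
    center value (local) and the widest interval (global).  A hedged
    sketch only; real DIRECT selects ALL potentially optimal intervals."""
    mid = 0.5 * (lo + hi)
    cells = [(lo, hi, mid, f(mid))]          # (left, right, center, f(center))
    for _ in range(iters):
        best = min(cells, key=lambda c: c[3])
        widest = max(cells, key=lambda c: c[1] - c[0])
        for cell in {best, widest}:          # the two may coincide
            cells.remove(cell)
            a, b = cell[0], cell[1]
            w = (b - a) / 3.0
            for k in range(3):               # trisect and sample the new centers
                ca, cb = a + k * w, a + (k + 1) * w
                cm = 0.5 * (ca + cb)
                cells.append((ca, cb, cm, f(cm)))
    return min(cells, key=lambda c: c[3])[2:]  # (x_best, f_best)

x_best, f_best = mini_direct_1d(lambda x: (x - 1.234) ** 2, 0.0, 5.0)
```

Even this caricature shows the behavior the survey discusses: the basin of the optimum is located quickly, while fine-tuning proceeds only at the fixed rate of trisection.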
30. Algorithm 1007: QNSTOP—Quasi-Newton Algorithm for Stochastic Optimization.
- Author
-
Amos, Brandon D., Easterling, David R., Watson, Layne T., Thacker, William I., Castle, Brent S., and Trosset, Michael W.
- Subjects
- *
MATHEMATICAL optimization , *SYSTEMS biology , *COMPUTATIONAL biology , *FORTRAN , *ALGORITHMS , *GLOBAL optimization - Abstract
QNSTOP consists of serial and parallel (OpenMP) Fortran 2003 codes for the quasi-Newton stochastic optimization method of Castle and Trosset for stochastic search problems. A complete description of QNSTOP for both local search with stochastic objective and global search with "noisy" deterministic objective is given here, to the best of our knowledge, for the first time. For stochastic search problems, some convergence theory exists for particular algorithmic choices and parameter values. Both the parallel driver subroutine, which offers several parallel decomposition strategies, and the serial driver subroutine can be used for local stochastic search or global deterministic search, based on an input switch. Some performance data for computational systems biology problems is given. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
31. A Derivative-Free Line-Search Algorithm for Simulation-Driven Design Optimization Using Multi-Fidelity Computations
- Author
-
Riccardo Pellegrini, Andrea Serani, Giampaolo Liuzzi, Francesco Rinaldi, Stefano Lucidi, and Matteo Diez
- Subjects
derivative-free ,line search ,multi-fidelity ,simulation-driven design optimization ,Mathematics ,QA1-939 - Abstract
The paper presents a multi-fidelity extension of a local line-search-based derivative-free algorithm for nonsmooth constrained optimization (MF-CS-DFN). The method is intended for use in the simulation-driven design optimization (SDDO) context, where multi-fidelity computations are used to evaluate the objective function. The proposed algorithm starts using low-fidelity evaluations and automatically switches to higher-fidelity evaluations based on the line-search step length. The multi-fidelity algorithm is driven by a suitably defined threshold and initialization values for the step length, which are associated to each fidelity level. These are selected to increase the accuracy of the objective evaluations while progressing to the optimal solution. The method is demonstrated for a multi-fidelity SDDO benchmark, namely pertaining to the hull-form optimization of a destroyer-type vessel, aiming at resistance minimization in calm water at fixed speed. Numerical simulations are based on a linear potential flow solver, where seven fidelity levels are used selecting systematically refined computational grids for the hull and the free surface. The method performance is assessed varying the steplength threshold and initialization approach. Specifically, four MF-CS-DFN setups are tested, and the optimization results are compared to its single-fidelity (high-fidelity-based) counterpart (CS-DFN). The MF-CS-DFN results are promising, achieving a resistance reduction of about 12% and showing a faster convergence than CS-DFN. Specifically, the MF extension is between one and two orders of magnitude faster than the original single-fidelity algorithm. For low computational budgets, MF-CS-DFN optimized designs exhibit a resistance that is about 6% lower than that achieved by CS-DFN.
- Published
- 2022
- Full Text
- View/download PDF
32. Convergence Rate Evaluation of Derivative-Free Optimization Techniques
- Author
-
Lux, Thomas, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Pardalos, Panos M., editor, Conca, Piero, editor, Giuffrida, Giovanni, editor, and Nicosia, Giuseppe, editor
- Published
- 2016
- Full Text
- View/download PDF
33. A Note on Traub’s Method for Systems of Nonlinear Equations
- Author
-
Beny Neta
- Subjects
nonlinear systems ,iterative methods ,derivative-free ,efficiency ,stability ,Mathematics ,QA1-939 - Abstract
Traub’s method was extended here to systems of nonlinear equations and compared to Steffensen’s method. Even though Traub’s method is only of order 1.839 rather than quadratic, it performed better in the 10 examples tested.
- Published
- 2021
- Full Text
- View/download PDF
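Steffensen's method for systems, the point of comparison in this note, replaces the Jacobian in Newton's step with a divided-difference operator built from the step h = F(x). A textbook sketch (not Neta's extension of Traub's method):

```python
import numpy as np

def steffensen_system(F, x0, tol=1e-10, max_iter=50):
    """Steffensen's method for systems: a Newton-like step in which the
    Jacobian is replaced by a first-order divided-difference operator
    built from h = F(x), so the scheme is derivative-free."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        # divided-difference approximation of the Jacobian, column by column
        A = np.empty((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = Fx[j] if Fx[j] != 0 else tol   # step size from the residual
            A[:, j] = (F(x + e) - Fx) / e[j]
        x = x - np.linalg.solve(A, Fx)
    return x

# intersect the unit circle with the line x = y; solution (1/sqrt(2), 1/sqrt(2))
root = steffensen_system(lambda x: np.array([x[0]**2 + x[1]**2 - 1, x[0] - x[1]]),
                         np.array([0.8, 0.6]))
```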
34. Memorizing Schröder’s Method as an Efficient Strategy for Estimating Roots of Unknown Multiplicity
- Author
-
Alicia Cordero, Beny Neta, and Juan R. Torregrosa
- Subjects
nonlinear equations ,iterative methods with memory ,multiple roots ,derivative-free ,efficiency ,stability ,Mathematics ,QA1-939 - Abstract
In this paper, we propose, to the best of our knowledge, the first iterative scheme with memory in the literature for finding roots whose multiplicity is unknown. It improves the efficiency of a similar procedure without memory due to Schröder and can be considered as a seed to generate higher-order methods with similar characteristics. Once its order of convergence is studied, its stability is analyzed, showing its good properties, and it is compared numerically, in terms of basins of attraction, with similar schemes without memory for finding multiple roots.
- Published
- 2021
- Full Text
- View/download PDF
35. Derivative-free Equilibrium Seeking in Multi-Agent Systems
- Author
-
Krilašević, S.
- Abstract
Both societal and engineering systems are growing in complexity and interconnectivity, making it increasingly challenging, and sometimes impossible, to model their dynamics and behaviors. Moreover, individuals or entities within these systems, often referred to as agents, have their own objectives that may conflict with one another. Examples include various economic systems where agents compete for profit, wind farms where upwind turbines reduce the energy extraction of downwind turbines, unwanted perturbation minimization in extremum seeking control, and cooperative source-seeking robotic vehicles. Despite having access to only limited observable information, it is crucial to ensure that all participants are content with the outcomes of these interactions. In this thesis, we choose to examine these problems within the framework of games, where each agent has their own cost function and constraints, and all costs and constraints are interconnected. Since the notion of optimum in multi-agent problems is difficult to define, we often seek to find a Nash equilibrium, i.e., a set of decisions from which no agent has an incentive to deviate. This thesis primarily explores the development of Nash equilibrium seeking algorithms for scenarios where agents' cost functions are unknown and can only be assessed through measurements of a dynamical system's output, referred to as the zeroth-order (derivative-free) information case. We specifically concentrate on scenarios where partial derivatives can be estimated from these measurements and subsequently integrated into a full-information algorithm. Existing approaches exhibit significant drawbacks, such as the inability to handle shared constraints, stringent assumptions on the cost functions, and applicability limited to agents with continuous dynamics.
- Published
- 2023
36. A new three-term conjugate gradient-based projection method for solving large-scale nonlinear monotone equations
- Author
-
Mompati Koorapetse and Professor Kaelo
- Subjects
nonlinear monotone equations ,derivative-free ,global convergence ,Mathematics ,QA1-939 - Abstract
A new three-term conjugate gradient-based projection method is presented in this paper for solving large-scale nonlinear monotone equations. This method is derivative-free and it is suitable for solving large-scale nonlinear monotone equations due to its lower storage requirements. The method satisfies the sufficient descent condition F_k^T d_k ≤ -τ∥F_k∥², where τ > 0 is a constant, and its global convergence is also established. Numerical results show that the method is efficient and promising.
- Published
- 2019
- Full Text
- View/download PDF
37. A New Three-Term Conjugate Gradient-Based Projection Method for Solving Large-Scale Nonlinear Monotone Equations.
- Author
-
Koorapetse, Mompati and Kaelo
- Subjects
NONLINEAR equations ,CONJUGATE gradient methods - Abstract
A new three-term conjugate gradient-based projection method is presented in this paper for solving large-scale nonlinear monotone equations. This method is derivative-free and it is suitable for solving large-scale nonlinear monotone equations due to its lower storage requirements. The method satisfies the sufficient descent condition F_k^T d_k ≤ -τ∥F_k∥², where τ > 0 is a constant, and its global convergence is also established. Numerical results show that the method is efficient and promising. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
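A sufficient descent condition of this form can hold by construction. A common three-term trick (used in Zhang–Zhou–Li-type methods; the scalar beta below is an arbitrary stand-in for the authors' formula) appends a third term that cancels the F_k-component contributed by the previous direction, so F_k^T d_k = -∥F_k∥² identically, i.e. the condition holds with τ = 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def three_term_direction(Fk, d_prev, beta):
    """d_k = -F_k + beta*d_{k-1} - beta*(F_k^T d_{k-1}/||F_k||^2)*F_k.
    The third term cancels the middle term's component along F_k, so
    F_k^T d_k = -||F_k||^2 for ANY beta; an illustration of the descent
    condition in the abstract, not the authors' exact beta."""
    FtF = Fk @ Fk
    return -Fk + beta * d_prev - beta * (Fk @ d_prev) / FtF * Fk

# numerical check of F_k^T d_k = -||F_k||^2 on random data
for _ in range(100):
    Fk = rng.standard_normal(20)
    d_prev = rng.standard_normal(20)
    beta = rng.standard_normal()
    d = three_term_direction(Fk, d_prev, beta)
    assert abs(Fk @ d + Fk @ Fk) < 1e-8 * (1 + Fk @ Fk)
```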
38. Stability and applicability of iterative methods with memory.
- Author
-
Chicharro, Francisco I., Cordero, Alicia, Garrido, Neus, and Torregrosa, Juan R.
- Subjects
- *
NONLINEAR equations , *MEMORY , *AMMONIA - Abstract
Based on the third-order Traub's method, two iterative schemes with memory are introduced. The proper inclusion of accelerating parameters allows the introduction of memory. Therefore, the order of convergence of the iterative methods increases from 3 up to 3.73 without new functional evaluations. One of them includes derivatives and the other one is derivative-free. The stability of the methods with memory is analyzed and their basins of attraction are compared to check the differences between them. The methods are applied to solve two nonlinear problems in Chemistry, such as the fractional conversion of the nitrogen-hydrogen feed that gets converted to ammonia and the Colebrook–White equation. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
39. Derivative-free superiorization with component-wise perturbations.
- Author
-
Censor, Yair, Heaton, Howard, and Schulte, Reinhard
- Subjects
- *
COMPUTED tomography , *IMAGE reconstruction - Abstract
Superiorization reduces, not necessarily minimizes, the value of a target function while seeking constraints compatibility. This is done by taking a solely feasibility-seeking algorithm, analyzing its perturbation resilience, and proactively perturbing its iterates accordingly to steer them toward a feasible point with reduced value of the target function. When the perturbation steps are computationally efficient, this enables generation of a superior result with essentially the same computational cost as that of the original feasibility-seeking algorithm. In this work, we refine previous formulations of the superiorization method to create a more general framework, enabling target function reduction steps that do not require partial derivatives of the target function. In perturbations that use partial derivatives, the step-sizes in the perturbation phase of the superiorization method are chosen independently from the choice of the nonascent directions. This is no longer true when component-wise perturbations are employed. In that case, the step-sizes must be linked to the choice of the nonascent direction in every step. Besides presenting and validating these notions, we give a computational demonstration of superiorization with component-wise perturbations for a problem of computerized tomography image reconstruction. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
40. Rapidly convergent Steffensen-based methods for unconstrained optimization.
- Author
-
Afzalinejad, Mohammad
- Subjects
NEWTON-Raphson method ,CONJUGATE gradient methods - Abstract
A problem with rapidly convergent methods for unconstrained optimization, like Newton's method, is the computational difficulty arising especially from the second derivative. In this paper, a class of methods for solving unconstrained optimization problems is proposed which implicitly applies approximations to derivatives. This class of methods is based on a modified Steffensen method for finding roots of a function and builds a quadratic model for the function without using the second derivative. Two methods of this kind with inexpensive computations are proposed which use only the first derivative of the function. Derivative-free versions of these methods are also suggested for the cases where the gradient formulas are not available or difficult to evaluate. The theory as well as numerical examinations confirm the rapid convergence of this class of methods. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
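In one dimension, the idea behind the derivative-free variants (approximate the gradient by differences and drive it to zero with a Steffensen step, so that neither f' nor f'' is coded analytically) can be sketched as follows. This is a hypothetical minimal version in the spirit of the paper, not the paper's methods.

```python
def minimize_steffensen_1d(f, x0, h=1e-6, tol=1e-8, max_iter=100):
    """Derivative-free 1-D minimizer: estimate g = f' by central
    differences, then drive g to zero with a Steffensen step."""
    def g(x):                          # central-difference gradient
        return (f(x + h) - f(x - h)) / (2 * h)
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            break
        denom = g(x + gx) - gx         # Steffensen surrogate for g'(x)*gx
        if denom == 0:
            break
        x -= gx * gx / denom           # Steffensen step toward g(x) = 0
    return x

# quadratic with minimizer at x = 3; the scheme lands on it in one step
xmin = minimize_steffensen_1d(lambda x: (x - 3.0) ** 2 + 1.0, x0=0.0)
```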
41. Multilocal Programming: A Derivative-Free Filter Multistart Algorithm
- Author
-
Fernandes, Florbela P., Costa, M. Fernanda P., Fernandes, Edite M. G. P., Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Murgante, Beniamino, editor, Misra, Sanjay, editor, Carlini, Maurizio, editor, Torre, Carmelo M., editor, Nguyen, Hong-Quang, editor, Taniar, David, editor, Apduhan, Bernady O., editor, and Gervasi, Osvaldo, editor
- Published
- 2013
- Full Text
- View/download PDF
42. Dynamics and Fractal Dimension of Steffensen-Type Methods
- Author
-
Francisco I. Chicharro, Alicia Cordero, and Juan R. Torregrosa
- Subjects
nonlinear equation ,derivative-free ,dynamical plane ,fractal dimension ,Padé-like approximant ,Industrial engineering. Management engineering ,T55.4-60.8 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
In this paper, the dynamical behavior of different optimal iterative schemes of increasing order for solving nonlinear equations is studied. The tendency of the complexity of the Julia set is analyzed and related to the fractal dimension. In fact, this fractal dimension can be shown to be a powerful tool to compare iterative schemes that estimate the solution of a nonlinear equation. Based on the box-counting algorithm, several derivative-free iterative methods of different convergence orders are compared.
- Published
- 2015
- Full Text
- View/download PDF
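The box-counting algorithm referenced in the abstract estimates the fractal dimension as the slope of log N(s) versus log(1/s), where N(s) is the number of boxes of side s occupied by the set. A minimal sketch, validated on a point set of known dimension 1:

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the fractal dimension of a 2-D point set by box counting:
    count occupied grid cells N(s) at each box size s, then fit the slope
    of log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        cells = {tuple(np.floor(p / s).astype(int)) for p in points}
        counts.append(len(cells))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# sanity check: points densely sampled on a straight segment have dimension ~1
t = np.linspace(0.0, 1.0, 20000)
pts = np.column_stack([t, 0.5 * t])
dim = box_counting_dimension(pts, sizes=[0.1, 0.05, 0.02, 0.01, 0.005])
```

Applied to the boundary of a Julia set sampled from a dynamical plane, the same count gives the complexity measure the paper uses to compare iterative schemes.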
43. A Derivative-Free Hybrid Optimization Model for Short-Term Operation of a Multi-Objective Reservoir System Under Uncertainty.
- Author
-
Chen, Duan, Leon, Arturo S., Chen, Qiuwen, and Li, Ruonan
- Subjects
RESERVOIRS ,UNCERTAINTY ,HYDRAULIC engineering ,POPULATION-based case control - Abstract
Short-term operation of a multi-objective reservoir system under inflow uncertainty has been receiving increasing attention; however, major challenges for the optimization of this system remain due to the multiple and often conflicting objectives, highly nonlinear constraints and uncertain parameters for which derivative information may not be directly available. Population-based optimization methods do not rely on derivatives but generally have slow convergence. This study presents a hybrid optimization model for short-term operation of multi-objective reservoirs under uncertainty that is derivative-free and has a relatively fast convergence. The model incorporates a local improvement method called Mesh Adaptive Direct Search (MADS) into the population-based method NSGA-II and has no requirement for differentiability, convexity or continuity of the optimization problem. The operation of a multi-objective and multi-reservoir system on the Columbia River under inflow uncertainty is used as a case study. Overall, the hybrid model outperforms optimization models based on either the NSGA-II only or the MADS only. The model is intended for conditions where derivative information of the optimization problem is unavailable, which could have a wide array of applications in water resources systems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
44. Derivative-free Nonlinear Version of Extended Recursive Three-step Filter for State and Parameter Estimation during Mars Entry.
- Author
-
Xiao, Mengli, Zhang, Yongbo, Fu, Huimin, and Wang, Zhihua
- Subjects
- *
KALMAN filtering , *MARS (Planet) , *ROVING vehicles (Astronautics) , *MARTIAN exploration , *NAVIGATION (Astronautics) - Abstract
Parameter uncertainties which may lead to divergence of traditional Kalman filters during Mars entry are investigated in this paper. To achieve high precision navigation, a Derivative-free Nonlinear version of an Extended Recursive Three-Step Filter (DNERTSF) is introduced, which suits nonlinear systems with arbitrary parameter uncertainties. A DNERTSF can estimate the state and the parameters simultaneously, and Jacobian and Hessian calculations are not necessary for this filter. Considering the uncertainties in atmosphere density, ballistic coefficient and lift-to-drag ratio, a numerical simulation of Mars entry navigation is carried out. Compared with the standard Unscented Kalman Filter (UKF), DNERTSF can effectively reduce the adverse effects of parameter uncertainties and achieve a high navigation accuracy performance, keeping position and velocity estimation errors at a very low level. In all, the DNERTSF in this paper shows good advantages for Mars entry navigation, providing a possible application for a future Mars pinpoint landing. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
45. New corrective scheme for DF-SPH
- Author
-
Imin, Rahmatjan, Wei, Ye, and Iminjan, Ahmatjan
- Published
- 2020
- Full Text
- View/download PDF
Construction of parametric derivative-free iterative methods with memory and their dynamical systems.
- Author
-
王 婷 and 唐 烁
- Abstract
Following the usual practice of transforming two-step iterative methods with derivatives into derivative-free schemes, a more general two-step derivative-free iterative method was proposed. For this method the optimal order of convergence was ensured by the weight value. By means of the self-accelerating parameter and the Newton interpolation polynomial, two-parameter and three-parameter iterative schemes with memory were obtained. Some of the existing two- and three-parameter iterative methods with memory were compared with the proposed method. The basins of attraction of several schemes were presented, and the performances of the iterative schemes were compared. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
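The simplest member of this family is Steffensen's method with one self-accelerating parameter: w = x + gamma*f(x), a divided-difference step, and gamma updated from the previous divided difference, which raises the order from 2 to about 1 + sqrt(2) ≈ 2.414 with no extra function evaluations. The paper's 2- and 3-parameter schemes extend this idea using Newton interpolation polynomials; the one-parameter sketch below is the standard construction, not the paper's method.

```python
def steffensen_with_memory(f, x0, gamma0=0.01, tol=1e-10, max_iter=50):
    """Steffensen-type iteration with one self-accelerating parameter:
    w = x + gamma*f(x), x_new = x - f(x)/f[x, w], and the memory update
    gamma = -1/f[x, w] taken from the previous step."""
    x, gamma = x0, gamma0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx
        if w == x:                     # step underflowed; x is converged
            break
        dd = (f(w) - fx) / (w - x)     # divided difference f[x, w]
        if dd == 0:
            break
        x -= fx / dd
        gamma = -1.0 / dd              # memory: self-accelerating parameter
    return x

# classic test equation x^3 - 2x - 5 = 0, real root near 2.0945515
root = steffensen_with_memory(lambda x: x**3 - 2*x - 5, 2.0)
```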
47. Obstacle avoidance extremum seeking control based on constrained derivative-free optimization.
- Author
-
Chantsalnyam, Tuvshinbayar, Park, Jong, Ham, Woon, Han, Chan, and Chong, Kil
- Abstract
A new control scheme based on extremum seeking control (ESC), which employs a constrained derivative-free optimization algorithm, is proposed in this paper. A theorem is formulated to prove the convergence of ESC based on constrained derivative-free optimization. A generalized pattern search (GPS) method with a filter algorithm for constraints is used to generate the sequence of ESC control states. Since the GPS method does not require continuous differentiability or Lipschitz conditions, a noise cancellation algorithm is added to the proposed ESC algorithm, which is then used for a multi-agent robot system. The obstacles are expressed as constraint functions instead of the traditional way of calculating a performance function of the obstacles. Simulation results illustrate a multi-agent obstacle avoidance system which utilizes the control algorithm to avoid obstacles appearing on the path of the multi-agent robots. Based on the simulation results, it can be observed that the multi-agents maintain their formation as per the initial condition and follow the target without colliding with obstacles while navigating in a noisy environment. Performance comparison of the proposed algorithm with a reference algorithm shows the efficiency of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
48. The mine blast algorithm for the structural optimization of electrical vehicle components
- Author
-
Yıldız, Betül Sultan (Bursa Uludağ Üniversitesi, Mühendislik Fakültesi, Makine Mühendisliği)
- Subjects
Gravitational search ,Derivative-free ,Computer science ,Shape optimization problem ,Design optimization ,Automotive industry ,Structural optimization ,Automotive engineering ,Automotive component ,Shape optimization ,Automotive technology ,Optimal machining parameters ,Water cycle algorithm ,General Materials Science ,Control arm ,Symbiotic organisms search ,Metaheuristic ,Multiobjective optimization ,Mine blast algorithms ,Particle swarm optimization ,Cutting process ,Chatter ,Turning ,Mechanical Engineering ,Finite element analysis ,Mechanical components ,Topology design ,Vehicles ,Materials science ,Finite element method ,Product design ,Electrical vehicles ,Mechanics of Materials ,Differential evolution ,Mine blast algorithm ,Charged system search - Abstract
The shape optimization of mechanical and automotive components plays a crucial role in the development of automotive technology. Presently, the use of derivative-free metaheuristics in combination with finite element analysis for mechanical component design is one of the most active research topics due to its simplicity and effectiveness. In this research paper, the mine blast algorithm (MBA) is used to solve the problem of shape optimization for a vehicle door hinge to prove how the MBA can be used for solving shape optimization problems in designing electrical vehicles. The results show the advantage of the MBA for designing optimal components in the automotive industry.
- Published
- 2020
- Full Text
- View/download PDF
49. An inexact restoration derivative-free filter method for nonlinear programming.
- Author
-
Echebest, N., Schuverdt, M., and Vignau, R.
- Subjects
DERIVATIVES (Mathematics) ,FILTERS (Mathematics) ,NONLINEAR systems ,NONLINEAR programming ,ITERATIVE methods (Mathematics) ,MATHEMATICAL functions - Abstract
An inexact restoration derivative-free filter method for nonlinear programming is introduced in this paper. Each iteration is composed of a restoration phase, which reduces a measure of infeasibility, and an optimization phase, which reduces the objective function. The restoration phase is solved using a derivative-free method for solving underdetermined nonlinear systems with bound constraints, developed previously by the authors. An alternative for solving the optimization phase is considered. Theoretical convergence results and some preliminary numerical experiments are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
50. Delaunay-based derivative-free optimization via global surrogates, part II: convex constraints.
- Author
-
Beyhaghi, Pooriya and Bewley, Thomas
- Subjects
MATHEMATICAL optimization ,TRIANGULATION ,INTERPOLATION algorithms ,NONCONVEX programming ,NUMERICAL analysis - Abstract
The derivative-free global optimization algorithms developed in Part I of this study, for linearly constrained problems, are extended to nonconvex n-dimensional problems with convex constraints. The initial $$n+1$$ feasible datapoints are chosen in such a way that the volume of the simplex generated by these datapoints is maximized; as the algorithm proceeds, additional feasible datapoints are added in such a way that the convex hull of the available datapoints efficiently increases towards the boundaries of the feasible domain. Similar to the algorithms developed in Part I of this study, at each step of the algorithm, a search function is defined based on an interpolating function which passes through all available datapoints and a synthetic uncertainty function which characterizes the distance to the nearest datapoints. This uncertainty function, in turn, is built on the framework of a Delaunay triangulation, which is itself based on all available datapoints together with the (infeasible) vertices of an exterior simplex which completely contains the feasible domain. The search function is minimized within those simplices of this Delaunay triangulation that do not include the vertices of the exterior simplex. If the outcome of this minimization is contained within the circumsphere of a simplex which includes a vertex of the exterior simplex, this new point is projected out to the boundary of the feasible domain. For problems in which the feasible domain includes edges (due to the intersection of multiple twice-differentiable constraints), a modified search function is considered in the vicinity of these edges to assure convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF