69 results
Search Results
2. Mixed-level column augmented uniform designs.
- Author
-
Yang, Feng, Zhou, Yong-Dao, and Zhang, Aijun
- Subjects
- *
EXPERIMENTAL design , *INDUSTRIAL applications , *ALGORITHMS - Abstract
In many industrial applications, follow-up experimental designs are often used to explore the relationship between inputs (factors) and outputs (responses) step by step. Some extra factors with two or three levels may be added in the follow-up stage since they are quite important but may be neglected in the initial stage. In this paper, mixed-level column augmented uniform designs are proposed under the uniformity criterion, the wrap-around L2-discrepancy (WD). The multi-stage augmented procedure is also investigated. We present the analytical expressions and the corresponding lower bounds on the WD of the column augmented designs. It is shown that the column augmented uniform designs are also the optimal designs under the non-orthogonality criterion E(f_NOD). Furthermore, a construction algorithm for column augmented uniform designs is provided. Some examples show that the lower bounds are tight and the construction algorithm is effective. [ABSTRACT FROM AUTHOR]
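As a concrete illustration of the uniformity criterion used here, the wrap-around L2-discrepancy of a design with points scaled to [0,1)^d can be computed from its standard analytic formula. A minimal sketch (function and variable names are ours, not the paper's):

```python
def wd2(design):
    """Squared wrap-around L2-discrepancy of an n x d design in [0,1)^d:
    WD^2 = -(4/3)^d + (1/n^2) * sum_{i,k} prod_j
           [3/2 - |x_ij - x_kj| * (1 - |x_ij - x_kj|)]."""
    n, d = len(design), len(design[0])
    total = 0.0
    for xi in design:
        for xk in design:
            prod = 1.0
            for j in range(d):
                diff = abs(xi[j] - xk[j])
                prod *= 1.5 - diff * (1.0 - diff)
            total += prod
    return -(4.0 / 3.0) ** d + total / (n * n)
```

Because only |x_ij - x_kj|(1 - |x_ij - x_kj|) enters, the criterion is invariant under coordinate-wise wrap-around shifts, which is what makes it natural for the augmentation setting described in the abstract.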
- Published
- 2019
- Full Text
- View/download PDF
3. Liberating the dimension for function approximation: Standard information
- Author
-
Wasilkowski, G.W. and Woźniakowski, H.
- Subjects
- *
APPROXIMATION theory , *INFORMATION modeling , *ALGORITHMS , *POLYNOMIALS , *STANDARD deviations , *VARIATE difference method - Abstract
Abstract: This is a follow-up paper of “Liberating the dimension for function approximation”, where we studied approximation of infinitely variate functions by algorithms that use linear information consisting of finitely many linear functionals. In this paper, we study similar approximation problems, however, now the algorithms can only use standard information consisting of finitely many function values. We assume that the cost of one function value depends on the number of active variables. We focus on polynomial tractability, and occasionally also study weak tractability. We present non-constructive and constructive results. Non-constructive results are based on known relations between linear and standard information for finitely variate functions, whereas constructive results are based on Smolyak’s construction generalized to the case of infinitely variate functions. Surprisingly, for many cases, the results for standard information are roughly the same as for linear information. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
4. Lower complexity bounds for interpolation algorithms
- Author
-
Giménez, Nardo, Heintz, Joos, Matera, Guillermo, and Solernó, Pablo
- Subjects
- *
INTERPOLATION , *LAGRANGE problem , *ALGORITHMS , *COMPUTATIONAL complexity , *SET theory , *HERMITE polynomials - Abstract
Abstract: We introduce and discuss a new computational model for the Hermite–Lagrange interpolation with nonlinear classes of polynomial interpolants. We distinguish between an interpolation problem and an algorithm that solves it. Our model also includes coalescence phenomena and captures a large variety of known Hermite–Lagrange interpolation problems and algorithms. As in traditional Hermite–Lagrange interpolation, our model is based on the execution of arithmetic operations (including divisions) in the field where the data (nodes and values) are interpreted, and arithmetic operations are counted at unit cost. This leads us to a new view of rational functions and maps defined on arbitrary constructible subsets of complex affine spaces. For this purpose we have to develop new tools in algebraic geometry which themselves are mainly based on Zariski’s Main Theorem and the theory of places (or, equivalently, valuations). We finish this paper by exhibiting two examples of Lagrange interpolation problems with nonlinear classes of interpolants which do not admit efficient interpolation algorithms (one of these interpolation problems even requires an exponential number of arithmetic operations in terms of the number of given nodes in order to represent some of the interpolants). In other words, classic Lagrange interpolation algorithms are asymptotically optimal for the solution of these selected interpolation problems, and nothing is gained by allowing interpolation algorithms and classes of interpolants to be nonlinear. We also show that classic Lagrange interpolation algorithms are almost optimal for generic nodes and values. This generic data cannot be substantially compressed by using nonlinear techniques. We finish this paper by highlighting the close connection of our complexity results in Hermite–Lagrange interpolation with a modern trend in software engineering: architecture tradeoff analysis methods (ATAM). [Copyright Elsevier]
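For contrast with the nonlinear classes of interpolants studied here, the classical Lagrange baseline, computed in Newton form via divided differences, uses O(n^2) arithmetic operations. A minimal sketch (names are illustrative, not the paper's notation):

```python
def divided_differences(xs, ys):
    """Newton-form coefficients of the interpolating polynomial through
    the nodes xs with values ys, via the classical divided-difference
    table computed in place (O(n^2) arithmetic operations)."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, t):
    """Evaluate the Newton-form interpolant at t by a Horner-like scheme."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (t - xs[i]) + coef[i]
    return result
```

The paper's lower bounds say that, for its selected problems, no nonlinear class of interpolants can asymptotically beat this kind of procedure.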
- Published
- 2011
- Full Text
- View/download PDF
5. Nearly optimal algorithms for the decomposition of multivariate rational functions and the extended Lüroth Theorem
- Author
-
Chèze, Guillaume
- Subjects
- *
ALGORITHMS , *MATHEMATICAL decomposition , *MULTIVARIATE analysis , *MATHEMATICAL functions , *FACTORIZATION , *POLYTOPES - Abstract
The extended Lüroth Theorem says that if the transcendence degree of is 1 then there exists such that is equal to . In this paper we show how to compute with a probabilistic algorithm. We also describe a probabilistic and a deterministic algorithm for the decomposition of multivariate rational functions. The probabilistic algorithms proposed in this paper are softly optimal when is fixed and tends to infinity. We also give an indecomposability test based on gcd computations and Newton’s polytope. In the last section, we show that we get a polynomial time algorithm, with a minor modification in the exponential time decomposition algorithm proposed by Gutierrez–Rubio–Sevilla in 2001. [Copyright Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
6. Lattice rule algorithms for multivariate approximation in the average case setting
- Author
-
Kuo, Frances Y., Sloan, Ian H., and Woźniakowski, Henryk
- Subjects
- *
MULTIVARIATE analysis , *ALGORITHMS , *LATTICE theory , *GAUSSIAN processes - Abstract
Abstract: We study multivariate approximation for continuous functions in the average case setting. The space of variate continuous functions is equipped with the zero mean Gaussian measure whose covariance function is the reproducing kernel of a weighted Korobov space with the smoothness parameter and weights for . The weight moderates the behavior of functions with respect to the th variable, and small means that functions depend weakly on the th variable. We study lattice rule algorithms which approximate the Fourier coefficients of a function based on function values at lattice sample points. The generating vector for these lattice points is constructed by the component-by-component algorithm, and it is tailored for the approximation problem. Our main interest is when is large, and we study tractability and strong tractability of multivariate approximation. That is, we want to reduce the initial average case error by a factor by using a polynomial number of function values in and in the case of tractability, and only polynomial in in the case of strong tractability. Necessary and sufficient conditions on tractability and strong tractability are obtained by applying known general tractability results for the class of arbitrary linear functionals and for the class of function values. Strong tractability holds for the two classes in the average case setting iff for some positive , and tractability holds iff for some positive . The previous results for the class of function values have been non-constructive. We provide a construction in this paper and prove tractability and strong tractability error bounds for lattice rule algorithms. This paper can be viewed as a continuation of our previous paper where we studied multivariate approximation for weighted Korobov spaces in the worst case setting. Many technical results from that paper are also useful for the average case setting. The exponents of and corresponding to our error bounds are not sharp. 
However, for close to 1 and for slowly decaying weights, we obtain almost the minimal exponent of . We also compare the results from the worst case and the average case settings in weighted Korobov spaces. [Copyright Elsevier]
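The component-by-component construction mentioned here can be sketched concretely for the weighted Korobov space with smoothness alpha = 1, where the squared worst-case error of a rank-1 lattice rule has a closed form through the Bernoulli polynomial B2(x) = x^2 - x + 1/6. This is a simplified illustration (product weights, worst-case criterion, prime n assumed), not the paper's average-case construction:

```python
import math
from math import gcd

def wce2(z, n, gammas):
    """Squared worst-case error of the n-point rank-1 lattice rule with
    generating vector z in the weighted Korobov space (alpha = 1), using
    sum_{h != 0} e^{2 pi i h x}/h^2 = 2 pi^2 (x^2 - x + 1/6)."""
    d = len(z)
    total = 0.0
    for k in range(n):
        prod = 1.0
        for j in range(d):
            x = (k * z[j] % n) / n
            prod *= 1.0 + gammas[j] * 2.0 * math.pi ** 2 * (x * x - x + 1.0 / 6.0)
        total += prod
    return -1.0 + total / n

def cbc(n, d, gammas):
    """Greedy component-by-component choice of the generating vector:
    each new component minimizes the error with the earlier ones fixed."""
    z = []
    for j in range(d):
        best = min((c for c in range(1, n) if gcd(c, n) == 1),
                   key=lambda c: wce2(z + [c], n, gammas))
        z.append(best)
    return z
```

For example, `cbc(101, 3, [0.9, 0.81, 0.729])` produces a vector whose lattice points avoid the diagonal configuration that the trivial choice z = (1, 1, 1) gives.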
- Published
- 2008
- Full Text
- View/download PDF
7. Computation of local radius of information in SM-IBC identification of nonlinear systems
- Author
-
Milanese, Mario and Novara, Carlo
- Subjects
- *
NONLINEAR systems , *SYSTEMS theory , *ALGORITHMS , *FOUNDATIONS of arithmetic , *ALGEBRA - Abstract
Abstract: System identification consists in finding a model of an unknown system starting from a finite set of noise-corrupted data. A fundamental problem in this context is to assess the accuracy of the identified model. In this paper, the problem is investigated for the case of nonlinear systems within the Set Membership—Information Based Complexity framework of [M. Milanese, C. Novara, Set membership identification of nonlinear systems, Automatica 40(6) (2004) 957–975]. In that paper, a (locally) optimal algorithm has been derived, giving (locally) optimal models in nonlinear regression form. The corresponding (local) radius of information, providing the worst-case identification error, can be consequently used to measure the quality of the identified model. In the present paper, two algorithms are proposed for the computation of the local radius of information: The first provides the exact value but requires a computational complexity exponential in the dimension of the regressor space. The second is approximate but involves a polynomial (quadratic) complexity. [Copyright Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
8. On regularization algorithms in learning theory
- Author
-
Bauer, Frank, Pereverzev, Sergei, and Rosasco, Lorenzo
- Subjects
- *
ALGORITHMS , *DIFFERENTIAL equations , *HILBERT space , *BANACH spaces - Abstract
Abstract: In this paper we discuss a relation between Learning Theory and Regularization of linear ill-posed inverse problems. It is well known that Tikhonov regularization can be profitably used in the context of supervised learning, where it usually goes under the name of regularized least-squares algorithm. Moreover, the recently studied gradient descent algorithm is an analog of the Landweber regularization scheme. In this paper we show that a notion of regularization defined according to what is usually done for ill-posed inverse problems allows one to derive learning algorithms which are consistent and provide a fast convergence rate. It turns out that for priors expressed in terms of variable Hilbert scales in reproducing kernel Hilbert spaces our results for Tikhonov regularization match those in Smale and Zhou [Learning theory estimates via integral operators and their approximations, submitted for publication, retrievable at http://www.tti-c.org/smale.html , 2005] and improve the results for Landweber iterations obtained in Yao et al. [On early stopping in gradient descent learning, Constructive Approximation (2005), submitted for publication]. The remarkable fact is that our analysis shows that the same properties are shared by a large class of learning algorithms which are essentially all the linear regularization schemes. The concept of operator monotone functions turns out to be an important tool for the analysis. [Copyright Elsevier]
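The unifying view behind "essentially all the linear regularization schemes" can be made concrete: each scheme keeps a fraction (a filter factor) of every singular component of the ill-posed problem. An illustrative sketch of the Tikhonov and Landweber filter factors; the rough correspondence m ≈ 1/(beta·lambda) between iteration count and regularization parameter is heuristic, not a result of the paper:

```python
def tikhonov_filter(sigma2, lam):
    """Fraction of a singular component (squared singular value sigma2)
    retained by Tikhonov regularization with parameter lam."""
    return sigma2 / (sigma2 + lam)

def landweber_filter(sigma2, beta, m):
    """Fraction retained after m Landweber iterations with step size
    beta < 1/sigma_max^2 (early stopping acts as regularization)."""
    return 1.0 - (1.0 - beta * sigma2) ** m
```

Both factors lie in (0, 1), approach 1 as lam -> 0 or m -> infinity, and damp the small-sigma2 components that amplify noise; consistency and rates then depend on how the filter interacts with the prior.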
- Published
- 2007
- Full Text
- View/download PDF
9. A component-by-component approach to efficient numerical integration over products of spheres
- Author
-
Hesse, Kerstin, Kuo, Frances Y., and Sloan, Ian H.
- Subjects
- *
NUMERICAL analysis , *NUMERICAL integration , *ALGORITHMS , *FUNCTIONAL analysis - Abstract
Abstract: Building upon a recent existence result of Kuo and Sloan, this paper presents a component-by-component algorithm for constructing the m points of a quasi-Monte Carlo (QMC) rule for numerical integration over the d-fold product of unit spheres . Our construction is as follows: starting with a well-chosen generating point set of m points on , the algorithm chooses a permutation of this generating point set for each sphere, one sphere at a time, so that the projection of the m QMC points onto each sphere is the same, and is just the generating point set but with the points occurring in a different order. Understandably, the quality of our QMC rule depends on the quality of both the generating point set and the successive permutations. This paper contains two key results. Firstly, assuming that the worst-case error for the generating point set in a certain Sobolev space satisfies a certain estimate, we prove inductively that the resulting QMC rule satisfies the existence result for the worst-case error bound in a d-dimensional weighted Sobolev space established non-constructively by Kuo and Sloan: specifically, the worst-case error of our QMC rule is bounded from above by , where is independent of m and d, provided that the sum of the weights is bounded independently of d. Secondly, we show that the desired estimate for the generating point set on is automatically satisfied for m sufficiently large by a spherical n-design with points (if such spherical designs exist) and by a spherical n-design with points if slightly stronger assumptions are made on the smoothness of the weighted function space. The latter task involves techniques developed by Hesse and Sloan for numerical integration in Sobolev spaces on . The construction cost for the component-by-component algorithm grows only linearly with d. 
However, a complete search over all permutations at each step of the construction is infeasible, thus a randomized version of the algorithm is recommended in practice. [Copyright Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
10. Extra-Updates Criterion for the Limited Memory BFGS Algorithm for Large Scale Nonlinear Optimization
- Author
-
Al-Baali, M.
- Subjects
- *
MATHEMATICAL optimization , *ALGORITHMS - Abstract
This paper studies recent modifications of the limited memory BFGS (L-BFGS) method for solving large scale unconstrained optimization problems. Each modification technique attempts to improve the quality of the L-BFGS Hessian by employing (extra) updates in a certain sense. Because at some iterations these updates might be redundant or worsen the quality of this Hessian, this paper proposes an updates criterion to measure this quality. Hence, extra updates are employed only to improve the poor approximation of the L-BFGS Hessian. The presented numerical results illustrate the usefulness of this criterion and show that extra updates improve the performance of the L-BFGS method substantially. [Copyright Elsevier]
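For context, the standard L-BFGS machinery the paper modifies is the two-loop recursion over stored (s, y) pairs. The sketch below uses a crude curvature-positivity check as a stand-in for an update-quality criterion; the paper's extra-updates criterion is more refined than this simple test, and all parameters here are illustrative:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lbfgs_direction(g, S, Y):
    """Two-loop recursion: apply the implicit L-BFGS inverse Hessian to g."""
    q = list(g)
    rhos = [1.0 / dot(y, s) for s, y in zip(S, Y)]
    alphas = []
    for s, y, rho in zip(reversed(S), reversed(Y), reversed(rhos)):
        a = rho * dot(s, q)
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, y)]
    gamma = dot(S[-1], Y[-1]) / dot(Y[-1], Y[-1]) if S else 1.0
    r = [gamma * qi for qi in q]
    for (s, y, rho), a in zip(zip(S, Y, rhos), reversed(alphas)):
        b = rho * dot(y, r)
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return [-ri for ri in r]

def minimize(f, grad, x, m=5, iters=50):
    S, Y = [], []
    g = grad(x)
    for _ in range(iters):
        if dot(g, g) < 1e-18:
            break
        d = lbfgs_direction(g, S, Y)
        t = 1.0                      # Armijo backtracking line search
        while f([xi + t * di for xi, di in zip(x, d)]) > f(x) + 1e-4 * t * dot(g, d):
            t *= 0.5
        x_new = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        # crude quality check (illustrative stand-in for the paper's
        # extra-updates criterion): keep only safely positive-curvature pairs
        if dot(s, y) > 1e-10:
            S.append(s); Y.append(y)
            if len(S) > m:
                S.pop(0); Y.pop(0)
        x, g = x_new, g_new
    return x
```

The paper's point is precisely that which pairs to keep (and when extra updates help) matters for the quality of the implicit Hessian.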
- Published
- 2002
- Full Text
- View/download PDF
11. Generalization properties of doubly stochastic learning algorithms.
- Author
-
Lin, Junhong and Rosasco, Lorenzo
- Subjects
- *
GENERALIZATION , *STOCHASTIC learning models , *ALGORITHMS , *HILBERT space , *REGRESSION analysis - Abstract
Abstract: Doubly stochastic learning algorithms are scalable kernel methods that perform very well in practice. However, their generalization properties are not well understood and their analysis is challenging since the corresponding learning sequence may not be in the hypothesis space induced by the kernel. In this paper, we provide an in-depth theoretical analysis for different variants of doubly stochastic learning algorithms within the setting of nonparametric regression in a reproducing kernel Hilbert space and considering the square loss. Particularly, we derive convergence results on generalization error for the studied algorithms either with or without an explicit penalty term. To the best of our knowledge, the derived results for the unregularized variants are the first of this kind, while the results for the regularized variants improve those in the literature. The novelties in our proof are a sample error bound that requires controlling the trace norm of a cumulative operator, and a refined analysis bounding the initial error. [ABSTRACT FROM AUTHOR]
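A doubly stochastic iteration can be sketched with random Fourier features of the Gaussian kernel: each step samples a data point and a fresh random feature, so the iterate leaves any fixed finite-dimensional space, matching the abstract's remark that the learning sequence may not lie in the kernel's hypothesis space. All parameter choices below (step schedule, bandwidth) are illustrative assumptions, not the paper's:

```python
import math, random

def doubly_sgd(data, steps=1500, eta=0.2, seed=0):
    """Doubly stochastic gradient sketch for kernel least squares: at each
    step sample a data point AND one fresh random Fourier feature of the
    Gaussian kernel (bandwidth 1); only that feature receives a coefficient."""
    rng = random.Random(seed)
    feats = []  # list of (omega, b, coefficient)

    def predict(x):
        return sum(c * math.sqrt(2.0) * math.cos(w * x + b)
                   for w, b, c in feats)

    for t in range(steps):
        x, y = data[rng.randrange(len(data))]
        err = predict(x) - y
        w = rng.gauss(0.0, 1.0)              # spectral sample of the kernel
        b = rng.uniform(0.0, 2.0 * math.pi)
        step = eta / math.sqrt(t + 1.0)      # decaying step size
        feats.append((w, b, -step * err * math.sqrt(2.0) * math.cos(w * x + b)))
    return predict
```

In expectation over the random feature, each update moves the iterate along -step * err * K(x, .), i.e. it is kernel stochastic gradient descent plus feature noise, which is exactly what makes the analysis delicate.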
- Published
- 2018
- Full Text
- View/download PDF
12. Asymptotically tight worst case complexity bounds for initial-value problems with nonadaptive information.
- Author
-
Kacewicz, Bolesław
- Subjects
- *
INITIAL value problems , *EXPONENTS , *ALGORITHMS , *ASYMPTOTIC expansions , *NUMERICAL analysis , *LINEAR dependence (Mathematics) - Abstract
Abstract: It is known that, for systems of initial-value problems, algorithms using adaptive information perform much better in the worst case setting than the algorithms using nonadaptive information. In the latter case, lower and upper complexity bounds significantly depend on the number of equations. However, in contrast with adaptive information, existing lower and upper complexity bounds for nonadaptive information are not asymptotically tight. In this paper, we close the gap in the complexity exponents, showing asymptotically matching bounds for nonadaptive standard information, as well as for a more general class of nonadaptive linear information. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
13. A note on Korobov lattice rules for integration of analytic functions.
- Author
-
Pillichshammer, Friedrich
- Subjects
- *
ANALYTIC spaces , *NUMERICAL integration , *PERIODIC functions , *ALGORITHMS , *ANALYTIC functions - Abstract
We study numerical integration for a weighted Korobov space of analytic periodic functions for which the Fourier coefficients decay exponentially fast. In particular, we are interested in how the error depends on the dimension d. Many recent papers deal with this problem or similar problems and provide matching necessary and sufficient conditions for various notions of tractability. In most cases even simple algorithms are known which allow one to achieve these notions of tractability. However, there is a gap in the literature: while for the notion of exponential-weak tractability one knows matching necessary and sufficient conditions, so far no explicit algorithm has been known which yields the desired result. In this paper we close this gap and prove that Korobov lattice rules are suitable algorithms in order to achieve exponential-weak tractability for integration in weighted Korobov spaces of analytic periodic functions. [ABSTRACT FROM AUTHOR]
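A rank-1 lattice rule is simple to state in code. The sketch below uses the two-dimensional Fibonacci rule n = 89, z = (1, 55) (a Korobov vector has the general form z = (1, a, a^2, ...)); the choice of integrand and parameters is ours, purely for illustration. For this trigonometric polynomial no nonzero frequency aliases to zero, so the rule is exact up to rounding:

```python
import math

def lattice_rule(f, n, z):
    """n-point rank-1 lattice rule with generating vector z on [0,1)^d."""
    d = len(z)
    return sum(f([(k * zj % n) / n for zj in z]) for k in range(n)) / n

# Integrand with exact integral 1 over the unit square.
est = lattice_rule(lambda x: (1 + 0.5 * math.sin(2 * math.pi * x[0]))
                             * (1 + 0.5 * math.sin(2 * math.pi * x[1])),
                   89, (1, 55))
```

For analytic periodic integrands, as in this paper, the error of well-chosen lattice rules decays exponentially in n, which is what drives the tractability result.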
- Published
- 2021
- Full Text
- View/download PDF
14. On combined component-by-component constructions of lattice point sets.
- Author
-
Laimer, Helene
- Subjects
- *
LATTICE theory , *POINT set theory , *MATHEMATICAL combinations , *NUMERICAL analysis , *ALGORITHMS - Abstract
The standard method for constructing generating vectors for good lattice point sets is the component-by-component construction. Numerical experiments have shown that the generating vectors found by these constructions sometimes tend to have recurring components, which can lead to projections with all lattice points lying on the main diagonal. In this paper we combine methods of Dick and Kritzer for avoiding this problem with a reduced fast component-by-component construction. That is, we give a variation of the standard component-by-component construction which avoids repeated components and simultaneously results in a considerable speed-up in comparison to the standard construction. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
15. The nonzero gain coefficients of Sobol's sequences are always powers of two.
- Author
-
Pan, Zexin and Owen, Art B.
- Subjects
- *
PLAINS , *FINITE, The , *MICROSTRUCTURE , *ALGORITHMS - Abstract
When a plain Monte Carlo estimate on n samples has variance σ²/n, then scrambled digital nets attain a variance that is o(1/n) as n → ∞. For finite n and an adversarially selected integrand, the variance of a scrambled (t,m,s)-net can be at most Γσ²/n for a maximal gain coefficient Γ < ∞. The most widely used digital nets and sequences are those of Sobol'. It was previously known that Γ ⩽ 2^t 3^s for any nets in base 2. For digital nets, Dick and Pillichshammer (2010) obtained the bound 2^(t+s). In this paper we study digital nets in base 2 and show that Γ ⩽ 2^(t+s−1) for such nets. This bound is a simple, but apparently unnoticed, consequence of a microstructure analysis by Niederreiter and Pirsic in 2001. We obtain a sharper bound that is smaller than this for some digital nets. Our main finding is that all nonzero gain coefficients must be powers of two. A consequence of this latter fact is a simplified algorithm for computing gain coefficients of digital nets in base 2. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Computing Riemann–Roch spaces via Puiseux expansions.
- Author
-
Abelard, Simon, Berardini, Elena, Couvreur, Alain, and Lecerf, Grégoire
- Subjects
- *
PLANE curves , *ALGEBRAIC geometry , *EXPONENTS , *PROJECTIVE spaces , *PROJECTIVE planes - Abstract
Computing large Riemann–Roch spaces for plane projective curves still constitutes a major algorithmic and practical challenge. Seminal applications concern the construction of arbitrarily large algebraic geometry error correcting codes over alphabets with bounded cardinality. Nowadays such codes are increasingly involved in new areas of computer science such as cryptographic protocols and "interactive oracle proofs". In this paper, we design a new probabilistic algorithm of Las Vegas type for computing Riemann–Roch spaces of smooth divisors, in characteristic zero, and with expected complexity exponent 2.373 (a feasible exponent for linear algebra) in terms of the input size. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
17. Average sampling numbers of multivariate periodic function spaces with a Gaussian measure.
- Author
-
Huang, Zexia and Wang, Heping
- Subjects
- *
PERIODIC functions , *STATISTICAL sampling , *GAUSSIAN measures , *SOBOLEV spaces , *ALGORITHMS , *FUNCTION spaces - Abstract
In this paper, we study average sampling numbers of the multivariate periodic function space L̊2 with a Gaussian measure μ in the Lq metric for 1 ≤ q ≤ ∞, and obtain their asymptotic orders, where the Cameron–Martin space of the measure μ is an anisotropic periodic Sobolev space. Moreover, we show that in the average case setting, the Lagrange interpolating operators are asymptotically optimal linear algorithms in the Lq metric for all 1 ≤ q ≤ ∞. This is different from the situation in the worst case setting, where the Lagrange interpolating operators are not asymptotically optimal linear algorithms in the Lq metric for q = 1 or ∞. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
18. Two- and three-level lower bounds for mixture [formula omitted]-discrepancy and construction of uniform designs by threshold accepting.
- Author
-
Ke, Xiao, Zhang, Rong, and Ye, Hua-Jun
- Subjects
- *
MATHEMATICAL bounds , *SYSTEMS design , *HAMMING distance , *UNIFORMITY , *ALGORITHMS - Abstract
The uniform experimental design (UD), a major kind of space-filling design, is widely used in applications. The majority of UD tables (UDs) with good uniformity are generated under the centered L2-discrepancy (CD) and the wrap-around L2-discrepancy (WD). Recently, the mixture L2-discrepancy (MD) was proposed and shown to be more reasonable than CD and WD in terms of uniformity. In this paper we review lower bounds for the MD of two-level designs from a different point of view and provide a new lower bound. Following the same idea we obtain a lower bound for the MD of three-level designs. Moreover, we construct UDs under the measurement of MD by the threshold accepting (TA) algorithm, and finally we attach two new UD tables with good properties derived from TA under the measurement of MD. [ABSTRACT FROM AUTHOR]
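Threshold accepting is a deterministic cousin of simulated annealing: a neighbouring design is accepted whenever it is at most a (shrinking) threshold worse than the current one. The sketch below substitutes the wrap-around L2-discrepancy for MD purely to keep the objective compact; the move type, threshold schedule, and parameters are illustrative assumptions, not the paper's settings:

```python
import random

def wd2(design):
    # squared wrap-around L2-discrepancy (stand-in objective; the paper
    # optimizes the mixture L2-discrepancy MD, whose formula is longer)
    n, d = len(design), len(design[0])
    s = 0.0
    for xi in design:
        for xk in design:
            p = 1.0
            for j in range(d):
                t = abs(xi[j] - xk[j])
                p *= 1.5 - t * (1.0 - t)
            s += p
    return -(4.0 / 3.0) ** d + s / (n * n)

def threshold_accepting(design, iters=400, seed=1):
    """TA search: accept a neighbour if it is at most `thr` worse; thr
    shrinks linearly to 0.  Neighbour move: swap two entries within one
    randomly chosen column (preserves each column's level pattern)."""
    rng = random.Random(seed)
    cur = [row[:] for row in design]
    cur_val = wd2(cur)
    best, best_val = [r[:] for r in cur], cur_val
    thr0 = 0.1 * abs(cur_val) + 1e-3
    for t in range(iters):
        thr = thr0 * (1.0 - t / iters)
        j = rng.randrange(len(cur[0]))
        a, b = rng.sample(range(len(cur)), 2)
        cur[a][j], cur[b][j] = cur[b][j], cur[a][j]
        val = wd2(cur)
        if val - cur_val <= thr:
            cur_val = val
            if val < best_val:
                best, best_val = [r[:] for r in cur], val
        else:  # undo the rejected move
            cur[a][j], cur[b][j] = cur[b][j], cur[a][j]
    return best, best_val
```

Tracking the best design seen guarantees the search never returns anything worse than its starting point, even though intermediate uphill moves are allowed.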
- Published
- 2015
- Full Text
- View/download PDF
19. Approximation of multivariate periodic functions by trigonometric polynomials based on rank-1 lattice sampling.
- Author
-
Kämmerer, Lutz, Potts, Daniel, and Volkmer, Toni
- Subjects
- *
APPROXIMATION theory , *TRIGONOMETRIC functions , *POLYNOMIALS , *LATTICE field theory , *ALGORITHMS - Abstract
In this paper, we present algorithms for the approximation of multivariate periodic functions by trigonometric polynomials. The approximation is based on sampling of multivariate functions on rank-1 lattices. To this end, we study the approximation of periodic functions of a certain smoothness. Our considerations include functions from periodic Sobolev spaces of generalized mixed smoothness. Recently an algorithm for the trigonometric interpolation on generalized sparse grids for this class of functions was investigated by Griebel and Hamaekers (2014). The main advantage of our method is that the algorithm is based mainly on a single one-dimensional fast Fourier transform, and that the arithmetic complexity of the algorithm depends only on the cardinality of the support of the trigonometric polynomial in the frequency domain. Therefore, we investigate trigonometric polynomials with frequencies supported on hyperbolic crosses and energy norm based hyperbolic crosses in more detail. Furthermore, we present an algorithm for sampling multivariate functions on perturbed rank-1 lattices and show the numerical stability of the suggested method. Numerical results are presented up to dimension d = 10 , which confirm the theoretical findings. [ABSTRACT FROM AUTHOR]
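The core trick mentioned here, that all Fourier coefficients come from a single one-dimensional transform of the lattice samples, can be shown directly: frequency h of a d-variate trigonometric polynomial lands in DFT bin (h·z) mod n. A sketch with a naive O(n^2) DFT (a fast Fourier transform would make this step O(n log n), as in the paper); the test polynomial and lattice parameters are our own illustrative choices:

```python
import cmath

def lattice_fourier(f_hat, n, z):
    """Sample the trigonometric polynomial with coefficient dict f_hat
    (frequency tuple -> coefficient) on the rank-1 lattice {k z / n} and
    recover every coefficient from ONE one-dimensional DFT of the samples."""
    d = len(z)
    samples = []
    for k in range(n):
        x = [(k * zj % n) / n for zj in z]
        samples.append(sum(c * cmath.exp(2j * cmath.pi
                                         * sum(h[j] * x[j] for j in range(d)))
                           for h, c in f_hat.items()))
    # naive one-dimensional DFT of the sample sequence
    dft = [sum(s * cmath.exp(-2j * cmath.pi * k * m / n)
               for k, s in enumerate(samples)) / n for m in range(n)]
    # frequency h is read off from bin (h . z) mod n
    return {h: dft[sum(h[j] * z[j] for j in range(d)) % n] for h in f_hat}
```

Reconstruction is exact as long as no two frequencies in the support alias to the same bin, which is exactly the condition a reconstructing rank-1 lattice must satisfy.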
- Published
- 2015
- Full Text
- View/download PDF
20. Online gradient descent algorithms for functional data learning.
- Author
-
Chen, Xiaming, Tang, Bohao, Fan, Jun, and Guo, Xin
- Subjects
- *
ONLINE data processing , *ONLINE algorithms , *ONLINE education , *ALGORITHMS , *MACHINE learning , *COMPUTER performance - Abstract
The functional linear model is a fruitfully applied general framework for regression problems, including those with intrinsically infinite-dimensional data. Online gradient descent methods, despite their demonstrated power in processing online or large-scale data, are not well studied for learning with functional data. In this paper, we study reproducing kernel-based online learning algorithms for functional data, and derive convergence rates for the expected excess prediction risk under both the online and the finite-horizon step-size settings. It is well understood that nontrivial uniform convergence rates for the estimation task depend on the regularity of the slope function. Surprisingly, the convergence rates we derive for the prediction task can assume no regularity from the slope. Our analysis reveals the intrinsic difference between the estimation task and the prediction task in functional data learning. [ABSTRACT FROM AUTHOR]
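An online gradient descent iterate in a reproducing kernel Hilbert space is just a growing kernel expansion. A minimal one-dimensional regression sketch with a constant step size (the paper analyzes both online and finite-horizon step-size schedules; the kernel, step size, and bandwidth here are illustrative assumptions):

```python
import math

def online_kernel_gd(stream, eta=0.5, width=0.15):
    """Online gradient descent in an RKHS: after (x_t, y_t) arrives,
    f <- f - eta * (f(x_t) - y_t) * K(x_t, .).  The iterate is stored as
    kernel-expansion coefficients over the observed inputs."""
    centers, coefs = [], []

    def K(u, v):
        return math.exp(-((u - v) ** 2) / (2.0 * width ** 2))

    def predict(x):
        return sum(c * K(xc, x) for xc, c in zip(centers, coefs))

    for x, y in stream:
        coefs.append(-eta * (predict(x) - y))
        centers.append(x)
    return predict
```

Since K(x, x) = 1 for the Gaussian kernel, each update contracts the residual at the incoming point by the factor (1 - eta), which is the basic mechanism behind the convergence rates studied in the paper.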
- Published
- 2022
- Full Text
- View/download PDF
21. Tractability results for the weighted star-discrepancy.
- Author
-
Aistleitner, Christoph
- Subjects
- *
DISCREPANCY theorem , *INTEGRALS , *MATHEMATICAL functions , *SOBOLEV spaces , *PROBABILITY theory , *ALGORITHMS , *APPLIED mathematics - Abstract
Abstract: The weighted star-discrepancy has been introduced by Sloan and Woźniakowski to reflect the fact that in multidimensional integration problems some coordinates of a function may be more important than others. It provides upper bounds for the error of multidimensional numerical integration algorithms for functions belonging to weighted function spaces of Sobolev type. In the present paper, we prove several tractability results for the weighted star-discrepancy. In particular, we obtain rather sharp sufficient conditions under which the weighted star-discrepancy is strongly tractable. The proofs are probabilistic, and use empirical process theory. [Copyright Elsevier]
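For intuition, the unweighted star-discrepancy has a simple closed form in one dimension; the weighted version of Sloan and Woźniakowski then takes a weighted supremum over all coordinate projections of the point set. A sketch of the plain one-dimensional case (the formula is classical, not specific to this paper):

```python
def star_discrepancy_1d(points):
    """Exact star-discrepancy of a one-dimensional point set in [0,1):
    D*_n = max_i max((i+1)/n - x_(i), x_(i) - i/n) over the sorted points
    (0-based ranks), i.e. the worst deviation of empirical from uniform
    measure over anchored boxes [0, t)."""
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))
```

The midpoint set {(2i+1)/(2n)} attains the optimal value 1/(2n), which is the kind of benchmark the tractability results compare against as the dimension grows.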
- Published
- 2014
- Full Text
- View/download PDF
22. Approximation by quasi-interpolation operators and Smolyak's algorithm.
- Author
-
Kolomoitsev, Yurii
- Subjects
- *
SMOOTHNESS of functions , *BESOV spaces , *KERNEL functions , *PERIODIC functions , *ALGORITHMS - Abstract
We study approximation of multivariate periodic functions from Besov and Triebel–Lizorkin spaces of dominating mixed smoothness by the Smolyak algorithm constructed using a special class of quasi-interpolation operators of Kantorovich type. These operators are defined similarly to the classical sampling operators, by replacing samples with the average values of a function on small intervals (or, more generally, with sampled values of a convolution of the given function with an appropriate kernel). In this paper, we estimate the rate of convergence of the corresponding Smolyak algorithm in the L_q-norm for functions from the Besov spaces B^s_{p,θ}(T^d) and the Triebel–Lizorkin spaces F^s_{p,θ}(T^d) for all s > 0 and admissible 1 ≤ p, θ ≤ ∞, as well as provide analogues of the Littlewood–Paley-type characterizations of these spaces in terms of families of quasi-interpolation operators. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. On the complexity of the multivariate resultant
- Author
-
Grenet, Bruno, Koiran, Pascal, and Portier, Natacha
- Subjects
- *
COMPUTATIONAL complexity , *MULTIVARIATE analysis , *ALGEBRAIC geometry , *DECISION making , *POLYNOMIALS , *ALGORITHMS - Abstract
Abstract: The multivariate resultant is a fundamental tool of computational algebraic geometry. It can in particular be used to decide whether a system of homogeneous equations in variables is satisfiable (the resultant is a polynomial in the system’s coefficients which vanishes if and only if the system is satisfiable). In this paper, we investigate the complexity of computing the multivariate resultant. First, we study the complexity of testing the multivariate resultant for zero. Our main result is that this problem is -hard under deterministic reductions in any characteristic, for systems of low-degree polynomials with coefficients in the ground field (rather than in an extension). In characteristic zero, we observe that this problem is in the Arthur–Merlin class if the generalized Riemann hypothesis holds true, while the best known upper bound in positive characteristic remains . Second, we study the classical algorithms to compute the resultant. They usually rely on the computation of the determinant of an exponential-size matrix, known as the Macaulay matrix. We show that this matrix belongs to a class of succinctly representable matrices, for which testing the determinant for zero is proved -complete. This means that improving Canny’s upper bound requires either to look at the fine structure of the Macaulay matrix to find an ad hoc algorithm for computing its determinant, or to use altogether different techniques. [Copyright Elsevier]
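In the univariate case the resultant reduces to the determinant of the Sylvester matrix, which vanishes exactly when the two polynomials share a root. A small exact-arithmetic sketch of that baseline (the multivariate Macaulay-matrix setting studied in the paper is exponentially larger, which is the source of the complexity questions):

```python
from fractions import Fraction

def sylvester(p, q):
    """Sylvester matrix of univariate polynomials p, q given as lists of
    coefficients in descending degree order."""
    m, n = len(p) - 1, len(q) - 1
    size = m + n
    rows = []
    for i in range(n):                     # n shifted copies of p
        rows.append([0] * i + list(p) + [0] * (size - m - 1 - i))
    for i in range(m):                     # m shifted copies of q
        rows.append([0] * i + list(q) + [0] * (size - n - 1 - i))
    return rows

def det(mat):
    """Exact determinant via fraction-based Gaussian elimination."""
    a = [[Fraction(v) for v in row] for row in mat]
    size, sign, result = len(a), 1, Fraction(1)
    for col in range(size):
        piv = next((r for r in range(col, size) if a[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            a[col], a[piv] = a[piv], a[col]
            sign = -sign
        result *= a[col][col]
        for r in range(col + 1, size):
            factor = a[r][col] / a[col][col]
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    return sign * result

def resultant(p, q):
    return det(sylvester(p, q))
```

For instance, x^2 - 1 and x - 1 share the root 1, so their resultant is 0, while x^2 + 1 and x - 1 have none in common.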
- Published
- 2013
- Full Text
- View/download PDF
24. Efficient algorithms for the longest common subsequence problem with sequential substring constraints
- Author
-
Tseng, Chiou-Ting, Yang, Chang-Biau, and Ann, Hsing-Yen
- Subjects
- *
COMPUTER networks , *ALGORITHMS , *CONSTRAINED optimization , *COMBINATORIAL optimization , *SEQUENTIAL analysis , *MATHEMATICAL analysis - Abstract
Abstract: In this paper, we generalize the inclusion constrained longest common subsequence (CLCS) problem to the hybrid CLCS problem which is the combination of the sequence inclusion CLCS and the string inclusion CLCS, called the sequential substring constrained longest common subsequence (SSCLCS) problem. In the SSCLCS problem, we are given two strings and of lengths and , respectively, formed by alphabet and a constraint sequence formed by ordered strings with total length . The problem is that of finding the longest common subsequence of and containing as substrings and with the order of the ’s retained. This problem has two variants, depending on whether the strings in cannot overlap or may overlap. We propose algorithms with and time for the two variants. For the special case with one or two constraints, our algorithm runs in or time, respectively—an order faster than the algorithm proposed by Chen and Chao. [Copyright Elsevier]
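The base recurrence that the SSCLCS algorithms extend is the classical longest-common-subsequence dynamic program; the constrained variants add extra state tracking progress through the constraint strings. A sketch of the unconstrained table with backtracking:

```python
def lcs(a, b):
    """Classical O(|a||b|) LCS dynamic program returning one optimal
    common subsequence.  L[i][j] is the LCS length of a[:i] and b[:j]."""
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # backtrack one optimal subsequence
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

The constrained problems enlarge this two-index table with a third dimension recording how much of the constraint has already been matched, which is where the extra time factors in the paper's bounds come from.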
- Published
- 2013
- Full Text
- View/download PDF
25. A geometric algorithm for winding number computation with complexity analysis
- Author
-
García Zapata, Juan-Luis and Díaz Martín, Juan Carlos
- Subjects
- *
ALGORITHMS , *COMPUTATIONAL complexity , *PLANE curves , *POLYNOMIALS , *ROOTS of equations , *NUMERICAL integration , *DISCRETE geometry - Abstract
Abstract: Many methods to compute the winding number of plane curves have been proposed, often with the aim of counting the number of roots of polynomials (or, more generally, zeros of analytic functions) inside some region by using the argument principle. In this paper we propose another method, which is based not on numerical integration but on discrete geometry. We give conditions that ensure its correct behavior, and a complexity bound based on the distance of the curve to singular cases. In addition, we provide a modification of the algorithm that detects proximity to a singular case of the curve. If this proximity is such that the number of operations required grows beyond a certain threshold, set by the user, the modified algorithm returns without computing the winding number, but with information about the distance to the singular case. When the method is applied to polynomials, this information refers to the localization of the roots placed near the curve. [Copyright © Elsevier]
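For contrast, the standard numerical approach the authors move away from amounts to summing the signed angles swept by a discretized curve around the point. A minimal sketch for a closed polygonal curve (assuming the curve does not pass through the point):

```python
import cmath

def winding_number(points, z0=0j):
    """Winding number of a closed polygonal curve (list of complex vertices)
    around z0, computed by summing turn angles; the curve must avoid z0."""
    total = 0.0
    n = len(points)
    for k in range(n):
        a = points[k] - z0
        b = points[(k + 1) % n] - z0
        # phase of b/a is the signed angle swept on this segment, in (-pi, pi]
        total += cmath.phase(b / a)
    return round(total / (2 * cmath.pi))
```

This angle-summation scheme fails near the singular cases (curve close to z0) that the paper's modified algorithm is designed to detect.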
- Published
- 2012
- Full Text
- View/download PDF
26. Complexity bounds for second-order optimality in unconstrained optimization
- Author
-
Cartis, C., Gould, N.I.M., and Toint, Ph.L.
- Subjects
- *
COMPUTATIONAL complexity , *MATHEMATICAL optimization , *MATHEMATICAL regularization , *CUBIC equations , *ALGORITHMS , *ITERATIVE methods (Mathematics) - Abstract
Abstract: This paper examines worst-case evaluation bounds for finding weak minimizers in unconstrained optimization. For the cubic regularization algorithm, Nesterov and Polyak (2006) and Cartis et al. (2010) show that at most iterations may have to be performed for finding an iterate which is within of satisfying second-order optimality conditions. We first show that this bound can be derived for a version of the algorithm, which only uses one-dimensional global optimization of the cubic model and that it is sharp. We next consider the standard trust-region method and show that a bound of the same type may also be derived for this method, and that it is also sharp in some cases. We conclude by showing that a comparison of the bounds on the worst-case behaviour of the cubic regularization and trust-region algorithms favours the first of these methods. [Copyright &y& Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
27. Geometric isomorphism check for symmetric factorial designs
- Author
-
Pang, Fang and Liu, Min-Qian
- Subjects
- *
ISOMORPHISM (Mathematics) , *ABSTRACT algebra , *ORTHOGONAL arrays , *GEOMETRIC analysis , *MATHEMATICAL analysis , *FACTORIAL experiment designs , *ALGORITHMS - Abstract
Abstract: Two designs are geometrically isomorphic if one design can be obtained from the other by reordering the runs, relabeling the factors and/or reversing the level order of one or more factors. In this paper, some new necessary and sufficient conditions for identifying geometric isomorphism of symmetric designs with prime levels are provided. A new algorithm for checking geometric isomorphism is proposed and a searching result for geometrically non-isomorphic 3-level orthogonal arrays of 18 runs is presented. [Copyright &y& Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
28. Deterministic multi-level algorithms for infinite-dimensional integration on
- Author
-
Niu, Ben, Hickernell, Fred J., Müller-Gronbach, Thomas, and Ritter, Klaus
- Subjects
- *
NUMERICAL integration , *ALGORITHMS , *DIMENSION theory (Topology) , *MATHEMATICAL variables , *STATISTICAL sampling , *HILBERT space - Abstract
Abstract: Pricing a path-dependent financial derivative, such as an Asian option, requires the computation of the expectation of a payoff function that depends on a Brownian motion. Employing a standard series expansion of the latter, the problem is equivalent to the computation of the expectation of a function of the corresponding i.i.d. sequence of random coefficients. This motivates the construction and the analysis of algorithms for numerical integration with respect to a product probability measure on the sequence space. The class of integrands studied in this paper is the unit ball in a reproducing kernel Hilbert space obtained by superposition of weighted tensor product spaces of functions of finitely many variables. Combining tractability results for high-dimensional integration with the multi-level technique, we obtain new algorithms for infinite-dimensional integration. These deterministic multi-level algorithms use variable subspace sampling and are superior to any deterministic algorithm based on fixed subspace sampling with respect to the corresponding worst case error. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
29. On the randomized solution of initial value problems
- Author
-
Daun, Thomas
- Subjects
- *
NUMERICAL solutions to initial value problems , *NUMERICAL solutions to differential equations , *ALGORITHMS , *FRACTIONAL calculus , *MONTE Carlo method , *SMOOTHNESS of functions - Abstract
Abstract: We study the randomized solution of initial value problems for systems of ordinary differential equations. Recently, Heinrich and Milla (2008) presented an order optimal randomized algorithm solving this problem for smooth input data (i.e., input functions whose derivatives satisfy a Hölder condition). This algorithm uses function values and values of derivatives of the right-hand side. In this paper we present an order optimal randomized algorithm for the same smoothness class that uses function values only. For this purpose we show how to obtain an order optimal randomized algorithm from an order (sub)optimal deterministic one. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
30. On the tensor rank of multiplication in any extension of
- Author
-
Ballet, Stéphane and Pieltant, Julia
- Subjects
- *
MULTIPLICATION , *TENSOR algebra , *ALGORITHMS , *FINITE fields , *ABSTRACT algebra , *MATHEMATICAL functions - Abstract
Abstract: In this paper, we obtain new bounds for the tensor rank of multiplication in any extension of the base finite field. In particular, these bounds enable us to obtain the best known asymptotic bound. To this end, we use the generalized Chudnovsky-type algorithm with derivative evaluations on places of degree one, two and four, applied to the descent of a Garcia–Stichtenoth tower of algebraic function fields. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
31. Computation of Darboux polynomials and rational first integrals with bounded degree in polynomial time
- Author
-
Chèze, Guillaume
- Subjects
- *
POLYNOMIALS , *INTEGRALS , *ALGORITHMS , *COMPUTATIONAL complexity , *SYSTEMS theory , *EXPONENTS - Abstract
Abstract: In this paper we study planar polynomial differential systems. Many properties of such systems are related to irreducible Darboux polynomials of the corresponding derivation. Darboux polynomials are usually computed with the method of undetermined coefficients, which requires solving a polynomial system; we show that this approach can give rise to the computation of an exponential number of reducible Darboux polynomials. Here we show that the Lagutinskii–Pereira algorithm computes irreducible Darboux polynomials of bounded degree using a polynomial number of binary operations. We also give a polynomial-time method to compute, if it exists, a rational first integral of bounded degree. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
32. Complexity of approximation of functions of few variables in high dimensions
- Author
-
Wojtaszczyk, P.
- Subjects
- *
SMOOTHNESS of functions , *MATHEMATICAL variables , *APPROXIMATION theory , *SENSITIVITY analysis , *ALGORITHMS , *COMPUTATIONAL complexity - Abstract
Abstract: In DeVore et al. (2011) we considered smooth functions on which depend on a much smaller number of variables or continuous functions which can be approximated by such functions. We were interested in approximating those functions when we can calculate point values at points of our choice. The number of points we needed for non-adaptive algorithms was higher than that in the adaptive case. In this paper we improve on DeVore et al. (2011) and show that in the non-adaptive case one can use the same number of points (up to a multiplicative constant depending on ) that we need in the adaptive case. [Copyright &y& Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
33. Liberating the dimension
- Author
-
Kuo, Frances Y., Sloan, Ian H., Wasilkowski, Grzegorz W., and Woźniakowski, Henryk
- Subjects
- *
MULTIVARIATE analysis , *MATHEMATICAL variables , *MATHEMATICAL constants , *CYBERNETICS , *STATISTICS , *ALGORITHMS - Abstract
Abstract: Many recent papers considered the problem of multivariate integration, and studied the tractability of the problem in the worst case setting as the dimensionality increases. The typical question is: can we find an algorithm for which the error is bounded polynomially in the dimension, or even independently of it? And the general answer is: yes, if we have a suitably weighted function space. Since there are important problems with infinitely many variables, here we take one step further: we consider the integration problem with infinitely many variables, thus liberating the dimension, and we seek algorithms with small error and minimal cost. In particular, we assume that the cost of evaluating a function depends on the number of active variables. The choice of the cost function plays a crucial role in the infinite dimensional setting. We present a number of lower and upper estimates of the minimal cost for product and finite-order weights. In some cases, the bounds are sharp. [Copyright © Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
34. Hardness of comparing two run-length encoded strings
- Author
-
Chen, Kuan-Yu, Hsu, Ping-Hui, and Chao, Kun-Mao
- Subjects
- *
MATHEMATICAL proofs , *ENCODING , *ALGORITHMS , *MATHEMATICAL analysis , *SPACETIME , *MATHEMATICAL research - Abstract
Abstract: In this paper, we consider a commonly used compression scheme called run-length encoding. We provide both lower and upper bounds for the problems of comparing two run-length encoded strings. Specifically, we prove the 3SUM-hardness of both the wildcard matching problem and the mismatch problem with run-length compressed inputs; such a result implies that it is very unlikely to devise a significantly faster algorithm for either of them. We then present an in-place algorithm for their combined problem, i.e. mismatch with wildcards. We further demonstrate that if the aim is to report the positions of all the occurrences, there exists a stronger time barrier, matching the running time of our algorithm. Moreover, our algorithm can be easily generalized to a two-dimensional setting without impairing the time and space complexity. [Copyright © Elsevier]
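For readers unfamiliar with the compression scheme itself, a minimal run-length encoder/decoder (illustrative only, not tied to the paper's algorithms) looks like:

```python
from itertools import groupby

def rle_encode(s: str):
    """Run-length encode a string into (character, run_length) pairs."""
    return [(ch, sum(1 for _ in grp)) for ch, grp in groupby(s)]

def rle_decode(runs):
    """Inverse of rle_encode: expand the runs back into a string."""
    return "".join(ch * count for ch, count in runs)
```

The point of the paper's bounds is that algorithms operating directly on such (character, count) pairs, without decompressing, face inherent hardness barriers for matching problems.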
- Published
- 2010
- Full Text
- View/download PDF
35. On the approximation of smooth functions using generalized digital nets
- Author
-
Baldeaux, Jan, Dick, Josef, and Kritzer, Peter
- Subjects
- *
NETS (Mathematics) , *APPROXIMATION theory , *SMOOTHNESS of functions , *ALGORITHMS , *WALSH functions , *PERIODIC functions , *LATTICE theory , *STOCHASTIC convergence - Abstract
Abstract: In this paper, we study an approximation algorithm which firstly approximates certain Walsh coefficients of the function under consideration and consequently uses a Walsh polynomial to approximate the function. A similar approach has previously been used for approximating periodic functions, using lattice rules (and Fourier polynomials), and for approximating functions in Walsh Korobov spaces, using digital nets. Here, the key ingredient is the use of generalized digital nets (which have recently been shown to achieve higher order convergence rates for the integration of smooth functions). This allows us to approximate functions with square integrable mixed partial derivatives of order in each variable. The approximation error is studied in the worst case setting in the norm. We also discuss tractability of our proposed approximation algorithm, investigate its computational complexity, and present numerical examples. [Copyright &y& Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
36. Fast algorithms for multivariate interpolation and evaluation at special points
- Author
-
Kapusta, Joanna and Smarzewski, Ryszard
- Subjects
- *
MULTIVARIATE analysis , *INTERPOLATION , *ALGORITHMS , *TENSOR products , *MATHEMATICAL transformations , *COMPUTATIONAL complexity - Abstract
Abstract: In this paper we present explicit formulae for the multivariate Lagrange–Newton transformation and its inverse with respect to a special set of points over a field. Moreover, we derive fast algorithms for computing these transformations; their running time is measured in base field operations. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
37. Learning from uniformly ergodic Markov chains
- Author
-
Zou, Bin, Zhang, Hai, and Xu, Zongben
- Subjects
- *
MARKOV processes , *ERGODIC theory , *UNIFORM algebras , *ALGORITHMS , *MACHINE learning , *STOCHASTIC convergence , *GENERALIZABILITY theory , *MATHEMATICAL proofs - Abstract
Abstract: Evaluation of the generalization performance of learning algorithms has been the main thread of theoretical research in machine learning. The previous bounds describing the generalization performance of the empirical risk minimization (ERM) algorithm are usually established based on independent and identically distributed (i.i.d.) samples. In this paper we go far beyond this classical framework by establishing generalization bounds for the ERM algorithm with uniformly ergodic Markov chain (u.e.M.c.) samples. We prove bounds on the rate of uniform convergence/relative uniform convergence of the ERM algorithm with u.e.M.c. samples, and show that the ERM algorithm with u.e.M.c. samples is consistent. The established theory underlies the application of ERM-type learning algorithms. [Copyright © Elsevier]
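The ERM principle itself is simple to state in code. A minimal sketch over a finite hypothesis class (illustrative only; the paper's contribution is the analysis under Markov-chain sampling, not the procedure):

```python
def erm(hypotheses, sample, loss):
    """Empirical risk minimization over a finite hypothesis class:
    return the hypothesis with the smallest average loss on the sample."""
    def empirical_risk(h):
        return sum(loss(h(x), y) for x, y in sample) / len(sample)
    return min(hypotheses, key=empirical_risk)
```

Generalization bounds like those in the paper control how far this empirical risk can deviate from the true risk when the sample is not i.i.d. but drawn from a uniformly ergodic Markov chain.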
- Published
- 2009
- Full Text
- View/download PDF
38. An algorithmic approach to finding factorial designs with generalized minimum aberration
- Author
-
Sun, Fasheng, Liu, Min-Qian, and Hao, Wenrui
- Subjects
- *
FACTORIAL experiment designs , *UNIFORMITY , *MATHEMATICAL optimization , *ALGORITHMS , *MATHEMATICS - Abstract
Abstract: Factorial designs are arguably the most widely used designs in scientific investigations. Generalized minimum aberration (GMA) and uniformity are two important criteria for evaluating both regular and non-regular designs. The generation of GMA designs is a non-trivial problem due to the sequential optimization nature of the criterion. Based on an analytical expression between the generalized wordlength pattern and a uniformity measure, this paper converts the generation of GMA designs to a constrained optimization problem, and provides effective algorithms for solving this particular problem. Moreover, many new designs with GMA or near-GMA are reported, which are also (nearly) optimal under the uniformity measure. [Copyright &y& Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
39. A note on EC-tractability of multivariate approximation in weighted Korobov spaces for the standard information class.
- Author
-
Zhang, Jie
- Subjects
- *
STANDARDS , *ALGORITHMS - Abstract
In this paper we study exponential tractability of multivariate approximation problem for weighted Korobov spaces in the worst case setting. The considered algorithms are constructive and use the class Λ std of only function evaluations. We give matching necessary and sufficient conditions for notions of EC-quasi-polynomial tractability and EC-uniform weak tractability in terms of two weight parameters of the problem. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
40. Tractability of approximation in the weighted Korobov space in the worst-case setting — a complete picture.
- Author
-
Ebert, Adrian and Pillichshammer, Friedrich
- Subjects
- *
FUNCTION spaces , *FUNCTIONALS , *PICTURES , *ALGORITHMS , *POLYNOMIALS - Abstract
In this paper, we study tractability of L 2 -approximation of one-periodic functions from weighted Korobov spaces in the worst-case setting. The considered weights are of product form. For the algorithms we allow information from the class Λ all consisting of all continuous linear functionals and from the class Λ std , which only consists of function evaluations. We provide necessary and sufficient conditions on the weights of the function space for quasi-polynomial tractability, uniform weak tractability, weak tractability and (σ , τ) -weak tractability. Together with the already known results for strong polynomial and polynomial tractability, our findings provide a complete picture of the weight conditions for all current standard notions of tractability. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
41. Fast amortized multi-point evaluation.
- Author
-
van der Hoeven, Joris and Lecerf, Grégoire
- Subjects
- *
ALGORITHMS , *FINITE fields , *INTERPOLATION algorithms , *POLYNOMIALS , *INTEGERS , *POINT set theory - Abstract
The efficient evaluation of multivariate polynomials at many points is an important operation for polynomial system solving. Kedlaya and Umans have recently devised a theoretically efficient algorithm for this task when the coefficients are integers or when they lie in a finite field. In this paper, we assume that the set of points where we need to evaluate is fixed and "sufficiently generic". Under these restrictions, we present a quasi-optimal algorithm for multi-point evaluation over general fields. We also present a quasi-optimal algorithm for the opposite interpolation task. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
42. Deterministic computation of the characteristic polynomial in the time of matrix multiplication.
- Author
-
Neiger, Vincent and Pernet, Clément
- Subjects
- *
MATRIX multiplications , *POLYNOMIAL time algorithms , *ALGORITHMS , *LINEAR algebra , *POLYNOMIALS , *DETERMINISTIC algorithms - Abstract
This paper describes an algorithm which computes the characteristic polynomial of a matrix over a field within the same asymptotic complexity, up to constant factors, as the multiplication of two square matrices. Previously, this was only achieved by resorting to genericity assumptions or randomization techniques, while the best known complexity bound with a general deterministic algorithm was obtained by Keller-Gehrig in 1985 and involves logarithmic factors. Our algorithm computes more generally the determinant of a univariate polynomial matrix in reduced form, and relies on new subroutines for transforming shifted reduced matrices into shifted weak Popov matrices, and shifted weak Popov matrices into shifted Popov matrices. [ABSTRACT FROM AUTHOR]
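As a simple deterministic baseline for comparison (not the authors' matrix-multiplication-time algorithm), the characteristic polynomial can be computed by the classical Faddeev–LeVerrier recurrence in O(n^4) field operations. The recurrence divides by 1, ..., n, so this sketch assumes characteristic zero and uses exact rationals:

```python
from fractions import Fraction

def charpoly(A):
    """Coefficients [1, c_{n-1}, ..., c_0] of det(xI - A) via the
    Faddeev-LeVerrier recurrence: M_k = A*M_{k-1} + c_{n-k+1}*I,
    c_{n-k} = -(1/k) * trace(A*M_k), starting from M_0 = 0."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    coeffs = [Fraction(1)]                       # leading coefficient of x^n
    M = [[Fraction(0)] * n for _ in range(n)]    # M_0 = 0
    for k in range(1, n + 1):
        # M <- A*M + (latest coefficient)*I
        M = [[sum(A[i][t] * M[t][j] for t in range(n)) for j in range(n)]
             for i in range(n)]
        for i in range(n):
            M[i][i] += coeffs[-1]
        # next coefficient: -(1/k) * trace(A*M)
        trace_AM = sum(sum(A[i][t] * M[t][i] for t in range(n))
                       for i in range(n))
        coeffs.append(-trace_AM / k)
    return coeffs
```

For A = diag(2, 3) this yields [1, -5, 6], i.e. x^2 - 5x + 6. The paper's algorithm removes both the extra factor of n in the exponent and the randomization or genericity assumptions of earlier fast methods.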
- Published
- 2021
- Full Text
- View/download PDF
43. Learning rates for regularized classifiers using multivariate polynomial kernels
- Author
-
Tong, Hongzhi, Chen, Di-Rong, and Peng, Lizhong
- Subjects
- *
POLYNOMIALS , *NUMERICAL analysis , *ERROR analysis in mathematics , *HILBERT space , *APPROXIMATION theory , *ALGORITHMS - Abstract
Abstract: Regularized classifiers (a leading example is support vector machine) are known to be a kind of kernel-based classification methods generated from Tikhonov regularization schemes, and the polynomial kernels are the original and also probably the most important kernels used in them. In this paper, we provide an error analysis for the regularized classifiers using multivariate polynomial kernels. We introduce Bernstein–Durrmeyer polynomials, whose reproducing kernel Hilbert space norms and approximation properties in space play a key role in the analysis of regularization error. We also introduce the standard estimation of sample error, and derive explicit learning rates for these algorithms. [Copyright &y& Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
44. Optimal adaptive solution of initial-value problems with unknown singularities
- Author
-
Kacewicz, Bolesław and Przybyłowicz, Paweł
- Subjects
- *
OPTIMAL designs (Statistics) , *ALGORITHMS , *MATHEMATICAL singularities , *CHANGE-point problems - Abstract
Abstract: The optimal solution of initial-value problems in ODEs is well studied for smooth right-hand side functions. Much less is known about the optimality of algorithms for singular problems. In this paper, we study the (worst case) solution of scalar problems with a right-hand side function having r continuous bounded derivatives in , except for an unknown singular point. We establish the minimal worst case error for such problems (which depends on r similarly as in the smooth case), and define optimal adaptive algorithms. The crucial point is locating an unknown singularity of the solution by properly adapting the grid. We also study lower bounds on the error of an algorithm for classes of singular problems. In the case of a single singularity with nonadaptive information, or in the case of two or more singularities, the error of any algorithm is shown to be independent of r. [Copyright &y& Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
45. Approximation of anisotropic classes by standard information
- Author
-
Yanjie, Jiang
- Subjects
- *
BESOV spaces , *FUNCTIONAL analysis , *STRUCTURAL optimization , *ALGORITHMS - Abstract
Abstract: This paper deals with the approximation problem on anisotropic Besov classes , and Besov–Wiener classes using standard information. The asymptotic decay rates of the best algorithms in the worst-case setting are determined. [Copyright &y& Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
46. Finding a longest common subsequence between a run-length-encoded string and an uncompressed string
- Author
-
Liu, J.J., Wang, Y.L., and Lee, R.C.T.
- Subjects
- *
ALGORITHMS , *RECURSIVE functions , *NUMBER theory , *DECIDABILITY (Mathematical logic) - Abstract
Abstract: In this paper, we propose an algorithm for finding a longest common subsequence of two strings, one of which is given in run-length-encoded format. We propose a new recursive formula for this setting, where the encoded string is stored as a sequence of runs, each run consisting of a repeated character and the number of its repetitions. There are three cases in the proposed recursive formula: two cases handle a match between the current characters and the third handles a mismatch. To determine which of the two matching cases applies, a specific value has to be found, which can be obtained by using another of our proposed recursive formulas. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
47. Characterizing Valiant's algebraic complexity classes
- Author
-
Malod, Guillaume and Portier, Natacha
- Subjects
- *
SET theory , *MATHEMATICAL programming , *ALGORITHMS , *ARITHMETIC - Abstract
Abstract: Valiant introduced an algebraic complexity theory twenty years ago to study the complexity of polynomial families. The basic computation model used is the arithmetic circuit, which makes these classes very easy to define and open to combinatorial techniques. In this paper we gather known results and new techniques under a unifying theme, namely the restrictions imposed upon the gates of the circuit, building a hierarchy from formulas to circuits. As a consequence we get simpler proofs for known results such as the equality of the classes VNP and VNPe or the completeness of the Determinant for VQP, and new results such as a characterization of the classes VQP and VP (which we can also apply to the Boolean class LOGCFL) or a full answer to a conjecture in Bürgisser's book [Completeness and Reduction in Algebraic Complexity Theory, Algorithms and Computation in Mathematics, vol. 7, Springer, Berlin, 2000]. We also show that for circuits of polynomial depth and unbounded size these models all have the same expressive power and can be used to characterize a uniform version of VNP. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
48. Strong tractability of multivariate integration of arbitrary high order using digitally shifted polynomial lattice rules
- Author
-
Dick, Josef and Pillichshammer, Friedrich
- Subjects
- *
POLYNOMIALS , *MULTIVARIATE analysis , *MATRICES (Mathematics) , *ALGORITHMS - Abstract
Abstract: In this paper we prove the existence of digitally shifted polynomial lattice rules which achieve strong tractability results for Sobolev spaces of arbitrary high smoothness. The convergence rate is shown to be the best possible up to a given degree of smoothness of the integrand. Indeed we even show the existence of polynomial lattice rules which automatically adjust themselves to the smoothness of the integrand up to a certain given degree. Further we show that strong tractability under certain conditions on the weights can be obtained and that polynomial lattice rules exist for which the worst-case error can be bounded independently of the dimension. These results hold independent of the smoothness. [Copyright &y& Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
49. A lower bound for the Sturm–Liouville eigenvalue problem on a quantum computer
- Author
-
Bessen, Arvid J.
- Subjects
- *
QUANTUM computers , *ALGORITHMS , *DISTRIBUTION (Probability theory) , *PARTIAL differential equations - Abstract
Abstract: We study the complexity of approximating the smallest eigenvalue of a univariate Sturm–Liouville problem on a quantum computer. This general problem includes the special case of solving a one-dimensional Schrödinger equation with a given potential for the ground state energy. The Sturm–Liouville problem depends on a function q, which, in the case of the Schrödinger equation, can be identified with the potential function. Recently, Papageorgiou and Woźniakowski proved that quantum computers achieve an exponential reduction in the number of queries over the number needed in the classical worst-case and randomized settings for smooth functions q. Their method uses the (discretized) unitary propagator and arbitrary powers of it as a query ("power queries"). They showed that the Sturm–Liouville equation can be solved with power queries, while the number of queries in the worst-case and randomized settings on a classical computer is polynomial. This proves that a quantum computer with power queries achieves an exponential reduction in the number of queries compared to a classical computer. In this paper we show that the number of queries in Papageorgiou's and Woźniakowski's algorithm is asymptotically optimal. In particular, we prove a matching lower bound on power queries, therefore showing that this number of power queries is both sufficient and necessary. Our proof is based on a frequency analysis technique, which examines the probability distribution of the final state of a quantum algorithm and the dependence of its Fourier transform on the input. [Copyright © Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
50. A numerical-symbolic algorithm for computing the multiplicity of a component of an algebraic set
- Author
-
Bates, Dan, Peterson, Chris, and Sommese, Andrew J.
- Subjects
- *
POLYNOMIALS , *ALGORITHMS , *MATHEMATICAL programming , *MULTIPLICITY (Mathematics) - Abstract
Abstract: Let be multivariate polynomials (with complex coefficients) in the variables . The common zero locus of these polynomials, for , determines an algebraic set. This algebraic set decomposes into a union of simpler, irreducible components. The set of polynomials imposes on each component a positive integer known as the multiplicity of the component. Multiplicity plays an important role in many practical applications. It determines roughly “how many times the component should be counted in a computation”. Unfortunately, many numerical methods have difficulty in problems where the multiplicity of a component is greater than one. The main goal of this paper is to present an algorithm for determining the multiplicity of a component of an algebraic set. The method sidesteps the numerical stability issues which have obstructed other approaches by incorporating a combined numerical-symbolic technique. [Copyright &y& Elsevier]
- Published
- 2006
- Full Text
- View/download PDF