58 results
Search Results
2. Best Paper Award of the Journal of Complexity.
- Author
-
Novak, Erich
- Subjects
- *
PERIODICAL awards - Published
- 2023
- Full Text
- View/download PDF
3. Journal of Complexity Best Paper Award.
- Author
-
Novak, Erich
- Subjects
- *
AWARDS - Published
- 2023
- Full Text
- View/download PDF
4. Best Paper Award of the Journal of Complexity.
- Author
-
Novak, Erich
- Published
- 2022
- Full Text
- View/download PDF
5. Online regularized learning algorithm for functional data.
- Author
-
Mao, Yuan and Guo, Zheng-Chu
- Subjects
- *
ONLINE education , *ONLINE algorithms , *HILBERT space , *DECAY constants , *ERROR rates , *ITERATIVE learning control , *TIKHONOV regularization - Abstract
In recent years, functional linear models have attracted growing attention in statistics and machine learning for recovering the slope function or its functional predictor. This paper considers an online regularized learning algorithm for functional linear models in a reproducing kernel Hilbert space. It provides a convergence analysis of the excess prediction error and the estimation error with polynomially decaying step-size and constant step-size, respectively. Fast convergence rates can be derived via a capacity-dependent analysis. Introducing an explicit regularization term extends the saturation boundary of unregularized online learning algorithms with polynomially decaying step-size and achieves fast convergence rates for the estimation error without a capacity assumption; the latter remains an open problem for the unregularized online learning algorithm with decaying step-size. This paper also demonstrates convergence rates of both prediction error and estimation error with constant step-size that are competitive with the existing literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
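The decaying-step, explicitly regularized online update described in the abstract above can be sketched on a grid discretization. Everything concrete below (the slope function, the covariate model, the step-size constants) is a hypothetical illustration, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid discretization of [0, 1]; functional covariates live on this grid.
grid = np.linspace(0.0, 1.0, 101)
h = grid[1] - grid[0]
beta_true = np.sin(2 * np.pi * grid)          # hypothetical slope function

def inner(f, g):
    """L2([0,1]) inner product approximated by a Riemann sum."""
    return float(np.sum(f * g) * h)

# Online regularized update with polynomially decaying step size eta_t:
#   beta_{t+1} = beta_t - eta_t * ((<X_t, beta_t> - y_t) X_t + lam * beta_t)
lam, eta0, theta = 1e-3, 0.1, 0.6
beta = np.zeros_like(grid)
for t in range(1, 5001):
    # random smooth covariate: a few random Fourier modes
    X = sum(rng.normal() * np.cos(k * np.pi * grid) for k in range(1, 6))
    y = inner(X, beta_true) + 0.1 * rng.normal()
    eta = eta0 * t ** (-theta)
    beta -= eta * ((inner(X, beta) - y) * X + lam * beta)

err = np.sqrt(inner(beta - beta_true, beta - beta_true))
```

The term `lam * beta` is the explicit regularization the paper adds; dropping it recovers the unregularized online scheme whose saturation behavior the abstract contrasts against.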
6. Bypassing the quadrature exactness assumption of hyperinterpolation on the sphere.
- Author
-
An, Congpei and Wu, Hao-Ning
- Subjects
- *
SPHERES , *CONTINUOUS functions , *GAUSSIAN quadrature formulas , *QUADRATURE domains , *POLYNOMIALS - Abstract
This paper focuses on the approximation of continuous functions on the unit sphere by spherical polynomials of degree n via hyperinterpolation. Hyperinterpolation of degree n is a discrete approximation of the L2-orthogonal projection of the same degree with its Fourier coefficients evaluated by a positive-weight quadrature rule that exactly integrates all spherical polynomials of degree at most 2n. This paper aims to bypass this quadrature exactness assumption by replacing it with the Marcinkiewicz–Zygmund property proposed in a previous paper. Consequently, hyperinterpolation can be constructed by a positive-weight quadrature rule (not necessarily with quadrature exactness). This scheme is referred to as unfettered hyperinterpolation. This paper provides a reasonable error estimate for unfettered hyperinterpolation. The error estimate generally consists of two terms: a term representing the error estimate of the original hyperinterpolation of full quadrature exactness and another introduced as compensation for the loss of exactness degrees. A guide to controlling the newly introduced term in practice is provided. In particular, if the quadrature points form a quasi-Monte Carlo (QMC) design, then there is a refined error estimate. Numerical experiments verify the error estimates and the practical guide. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
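An interval analogue of the construction in the abstract above (hypothetical: Legendre polynomials on [-1, 1] standing in for spherical harmonics) makes the role of quadrature exactness concrete — the (n+1)-point Gauss-Legendre rule is exact for degree 2n+1, so the exactness assumption holds here.

```python
import numpy as np
from numpy.polynomial import legendre

# Hyperinterpolation of degree n on [-1, 1]: the Legendre expansion
# coefficients are evaluated by an (n+1)-point Gauss-Legendre rule,
# which integrates polynomials of degree <= 2n + 1 exactly.
n = 10
nodes, weights = legendre.leggauss(n + 1)

f = np.exp                                   # hypothetical test function

coeffs = []
for k in range(n + 1):
    Pk = legendre.Legendre.basis(k)
    norm2 = 2.0 / (2 * k + 1)                # ||P_k||^2 on [-1, 1]
    coeffs.append(float(np.sum(weights * f(nodes) * Pk(nodes))) / norm2)

def hyperinterpolant(x):
    """Degree-n hyperinterpolant of f evaluated at x."""
    return sum(c * legendre.Legendre.basis(k)(x) for k, c in enumerate(coeffs))

x = np.linspace(-1.0, 1.0, 201)
max_err = float(np.max(np.abs(hyperinterpolant(x) - np.exp(x))))
```

The paper's point is what happens when the rule is *not* exact to degree 2n: replacing `leggauss` with an arbitrary positive-weight rule gives the "unfettered" variant, at the cost of the extra error term described in the abstract.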
7. Tractability of L2-approximation and integration in weighted Hermite spaces of finite smoothness.
- Author
-
Leobacher, Gunther, Pillichshammer, Friedrich, and Ebert, Adrian
- Subjects
- *
LITERATURE - Abstract
In this paper we consider integration and L2-approximation for functions over R^s from weighted Hermite spaces. The first part of the paper is devoted to a comparison of several weighted Hermite spaces that appear in the literature, which is interesting in its own right. Then we study tractability of the integration and L2-approximation problems for the introduced Hermite spaces; tractability describes the growth rate of the information complexity as the error threshold ε tends to 0 and the problem dimension s grows to infinity. Our main results are characterizations of tractability in terms of the involved weights, which model the importance of the successive coordinate directions for functions from the weighted Hermite spaces. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
8. On a class of linear regression methods.
- Author
-
Wang, Ying-Ao, Huang, Qin, Yao, Zhigang, and Zhang, Ye
- Subjects
- *
INVERSE problems , *LEAST squares - Abstract
In this paper, a unified study is presented for the design and analysis of a broad class of linear regression methods. The proposed general framework includes the conventional linear regression methods (such as the least squares regression and the Ridge regression) and some new regression methods (e.g. the Landweber regression and Showalter regression), which have recently been introduced in the fields of optimization and inverse problems. The strong consistency, the reduced mean squared error, the asymptotic Gaussian property, and the best worst case error of this class of linear regression methods are investigated. Various numerical experiments are performed to demonstrate the consistency and efficiency of the proposed class of methods for linear regression. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
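The unifying idea of the abstract above — least squares, Ridge, and Landweber as members of one family of linear regression methods — can be sketched on synthetic data. The data model and all constants below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Conventional members of the class: least squares and Ridge
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Landweber regression: gradient iteration on the least squares objective,
# where the iteration count acts as the regularization parameter
step = 1.0 / np.linalg.norm(X, 2) ** 2       # 1 / ||X||^2 ensures convergence
beta_lw = np.zeros(p)
for _ in range(500):
    beta_lw += step * X.T @ (y - X @ beta_lw)
```

Run long enough, the Landweber iterates converge to the least squares solution; stopping early yields a regularized estimate, which is the inverse-problems perspective the abstract refers to.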
9. Tamed-adaptive Euler-Maruyama approximation for SDEs with superlinearly growing and piecewise continuous drift, superlinearly growing and locally Hölder continuous diffusion.
- Author
-
Do, Minh-Thang, Ngo, Hoang-Long, and Pho, Nhat-An
- Subjects
- *
HOLDER spaces , *STOCHASTIC differential equations , *DIFFUSION coefficients - Abstract
In this paper, we consider stochastic differential equations whose drift coefficient is superlinearly growing and piecewise continuous, and whose diffusion coefficient is superlinearly growing and locally Hölder continuous. We first prove the existence and uniqueness of the solution to such stochastic differential equations and then propose a tamed-adaptive Euler-Maruyama approximation scheme. We study the rate of convergence of the scheme in the L1-norm on both finite and infinite time intervals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
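A fixed-step tamed Euler-Maruyama scheme — without the adaptive step-size component the paper adds — can be sketched for a hypothetical double-well SDE with superlinearly growing drift b(x) = x - x^3:

```python
import numpy as np

rng = np.random.default_rng(2)

# dX = b(X) dt + sigma dW with superlinearly growing drift b(x) = x - x^3.
# Plain Euler-Maruyama can explode for such drifts; the tamed scheme divides
# the drift increment by 1 + dt*|b(x)| so that a single step stays bounded.
b = lambda x: x - x ** 3
sigma = 0.5
T, N = 1.0, 1000
dt = T / N

def tamed_em_path(x0):
    x = x0
    for _ in range(N):
        x += dt * b(x) / (1.0 + dt * abs(b(x))) + sigma * rng.normal(scale=np.sqrt(dt))
    return x

# Paths started far out in the superlinear regime stay bounded and are pulled
# back toward the wells at x = -1 and x = 1.
samples = np.array([tamed_em_path(10.0) for _ in range(200)])
```

Note that this drift is smooth; the paper's contribution is precisely to handle piecewise continuous drifts and locally Hölder diffusions, which this sketch does not attempt.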
10. Nonlinear Tikhonov regularization in Hilbert scales for inverse learning.
- Author
-
Rastogi, Abhishake
- Subjects
- *
TIKHONOV regularization , *HILBERT space , *INVERSE problems , *REGULARIZATION parameter , *SAMPLE size (Statistics) - Abstract
In this paper, we study a Tikhonov regularization scheme in Hilbert scales for a nonlinear statistical inverse problem with general noise. The regularizing norm in this scheme is stronger than the norm in the Hilbert space. We focus on developing a theoretical analysis for this scheme based on conditional stability estimates. We utilize the concept of the distance function to establish high-probability estimates of the direct and reconstruction errors in the reproducing kernel Hilbert space setting. Furthermore, explicit rates of convergence in terms of sample size are established for the oversmoothing case and the regular case over the regularity class defined through an appropriate source condition. Our results improve upon and generalize previous results obtained in related settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. A note on the CBC-DBD construction of lattice rules with general positive weights.
- Author
-
Kritzer, Peter
- Subjects
- *
COMPUTER algorithms , *ELECTRONIC information resource searching , *DATABASE searching , *VECTOR valued functions , *NUMERICAL integration - Abstract
Lattice rules are among the most prominently studied quasi-Monte Carlo methods to approximate multivariate integrals. A rank-1 lattice rule for an s-dimensional integral is specified by its generating vector z ∈ Z^s and its number of points N. While there are many results on the existence of "good" rank-1 lattice rules, there are no explicit constructions of good generating vectors for dimensions s ≥ 3. Therefore one resorts to computer search algorithms. In a recent paper by Ebert et al. in the Journal of Complexity, we showed a component-by-component digit-by-digit (CBC-DBD) construction for good generating vectors for integration of functions in weighted Korobov classes equipped with product weights. Here, we generalize this result to arbitrary positive weights, answering an open question from the paper of Ebert et al. We include a section on how the algorithm can be implemented in the case of POD weights, implying that the CBC-DBD construction is competitive with the classical CBC construction. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
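A rank-1 lattice rule itself is easy to state. Below is the classical two-dimensional Fibonacci lattice, a generating vector known to be good for s = 2, used only to fix the construction; the paper's CBC-DBD algorithm is a method for *finding* good vectors z in higher dimensions and is not reproduced here.

```python
import numpy as np

# Rank-1 lattice rule: points x_k = {k z / N}, k = 0, ..., N-1.
N, z = 987, np.array([1, 610])            # consecutive Fibonacci numbers
k = np.arange(N)[:, None]
points = (k * z % N) / N                  # N points in [0, 1)^2

# QMC approximation of a smooth integral; the exact value over [0,1]^2 is 1.
f = lambda x: np.prod(1.0 + (x - 0.5), axis=-1)
qmc = float(np.mean(f(points)))
```

All N points come from the single vector z, which is what makes searching over z (component by component, digit by digit) computationally feasible.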
12. Randomized complexity of parametric integration and the role of adaption I. Finite dimensional case.
- Author
-
Heinrich, Stefan
- Subjects
- *
SOBOLEV spaces - Abstract
We study the randomized n-th minimal errors (and hence the complexity) of vector-valued mean computation, which is the discrete version of parametric integration. The results of the present paper form the basis for the complexity analysis of parametric integration in Sobolev spaces, which will be presented in Part 2. Altogether this extends previous results of Heinrich and Sindambiwe (1999) [12] and Wiegand (2006) [27]. Moreover, a basic problem of Information-Based Complexity on the power of adaption for linear problems in the randomized setting is solved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Optimal recovery and generalized Carlson inequality for weights with symmetry properties.
- Author
-
Osipenko, K.Yu.
- Subjects
- *
DIFFERENTIAL operators , *SYMMETRY , *FOURIER transforms , *DIFFERENTIAL inequalities - Abstract
The paper concerns problems of the recovery of operators from noisy information in weighted Lq-spaces with homogeneous weights. A number of general theorems are proved and applied to finding exact constants in multidimensional Carlson-type inequalities with several weights and to problems of the recovery of differential operators from a noisy Fourier transform. In particular, optimal methods are obtained for the recovery of powers of generalized Laplace operators from a noisy Fourier transform in the Lp-metric. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Convergence of the Gauss-Newton method for convex composite optimization problems under majorant condition on Riemannian manifolds.
- Author
-
Ansari, Qamrul Hasan, Uddin, Moin, and Yao, Jen-Chih
- Subjects
- *
GAUSS-Newton method , *RIEMANNIAN manifolds , *QUASI-Newton methods - Abstract
In this paper, we consider convex composite optimization problems on Riemannian manifolds and discuss the semi-local convergence of the Gauss-Newton method with a quasi-regular initial point under the majorant condition. As special cases, we also discuss the convergence of the sequence generated by the Gauss-Newton method under a Lipschitz-type condition or under the γ-condition. • Study of convex composite optimization problems on Riemannian manifolds. • Convergence of the Gauss-Newton method under the majorant condition. • Special cases are studied under (i) a Lipschitz-type condition, (ii) the γ-condition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Convergence analysis of an optimally accurate frozen multi-level projected steepest descent iteration for solving inverse problems.
- Author
-
Mittal, Gaurav and Kumar Giri, Ankik
- Subjects
- *
PROBLEM solving , *NONLINEAR operators , *STABILITY constants , *CONVEX sets , *FAMILY stability - Abstract
In this paper, we introduce a novel projected steepest descent iterative method with frozen derivative. The classical projected steepest descent iteration computes the derivative of the nonlinear operator at each iterate; the method of this paper requires the derivative only at an initial point. We exhibit the convergence analysis of our method by assuming conditional stability of the inverse problem on a convex and compact set. Further, by assuming conditional stability on a nested family of convex and compact subsets, we develop a multi-level method. In order to enhance the accuracy of approximation between neighboring levels, we couple it with the growth of the stability constants. This, along with a suitable discrepancy criterion, ensures that the algorithm proceeds from level to level and terminates within finitely many steps. Finally, we discuss an inverse problem to which our methods are applicable. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
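The frozen-derivative idea from the abstract above can be sketched on a toy forward map (hypothetical, separable, and noise-free — far simpler than the paper's setting): the Jacobian is evaluated once at the initial guess, and every iterate is projected back onto a convex constraint set.

```python
import numpy as np

# Projected steepest descent with frozen derivative for F(x) = y:
#   x_{k+1} = P_C(x_k - omega * J0^T (F(x_k) - y)),  with J0 = F'(x0) fixed.
F = lambda x: x + 0.2 * np.sin(x)                 # toy monotone forward map
jac = lambda x: np.diag(1.0 + 0.2 * np.cos(x))

x_true = np.array([0.8, 0.3])
y = F(x_true)

lo, hi = np.zeros(2), 2.0 * np.ones(2)            # convex constraint box C
project = lambda x: np.clip(x, lo, hi)

x = np.array([1.5, 1.5])                          # initial guess x0
J0 = jac(x)                                       # derivative computed only once
omega = 0.5                                       # step size
for _ in range(300):
    x = project(x - omega * J0.T @ (F(x) - y))

err = float(np.linalg.norm(x - x_true))
```

The saving is that `jac` is called once instead of once per iteration, which is exactly the cost reduction the frozen-derivative method is designed for.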
16. Functional linear regression with Huber loss.
- Author
-
Tong, Hongzhi
- Subjects
- *
HILBERT space - Abstract
In this paper, we consider the robust regression problem associated with Huber loss in the framework of functional linear model and reproducing kernel Hilbert spaces. We propose an Ivanov regularized empirical risk minimization estimation procedure to approximate the slope function of the linear model in the presence of outliers or heavy-tailed noises. By appropriately tuning the scale parameter of the Huber loss, we establish explicit rates of convergence for our estimates in terms of excess prediction risk under mild assumptions. Our study in the paper justifies the efficiency of Huber regression for functional data from a theoretical viewpoint. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
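A finite-dimensional sketch of Huber-loss regression — ordinary covariates rather than the paper's functional/RKHS setting, and plain gradient descent rather than Ivanov-regularized risk minimization — illustrates how the scale parameter clips the influence of heavy-tailed noise:

```python
import numpy as np

rng = np.random.default_rng(3)

def huber_grad(r, tau):
    """Derivative of the Huber loss in r: identity on [-tau, tau], clipped outside."""
    return np.clip(r, -tau, tau)

n, p = 300, 4
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.0, 1.5])
noise = rng.standard_t(df=1.5, size=n)       # heavy-tailed (infinite-variance) noise
y = X @ beta_true + noise

tau = 1.0                                    # scale parameter of the Huber loss
beta = np.zeros(p)
step = 1.0 / np.linalg.norm(X, 2) ** 2
for _ in range(2000):
    beta += step * X.T @ huber_grad(y - X @ beta, tau)

err_huber = float(np.linalg.norm(beta - beta_true))
err_ls = float(np.linalg.norm(np.linalg.lstsq(X, y, rcond=None)[0] - beta_true))
```

With such heavy tails the Huber estimate is typically far more accurate than least squares; tuning tau trades robustness against efficiency, which is the role the abstract assigns to the scale parameter.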
17. Rates of approximation by ReLU shallow neural networks.
- Author
-
Mao, Tong and Zhou, Ding-Xuan
- Subjects
- *
ARTIFICIAL neural networks , *MACHINE learning , *DEEP learning - Abstract
Neural networks activated by the rectified linear unit (ReLU) play a central role in the recent development of deep learning. The topic of approximating functions from Hölder spaces by these networks is crucial for understanding the efficiency of the induced learning algorithms. Although the topic has been well investigated in the setting of deep neural networks with many layers of hidden neurons, it is still open for shallow networks having only one hidden layer. In this paper, we provide rates of uniform approximation by these networks. We show that ReLU shallow neural networks with m hidden neurons can uniformly approximate functions from the Hölder space W^r_∞([-1, 1]^d) with rate O((log m)^{1/2+d} · m^{-(r/d)·(d+2)/(d+4)}) when r < d/2 + 2. Such rates are very close to the optimal rate O(m^{-r/d}) in the sense that (d+2)/(d+4) is close to 1 when the dimension d is large. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Approximating smooth and sparse functions by deep neural networks: Optimal approximation rates and saturation.
- Author
-
Liu, Xia
- Subjects
- *
ARTIFICIAL neural networks , *SMOOTHNESS of functions , *APPROXIMATION theory - Abstract
Constructing neural networks for function approximation is a classical and longstanding topic in approximation theory. In this paper, we aim at constructing deep neural networks with three hidden layers using a sigmoidal activation function to approximate smooth and sparse functions. Specifically, we prove that the constructed deep nets with controllable magnitude of free parameters can reach the optimal approximation rate in approximating both smooth and sparse functions. In particular, we prove that neural networks with three hidden layers can avoid the phenomenon of saturation, i.e., the phenomenon that for some neural network architectures, the approximation rate stops improving for functions of very high smoothness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. The rate of convergence for sparse and low-rank quantile trace regression.
- Author
-
Tan, Xiangyong, Peng, Ling, Xiao, Peiwen, Liu, Qing, and Liu, Xiaohui
- Subjects
- *
QUANTILE regression , *LOW-rank matrices , *PANEL analysis , *REGRESSION analysis - Abstract
Trace regression models are widely used in applications involving panel data, images, genomic microarrays, etc., where high-dimensional covariates are often involved. However, the existing research involving high-dimensional covariates focuses mainly on the conditional mean model. In this paper, we extend the trace regression model to the quantile trace regression model in which the parameter is a matrix that is simultaneously low-rank and row (column) sparse. The convergence rate of the penalized estimator is derived under mild conditions. Simulations, as well as a real data application, are carried out for illustration. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. The curse of dimensionality for the Lp-discrepancy with finite p.
- Author
-
Novak, Erich and Pillichshammer, Friedrich
- Subjects
- *
NUMERICAL integration , *SOBOLEV spaces , *DIFFERENTIABLE functions , *POINT set theory , *CUBES - Abstract
The Lp-discrepancy is a quantitative measure for the irregularity of distribution of an N-element point set in the d-dimensional unit cube, which is closely related to the worst-case error of quasi-Monte Carlo algorithms for numerical integration. Its inverse for dimension d and error threshold ε ∈ (0, 1) is the minimal number of points in [0, 1)^d such that the minimal normalized Lp-discrepancy is less than or equal to ε. It is well known that the inverse of the L2-discrepancy grows exponentially fast with the dimension d, i.e., we have the curse of dimensionality, whereas the inverse of the L∞-discrepancy depends exactly linearly on d. The behavior of the inverse of the Lp-discrepancy for general p ∉ {2, ∞} has been an open problem for many years. In this paper we show that the Lp-discrepancy suffers from the curse of dimensionality for all p in (1, 2] of the form p = 2ℓ/(2ℓ - 1) with ℓ ∈ N. This result follows from a more general result that we show for the worst-case error of numerical integration in an anchored Sobolev space with anchor 0 of functions that are once differentiable in each variable and whose first derivative has finite Lq-norm, where q is an even positive integer satisfying 1/p + 1/q = 1. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
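For p = 2 the discrepancy discussed above can be computed exactly in O(N^2 d) time via Warnock's formula, which makes small experiments easy. The point sets below are hypothetical examples, not taken from the paper.

```python
import numpy as np

def l2_star_discrepancy(x):
    """Warnock's formula for the L2 star discrepancy of points x in [0,1)^d."""
    n, d = x.shape
    t1 = 3.0 ** (-d)
    t2 = (2.0 / n) * np.sum(np.prod((1.0 - x ** 2) / 2.0, axis=1))
    mx = np.maximum(x[:, None, :], x[None, :, :])        # pairwise coordinate maxima
    t3 = np.sum(np.prod(1.0 - mx, axis=2)) / n ** 2
    return float(np.sqrt(t1 - t2 + t3))

rng = np.random.default_rng(4)
n = 144
disc_random = l2_star_discrepancy(rng.random((n, 2)))

k = np.arange(n)[:, None]
fib = (k * np.array([1, 89]) % n) / n                    # 2D Fibonacci lattice
disc_fib = l2_star_discrepancy(fib)
```

For uniform random points the expected squared discrepancy is (2^{-d} - 3^{-d})/N, so `disc_random` lands near 0.03 here, while a well-chosen lattice is typically several times smaller.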
21. Discrepancy bounds for normal numbers generated by necklaces in arbitrary base.
- Author
-
Hofer, Roswitha and Larcher, Gerhard
- Subjects
- *
NECKLACES , *PRIME numbers , *PETROPHYSICS - Abstract
Mordechay B. Levin (1999) constructed a number λ which is normal in base 2 and such that the sequence ({2^n λ})_{n = 0, 1, 2, ...} has very small discrepancy, N·D_N = O((log N)^2). This construction technique was generalized by Becher and Carton (2019), who generated normal numbers via nested perfect necklaces, for which the same upper discrepancy estimate holds. In this paper we derive an upper discrepancy bound for so-called semi-perfect nested necklaces and show that for Levin's normal number in an arbitrary prime base p this upper bound on the discrepancy is best possible. This result generalizes a previous result by the authors (2022) in base 2. Our result for Levin's normal number in any prime base might support the guess that O((log N)^2) is the best order in N that can be achieved by a normal number, while generalizing the class of known normal numbers by introducing semi-perfect necklaces might, on the other hand, help in the search for normal numbers satisfying smaller discrepancy bounds. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. A continuous characterization of PSPACE using polynomial ordinary differential equations.
- Author
-
Bournez, Olivier, Gozzi, Riccardo, Graça, Daniel S., and Pouly, Amaury
- Subjects
- *
ORDINARY differential equations , *POLYNOMIALS , *ANALOG computers - Abstract
In this paper we provide a characterization of the complexity class PSPACE by using a purely continuous model defined with polynomial ordinary differential equations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. On Huber's contaminated model.
- Author
-
Mu, Weiyan and Xiong, Shifeng
- Subjects
- *
OUTLIER detection , *DATA modeling - Abstract
Huber's contaminated model is a basic model for data with outliers. This paper aims at addressing several fundamental problems about this model. We first study its identifiability properties. Several theorems are presented to determine whether the model is identifiable in various situations. Based on these results, we discuss the problem of estimating the parameters from observations drawn from Huber's contaminated model. A definition of estimation consistency is introduced to handle the general case where the model may be unidentifiable. This consistency is a strong robustness property. After showing that existing estimators cannot be consistent in this sense, we propose a new estimator that possesses the consistency property under mild conditions. Its adaptive version, which simultaneously possesses this consistency property and optimal asymptotic efficiency, is also provided. Numerical examples show that our estimators have better overall performance than existing estimators no matter how many outliers are in the data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. On the relation of the spectral test to isotropic discrepancy and Lq-approximation in Sobolev spaces.
- Author
-
Sonnleitner, Mathias and Pillichshammer, Friedrich
- Subjects
- *
SOBOLEV spaces , *POINT set theory , *CONVEX sets , *NEIGHBORHOODS - Abstract
This paper is a follow-up to the recent paper of Pillichshammer and Sonnleitner (2020) [12]. We show that the isotropic discrepancy of a lattice point set is at most (d^2/2)(d + 1) times its spectral test, thereby correcting the dependence on the dimension d and an inaccuracy in the proof of the upper bound in Theorem 2 of the mentioned paper. The major task is to bound the volume of the neighbourhood of the boundary of a convex set contained in the unit cube. Further, we characterize averages of the distance to a lattice point set in terms of the spectral test. As an application, we infer that the spectral test – and with it the isotropic discrepancy – is crucial for the suitability of the lattice point set for the approximation of Sobolev functions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
25. Adaptive iterative hard thresholding for low-rank matrix recovery and rank-one measurements.
- Author
-
Xia, Yu and Zhou, Likai
- Subjects
- *
LOW-rank matrices , *RESTRICTED isometry property , *TECHNOLOGY convergence , *THRESHOLDING algorithms - Abstract
In low-rank matrix recovery, many kinds of measurements fail to meet the standard restricted isometry property (RIP), such as rank-one measurements, that is, [A(X)]_i = ⟨A_i, X⟩ with rank(A_i) = 1, i = 1, ..., m. The historical iterative hard thresholding sequence for low-rank matrix recovery from rank-one measurements was taken as X_{n+1} = P_s(X_n - μ_n P_t(A⁎ sign(A(X_n) - y))), which introduced the "tail" and "head" approximations P_s and P_t, respectively. In this paper, we remove the term P_t and provide a new iterative hard thresholding algorithm with adaptive step size (abbreviated as AIHT). Linear convergence analysis and stability results for AIHT are established under the ℓ1/ℓ2-RIP. In particular, we discuss rank-one Gaussian measurements under tight upper and lower bounds on E‖A(X)‖_1, and provide a better convergence rate and sampling complexity. Besides, several empirical experiments show that AIHT performs better than the historical rank-one iterative hard thresholding method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
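A minimal fixed-step iterative hard thresholding sketch — with dense Gaussian measurements and a constant step, i.e. neither the rank-one measurements nor the adaptive step-size rule that define AIHT — shows the basic gradient-then-project structure discussed in the abstract above:

```python
import numpy as np

rng = np.random.default_rng(5)

def P_rank(Z, r):
    """Hard thresholding onto matrices of rank <= r via truncated SVD."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

d, r, m = 10, 2, 400
X_true = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))   # rank-r target
A = rng.normal(size=(m, d, d))                                # measurement matrices
y = np.einsum('kij,ij->k', A, X_true)                         # y_k = <A_k, X>

X = np.zeros((d, d))
mu = 0.5 / m                                                  # fixed step size
for _ in range(500):
    residual = y - np.einsum('kij,ij->k', A, X)
    X = P_rank(X + mu * np.einsum('k,kij->ij', residual, A), r)

rel_err = float(np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```

The adaptive algorithm replaces the constant `mu` with a data-driven step at each iteration, which is what the paper's convergence and stability analysis is about.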
26. The area of empty axis-parallel boxes amidst 2-dimensional lattice points.
- Author
-
Lachmann, Thomas and Wiart, Jaspar
- Subjects
- *
QUADRATIC fields , *RINGS of integers , *CONTINUED fractions - Abstract
The dispersion of a point set in the unit square is the area of the largest empty axis-parallel box. In this paper we are interested in the dispersion of lattices in the plane, that is, the supremum of the area of the empty axis-parallel boxes amidst the lattice points. We introduce a framework with which to study this based on the continued fractions expansions of the lattice generators. We give necessary and sufficient conditions under which a lattice has finite dispersion. We obtain an exact formula for the dispersion of the lattices associated to subrings of the ring of integers of quadratic fields. We have tight bounds for the dispersion of a lattice based on the largest continued fraction coefficient of the generators, accurate to within one half. We provide an equivalent formulation of Zaremba's conjecture. Using this framework we are able to give very short proofs of previous results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. Improved bounds on the gain coefficients for digital nets in prime power base.
- Author
-
Goda, Takashi and Suzuki, Kosuke
- Subjects
- *
PETRI nets , *MOSQUITO nets , *SMOOTHNESS of functions , *DUALITY theory (Mathematics) - Abstract
We study randomized quasi-Monte Carlo integration by scrambled nets. The scrambled net quadrature has long been popular because it is an unbiased estimator of the true integral, allows for practical error estimation, achieves a high-order decay of the variance for smooth functions, and works even for Lp-functions with any p ≥ 1. The variance of the scrambled net quadrature for L2-functions can be evaluated through the set of so-called gain coefficients. In this paper, based on the system of Walsh functions and the concept of dual nets, we provide improved upper bounds on the gain coefficients for digital nets in general prime power base. Our results explain, in a unified way, the known bound by Owen (1997) for Faure sequences, the recently improved bound by Pan and Owen (2022) for digital nets in base 2 (including Sobol' sequences as a special case), and their finding that all nonzero gain coefficients for digital nets in base 2 must be powers of two. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. Central Limit Theorem for (t,s)-sequences in base 2.
- Author
-
Levin, Mordechay B.
- Subjects
- *
CENTRAL limit theorem , *RANDOM variables , *GAUSSIAN distribution - Abstract
Let (x_n)_{n ≥ 0} be a digital (t, s)-sequence in base 2, let P_m = (x_n)_{n=0}^{2^m - 1}, and let D(P_m, Y) be the local discrepancy of P_m. Let T ⊕ Y be the digital addition of T and Y, and let M_{s,p}(P_m) = (∫_{[0,1)^{2s}} |D(P_m ⊕ T, Y)|^p dT dY)^{1/p}. In this paper, we prove that D(P_m ⊕ T, Y)/M_{s,2}(P_m) converges weakly to the standard Gaussian distribution as m → ∞, where T, Y are uniformly distributed random variables in [0, 1)^s. In addition, we prove that M_{s,p}(P_m)/M_{s,2}(P_m) → ((1/√(2π)) ∫_{-∞}^{∞} |u|^p e^{-u²/2} du)^{1/p} as m → ∞, for p > 0. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. The nonzero gain coefficients of Sobol's sequences are always powers of two.
- Author
-
Pan, Zexin and Owen, Art B.
- Subjects
- *
PLAINS , *FINITE, The , *MICROSTRUCTURE , *ALGORITHMS - Abstract
When a plain Monte Carlo estimate on n samples has variance σ²/n, scrambled digital nets attain a variance that is o(1/n) as n → ∞. For finite n and an adversarially selected integrand, the variance of a scrambled (t, m, s)-net can be at most Γσ²/n for a maximal gain coefficient Γ < ∞. The most widely used digital nets and sequences are those of Sobol'. It was previously known that Γ ⩽ 2^t 3^s for any nets in base 2. For digital nets, Dick and Pillichshammer (2010) obtained the bound 2^{t+s}. In this paper we study digital nets in base 2 and show that Γ ⩽ 2^{t+s-1} for such nets. This bound is a simple, but apparently unnoticed, consequence of a microstructure analysis by Niederreiter and Pirsic in 2001. We obtain a sharper bound that is smaller than this for some digital nets. Our main finding is that all nonzero gain coefficients must be powers of two. A consequence of this latter fact is a simplified algorithm for computing the gain coefficients of digital nets in base 2. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Approximate Pythagoras numbers on ⁎-algebras over C.
- Author
-
Abbasi, Paria, Gribling, Sander, Klingler, Andreas, and Netzer, Tim
- Subjects
- *
SUM of squares , *PYTHAGOREAN theorem , *SEMIDEFINITE programming - Abstract
The Pythagoras number of a sum of squares is the shortest length among its sums-of-squares representations. In many algebras, for example real polynomial algebras in two or more variables, there is no upper bound on the Pythagoras number that holds for all sums of squares. In this paper, we study how Pythagoras numbers in ⁎-algebras over C behave with respect to small perturbations of elements. More precisely, the approximate Pythagoras number of an element is the smallest Pythagoras number among all elements in its ε-ball. We show that these approximate Pythagoras numbers are often significantly smaller than their exact versions, and allow for (almost) dimension-independent upper bounds. Our results use low-rank approximations for Gram matrices of sums of squares and estimates for the operator norm of the Gram map. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. Sharp estimates for the covering numbers of the Weierstrass fractal kernel.
- Author
-
Azevedo, Douglas, Gonzalez, Karina, and Jordão, Thaís
- Subjects
- *
HILBERT space , *FUNCTION spaces , *ORTHOGRAPHIC projection , *CONTINUOUS functions , *FOURIER series , *FRACTALS - Abstract
In this paper, we present sharp estimates for the covering numbers of the embedding of the reproducing kernel Hilbert space (RKHS) associated with the Weierstrass fractal kernel into the space of continuous functions. The method we apply is based on the characterization of the infinite-dimensional RKHS generated by the Weierstrass fractal kernel, and it requires estimates for the operator norm of orthogonal projections on the RKHS. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. Optimal subsampling for least absolute relative error estimators with massive data.
- Author
-
Ren, Min, Zhao, Shengli, and Wang, Mingqiu
- Subjects
- *
BIG data , *LEAST squares , *STATISTICAL weighting - Abstract
Due to the limitation of computational resources, traditional statistical methods are no longer applicable to large data sets. Subsampling is a popular method which can significantly reduce the computational burden. This paper considers a subsampling strategy based on the least absolute relative error in the multiplicative model for massive data. In addition, we employ the random weighting and least squares methods to handle the problem that the asymptotic covariance of the estimator is difficult to estimate directly. Moreover, comparisons among the least absolute relative error, least absolute deviation, and least squares methods under the optimal subsampling strategy are given in simulation studies and real examples. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. Amortized multi-point evaluation of multivariate polynomials.
- Author
-
van der Hoeven, Joris and Lecerf, Grégoire
- Subjects
- *
POINT set theory - Abstract
The evaluation of a polynomial at several points is called the problem of multi-point evaluation. Sometimes, the set of evaluation points is fixed and several polynomials need to be evaluated at this set of points. Several efficient algorithms for this kind of "amortized" multi-point evaluation have been developed recently for the special cases of bivariate polynomials or when the set of evaluation points is generic. In this paper, we extend these results to the evaluation of polynomials in an arbitrary number of variables at an arbitrary set of points. We prove a softly linear complexity bound when the number of variables is fixed. Our method relies on a novel quasi-reduction algorithm for multivariate polynomials, that operates simultaneously with respect to several orderings on the monomials. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
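As a toy illustration of the amortized setting (fixed points, many polynomials), here is a degenerate univariate sketch in which the one-time precomputation is just a Vandermonde matrix; every subsequent polynomial then costs a single matrix-vector product. This is only meant to convey the amortization idea, not the multivariate quasi-reduction method of the paper.

```python
import numpy as np

# Fixed evaluation points, reused across many polynomials: building the
# Vandermonde matrix once is the "amortized" precomputation.
points = np.array([0.0, 1.0, 2.0, 3.0])
V = np.vander(points, N=4, increasing=True)  # columns: 1, x, x^2, x^3

def eval_poly(coeffs):
    # coeffs[k] multiplies x^k; one matrix-vector product per polynomial.
    return V @ coeffs

# p(x) = 1 + 2x + x^2  ->  values 1, 4, 9, 16 at the fixed points.
values = eval_poly(np.array([1.0, 2.0, 1.0, 0.0]))
```

The fast algorithms replace the dense matrix by structured transforms, but the interface is the same: precompute on the point set, then amortize over many input polynomials.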
34. On Newton's method for solving generalized equations.
- Author
-
Ferreira, O.P., Jean-Alexis, C., Piétrus, A., and Silva, G.N.
- Subjects
- *
NEWTON-Raphson method , *EQUATIONS , *SET-valued maps - Abstract
In this paper, we study the convergence properties of a Newton-type method for solving generalized equations under a majorant condition. To this end, we use a contraction mapping principle. More precisely, we present a semi-local convergence analysis of the method for generalized equations involving a set-valued map whose inverse satisfies the Aubin property. Our analysis enables us to obtain convergence results under Lipschitz, Smale, and Nesterov-Nemirovski self-concordance conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
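A minimal one-dimensional instance of a generalized equation is f(x) + N_C(x) ∋ 0, where N_C is the normal cone to an interval C; the Newton-type (Josephy-Newton) step then reduces to projecting the ordinary Newton point onto C. The sketch below assumes a scalar f with positive derivative on C and is only an illustration, not the paper's semi-local analysis.

```python
# Josephy-Newton iteration for the generalized equation f(x) + N_C(x) ∋ 0,
# with C = [lo, hi] an interval of the real line (toy scalar case).
def josephy_newton(f, df, x0, lo, hi, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        # Linearize f at x and solve the linearized generalized equation;
        # for a scalar monotone linearization this is a projection of the
        # unconstrained Newton point onto C.
        x_new = min(max(x - f(x) / df(x), lo), hi)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f(x) = x^2 - 2 on C = [0, 5]: the solution is sqrt(2).
root = josephy_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 2.0, 0.0, 5.0)
```

When C is the whole line the normal cone vanishes and the iteration is ordinary Newton's method, which is the sense in which generalized equations subsume nonlinear equations.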
35. Computing zeta functions of large polynomial systems over finite fields.
- Author
-
Cheng, Qi, Maurice Rojas, J., and Wan, Daqing
- Subjects
- *
ZETA functions , *SOFTWARE verification , *POLYNOMIALS , *FINITE fields , *POLYNOMIAL time algorithms , *EQUATIONS - Abstract
We improve the algorithms of Lauder-Wan [11] and Harvey [8] to compute the zeta function of a system of m polynomial equations in n variables over the q-element finite field F_q, for large m. The dependence on m in the original algorithms was exponential. Our main result reduces the dependence on m from exponential to polynomial. As an application, we speed up a doubly exponential algorithm from a recent software verification paper [3] (on universal equivalence of programs over finite fields) to singly exponential time. One key new ingredient is an effective, finite field version of the classical Kronecker theorem which (set-theoretically) reduces the number of defining equations for a polynomial system over F_q when q is suitably large. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
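For context, the zeta function of a system packages its point counts N_k over the field extensions F_{q^k}. A brute-force count over the base field, exponential in the number of variables, is sketched below; avoiding this kind of blow-up for large systems is what the improved algorithms achieve. The example curve is an arbitrary illustrative choice.

```python
from itertools import product

# Naive count of affine solutions of a polynomial system over the prime
# field F_p. The cost is p**n trial points, i.e. exponential in the
# number of variables n.
def count_points(polys, n, p):
    count = 0
    for point in product(range(p), repeat=n):
        if all(f(*point) % p == 0 for f in polys):
            count += 1
    return count

# The curve y^2 = x^3 + x over F_5 has 3 affine points.
n_points = count_points([lambda x, y: y * y - x * x * x - x], 2, 5)
```

Repeating such counts over F_{p^k} for k = 1, 2, ... yields the data that the zeta function encodes in closed form, which is why fast point-counting algorithms are the computational core of the subject.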
36. Learning rate of distribution regression with dependent samples.
- Author
-
Dong, Shunan and Sun, Wenchang
- Subjects
- *
INTEGRAL operators , *HILBERT space - Abstract
In this paper, we study the learning rate of distribution regression for strong mixing sequences. Distribution regression, which involves two stages of sampling, aims at regressing from distributions to real-valued outputs. To form our regressor, we embed the distributions into a reproducing kernel Hilbert space using the tool of mean embedding. Using properties of the integral operator and the covariance inequality for strong mixing sequences, we show that, under some a priori conditions on the regression function, distribution regression still attains the optimal learning rate with dependent samples. Hence, we extend the applicable range of distribution regression to identically distributed but dependent samples. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
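The first of the two sampling stages can be sketched as follows: each observation is a bag of samples from an unknown distribution, and the bag is summarized by its empirical kernel mean embedding; inner products between embeddings then feed any standard kernel regressor. The Gaussian kernel, bag sizes, and data below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(a, b, gamma=1.0):
    # Pairwise Gaussian kernel matrix between two 1-D sample arrays.
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def mean_embedding_gram(bags, gamma=1.0):
    # Gram matrix of empirical mean embeddings:
    # <mu_i, mu_j> = average of k(x, y) over x in bag i, y in bag j.
    m = len(bags)
    G = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            G[i, j] = gaussian_kernel(bags[i], bags[j], gamma).mean()
    return G

rng = np.random.default_rng(1)
bags = [rng.normal(loc=mu, size=50) for mu in (0.0, 0.1, 2.0)]
G = mean_embedding_gram(bags)
# Bags drawn from nearby distributions have more similar embeddings,
# so G[0, 1] exceeds G[0, 2] here.
```

The second stage (regression on the embeddings) is then an ordinary kernel method on G, which is where the learning-rate analysis of the paper applies.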
37. Computing Riemann–Roch spaces via Puiseux expansions.
- Author
-
Abelard, Simon, Berardini, Elena, Couvreur, Alain, and Lecerf, Grégoire
- Subjects
- *
PLANE curves , *ALGEBRAIC geometry , *EXPONENTS , *PROJECTIVE spaces , *PROJECTIVE planes - Abstract
Computing large Riemann–Roch spaces for plane projective curves still constitutes a major algorithmic and practical challenge. Seminal applications concern the construction of arbitrarily large algebraic geometry error correcting codes over alphabets with bounded cardinality. Nowadays such codes are increasingly involved in new areas of computer science such as cryptographic protocols and "interactive oracle proofs". In this paper, we design a new probabilistic algorithm of Las Vegas type for computing Riemann–Roch spaces of smooth divisors, in characteristic zero, and with expected complexity exponent 2.373 (a feasible exponent for linear algebra) in terms of the input size. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. Central Limit Theorem for the volume of the zero set of Kostlan-Shub-Smale random polynomial systems.
- Author
-
Armentano, Diego, Azaïs, Jean-Marc, Dalmao, Federico, and León, José R.
- Subjects
- *
CENTRAL limit theorem , *RANDOM sets , *POLYNOMIALS - Abstract
We establish the Central Limit Theorem, as the degree goes to infinity, for the normalized volume of the zero set of a rectangular Kostlan-Shub-Smale random polynomial system. This paper is a continuation of "Central Limit Theorem for the number of real roots of Kostlan-Shub-Smale random polynomial systems" by the same authors, in which the case of square systems was considered. Our main tools are the Kac-Rice formula and an expansion of the volume of the level set into the Itô-Wiener chaos. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
39. A simplified iteratively regularized projection method for nonlinear ill-posed problems.
- Author
-
Huang, Jingyue, Luo, Xingjun, and Zhang, Rong
- Subjects
- *
NONLINEAR equations , *GAUSS-Newton method - Abstract
In this paper, we propose a projection method for the numerical solution of nonlinear integral equations by the simplified iteratively regularized Gauss-Newton method, for which an a posteriori stopping rule is proposed to terminate the iteration. Such methods require only the computation of the Fréchet derivative at the initial approximation, so the computational work is considerably reduced. Under certain mild conditions, we give a convergence analysis and derive order optimality. Numerical results are presented to demonstrate the efficiency and accuracy of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
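A finite-dimensional sketch of the "frozen derivative" idea in the record above: the Jacobian is computed only at the initial approximation and reused in every regularized Gauss-Newton step. The toy system, geometric regularization schedule, and fixed iteration count are assumptions for illustration; the paper's method additionally involves a projection discretization of an integral equation and an a posteriori stopping rule.

```python
import numpy as np

def simplified_irgn(F, J0, x0, n_iter=60, alpha0=1.0, r=0.5):
    # Simplified iteratively regularized Gauss-Newton: the Jacobian J0 is
    # evaluated once, at x0, and frozen thereafter. Regularization
    # alpha_k = alpha0 * r**k decays geometrically (an assumed schedule).
    x = x0.copy()
    A = J0.T @ J0
    for k in range(n_iter):
        alpha = alpha0 * r**k
        step = np.linalg.solve(A + alpha * np.eye(len(x0)),
                               J0.T @ F(x) + alpha * (x - x0))
        x = x - step
    return x

# Toy system F(x) = (x0^2 + x1 - 2, x0 - x1), with root (1, 1).
F = lambda x: np.array([x[0] ** 2 + x[1] - 2.0, x[0] - x[1]])
x_init = np.array([1.5, 0.5])
J0 = np.array([[2 * x_init[0], 1.0], [1.0, -1.0]])  # Jacobian at x_init only
x_sol = simplified_irgn(F, J0, x_init)
```

Because the linear solves all share the same frozen matrix, one factorization can be reused across iterations, which is the computational saving the abstract highlights.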
40. Kolmogorov widths of intersections of finite-dimensional balls.
- Author
-
Vasil'eva, A.A.
- Subjects
- *
UNIT ball (Mathematics) - Abstract
In the present paper, order estimates for the nth Kolmogorov widths of the intersection of homothetic copies of the unit balls ν_α B^N_{p_α} in ℓ^N_q are obtained for n ⩽ N/2. This result for n = N/2 was obtained by Galeev in 1981. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
41. Countable tensor products of Hermite spaces and spaces of Gaussian kernels.
- Author
-
Gnewuch, M., Hefter, M., Hinrichs, A., and Ritter, K.
- Subjects
- *
HILBERT space , *FUNCTION spaces , *SEQUENCE spaces , *ISOMORPHISM (Mathematics) , *HERMITE polynomials - Abstract
In recent years, finite tensor products of reproducing kernel Hilbert spaces (RKHSs) of Gaussian kernels on the one hand, and of Hermite spaces on the other, have been considered in the tractability analysis of multivariate problems. In the present paper we study countably infinite tensor products for both types of spaces. We show that the incomplete tensor product in the sense of von Neumann may be identified with an RKHS whose domain is a proper subset of the sequence space R^N. Moreover, we show that each tensor product of spaces of Gaussian kernels with square-summable shape parameters is isometrically isomorphic to a tensor product of Hermite spaces; the corresponding isomorphism is given explicitly, respects point evaluations, and is also an L_2-isometry. This result transfers directly to the case of finite tensor products. Furthermore, we provide regularity results for Hermite spaces of functions of a single variable. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Best and random approximation of a convex body by a polytope.
- Author
-
Prochno, J., Schütt, C., and Werner, E.M.
- Subjects
- *
POLYTOPES , *CONVEX bodies , *SURFACE area - Abstract
In this paper, we give an overview of some results concerning best and random approximation of convex bodies by polytopes. We explain how both are linked and see that random approximation is almost as good as best approximation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
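The survey's message can be checked numerically in the simplest case, the unit disk: the best inscribed n-gon is the regular one, and a random inscribed n-gon (i.i.d. uniform vertices on the circle) is on average only slightly worse. The polygon count and trial budget below are arbitrary illustrative choices.

```python
import numpy as np

def regular_area(n):
    # Area of the regular inscribed n-gon, the best-area inscribed n-gon.
    return 0.5 * n * np.sin(2.0 * np.pi / n)

def random_area(n, rng):
    # Shoelace area of the polygon whose n vertices are i.i.d. uniform on
    # the unit circle, taken in angular order: 0.5 * sum of sin(gap).
    theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, n))
    gaps = np.diff(np.append(theta, theta[0] + 2.0 * np.pi))
    return 0.5 * np.sum(np.sin(gaps))

rng = np.random.default_rng(0)
mean_rand = np.mean([random_area(20, rng) for _ in range(500)])
best = regular_area(20)
# Both areas approach pi as n grows; the random polygon trails the best
# one only by a constant factor in the area deficit.
```

Every random inscribed polygon has area at most that of the regular one, so the comparison is between the deficits π − best and π − mean_rand, which shrink at the same rate in n.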
43. Online gradient descent algorithms for functional data learning.
- Author
-
Chen, Xiaming, Tang, Bohao, Fan, Jun, and Guo, Xin
- Subjects
- *
ONLINE data processing , *ONLINE algorithms , *ONLINE education , *ALGORITHMS , *MACHINE learning , *COMPUTER performance - Abstract
The functional linear model is a fruitful and widely applied framework for regression problems, including those with intrinsically infinite-dimensional data. Online gradient descent methods, despite their demonstrated power in processing online or large-sized data, are not well studied for learning with functional data. In this paper, we study reproducing kernel-based online learning algorithms for functional data, and derive convergence rates for the expected excess prediction risk under online and finite-horizon settings of step-sizes, respectively. It is well understood that nontrivial uniform convergence rates for the estimation task depend on the regularity of the slope function. Surprisingly, the convergence rates we derive for the prediction task require no regularity assumption on the slope function. Our analysis reveals the intrinsic difference between the estimation task and the prediction task in functional data learning. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
44. Influence of numerical discretizations on hitting probabilities for linear stochastic parabolic systems.
- Author
-
Chen, Chuchu, Hong, Jialin, and Sheng, Derui
- Subjects
- *
STOCHASTIC systems , *WHITE noise , *PROBABILITY theory , *HAUSDORFF measures , *BOREL sets , *LINEAR systems - Abstract
This paper investigates the influence of standard numerical discretizations on hitting probabilities for a system of linear stochastic parabolic equations driven by space-time white noises. We establish lower and upper bounds for hitting probabilities of the associated numerical solutions of both temporal and spatial semi-discretizations in terms of Bessel–Riesz capacity and Hausdorff measure, respectively. Moreover, the critical dimensions of both temporal and spatial semi-discretizations turn out to be halves of those of the exact solution. This reveals that there exist some Borel sets A such that the probability of the event that the paths of the numerical solution hit A cannot converge to that of the exact solution when the stepsizes vanish. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. Convergence rates of support vector machines regression for functional data.
- Author
-
Tong, Hongzhi
- Subjects
- *
SUPPORT vector machines , *STATISTICAL learning , *HILBERT space - Abstract
Support vector machines regression (SVMR) is an important part of statistical learning theory. The main difference between SVMR and classical least squares regression (LSR) is that SVMR uses the ϵ-insensitive loss rather than the quadratic loss to measure the empirical error. In this paper, we consider the SVMR method in the field of functional data analysis under the framework of reproducing kernel Hilbert spaces. The main tool in our theoretical analysis is concentration inequalities for suprema of appropriate empirical processes. As a result, we establish explicit convergence rates of the prediction risk for SVMR, which coincide with the minimax lower bound obtained recently in the literature for LSR. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
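The ϵ-insensitive loss that distinguishes SVMR from least squares is simple to state: residuals smaller than ϵ incur no penalty at all, while the quadratic loss penalizes every residual. A minimal sketch (the residual values and ϵ = 0.1 are arbitrary):

```python
import numpy as np

def eps_insensitive(residual, eps=0.1):
    # SVM regression loss: max(|r| - eps, 0); zero inside the eps-tube.
    return np.maximum(np.abs(residual) - eps, 0.0)

r = np.array([-0.30, -0.05, 0.00, 0.08, 0.50])
svm_loss = eps_insensitive(r)   # [0.2, 0, 0, 0, 0.4]
ls_loss = r ** 2                # quadratic loss penalizes every residual
```

The flat region of width 2ϵ is what makes SVMR robust to small fluctuations and induces sparse solutions, and it is also why its analysis requires empirical-process tools rather than the direct bias-variance calculations available for quadratic loss.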
46. Approximation by quasi-interpolation operators and Smolyak's algorithm.
- Author
-
Kolomoitsev, Yurii
- Subjects
- *
SMOOTHNESS of functions , *BESOV spaces , *KERNEL functions , *PERIODIC functions , *ALGORITHMS - Abstract
We study approximation of multivariate periodic functions from Besov and Triebel-Lizorkin spaces of dominating mixed smoothness by the Smolyak algorithm constructed from a special class of quasi-interpolation operators of Kantorovich type. These operators are defined similarly to the classical sampling operators, with samples replaced by average values of the function on small intervals (or, more generally, by sampled values of a convolution of the function with an appropriate kernel). In this paper, we estimate the rate of convergence of the corresponding Smolyak algorithm in the L_q-norm for functions from the Besov spaces B^s_{p,θ}(T^d) and the Triebel-Lizorkin spaces F^s_{p,θ}(T^d) for all s > 0 and admissible 1 ≤ p, θ ≤ ∞, and provide analogues of Littlewood-Paley-type characterizations of these spaces in terms of families of quasi-interpolation operators. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
47. The VC-dimension of axis-parallel boxes on the Torus.
- Author
-
Gillibert, P., Lachmann, T., and Müllner, C.
- Subjects
- *
TORUS , *DISCREPANCY theorem - Abstract
We show in this paper that the VC-dimensions of the families of d-dimensional axis-parallel boxes and of axis-parallel cubes on the d-dimensional torus are both asymptotically d log_2 d. This is especially surprising, as in most similar settings the VC-dimension grows linearly with d. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
48. Bounds on Kolmogorov widths and sampling recovery for classes with small mixed smoothness.
- Author
-
Temlyakov, V. and Ullrich, T.
- Subjects
- *
SMOOTHNESS of functions , *CHARACTERISTIC functions , *UNIT ball (Mathematics) , *POLYNOMIALS - Abstract
Results on asymptotic characteristics of classes of functions with mixed smoothness are obtained in this paper. Our main interest is in estimating the Kolmogorov widths in the uniform norm of classes with small mixed smoothness. We prove the corresponding bounds for the unit balls of trigonometric polynomials with frequencies from a hyperbolic cross. We demonstrate how our results on the Kolmogorov widths imply new upper bounds for optimal sampling recovery in the L_2 norm of functions with small mixed smoothness. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
49. Fast amortized multi-point evaluation.
- Author
-
van der Hoeven, Joris and Lecerf, Grégoire
- Subjects
- *
ALGORITHMS , *FINITE fields , *INTERPOLATION algorithms , *POLYNOMIALS , *INTEGERS , *POINT set theory - Abstract
The efficient evaluation of multivariate polynomials at many points is an important operation for polynomial system solving. Kedlaya and Umans have recently devised a theoretically efficient algorithm for this task when the coefficients are integers or when they lie in a finite field. In this paper, we assume that the set of points where we need to evaluate is fixed and "sufficiently generic". Under these restrictions, we present a quasi-optimal algorithm for multi-point evaluation over general fields. We also present a quasi-optimal algorithm for the opposite interpolation task. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
50. A note on EC-tractability of multivariate approximation in weighted Korobov spaces for the standard information class.
- Author
-
Zhang, Jie
- Subjects
- *
STANDARDS , *ALGORITHMS - Abstract
In this paper we study exponential tractability of the multivariate approximation problem for weighted Korobov spaces in the worst-case setting. The considered algorithms are constructive and use only the class Λ^std of function evaluations. We give matching necessary and sufficient conditions for the notions of EC-quasi-polynomial tractability and EC-uniform weak tractability in terms of the two weight parameters of the problem. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF