642 results for "reproducing kernel Hilbert spaces"
Search Results
2. Randomness of Shapes and Statistical Inference on Shapes via the Smooth Euler Characteristic Transform.
- Author
-
Meng, Kun, Wang, Jinyu, Crawford, Lorin, and Eloyan, Ani
- Abstract
In this article, we establish the mathematical foundations for modeling the randomness of shapes and conducting statistical inference on shapes using the smooth Euler characteristic transform. Based on these foundations, we propose two Chi-squared statistic-based algorithms for testing hypotheses on random shapes. Simulation studies are presented to validate our mathematical derivations and to compare our algorithms with state-of-the-art methods to demonstrate the utility of our proposed framework. As real applications, we analyze a dataset of mandibular molars from four genera of primates and show that our algorithms have the power to detect significant shape differences that recapitulate known morphological variation across suborders. Altogether, our discussions bridge the following fields: algebraic and computational topology, probability theory and stochastic processes, Sobolev spaces and functional analysis, analysis of variance for functional data, and geometric morphometrics. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. THE OCCUPATION KERNEL METHOD FOR NONLINEAR SYSTEM IDENTIFICATION.
- Author
-
ROSENFELD, JOEL A., RUSSO, BENJAMIN P., KAMALAPURKAR, RUSHIKESH, and JOHNSON, TAYLOR T.
- Subjects
- NONLINEAR systems, SYSTEM identification, HILBERT space, DYNAMICAL systems, NONLINEAR dynamical systems, CONTINUOUS functions, KERNEL (Mathematics)
- Abstract
This manuscript presents a novel approach to nonlinear system identification leveraging densely defined Liouville operators and a new "kernel" function, dubbed an occupation kernel, that represents an integration functional over a reproducing kernel Hilbert space (RKHS). The manuscript thoroughly explores the concept of occupation kernels in the context of RKHSs of continuous functions and establishes Liouville operators over RKHSs, where several dense domains are found for specific examples of this unbounded operator. The combination of these two concepts allows for the embedding of a dynamical system into an RKHS, where function-theoretic tools may be leveraged for the examination of such systems. This framework allows for trajectories of a nonlinear dynamical system to be treated as a fundamental unit of data for a nonlinear system identification routine. The approach to nonlinear system identification is demonstrated to identify parameters of a dynamical system accurately while also exhibiting a certain robustness to noise. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Reproducing kernel Hilbert spaces cannot contain all continuous functions on a compact metric space.
- Author
-
Steinwart, Ingo
- Abstract
Given an uncountable, compact metric space X, we show that there exists no reproducing kernel Hilbert space that contains the space of all continuous functions on X. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Approximation of discrete and orbital Koopman operators over subsets and manifolds.
- Author
-
Kurdila, Andrew J., Paruchuri, Sai Tej, Powell, Nathan, Guo, Jia, Bobade, Parag, Estes, Boone, and Wang, Haoran
- Abstract
This paper introduces a kernel-based approach for constructing approximations of the Koopman operators for semiflows in discrete time and orbital Koopman operators for continuous time semiflows. The primary advantage of the proposed construction is that the approximations obey rates of convergence which depend on how data samples fill certain subsets of the state space. In particular, we derive the rate of convergence for two scenarios: (1) the data samples Ξ are dense in a compact state space X, and (2) the data samples Ξ are dense in a limiting set Ω contained in a Euclidean space. Two general classes of Koopman operator approximations are considered in this paper, referred to as projection-based approximation and data-driven approximation. Projection-based approximations assume that the underlying dynamics governing the discrete or continuous time semiflows is known. Data-driven approximations, on the other hand, rely on samples of the semiflow states to approximate the Koopman operator. In both types of approximations, the regularity of the underlying set and the smoothness of the space of functions on which the Koopman operator acts determine the rates of approximation. In the strongest error bounds derived in the paper, it is shown that the error in approximation of the Koopman operator decays like O(h_{Ω_n,Ω}^p), where h_{Ω_n,Ω} is the fill rate of the samples Ω_n in the limiting set Ω and p is an exponent related to the choice of the kernel and the smoothness of functions on which the Koopman operator acts. Such error bounds are obtained either when the limiting set Ω = X, when Ω ⊂ X is a sufficiently regular proper subset, or when Ω = M is a smooth manifold. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Optimal prediction for kernel-based semi-functional linear regression.
- Author
-
Guo, Keli, Fan, Jun, and Zhu, Lixing
- Subjects
- HILBERT space, FORECASTING
- Abstract
This paper proposes a novel prediction approach for a semi-functional linear model comprising a functional and a nonparametric component. The study establishes the minimax optimal rates of convergence for this model, revealing that the functional component can be learned with the same minimax rate as if the nonparametric component were known, and vice versa. This result can be achieved by using a double-penalized least squares method to estimate both the functional and nonparametric components within the framework of reproducing kernel Hilbert spaces. Thanks to the representer theorem, the approach also offers other desirable features, including algorithmic efficiency, since no iterations are required. We also provide numerical studies to demonstrate the effectiveness of the method and validate the theoretical analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
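As background for the representer-theorem remark in the abstract above, here is a minimal sketch of penalized least squares in an RKHS (plain kernel ridge regression, not the paper's double-penalized semi-functional estimator; the Gaussian kernel, bandwidth, and penalty values are illustrative choices, not taken from the paper):

```python
import numpy as np

def gaussian_kernel(A, B, gamma=10.0):
    """Gram matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-4, gamma=10.0):
    """Penalized least squares in the RKHS. By the representer theorem the
    minimizer has the form f(x) = sum_i alpha_i k(x, x_i), so fitting reduces
    to a single finite-dimensional linear solve: (K + n*lam*I) alpha = y."""
    n = len(X)
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def krr_predict(X_train, alpha, X_new, gamma=10.0):
    """Evaluate f(x) = sum_i alpha_i k(x, x_i) at new points."""
    return gaussian_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(60, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(60)

alpha = krr_fit(X, y)
X_test = np.linspace(-1.0, 1.0, 50)[:, None]
pred = krr_predict(X, alpha, X_test)
max_err = float(np.abs(pred - np.sin(3.0 * X_test[:, 0])).max())
```

The "no iterations" point in the abstract is visible here: the entire fit is one dense linear solve.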
7. Kernel embedding of measures and low-rank approximation of integral operators.
- Author
-
Gauthier, Bertrand
- Abstract
We describe a natural coisometry from the Hilbert space of all Hilbert-Schmidt operators on a separable reproducing kernel Hilbert space (RKHS) H onto the RKHS G associated with the squared modulus of the reproducing kernel of H. Through this coisometry, trace-class integral operators defined by general measures and the reproducing kernel of H are isometrically represented as potentials in G, and the quadrature approximation of these operators is equivalent to the approximation of integral functionals on G. We then discuss the extent to which the approximation of potentials in RKHSs with squared-modulus kernels can be regarded as a differentiable surrogate for the characterisation of low-rank approximation of integral operators. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Differentially private SGD with random features.
- Author
-
Wang, Yi-guang and Guo, Zheng-chu
- Abstract
In the realm of large-scale machine learning, it is crucial to explore methods for reducing computational complexity and memory demands while maintaining generalization performance. Additionally, since the collected data may contain sensitive information, it is also of great significance to study privacy-preserving machine learning algorithms. This paper focuses on the performance of the differentially private stochastic gradient descent (SGD) algorithm based on random features. To begin, the algorithm maps the original data into a low-dimensional space, thereby avoiding the large-scale data storage requirements of traditional kernel methods. Subsequently, the algorithm iteratively optimizes parameters using the stochastic gradient descent approach. Lastly, the output perturbation mechanism is employed to introduce random noise, ensuring algorithmic privacy. We prove that the proposed algorithm satisfies differential privacy while achieving fast convergence rates under some mild conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
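The random-features device summarized in the abstract above can be sketched as follows: random Fourier features give an explicit low-dimensional map z with z(x)·z(y) ≈ k(x, y), so gradient descent can run on D-dimensional feature vectors instead of an n × n Gram matrix. The dimensions and bandwidth below are illustrative, and the privacy (output perturbation) step is not shown:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, D = 200, 3, 2000      # samples, input dimension, number of random features
gamma = 0.5                 # RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)

X = rng.standard_normal((n, d))

# Random Fourier features: for the RBF kernel, draw W ~ N(0, 2*gamma*I) and
# b ~ Uniform[0, 2*pi); then z(x) = sqrt(2/D) * cos(W x + b) satisfies
# E[z(x) . z(y)] = k(x, y).
W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Compare the feature-space inner products with the exact Gram matrix.
K_exact = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
K_approx = Z @ Z.T
max_err = float(np.abs(K_exact - K_approx).max())
```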
9. Conjugate Gradient Derived Kernel Affine Projection Method for Post-distortion in Visible Light Communication Systems.
- Author
-
Shen, Yujie, Wang, Jieling, Kang, Zihan, and Shen, Ba-Zhong
- Subjects
- OPTICAL communications, TELECOMMUNICATION systems, VISIBLE spectra, ERROR rates, BIT error rate, HILBERT space
- Abstract
With the explosive growth in the demand for spectrum, conventional radio frequency systems are increasingly facing the challenge of catering to high-speed transmissions. Visible light communication (VLC) has been considered a promising supplementary technique, since it provides high energy efficiency and abundant bandwidth. However, the nonlinearity effect is one of the fundamental problems in VLC systems, as it distorts the transmitted signal and deteriorates system performance. Aiming at this problem, numerous post-distortion schemes have been proposed, among which kernel methods over reproducing kernel Hilbert spaces have found successful applications. In this paper, we present a new post-distortion method with low computational cost, utilizing the conjugate gradient (CG) based kernel affine projection algorithm (KAPA), where the objective parameters are updated adaptively along the conjugate direction. Simulation results show that the proposed CG-KAPA based post-distorter can efficiently mitigate nonlinear impairment in VLC systems, providing better bit error rate performance with faster convergence than the commonly-used gradient descent algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Correction to: On Harmonic Hilbert Spaces on Compact Abelian Groups.
- Author
-
Das, Suddhasattwa, Giannakis, Dimitrios, and Montgomery, Michael R.
- Published
- 2023
- Full Text
- View/download PDF
11. Stable approximation of Helmholtz solutions in the disk by evanescent plane waves.
- Author
-
Parolin, Emile, Huybrechs, Daan, and Moiola, Andrea
- Subjects
- PLANE wavefronts, HELMHOLTZ equation, FLOATING-point arithmetic, BIVECTORS, DEGREES of freedom, LINEAR systems
- Abstract
Superpositions of plane waves are known to approximate well the solutions of the Helmholtz equation. Their use in discretizations is typical of Trefftz methods for Helmholtz problems, aiming to achieve high accuracy with a small number of degrees of freedom. However, Trefftz methods lead to ill-conditioned linear systems, and it is often impossible to obtain the desired accuracy in floating-point arithmetic. In this paper we show that a judicious choice of plane waves can ensure high-accuracy solutions in a numerically stable way, in spite of having to solve such ill-conditioned systems. Numerical accuracy of plane wave methods is linked not only to the approximation space, but also to the size of the coefficients in the plane wave expansion. We show that the use of plane waves can lead to exponentially large coefficients, regardless of the orientations and the number of plane waves, and this causes numerical instability. We prove that all Helmholtz fields are continuous superpositions of evanescent plane waves, i.e., plane waves with complex propagation vectors associated with exponential decay, and show that this leads to bounded representations. We provide a constructive scheme to select a set of real and complex-valued propagation vectors numerically. This results in an explicit selection of plane waves and an associated Trefftz method that achieves accuracy and stability. The theoretical analysis is provided for a two-dimensional domain with circular shape. However, the principles are general and we conclude the paper with a numerical experiment demonstrating practical applicability also for polygonal domains. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Left-Invertibility of Rank-One Perturbations
- Author
-
Das, Susmita, Sarkar, Jaydeb, Albrecht, Ernst, editor, Curto, Raúl, editor, Hartz, Michael, editor, and Putinar, Mihai, editor
- Published
- 2023
- Full Text
- View/download PDF
13. Local Optimisation of Nyström Samples Through Stochastic Gradient Descent
- Author
-
Hutchings, Matthew, Gauthier, Bertrand, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Nicosia, Giuseppe, editor, Ojha, Varun, editor, La Malfa, Emanuele, editor, La Malfa, Gabriele, editor, Pardalos, Panos, editor, Di Fatta, Giuseppe, editor, Giuffrida, Giovanni, editor, and Umeton, Renato, editor
- Published
- 2023
- Full Text
- View/download PDF
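For context on the entry above: the Nyström samples being optimised there play the role of landmark points in the classical Nyström low-rank approximation of a Gram matrix. A minimal sketch, with randomly chosen rather than optimised landmarks and illustrative sizes:

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gram matrix of the Gaussian kernel exp(-gamma * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
n, m = 500, 80                      # data size, number of landmark points
X = rng.standard_normal((n, 2))
K = rbf(X, X)

# Nystrom approximation: with C = K[:, idx] (n x m) and W = K[idx][:, idx]
# (m x m), approximate K ~= C W^+ C^T at O(n m^2) cost instead of O(n^3).
idx = rng.choice(n, size=m, replace=False)
C = K[:, idx]
W = C[idx, :]
K_nys = C @ np.linalg.pinv(W) @ C.T

rel_err = float(np.linalg.norm(K - K_nys) / np.linalg.norm(K))
```

The quality of K_nys depends on which points enter idx, which is exactly what a local optimisation of the landmark locations would target.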
14. A Local Douglas formula for Higher Order Weighted Dirichlet-Type Integrals.
- Author
-
Ghara, Soumitra, Gupta, Rajeev, and Reza, Md. Ramiz
- Subjects
- INVARIANT subspaces, INTEGRALS, ALGEBRA, HILBERT space
- Abstract
We prove a local Douglas formula for higher order weighted Dirichlet-type integrals. With the help of this formula, we study the multiplier algebra of the associated higher order weighted Dirichlet-type spaces H_μ, induced by an m-tuple μ = (μ_1, ..., μ_m) of finite non-negative Borel measures on the unit circle. In particular, it is shown that any weighted Dirichlet-type space of order m, for m ⩾ 3, forms an algebra under pointwise product. We also prove that every non-zero closed M_z-invariant subspace of H_μ has the codimension 1 property if m ⩾ 3 or μ_2 is finitely supported. As another application of the local Douglas formula obtained in this article, it is shown that for any m ⩾ 2, a weighted Dirichlet-type space of order m does not coincide with any de Branges–Rovnyak space H(b) with equivalence of norms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. New Lower Bounds for the Integration of Periodic Functions.
- Author
-
Krieg, David and Vybíral, Jan
- Abstract
We study the integration problem on Hilbert spaces of (multivariate) periodic functions. The standard technique to prove lower bounds for the error of quadrature rules uses bump functions and the pigeonhole principle. Recently, several new lower bounds have been obtained using a different technique which exploits the Hilbert space structure and a variant of the Schur product theorem. The purpose of this paper is to (a) survey the new proof technique, (b) show that it is indeed superior to the bump-function technique, and (c) sharpen and extend the results from the previous papers. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Gaussian RBF kernels via Fock spaces: quaternionic and several complex variables settings
- Author
-
De Martino, Antonino and Diki, Kamal
- Published
- 2024
- Full Text
- View/download PDF
17. Data Analysis from Empirical Moments and the Christoffel Function
- Author
-
Pauwels, Edouard, Putinar, Mihai, and Lasserre, Jean-Bernard
- Subjects
- Christoffel-Darboux kernel, Empirical measure, Support inference, Manifold, Density estimation, Reproducing kernel Hilbert spaces, stat.ML, Mathematical Sciences, Information and Computing Sciences, Numerical & Computational Mathematics
- Abstract
Spectral features of the empirical moment matrix constitute a resourceful tool for unveiling properties of a cloud of points, among which density, support and latent structures. It is already well known that the empirical moment matrix encodes a great deal of subtle attributes of the underlying measure. Starting from this object as a base of observations, we combine ideas from statistics, real algebraic geometry, orthogonal polynomials and approximation theory to open new insights relevant for Machine Learning (ML) problems with data supported on singular sets. Refined concepts and results from real algebraic geometry and approximation theory empower a simple tool (the empirical moment matrix) for the task of solving non-trivial questions in data analysis. We provide (1) theoretical support, (2) numerical experiments and (3) connections to real world data as a validation of the stamina of the empirical moment matrix approach.
- Published
- 2021
18. Robust optimal estimation of location from discretely sampled functional data.
- Author
-
Kalogridis, Ioannis and Van Aelst, Stefan
- Subjects
- MEASUREMENT errors, FUNCTIONAL analysis, SPLINES, HILBERT space, DATA analysis
- Abstract
Estimating location is a central problem in functional data analysis, yet most current estimation procedures either unrealistically assume completely observed trajectories or lack robustness with respect to the many kinds of anomalies one can encounter in the functional setting. To remedy these deficiencies we introduce the first class of optimal robust location estimators based on discretely sampled functional data. The proposed method is based on M‐type smoothing spline estimation with repeated measurements and is suitable for both commonly and independently observed trajectories that are subject to measurement error. We show that under suitable assumptions the proposed family of estimators is minimax rate optimal both for commonly and independently observed trajectories and we illustrate its highly competitive performance and practical usefulness in a Monte‐Carlo study and a real‐data example involving recent Covid‐19 data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. Interpolation and duality in algebras of multipliers on the ball.
- Author
-
Davidson, Kenneth R. and Hartz, Michael
- Subjects
- MULTIPLIERS (Mathematical analysis), ALGEBRA, HILBERT space, POLYNOMIALS, MATHEMATICAL formulas
- Abstract
We study the multiplier algebras A(H) obtained as the closure of the polynomials on certain reproducing kernel Hilbert spaces H on the ball B_d of C^d. Our results apply, in particular, to the Drury-Arveson space, the Dirichlet space and the Hardy space on the ball. We first obtain a complete description of the dual and second dual spaces of A(H) in terms of the complementary bands of Henkin and totally singular measures for Mult(H). This is applied to obtain several definitive results in interpolation. In particular, we establish a sharp peak interpolation result for compact Mult(H)-totally null sets as well as a Pick and peak interpolation theorem. Conversely, we show that a mere interpolation set is Mult(H)-totally null. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. A Reproducing Kernel Hilbert Space Approach to Functional Calibration of Computer Models.
- Author
-
Tuo, Rui, He, Shiyuan, Pourhabib, Arash, Ding, Yu, and Huang, Jianhua Z.
- Subjects
- HILBERT space, COMPUTER simulation, CALIBRATION, CONTROL (Psychology), LEAST squares
- Abstract
This article develops a frequentist solution to the functional calibration problem, where the value of a calibration parameter in a computer model is allowed to vary with the value of control variables in the physical system. The need of functional calibration is motivated by engineering applications where using a constant calibration parameter results in a significant mismatch between outputs from the computer model and the physical experiment. Reproducing kernel Hilbert spaces (RKHS) are used to model the optimal calibration function, defined as the functional relationship between the calibration parameter and control variables that gives the best prediction. This optimal calibration function is estimated through penalized least squares with an RKHS-norm penalty and using physical data. An uncertainty quantification procedure is also developed for such estimates. Theoretical guarantees of the proposed method are provided in terms of prediction consistency and consistency of estimating the optimal calibration function. The proposed method is tested using both real and synthetic data and exhibits more robust performance in prediction and uncertainty quantification than the existing parametric functional calibration method and a state-of-the-art Bayesian method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Convergence and finite sample approximations of entropic regularized Wasserstein distances in Gaussian and RKHS settings.
- Author
-
Minh, Hà Quang
- Subjects
- GAUSSIAN measures, HILBERT space, BOREL sets, SOCIAL norms, PROBABILITY measures
- Abstract
This work studies the convergence and finite sample approximations of entropic regularized Wasserstein distances in the Hilbert space setting. Our first main result is that for Gaussian measures on an infinite-dimensional Hilbert space, convergence in the 2-Sinkhorn divergence is strictly weaker than convergence in the exact 2-Wasserstein distance. Specifically, a sequence of centered Gaussian measures converges in the 2-Sinkhorn divergence if the corresponding covariance operators converge in the Hilbert–Schmidt norm. This is in contrast to the previously known result that a sequence of centered Gaussian measures converges in the exact 2-Wasserstein distance if and only if the covariance operators converge in the trace class norm. In the reproducing kernel Hilbert space (RKHS) setting, the kernel Gaussian–Sinkhorn divergence, which is the Sinkhorn divergence between Gaussian measures defined on an RKHS, defines a semi-metric on the set of Borel probability measures on a Polish space, given a characteristic kernel on that space. With the Hilbert–Schmidt norm convergence, we obtain dimension-independent convergence rates for finite sample approximations of the kernel Gaussian–Sinkhorn divergence, of the same order as the Maximum Mean Discrepancy. These convergence rates apply in particular to Sinkhorn divergence between Gaussian measures on Euclidean and infinite-dimensional Hilbert spaces. The sample complexity for the 2-Wasserstein distance between Gaussian measures on Euclidean space, while dimension-dependent, is exponentially faster than the worst case scenario in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
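The Maximum Mean Discrepancy that the abstract above uses as a benchmark rate has a simple unbiased sample estimator. A minimal sketch with a Gaussian kernel (sample sizes, bandwidth, and the shift used to separate the two distributions are all illustrative):

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gram matrix of the Gaussian kernel exp(-gamma * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def mmd2_unbiased(X, Y, gamma=0.5):
    """Unbiased estimate of MMD^2 between samples X ~ P and Y ~ Q:
    diagonal terms are removed from the within-sample Gram averages."""
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = rbf(X, X, gamma), rbf(Y, Y, gamma), rbf(X, Y, gamma)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return float(term_x + term_y - 2.0 * Kxy.mean())

rng = np.random.default_rng(4)
X = rng.standard_normal((400, 2))
Y_same = rng.standard_normal((400, 2))       # same distribution as X
Y_shift = rng.standard_normal((400, 2)) + 1.0  # mean-shifted distribution

mmd_same = mmd2_unbiased(X, Y_same)    # should be near zero
mmd_shift = mmd2_unbiased(X, Y_shift)  # should be clearly positive
```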
22. Hilbert–Schmidt regularity of symmetric integral operators on bounded domains with applications to SPDE approximations.
- Author
-
Kovács, Mihály, Lang, Annika, and Petersson, Andreas
- Subjects
- SYMMETRIC operators, STOCHASTIC partial differential equations, FRACTIONAL powers, FUNCTION spaces, INTEGRAL operators, CONVEX domains
- Abstract
Regularity estimates for an integral operator with a symmetric continuous kernel on a convex bounded domain are derived. The covariance of a mean-square continuous random field on the domain is an example of such an operator. The estimates are of the form of Hilbert–Schmidt norms of the integral operator and its square root, composed with fractional powers of an elliptic operator equipped with homogeneous boundary conditions of either Dirichlet or Neumann type. These types of estimates, which couple the regularity of the driving noise with the properties of the differential operator, have important implications for stochastic partial differential equations on bounded domains as well as their numerical approximations. The main tools used to derive the estimates are properties of reproducing kernel Hilbert spaces of functions on bounded domains along with Hilbert–Schmidt embeddings of Sobolev spaces. Both non-homogeneous and homogeneous kernels are considered. In the latter case, results in a general Schatten class norm are also provided. Important examples of homogeneous kernels covered by the results of the paper include the class of Matérn kernels. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. On functional logistic regression: some conceptual issues.
- Author
-
Berrendero, José R., Bueno-Larraz, Beatriz, and Cuevas, Antonio
- Abstract
The main ideas behind the classic multivariate logistic regression model make sense when translated to the functional setting, where the explanatory variable X is a function and the response Y is binary. However, some important technical issues appear (or are aggravated with respect to those of the multivariate case) due to the functional nature of the explanatory variable. First, the mere definition of the model can be questioned: While most approaches so far proposed rely on the L²-based model, we explore an alternative (in some sense, more general) approach, based on the theory of reproducing kernel Hilbert spaces (RKHS). The validity conditions of such RKHS-based model, and their relation with the L²-based one, are investigated and made explicit in two formal results. Some relevant particular cases are considered as well. Second, we show that, under very general conditions, the maximum likelihood of the logistic model parameters fails to exist in the functional case, although some restricted versions can be considered. Third, we check (in the framework of binary classification) the practical performance of some RKHS-based procedures, well-suited to our model: They are compared to several competing methods via Monte Carlo experiments and the analysis of real data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Convergence analysis of online learning algorithm with two-stage step size.
- Author
-
Nie, Weilin and Wang, Cheng
- Subjects
- MACHINE learning, ONLINE education, ONLINE algorithms, STATISTICAL learning, ITERATIVE learning control, MATHEMATICAL optimization
- Abstract
Online learning is a classical algorithm for optimization problems. Due to its low computational cost, it has been widely used in many aspects of machine learning and statistical learning. Its convergence performance depends heavily on the step size. In this paper, a two-stage step size is proposed for the unregularized online learning algorithm based on reproducing kernels. Theoretically, we prove that such an algorithm can achieve a nearly minimax convergence rate, up to a logarithmic term, without any capacity condition. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. On Harmonic Hilbert Spaces on Compact Abelian Groups.
- Author
-
Das, Suddhasattwa and Giannakis, Dimitrios
- Abstract
Harmonic Hilbert spaces on locally compact abelian groups are reproducing kernel Hilbert spaces (RKHSs) of continuous functions constructed by Fourier transform of weighted L² spaces on the dual group. It is known that for suitably chosen subadditive weights, every such space is a Banach algebra with respect to pointwise multiplication of functions. In this paper, we study RKHSs associated with subconvolutive functions on the dual group. Sufficient conditions are established for these spaces to be symmetric Banach ∗-algebras with respect to pointwise multiplication and complex conjugation of functions (here referred to as RKHAs). In addition, we study aspects of the spectra and state spaces of RKHAs. Sufficient conditions are established for an RKHA on a compact abelian group G to have the same spectrum as the C∗-algebra of continuous functions on G. We also consider one-parameter families of RKHSs associated with semigroups of self-adjoint Markov operators on L²(G), and show that in this setting subconvolutivity is a necessary and sufficient condition for these spaces to have RKHA structure. Finally, we establish embedding relationships between RKHAs and a class of Fourier–Wermer algebras that includes spaces of dominating mixed smoothness used in high-dimensional function approximation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Hierarchical Kernels in Deep Kernel Learning.
- Author
-
Wentao Huang, Houbao Lu, and Haizhang Zhang
- Subjects
- DEEP learning, ARTIFICIAL neural networks, HILBERT space, KERNEL (Mathematics)
- Abstract
Kernel methods are built upon the mathematical theory of reproducing kernels and reproducing kernel Hilbert spaces. They enjoy good interpretability thanks to the solid mathematical foundation. Recently, motivated by deep neural networks in deep learning, which construct learning functions by successive compositions of activation functions and linear functions, a class of methods termed as deep kernel learning has appeared in the literature. The core of deep kernel learning is hierarchical kernels that are constructed from a base reproducing kernel by successive compositions. In this paper, we characterize the corresponding reproducing kernel Hilbert spaces of hierarchical kernels, and study conditions ensuring that the reproducing kernel Hilbert space will be expanding as the layer of hierarchical kernels increases. The results will answer whether the expressive power of hierarchical kernels will be improving as the layer increases, and give guidance to the construction of hierarchical kernels for deep kernel learning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
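The successive-composition construction described in the abstract above can be sketched concretely: starting from a base Gram matrix, each layer applies an RBF kernel in the feature space of the previous layer, using the kernel-induced squared distance K(x,x) + K(y,y) - 2K(x,y). The base kernel, depth, and bandwidth below are illustrative choices, not the paper's specific construction:

```python
import numpy as np

def rbf_from_gram(K, gamma=1.0):
    """One layer of a hierarchical kernel: apply a Gaussian kernel in the
    feature space of the previous layer, where the squared feature distance
    is ||phi(x) - phi(y)||^2 = K(x, x) + K(y, y) - 2 K(x, y)."""
    diag = np.diag(K)
    d2 = diag[:, None] + diag[None, :] - 2.0 * K
    return np.exp(-gamma * d2)

rng = np.random.default_rng(5)
X = rng.standard_normal((50, 2))

K = X @ X.T                 # base (linear) kernel
for _ in range(3):          # three successive compositions
    K = rbf_from_gram(K)

# Each layer's Gram matrix remains symmetric positive semi-definite, since a
# Gaussian of a Hilbertian metric is again a valid kernel.
eigmin = float(np.linalg.eigvalsh(K).min())
```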
27. Learning Partial Differential Equations in Reproducing Kernel Hilbert Spaces.
- Author
-
Stepaniants, George
- Subjects
- PARTIAL differential equations, HILBERT space, LINEAR differential equations, GREEN'S functions, KERNEL functions, KERNEL (Mathematics)
- Abstract
We propose a new data-driven approach for learning the fundamental solutions (Green's functions) of various linear partial differential equations (PDEs) given sample pairs of input-output functions. Building off the theory of functional linear regression (FLR), we estimate the best-fit Green's function and bias term of the fundamental solution in a reproducing kernel Hilbert space (RKHS) which allows us to regularize their smoothness and impose various structural constraints. We derive a general representer theorem for operator RKHSs to approximate the original infinite-dimensional regression problem by a finite-dimensional one, reducing the search space to a parametric class of Green's functions. In order to study the prediction error of our Green's function estimator, we extend prior results on FLR with scalar outputs to the case with functional outputs. Finally, we demonstrate our method on several linear PDEs including the Poisson, Helmholtz, Schrödinger, Fokker-Planck, and heat equation. We highlight its robustness to noise as well as its ability to generalize to new data with varying degrees of smoothness and mesh discretization without any additional training. [ABSTRACT FROM AUTHOR]
- Published
- 2023
28. On the geometry of Stein variational gradient descent.
- Author
-
Duncan, A., Nüsken, N., and Szpruch, L.
- Subjects
- GEOMETRY, BAYESIAN field theory, DISTRIBUTION (Probability theory)
- Abstract
Bayesian inference problems require sampling or approximating high-dimensional probability distributions. The focus of this paper is on the recently introduced Stein variational gradient descent methodology, a class of algorithms that rely on iterated steepest descent steps with respect to a reproducing kernel Hilbert space norm. This construction leads to interacting particle systems, the mean-field limit of which is a gradient flow on the space of probability distributions equipped with a certain geometrical structure. We leverage this viewpoint to shed some light on the convergence properties of the algorithm, in particular addressing the problem of choosing a suitable positive definite kernel function. Our analysis leads us to considering certain nondifferentiable kernels with adjusted tails. We demonstrate significant performance gains of these in various numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
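The iterated steepest-descent construction analyzed in this abstract can be sketched concretely. The update below is the standard Stein variational gradient descent step of Liu and Wang with a Gaussian kernel; the bandwidth and step size are illustrative, and this is the basic gradient-based variant, not the adjusted-tail kernels the paper proposes.

```python
import numpy as np

def svgd_step(X, grad_log_p, h=1.0, eps=0.1):
    """One SVGD step with a Gaussian kernel:
    x_i <- x_i + eps * (1/n) sum_j [ k(x_j, x_i) grad log p(x_j)
                                     + grad_{x_j} k(x_j, x_i) ]."""
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]                  # diff[j, i] = x_j - x_i
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h ** 2))
    drive = K @ grad_log_p(X)                             # attraction toward high density
    repulse = -np.einsum('ji,jid->id', K, diff) / h ** 2  # kernel gradient keeps particles spread
    return X + eps * (drive + repulse) / n

# Transport 20 particles toward a standard normal target (grad log p(x) = -x).
X = np.linspace(2.0, 4.0, 20).reshape(-1, 1)
for _ in range(500):
    X = svgd_step(X, lambda Z: -Z)
```

The two terms exhibit the interacting-particle structure the paper studies: a driving term pulling particles toward the target and a repulsive term, coming from the kernel gradient, that prevents collapse to the mode.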
29. A note on simply interpolating sequences for the Dirichlet space.
- Author
-
Chalmoukis, Nikolaos
- Subjects
- *
SEQUENCE spaces , *HILBERT space - Abstract
We study simply interpolating sequences for the Dirichlet space in the unit disc. In particular, we are interested in comparing three different sufficient conditions for simply interpolating sequences. The first is the so-called one-box condition, the second is the column-bounded property of the associated Grammian matrix, and the third is a restricted version of the one-box condition introduced by Bishop and, independently, by Marshall and Sundberg. We prove that the one-box condition implies the column-bounded property, which in turn implies the restricted one-box condition of Bishop-Marshall-Sundberg, and we give two counterexamples showing that the reverse implications fail even for weakly separated sequences. [ABSTRACT FROM AUTHOR]
- Published
- 2023
30. A-DAVIS-WIELANDT-BEREZIN RADIUS INEQUALITIES.
- Author
-
GÜRDAL, Verda and HUBAN, Mualla Birgül
- Subjects
- *
HILBERT space , *LINEAR operators , *POSITIVE operators - Abstract
We consider an operator V on the reproducing kernel Hilbert space H = H(Ω) over some set Ω with reproducing kernel K_{H,λ}(z) = K(z, λ), where any positive operator A induces a semi-inner product on H via ⟨x, y⟩_A = ⟨Ax, y⟩ for x, y ∈ H, and Ṽ(λ) = ⟨V k̂_{H,λ}, k̂_{H,λ}⟩ denotes the Berezin symbol of V at the normalized reproducing kernel k̂_{H,λ}. We define the A-Davis-Wielandt-Berezin radius η_A(V) by the formula η_A(V) := sup { √( |⟨V k̂_{H,λ}, k̂_{H,λ}⟩_A|² + ‖V k̂_{H,λ}‖_A⁴ ) : λ ∈ Ω }. We study equality in the lower bounds for the A-Davis-Wielandt-Berezin radius mentioned above. We establish some lower and upper bounds for the A-Davis-Wielandt-Berezin radius of reproducing kernel Hilbert space operators. In addition, we obtain an upper bound for the A-Davis-Wielandt-Berezin radius of the sum of two bounded linear operators. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. Gradient-Free Kernel Conditional Stein Discrepancy goodness of fit testing
- Author
-
Elham Afzali and Saman Muthukumarana
- Subjects
Goodness-of-fit testing, Kernel Stein Discrepancy, Reproducing Kernel Hilbert Spaces, Kernel Stein Discrepancy for conditional density, Gradient-Free Kernel Stein Discrepancy, Importance sampling, Cybernetics, Q300-390, Electronic computers. Computer science, QA75.5-76.95 - Abstract
In this study, we propose a gradient-free statistical goodness-of-fit test for determining whether a joint sample (x_i, y_i) is drawn from p(y|x)π(x) for some density π(x), given a conditional distribution. This test is an alternative to the Kernel Conditional Stein Discrepancy, which requires the computation of model derivatives and is therefore impractical for complex statistical models. Our method, the Gradient-Free Kernel Conditional Stein Discrepancy, does not require the calculation of derivatives, which makes it well suited to difficult problems such as evaluating the performance of generative models. It is able to detect convergence and divergence with the same level of accuracy as the gradient-based method. We also discuss the application of this test in importance sampling and compare its performance with two other conventional methods.
- Published
- 2023
- Full Text
- View/download PDF
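For contrast with the gradient-free variant proposed in this entry, the standard gradient-based kernel Stein discrepancy, the quantity whose score computation this method avoids, can be sketched in one dimension. The RBF Stein kernel below follows the usual Langevin-Stein construction; bandwidth and sample sizes are illustrative.

```python
import numpy as np

def ksd2(x, score, h=1.0):
    """V-statistic estimate of the squared kernel Stein discrepancy (1D, RBF kernel).
    score(x) = d/dx log p(x) is the score of the model being tested; the
    Stein kernel is u_p(x,y) = s(x)s(y)k + s(x) dk/dy + s(y) dk/dx + d2k/dxdy."""
    d = x[:, None] - x[None, :]
    k = np.exp(-d ** 2 / (2 * h ** 2))
    s = score(x)
    u = (s[:, None] * s[None, :] * k
         + s[:, None] * (d / h ** 2) * k          # s(x) * dk/dy
         + s[None, :] * (-d / h ** 2) * k         # s(y) * dk/dx
         + (1.0 / h ** 2 - d ** 2 / h ** 4) * k)  # d2k/dxdy
    return u.mean()

rng = np.random.default_rng(0)
x = rng.normal(size=400)                  # data from N(0, 1)
good = ksd2(x, lambda z: -z)              # correctly specified model: N(0, 1)
bad = ksd2(x, lambda z: -(z - 2.0))       # misspecified model: N(2, 1)
```

The discrepancy is near zero when the model's score matches the data-generating distribution and grows under misspecification, which is the behavior the gradient-free test reproduces without evaluating `score`.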
32. Norms of Basic Operators in Vector Valued Model Spaces and de Branges Spaces.
- Author
-
Dhara, Kousik and Dym, Harry
- Abstract
Let Ω₊ be either the open unit disc, the open upper half-plane, or the open right half-plane. In this paper, we compute the norm of the basic operator A_α = Π_Θ T_{b_α}|_{H(Θ)} in the vector-valued model space H(Θ) = H²_m ⊖ Θ H²_m associated with an m × m matrix-valued inner function Θ in Ω₊ and show that the norm is attained. Here Π_Θ denotes the orthogonal projection from the Lebesgue space L²_m onto H(Θ), and T_{b_α} is the operator of multiplication by the elementary Blaschke factor b_α of degree one with a zero at a point α ∈ Ω₊. We show that if A_α is strictly contractive, then its norm may be expressed in terms of the singular values of Θ(α). We then extend this evaluation to the more general setting of vector-valued de Branges spaces. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. Semi-reproducing kernel Hilbert spaces, splines and increment kriging on the sphere.
- Author
-
Bonabifard, M. R., Mosammam, A. M., and Ghaemi, M. R.
- Subjects
- *
HILBERT space , *KRIGING , *SPHERICAL functions , *SPHERES , *GEGENBAUER polynomials , *SPLINES , *SPLINE theory , *SMOOTHING (Numerical analysis) - Abstract
The concept of a reproducing kernel Hilbert space does not capture the key features of the spherical smoothing problem. A semi-reproducing kernel Hilbert space (SRKHS) provides a more natural setting for the smoothing spline solution. In this paper, we carry over the concept of the SRKHS from ℝ^d to the sphere S^{d-1}. In addition, a systematic study is made of the properties of a spherical SRKHS. Next, we present the one-to-one correspondence between increment-reproducing kernels and conditionally positive definite functions and its consequences for spherical optimal smoothing. The smoothing and interpolation problems on the sphere are considered in the proposed SRKHS setting. Finally, a simulation study illustrates the proposed methodology, and an analysis of world average temperature from 1963–1967 and 1993–1997 is carried out using the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
34. System of Non-Linear Volterra Integral Equations in a Direct-Sum of Hilbert Spaces.
- Author
-
Hassan, Jabar S., Majeed, Haider A., and Arif, Ghassan Ezzulddin
- Subjects
- *
INTEGRAL equations , *HILBERT space , *UNIVALENT functions , *SUBORDINATION (Psychology) , *COEFFICIENTS (Statistics) - Abstract
We use the contraction mapping theorem to present the existence and uniqueness of solutions in a short time to a system of non-linear Volterra integral equations in a certain type of direct-sum H[a, b] of a Hilbert space V[a, b]. We extend the local existence and uniqueness of solutions to the global existence and uniqueness of solutions to the proposed problem. Because the kernel function is a transcendental function in H[a, b] on the interval [a, b], the results are novel and very important in numerical approximation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Left-Invertibility of Rank-One Perturbations.
- Author
-
Das, Susmita and Sarkar, Jaydeb
- Abstract
For each isometry V acting on some Hilbert space and a pair of vectors f and g in the same Hilbert space, we associate a nonnegative number c(V; f, g) defined by c(V; f, g) = (‖f‖² − ‖V*f‖²)‖g‖² + |1 + ⟨V*f, g⟩|².
We prove that the rank-one perturbation V + f ⊗ g is left-invertible if and only if c(V; f, g) ≠ 0.
We also consider examples of rank-one perturbations of isometries that are shifts on some Hilbert space of analytic functions. Here, shift refers to the operator of multiplication by the coordinate function z. Finally, we examine D + f ⊗ g, where D is a diagonal operator with nonzero diagonal entries and f and g are vectors with nonzero Fourier coefficients. We prove that D + f ⊗ g is left-invertible if and only if D + f ⊗ g is invertible. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
36. A uniform kernel trick for high and infinite-dimensional two-sample problems.
- Author
-
Cárcamo, Javier, Cuevas, Antonio, and Rodríguez, Luis-Alberto
- Subjects
- *
ASYMPTOTIC distribution , *KERNEL functions , *NULL hypothesis , *EMPIRICAL research , *HILBERT space - Abstract
We use a suitable version of the so-called "kernel trick" to devise two-sample tests, especially focussed on high-dimensional and functional data. Our proposal entails a simplification of the practical problem of selecting an appropriate kernel function. Specifically, we apply a uniform variant of the kernel trick which involves the supremum within a class of kernel-based distances. We obtain the asymptotic distribution of the test statistic under the null and alternative hypotheses. The proofs rely on empirical processes theory, combined with the delta method and Hadamard directional differentiability techniques, and functional Karhunen–Loève-type expansions of the underlying processes. This methodology has some advantages over other standard approaches in the literature. We also give some experimental insight into the performance of our proposal compared to other kernel-based approaches (the original proposal by Borgwardt et al. (2006) and some variants based on splitting methods) as well as tests based on energy distances (Rizzo and Székely, 2017). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
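The kernel-based distances this paper takes a supremum over reduce, for a single fixed kernel, to the familiar maximum mean discrepancy (MMD) of Borgwardt et al. A minimal single-kernel version, not the uniform supremum-based statistic proposed in the paper, looks as follows; the Gaussian bandwidth is an illustrative choice.

```python
import numpy as np

def mmd2_unbiased(X, Y, h=1.0):
    """Unbiased estimator of the squared maximum mean discrepancy between
    samples X and Y, using a Gaussian kernel with bandwidth h."""
    def k(A, B):
        sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq / (2 * h ** 2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))    # within-X, off-diagonal
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))  # within-Y, off-diagonal
            - 2.0 * Kxy.mean())                            # cross term

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(200, 1))
Y = rng.normal(0.0, 1.0, size=(200, 1))   # same distribution as X
Z = rng.normal(3.0, 1.0, size=(200, 1))   # mean-shifted distribution
same = mmd2_unbiased(X, Y)
diff = mmd2_unbiased(X, Z)
```

The statistic concentrates near zero under the null and is bounded away from zero under a mean shift, which is the baseline behavior the uniform variant sharpens by optimizing over a class of kernels.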
37. System of Non-Linear Volterra Integral Equations in a Direct-Sum of Hilbert Spaces
- Author
-
Jabar Hassan, Haider Majeed, and Ghassan Ezzulddin Arif
- Subjects
system of non-linear integral equations, Reproducing kernel Hilbert spaces, Fixed point theorem, Physics, QC1-999 - Abstract
We use the contraction mapping theorem to present the existence and uniqueness of solutions in a short time to a system of non-linear Volterra integral equations in a certain type of direct-sum H[a; b] of a Hilbert space V[a; b]. We extend the local existence and uniqueness of solutions to the global existence and uniqueness of solutions to the proposed problem. Because the kernel function is a transcendental function in H[a; b] on the interval [a; b], the results are novel and very important in numerical approximation.
- Published
- 2022
- Full Text
- View/download PDF
38. Computation of open-loop inputs for uniformly ensemble controllable systems.
- Author
-
Schönlein, Michael
- Subjects
INTEGRAL equations ,LINEAR equations ,LINEAR systems ,HILBERT space ,NEIGHBORHOODS - Abstract
This paper presents computational methods for families of linear systems depending on a parameter. Such a family is called ensemble controllable if for any family of parameter-dependent target states and any neighborhood of it there is a parameter-independent input steering the origin into the neighborhood. Assuming that a family of systems is ensemble controllable we present methods to construct suitable open-loop input functions. Our approach to solve this infinite-dimensional task is based on a combination of methods from the theory of linear integral equations and finite-dimensional control theory. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
39. Segal–Bargmann Transforms Associated to a Family of Coupled Supersymmetries.
- Author
-
Williams, Cameron L.
- Abstract
The Segal–Bargmann transform is a Lie algebra and Hilbert space isomorphism between real and complex representations of the oscillator algebra. The Segal–Bargmann transform is useful in time-frequency analysis as it is closely related to the short-time Fourier transform. The Segal–Bargmann space provides a useful example of a reproducing kernel Hilbert space. Coupled supersymmetries (coupled SUSYs) are generalizations of the quantum harmonic oscillator that have a built-in supersymmetric nature and enjoy similar properties to the quantum harmonic oscillator. In this paper, we will develop Segal–Bargmann transforms for a specific class of coupled SUSYs which includes the quantum harmonic oscillator as a special case. We will show that the associated Segal–Bargmann spaces are distinct from the usual Segal–Bargmann space: their associated weight functions are no longer Gaussian and are spanned by stricter subsets of the holomorphic polynomials. The coupled SUSY Segal–Bargmann spaces provide new examples of reproducing kernel Hilbert spaces. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. Transporting positive definiteness.
- Author
-
SZAFRANIEC, FRANCISZEK HUGON
- Abstract
This is a rough guide to a topic that may be worth developing further. Expanding positive definiteness beyond its presupposed scope is intriguing due to a number of possible applications. Though the paper may look a little sketchy at first glance, it may provide a basis for further research. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
41. Multivariable Beurling-Lax representations: the commutative and free noncommutative settings.
- Author
-
BALL, JOSEPH A. and BOLOTNIKOV, VLADIMIR
- Abstract
The original theorem of Beurling asserts that any invariant subspace for the shift operator (multiplication by the coordinate function χ(λ) = λ) on the Hardy space over the unit disk can be represented as an inner function times H². We survey various approaches (including ideas and techniques from engineering systems theory and reproducing kernel Hilbert spaces) beyond Beurling's original approach developed over the years for proving this result and then focus on understanding how these approaches can be adapted to handle the more delicate situation where the Hardy-space shift is replaced by the shift operator on a weighted Bergman space over the unit disk. We then indicate how all these results can be extended further to the setting of freely noncommutative shift-operator tuples on a weighted Bergman space in several freely noncommuting indeterminates. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Adaptive estimation of external fields in reproducing kernel Hilbert spaces.
- Author
-
Guo, Jia, Kepler, Michael E., Paruchuri, Sai Tej, Wang, Haoran, Kurdila, Andrew J., and Stilwell, Daniel J.
- Subjects
- *
DISTRIBUTED parameter systems , *EVOLUTION equations , *SENSOR networks , *HILBERT space , *KERNEL (Mathematics) - Abstract
Summary: This article studies the distributed parameter system that governs adaptive estimation by mobile sensor networks of external fields in a reproducing kernel Hilbert space (RKHS). The article begins with the derivation of conditions that guarantee the well‐posedness of the ideal, infinite dimensional governing equations of evolution for the centralized estimation scheme. Subsequently, convergence of finite dimensional approximations is studied. Rates of convergence in all formulations are established using history‐dependent bases defined from translates of the RKHS kernel that are centered at sample points along the agent trajectories. Sufficient conditions are derived that ensure that the finite dimensional approximations of the ideal estimator equations converge at a rate that is bounded by the fill distance of samples in the agents' assigned subdomains. The article concludes with examples of simulations and experiments that illustrate the qualitative performance of the introduced algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. Gradient Learning under Tilted Empirical Risk Minimization.
- Author
-
Liu, Liyuan, Song, Biqin, Pan, Zhibin, Yang, Chuanwu, Xiao, Chi, and Li, Weifu
- Subjects
- *
HILBERT space - Abstract
Gradient Learning (GL), which aims to estimate the gradient of a target function, has attracted much attention in variable selection problems due to its mild structural requirements and wide applicability. Despite rapid progress, the majority of existing GL works are based on the empirical risk minimization (ERM) principle, which may suffer degraded performance in complex data environments, e.g., under non-Gaussian noise. To alleviate this sensitivity, we propose a new GL model built on the tilted ERM criterion and establish its theoretical support from the function-approximation viewpoint. Specifically, the operator approximation technique plays the crucial role in our analysis. To solve the proposed learning objective, a gradient descent method is proposed, and a convergence analysis is provided. Finally, simulated experimental results validate the effectiveness of our approach when the input variables are correlated. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
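The tilted ERM criterion this paper builds on replaces the empirical mean of losses with an exponentially tilted average, R_t(f) = (1/t) log( (1/n) Σ_i exp(t · loss_i) ), which for t < 0 downweights large losses and so dampens outliers. A minimal sketch of the tilted objective itself (not the authors' full gradient learning model) is:

```python
import numpy as np

def tilted_risk(losses, t):
    """Tilted empirical risk: (1/t) * log( mean( exp(t * losses) ) ).
    t -> 0 recovers the ordinary empirical mean; t < 0 suppresses outliers."""
    # log-sum-exp shift for numerical stability
    m = np.max(t * losses)
    return (m + np.log(np.mean(np.exp(t * losses - m)))) / t

losses = np.array([0.1, 0.2, 0.15, 5.0])   # one outlying loss
plain = losses.mean()                       # ordinary ERM objective
robust = tilted_risk(losses, t=-2.0)        # negative tilt downweights the outlier
```

With a negative tilt the objective stays close to the typical (inlier) loss level instead of being dragged up by the outlier, which is the robustness mechanism the paper exploits for gradient estimation under non-Gaussian noise.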
44. Quantile Regression with Gaussian Kernels
- Author
-
Wang, Baobin, Hu, Ting, Yin, Hong, Fan, Jianqing, editor, and Pan, Jianxin, editor
- Published
- 2020
- Full Text
- View/download PDF
45. Asymptotics for M-type smoothing splines with non-smooth objective functions.
- Author
-
Kalogridis, Ioannis
- Abstract
M-type smoothing splines are a broad class of spline estimators that include the popular least-squares smoothing spline but also spline estimators that are less susceptible to outlying observations and model misspecification. However, available asymptotic theory only covers smoothing spline estimators based on smooth objective functions and consequently leaves out frequently used resistant estimators such as quantile and Huber-type smoothing splines. We provide a general treatment in this paper and, assuming only the convexity of the objective function, show that the least-squares (super-)convergence rates can be extended to M-type estimators whose asymptotic properties have not been hitherto described. We further show that auxiliary scale estimates may be handled under significantly weaker assumptions than those found in the literature and we establish optimal rates of convergence for the derivatives, which have not been obtained outside the least-squares framework. A simulation study and a real-data example illustrate the competitive performance of non-smooth M-type splines in relation to the least-squares spline on regular data and their superior performance on data that contain anomalies. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
46. Toeplitz Operators on the Fock Space with Quasi-Radial Symbols.
- Author
-
Dewage, Vishwa and Ólafsson, Gestur
- Abstract
The Fock space F(ℂⁿ) is the space of holomorphic functions on ℂⁿ that are square-integrable with respect to the Gaussian measure on ℂⁿ. This space plays an important role in several subfields of analysis and representation theory. In particular, it has for a long time been a model to study Toeplitz operators. Esmeral and Maximenko showed in 2016 that radial Toeplitz operators on F(ℂ) generate a commutative C*-algebra which is isometrically isomorphic to the C*-algebra C_{b,u}(ℕ₀, ρ₁). In this article, we extend the result to k-quasi-radial symbols acting on the Fock space F(ℂⁿ). We calculate the spectra of the said Toeplitz operators and show that the set of all eigenvalue functions is dense in the C*-algebra C_{b,u}(ℕ₀ᵏ, ρ_k) of bounded functions on ℕ₀ᵏ which are uniformly continuous with respect to the square-root metric. In fact, the C*-algebra generated by Toeplitz operators with quasi-radial symbols is C_{b,u}(ℕ₀ᵏ, ρ_k). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
47. Analyzing relevance vector machines using a single penalty approach.
- Author
-
Dixit, Anand and Roy, Vivekananda
- Subjects
- *
GIBBS sampling , *MACHINERY , *HILBERT space - Abstract
Relevance vector machine (RVM) is a popular sparse Bayesian learning model typically used for prediction. Recently it has been shown that improper priors assumed on multiple penalty parameters in RVM may lead to an improper posterior. Currently in the literature, the sufficient conditions for posterior propriety of RVM do not allow improper priors over the multiple penalty parameters. In this article, we propose a single penalty relevance vector machine (SPRVM) model in which multiple penalty parameters are replaced by a single penalty and we consider a semi‐Bayesian approach for fitting the SPRVM. The necessary and sufficient conditions for posterior propriety of SPRVM are more liberal than those of RVM and allow for several improper priors over the penalty parameter. Additionally, we also prove the geometric ergodicity of the Gibbs sampler used to analyze the SPRVM model and hence can estimate the asymptotic standard errors associated with the Monte Carlo estimate of the means of the posterior predictive distribution. Such a Monte Carlo standard error cannot be computed in the case of RVM, since the rate of convergence of the Gibbs sampler used to analyze RVM is not known. The predictive performance of RVM and SPRVM is compared by analyzing two simulation examples and three real life datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
48. Online regularized learning algorithm for functional data.
- Author
-
Mao, Yuan and Guo, Zheng-Chu
- Subjects
- *
ONLINE education , *ONLINE algorithms , *HILBERT space , *DECAY constants , *ERROR rates , *ITERATIVE learning control , *TIKHONOV regularization - Abstract
In recent years, functional linear models have attracted growing attention in statistics and machine learning for recovering the slope function or its functional predictor. This paper considers an online regularized learning algorithm for functional linear models in a reproducing kernel Hilbert space. It provides convergence analysis of the excess prediction error and the estimation error with polynomially decaying step-size and constant step-size, respectively. Fast convergence rates can be derived via a capacity-dependent analysis. Introducing an explicit regularization term extends the saturation boundary of unregularized online learning algorithms with polynomially decaying step-size and achieves fast convergence rates of the estimation error without a capacity assumption; the latter remains an open problem for the unregularized online learning algorithm with decaying step-size. This paper also demonstrates competitive convergence rates of both the prediction error and the estimation error with constant step-size compared to the existing literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
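The online update analyzed in this paper is, in its scalar-input analogue, stochastic gradient descent on the regularized least-squares risk in an RKHS, with the iterate maintained through its kernel expansion. The sketch below uses a Gaussian kernel and a polynomially decaying step size gamma_t = gamma0 / (t+1)^theta; all constants are illustrative, and this is a plain scalar analogue, not the functional linear model of the paper.

```python
import numpy as np

def online_rkhs_sgd(X, y, lam=1e-3, h=0.15, gamma0=0.5, theta=0.5):
    """Online regularized RKHS regression. After observing (x_t, y_t):
        f <- (1 - gamma_t * lam) * f - gamma_t * (f(x_t) - y_t) * k(x_t, .)
    The iterate f_t = sum_{i<t} c_i k(x_i, .) is tracked via its coefficients."""
    c = np.zeros(len(y))
    for t in range(len(y)):
        gamma = gamma0 / (t + 1) ** theta
        k_past = np.exp(-np.sum((X[:t] - X[t]) ** 2, axis=-1) / (2 * h ** 2))
        f_xt = c[:t] @ k_past            # current prediction at x_t
        c[:t] *= 1.0 - gamma * lam       # shrinkage from the regularization term
        c[t] = -gamma * (f_xt - y[t])    # new kernel-expansion coefficient
    return c

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(500, 1))
y = np.sin(2.0 * np.pi * X).ravel()
c = online_rkhs_sgd(X, y)
K = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1) / (2 * 0.15 ** 2))
mse = np.mean((K @ c - y) ** 2)
```

Each observation is touched once, so the cost per step is linear in the number of past samples, which is the streaming setting whose prediction and estimation errors the paper bounds.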
49. A kernel framework for learning differential equations and their solution operators.
- Author
-
Long, Da, Mrvaljević, Nicole, Zhe, Shandian, and Hosseini, Bamdad
- Subjects
- *
OPERATOR equations , *DIFFERENTIAL equations , *FUNCTIONAL differential equations , *NONLINEAR dynamical systems , *HILBERT space - Abstract
This article presents a three-step kernel framework for regression of the functional form of differential equations (DEs) and learning their solution operators. Given a training set consisting of pairs of noisy DE solutions and source/boundary terms on a mesh: (i) kernel smoothing is utilized to denoise the data and approximate derivatives of the solution; (ii) this information is then used in a kernel regression model to learn the functional form of the DE; (iii) the learned DE is then used within a numerical solver to approximate the solution of the DE with a new source term or initial data, thereby constituting an operator learning framework. Numerical experiments compare the method to state-of-the-art algorithms. In DE learning our framework matches the performance of Sparse Identification of Nonlinear Dynamics (SINDy), while in operator learning the method has superior performance compared to well-established neural network methods in low training data regimes.
• A three-step optimal recovery framework for learning differential equations and their solution operators
• An efficient and simple implementation based on the theory of reproducing kernel Hilbert spaces
• Various numerical experiments benchmark the methodology against the state of the art
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
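Step (i) of the framework, denoising the sampled solutions by kernel smoothing before derivatives are taken, can be sketched with a plain Nadaraya-Watson smoother. This is a generic stand-in rather than necessarily the exact smoother the authors use; the bandwidth h is an illustrative choice.

```python
import numpy as np

def nw_smooth(x_eval, x, y, h):
    """Nadaraya-Watson kernel smoother: a weighted average of the noisy samples
    y(x), with Gaussian weights centered at each evaluation point."""
    w = np.exp(-(x_eval[:, None] - x[None, :]) ** 2 / (2 * h ** 2))
    return (w @ y) / w.sum(axis=1)

# Denoise noisy samples of a smooth "solution" on a mesh.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 200)
clean = np.sin(2.0 * np.pi * x)
noisy = clean + 0.1 * rng.normal(size=x.size)
smoothed = nw_smooth(x, x, noisy, h=0.03)
```

Once the solution is denoised this way, finite differences (or derivatives of the smoother itself) give far more stable derivative estimates for the kernel regression in step (ii) than differencing the raw noisy data would.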
50. Reconstructing Group Wavelet Transform From Feature Maps With a Reproducing Kernel Iteration.
- Author
-
Barbieri, Davide
- Subjects
VISUAL cortex ,WAVELETS (Mathematics) ,HARMONIC analysis (Mathematics) ,WAVELET transforms ,HILBERT space - Abstract
In this article, we consider the problem of reconstructing an image that is downsampled in the space of its SE(2) wavelet transform, which is motivated by classical models of simple cell receptive fields and feature preference maps in the primary visual cortex. We prove that, whenever the problem is solvable, the reconstruction can be obtained by an elementary project-and-replace iterative scheme based on the reproducing kernel arising from the group structure, and we show numerical results on real images. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF