40 results for "reproducing kernel Hilbert spaces"
Search Results
2. Vector-valued Spline Method for the Spherical Multiple-shell Electro-magnetoencephalography Problem
- Author
-
S. Leweke, O. Hauk, and V. Michel
- Subjects
magnetoencephalography, inverse problems, Applied Mathematics, vector spherical splines, Numerical Analysis (math.NA), Functional Analysis (math.FA), Computer Science Applications, Theoretical Computer Science, ill-posed problems, Mathematics - Functional Analysis, 41A15, 42C10, 45B05, 46C07, 46N40, 47A52, 65D07, 65R30, 65R32, regularization methods, reproducing kernel Hilbert spaces, Signal Processing, FOS: Mathematics, Mathematics - Numerical Analysis, Mathematical Physics, electroencephalography
- Abstract
Human brain activity is based on electrochemical processes, which can only be measured invasively. Therefore, quantities such as the magnetic flux density (MEG) or electric potential differences (EEG) are measured non-invasively in medicine and research. The reconstruction of the neuronal current from these measurements is a severely ill-posed problem, even though its visualization is one of the main research tools in cognitive neuroscience. Here, using an isotropic multiple-shell model for the geometry of the head and a quasi-static approach for modeling the electro-magnetic processes, we derive a novel vector-valued spline method based on reproducing kernel Hilbert spaces. The presented vector spline method follows the path of former spline approaches and provides classical minimum norm properties. In addition, it minimizes the (infinite-dimensional) Tikhonov-Phillips functional handling the instability of the inverse problem. This optimization problem reduces to solving a finite-dimensional system of linear equations without loss of information. It results in a unique solution which takes into account that only the harmonic and solenoidal component of the current affects the measurements. In addition, we prove a convergence result: the solution achieved by the vector spline method converges to the generator of the data as the number of measurements increases. The vector splines are applied to the inversion of synthetic test cases, where irregularly distributed data are handled well. Combined with parameter choice methods, numerical results are shown with and without additional Gaussian white noise. With respect to the normalized root mean square error, the vector spline results outperform former approaches based on scalar splines. Finally, results that are reasonable with respect to physiological expectations are shown for real data. For associated MATLAB files, see https://github.com/SarahLeweke/rkhs-splines
- Published
- 2021
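Entry 2 above reduces an infinite-dimensional Tikhonov-Phillips problem to a finite linear system. As a generic illustration of that reduction (not the paper's vector spherical splines; the Gaussian kernel, data, and regularization parameter below are placeholder assumptions), a minimal RKHS spline fit looks like this:

```python
import numpy as np

# Minimal sketch: the regularized minimum-norm interpolant in an RKHS is a
# kernel expansion f = sum_j a_j k(., x_j) whose coefficients solve a single
# finite-dimensional linear system (K + lam*I) a = y.
def kernel(x, y, ell=0.5):
    # placeholder Gaussian kernel; the paper uses problem-specific
    # vector-valued reproducing kernels on spherical shells
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2 * ell**2))

rng = np.random.default_rng(0)
x_obs = rng.uniform(-1, 1, 30)                       # measurement locations
y_obs = np.sin(3 * x_obs) + 0.05 * rng.standard_normal(30)

lam = 1e-3                                           # Tikhonov parameter
K = kernel(x_obs, x_obs)
coef = np.linalg.solve(K + lam * np.eye(len(x_obs)), y_obs)

x_new = np.linspace(-1, 1, 200)
f_hat = kernel(x_new, x_obs) @ coef                  # spline reconstruction
```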
3. Estimation of linear operators from scattered impulse responses
- Author
-
Paul Escande, Jérémie Bigot, and Pierre Weiss
- Subjects
FOS: Computer and information sciences, Computer Science - Information Theory, Information Theory (cs.IT), FOS: Mathematics, Mathematics - Statistics Theory, Statistics Theory (math.ST), Mathematics - Numerical Analysis, Numerical Analysis (math.NA), Applied Mathematics, Integral operator, Hilbert space, Impulse (physics), Estimator, Radial basis functions, Scattered approximation, Smoothing, Minimax, Convergence rate, Numerical complexity, Reproducing Kernel Hilbert Spaces, [MATH.MATH-ST] Mathematics [math]/Statistics [math.ST], [INFO.INFO-IT] Computer Science [cs]/Information Theory [cs.IT], [MATH.MATH-IT] Mathematics [math]/Information Theory [math.IT], [MATH.MATH-NA] Mathematics [math]/Numerical Analysis [math.NA], [SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing, AMS classifications: 47A58, 41A15, 41A25, 68W25, 62H12, 65T60, 94A20
- Abstract
We provide a new estimator of integral operators with smooth kernels, obtained from a set of scattered and noisy impulse responses. The proposed approach relies on the formalism of smoothing in reproducing kernel Hilbert spaces and on the choice of an appropriate regularization term that takes the smoothness of the operator into account. It is numerically tractable in very large dimensions. We study the estimator's robustness to noise and analyze its approximation properties with respect to the size and the geometry of the dataset. In addition, we show minimax optimality of the proposed estimator.
- Published
- 2019
4. Weighted p-regular kernels for reproducing kernel Hilbert spaces and Mercer Theorem
- Author
-
Agud Albesa, Lucia, Calabuig, J. M., and Sánchez Pérez, Enrique Alfonso
- Subjects
Pure mathematics, Current (mathematics), Function space, Kernel operator, Space (mathematics), Measure (mathematics), Factorization, Representation (mathematics), Reproducing kernel Hilbert spaces, Mathematics, Applied Mathematics, Integral operator, Hilbert space, Mercer Theorem, Analysis, Kernel (category theory)
- Abstract
Let (X, Σ, μ) be a finite measure space and consider a Banach function space Y(μ). Motivated by some previous papers and current applications, we provide a general framework for representing reproducing kernel Hilbert spaces as subsets of Köthe-Bochner (vector-valued) function spaces. We analyze operator-valued kernels Γ that define integration maps L_Γ between Köthe-Bochner spaces of Hilbert-valued functions Y(μ; κ). We show a reduction procedure which allows one to find a factorization of the corresponding kernel operator through weighted Bochner spaces L^p(g dμ; κ) and L^{p'}(h dμ; κ), where 1/p + 1/p' = 1, under the assumption of p-concavity of Y(μ). Equivalently, a new kernel obtained by multiplying Γ by scalar functions can be given in such a way that the kernel operator is defined from L^p(μ; κ) to L^p(μ; κ) in a natural way. As an application, we prove a new version of the Mercer Theorem for matrix-valued weighted kernels. The second author acknowledges the support of the Ministerio de Economia y Competitividad (Spain), under project MTM2014-53009-P (Spain). The third author acknowledges the support of the Ministerio de Ciencia, Innovacion y Universidades (Spain), Agencia Estatal de Investigacion, and FEDER under project MTM2016-77054-C2-1-P (Spain).
- Published
- 2019
5. Estimating Koopman operators for nonlinear dynamical systems: a nonparametric approach
- Author
-
Alessandro Chiuso and Francesco Zanini
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Machine Learning (cs.LG), Reproducing Kernel Hilbert Spaces, Computer science, Koopman Operator, Nonparametric statistics, System Identification, Space (mathematics), Control and Systems Engineering, Kernel (statistics), Dynamic mode decomposition, Gaussian Processes, Non linear systems, Applied mathematics, Embedding
- Abstract
The Koopman operator is a mathematical tool that allows for a linear description of non-linear systems, at the price of working in infinite-dimensional spaces. Dynamic Mode Decomposition and Extended Dynamic Mode Decomposition are amongst the most popular finite-dimensional approximations. In this paper we capture their core essence as dual versions of the same framework, incorporating them into the kernel framework. To do so, we leverage the RKHS as a suitable space for learning the Koopman dynamics, thanks to its intrinsic finite-dimensional nature, shaped by the data. We finally establish a strong link between kernel methods and Koopman operators, leading to the estimation of the latter through kernel functions. We also provide simulations for comparison with standard procedures. Preprint submitted to the 19th IFAC Symposium on System Identification: learning models for decision and control.
- Published
- 2021
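The RKHS route to Koopman estimation described in entry 5 can be illustrated with a generic kernel-DMD-style regression on snapshot pairs; this is a common construction consistent with the abstract, not the authors' exact estimator, and the dynamics, kernel, and regularization below are illustrative assumptions:

```python
import numpy as np

# Kernel-based Koopman estimation from snapshot pairs (x_t, x_{t+1}):
# observables are represented in the span of kernel sections at the samples,
# and the one-step evolution is estimated by regularized regression.
def gauss(A, B, s=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))            # states x_t
R = np.array([[0.0, -1.0], [1.0, 0.0]])
Y = 0.9 * X @ R.T                            # states x_{t+1}: damped rotation

Kxx = gauss(X, X)
Kyx = gauss(Y, X)                            # kernel sections evaluated at y_i
lam = 1e-6
# Finite-rank matrix representing the estimated Koopman action on the
# coefficients of kernel-section expansions.
K_hat = np.linalg.solve(Kxx + lam * np.eye(len(X)), Kyx)
koopman_spectrum = np.linalg.eigvals(K_hat)  # approximate Koopman eigenvalues
```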
6. Reproducing kernel Hilbert space associated with a unitary representation of a groupoid
- Author
-
Zbigniew Pasternak-Winiarski, Monika Drewnik, and Tomasz Miller
- Subjects
46E22, 20L05, 22A22, 22D10, FOS: Mathematics, Mathematics - Functional Analysis, Functional Analysis (math.FA), Mathematics, Applied Mathematics, Computational Mathematics, Computational Theory and Mathematics, Algebra, convolution, group, Group (mathematics), unitary representation, Positive-definite kernel, Hilbert space, Locally compact group, groupoid, Haar measure, Kernel (statistics), reproducing kernel Hilbert spaces
- Abstract
The aim of the paper is to create a link between the theory of reproducing kernel Hilbert spaces (RKHS) and the notion of a unitary representation of a group or of a groupoid. More specifically, it is demonstrated, on the one hand, how to construct a positive definite kernel and an RKHS for a given unitary representation of a group(oid), and, on the other hand, how to retrieve the unitary representation of a group or a groupoid from a positive definite kernel defined on that group(oid) with the help of the Moore-Aronszajn theorem. The kernel constructed from the group(oid) representation is inspired by the kernel defined in terms of the convolution of functions on a locally compact group. Several illustrative examples of reproducing kernels related to unitary representations of groupoids are discussed in detail. The paper concludes with a brief overview of possible applications of the proposed constructions.
- Published
- 2021
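For the group case discussed in entry 6, the construction can be written in two lines; this is a standard sketch consistent with the abstract, not a quotation from the paper. Given a unitary representation π of a group G on a Hilbert space H and a fixed vector v ∈ H,

```latex
\[
  K(g, h) = \langle \pi(h^{-1} g)\, v,\; v \rangle_{H}
          = \langle \pi(g)\, v,\; \pi(h)\, v \rangle_{H},
  \qquad g, h \in G,
\]
% positive definiteness follows directly:
\[
  \sum_{i,j} c_i \overline{c_j}\, K(g_i, g_j)
  = \Bigl\lVert \sum_i c_i\, \pi(g_i)\, v \Bigr\rVert_{H}^{2} \;\ge\; 0 ,
\]
```

and the Moore-Aronszajn theorem then produces the associated RKHS.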
7. Construction and Monte Carlo estimation of wavelet frames generated by a reproducing kernel
- Author
-
Lorenzo Rosasco, Ernesto De Vito, Stefano Vigogna, Valeriya Naumova, and Zeljko Kereta
- Subjects
Wavelets, Frames, Reproducing kernel Hilbert spaces, Regularization, Learning theory, FOS: Computer and information sciences, Statistics - Machine Learning, Machine Learning (stat.ML), FOS: Mathematics, Mathematics - Functional Analysis, Functional Analysis (math.FA), General Mathematics, Applied Mathematics, Monte Carlo method, Stability (learning theory), Sobolev space, Kernel (statistics), 42C15, 42C40, 65T60, 46E22, 47A52, 68T05, Analysis, Mathematics
- Abstract
We introduce a construction of multiscale tight frames on general domains. The frame elements are obtained by spectral filtering of the integral operator associated with a reproducing kernel. Our construction extends classical wavelets as well as generalized wavelets on both continuous and discrete non-Euclidean structures such as Riemannian manifolds and weighted graphs. Moreover, it allows one to study the relation between continuous and discrete frames in a random sampling regime, where discrete frames can be seen as Monte Carlo estimates of the continuous ones. Pairing spectral regularization with learning theory, we show that a sample frame tends to its population counterpart, and we derive explicit finite-sample rates on spaces of Sobolev and Besov regularity. Our results prove the stability of frames constructed on empirical data, in the sense that all stochastic discretizations have the same underlying limit regardless of the set of initial training samples.
- Published
- 2020
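In the random sampling regime of entry 7, the discrete side of the construction is spectral filtering of the kernel matrix; below is a schematic Monte Carlo version under assumed choices (Gaussian kernel, dyadic band-pass filters, uniform samples), not the paper's precise filters or rates:

```python
import numpy as np

# Empirical (Monte Carlo) picture: the kernel integral operator is replaced
# by the kernel matrix on n sampled points, and frame elements arise from
# spectral filtering of its eigendecomposition.
def kernel(x, y, s=0.3):
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2 * s**2))

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 150))      # Monte Carlo samples of the domain
K = kernel(x, x) / len(x)                # empirical integral operator
evals, evecs = np.linalg.eigh(K)

def band(lmbda, j):
    # crude dyadic band-pass filter on the spectrum (illustrative choice)
    return ((lmbda > 2.0 ** -(j + 1)) & (lmbda <= 2.0 ** -j)).astype(float)

j = 4
# Columns: filtered kernel eigenfunctions at scale j, evaluated at the
# sample points -- the empirical frame elements at that scale.
frame_j = evecs * band(evals, j)
```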
8. Data analysis from empirical moments and the Christoffel function
- Author
-
Mihai Putinar, Edouard Pauwels, and Jean B. Lasserre
- Subjects
FOS: Computer and information sciences, Statistics - Machine Learning, Machine Learning (stat.ML), [STAT.ML] Statistics [stat]/Machine Learning [stat.ML], data analysis, 62G05 62G07 68T05 58C35 46E22 42C05 47B32, Numerical & Computational Mathematics, Mathematical Sciences, Information and Computing Sciences, Moment matrix, Empirical measure, Measure (mathematics), Christoffel-Darboux kernel, Christoffel function, Real algebraic geometry, Reproducing kernel Hilbert spaces, orthogonal polynomials, Approximation theory, Applied Mathematics, Function (mathematics), Manifold, Density estimation, Support inference, Computational Mathematics, Computational Theory and Mathematics, Algebra, Analysis, Mathematics
- Abstract
Spectral features of the empirical moment matrix constitute a resourceful tool for unveiling properties of a cloud of points, among them density, support, and latent structures. It is already well known that the empirical moment matrix encodes a great deal of subtle attributes of the underlying measure. Starting from this object as the base of observations, we combine ideas from statistics, real algebraic geometry, orthogonal polynomials, and approximation theory to open new insights relevant for machine learning (ML) problems with data supported on singular sets. Refined concepts and results from real algebraic geometry and approximation theory empower a simple tool (the empirical moment matrix) for the task of solving non-trivial questions in data analysis. We provide (1) theoretical support, (2) numerical experiments, and (3) connections to real-world data as a validation of the robustness of the empirical moment matrix approach.
- Published
- 2020
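The empirical moment matrix of entry 8 gives rise to the Christoffel function, which the paper uses for density and support inference. A one-variable sketch (the degree, data, and regularizing jitter are arbitrary choices for illustration):

```python
import numpy as np

# Empirical Christoffel function Lambda(x) = 1 / (v(x)^T M^{-1} v(x)), where
# v(x) = (1, x, ..., x^d) and M is the empirical moment matrix of the data.
# Lambda is comparatively large on the support of the data and decays fast
# outside it, which is what makes it useful for support inference.
rng = np.random.default_rng(3)
data = rng.standard_normal(500)

d = 6
V = np.vander(data, d + 1, increasing=True)       # monomial features, n x (d+1)
M = V.T @ V / len(data)                           # empirical moment matrix
M_inv = np.linalg.inv(M + 1e-10 * np.eye(d + 1))  # tiny jitter for stability

def christoffel(x):
    v = np.vander(np.atleast_1d(x), d + 1, increasing=True)
    return 1.0 / np.einsum("ij,jk,ik->i", v, M_inv, v)

print(christoffel(np.array([0.0, 5.0])))          # in-support vs out-of-support
```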
9. Optimal classification of Gaussian processes in homo- and heteroscedastic settings
- Author
-
Carlos Ramos-Carreño, José L. Torrecilla, Manuel A. Sánchez-Montañés, and Alberto Suárez
- Subjects
Statistics and Probability, Gaussian processes, Gaussian process, Theoretical Computer Science, Orthogonality, Optimal classification, Applied mathematics, Reproducing kernel Hilbert spaces, Probability measure, Stochastic process, Functional data analysis, Computational Theory and Mathematics, Discriminant, Near-perfect classification, Statistics, Probability and Uncertainty, Mathematics
- Abstract
A procedure to derive optimal discrimination rules is formulated for binary functional classification problems in which the instances available for induction are characterized by random trajectories sampled from different Gaussian processes, depending on the class label. Specifically, these optimal rules are derived as the asymptotic form of the quadratic discriminant for the discretely monitored trajectories in the limit that the set of monitoring points becomes dense in the interval on which the processes are defined. The main goal of this work is to provide a detailed analysis of such optimal rules in the dense monitoring limit, with a particular focus on elucidating the mechanisms by which near-perfect classification arises. In the general case, the quadratic discriminant includes terms that are singular in this limit. If such singularities do not cancel out, one obtains near-perfect classification, which means that the error approaches zero asymptotically, for infinite sample sizes. This singular limit is a consequence of the orthogonality of the probability measures associated with the stochastic processes from which the trajectories are sampled. As a further novel result of this analysis, we formulate rules to determine whether two Gaussian processes are equivalent or mutually singular (orthogonal). The research has been supported by the Spanish Ministry of Economy, Industry, and Competitiveness (State Research Agency), Projects MTM2016-78751-P and TIN2016-76406-P (AEI/FEDER, UE), and Comunidad Autónoma de Madrid, Project S2017/BMD-3688.
- Published
- 2020
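Before the dense-monitoring limit is taken, the rule analyzed in entry 9 is the familiar quadratic discriminant applied to trajectories observed at p points. A schematic version for two zero-mean Gaussian-process models (the Brownian covariances and all parameters below are illustrative, not the paper's test cases):

```python
import numpy as np

# Quadratic discriminant for a trajectory x observed at p monitoring points,
# under class models N(m0, C0) and N(m1, C1): compare Gaussian log-densities.
# In the dense-monitoring limit p -> infinity some of these terms can
# diverge, which is the mechanism behind near-perfect classification.
def log_gauss(x, m, C):
    r = x - m
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (r @ np.linalg.solve(C, r) + logdet)

p = 50
t = np.linspace(1e-3, 1, p)                  # positive times: C0 nonsingular
C0 = np.minimum.outer(t, t)                  # Brownian motion covariance
C1 = 1.5 * C0                                # heteroscedastic alternative
rng = np.random.default_rng(4)
x = rng.multivariate_normal(np.zeros(p), C1)

score = log_gauss(x, np.zeros(p), C1) - log_gauss(x, np.zeros(p), C0)
print("class 1" if score > 0 else "class 0")
```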
10. Reproducing Properties of Differentiable Mercer-Like Kernels on the Sphere.
- Author
-
Jordão, T. and Menegatto, V.A.
- Subjects
SPHERES, GAUSSIAN processes, HILBERT space, SMOOTHING (Numerical analysis), KERNEL functions, EMBEDDINGS (Mathematics), APPLIED mathematics
- Abstract
We study differentiability of functions in the reproducing kernel Hilbert space (RKHS) associated with a smooth Mercer-like kernel on the sphere. We show that differentiability up to a certain order of the kernel yields both differentiability, up to the same order, of the elements in the series representation of the kernel and a series representation for the corresponding derivatives of the kernel. These facts are used to embed the RKHS into spaces of differentiable functions and to deduce reproducing properties for the derivatives of functions in the RKHS. We discuss compactness and boundedness of the embedding and some applications to Gaussian-like kernels.
- Published
- 2012
11. Data Based Construction of Kernels for Semi-Supervised Learning With Less Labels
- Author
-
Hrushikesh N. Mhaskar, Vasyl Yu. Semenov, Sergei V. Pereverzyev, and Evgeniya V. Semenova
- Subjects
Statistics and Probability, semi-supervised learning, Computer science, Regularization (mathematics), reproducing kernel Hilbert spaces, Tikhonov regularization, Applied Mathematics, gender identification, Pattern recognition, Laplace-Beltrami operator, Data set, machine learning, Diffusion geometry, Artificial intelligence, lcsh:T57-57.97, lcsh:Applied mathematics. Quantitative methods, lcsh:Probabilities. Mathematical statistics, lcsh:QA273-280
- Abstract
This paper deals with the problem of semi-supervised learning using a small number of training samples. Traditional kernel-based methods utilize either a fixed kernel or a combination of judiciously chosen kernels from a fixed dictionary. In contrast, we construct a data-dependent kernel utilizing the Mercer components of different kernels built with ideas from diffusion geometry, and we use a regularization technique with this kernel with adaptively chosen parameters. Our algorithm is illustrated using a few well-known data sets as well as a data set for automatic gender identification. For some of these data sets, we obtain a zero test error using only a minimal number of training samples.
- Published
- 2019
12. Reproducing kernel Hilbert spaces on manifolds: Sobolev and Diffusion spaces
- Author
-
Lorenzo Rosasco, Ernesto De Vito, and Nicole Mücke
- Subjects
heat kernels, Reproducing kernel Hilbert spaces, FOS: Computer and information sciences, Computer Science - Machine Learning, Machine Learning (cs.LG), Statistics - Machine Learning, Machine Learning (stat.ML), FOS: Mathematics, Mathematics - Functional Analysis, Functional Analysis (math.FA), Pure mathematics, Diffusion, Applied Mathematics, Hilbert space, Riemannian manifold, Sobolev space, Kernel (statistics), Analysis, Mathematics
- Abstract
We study reproducing kernel Hilbert spaces (RKHS) on a Riemannian manifold. In particular, we discuss under which conditions Sobolev spaces are RKHS and characterize their reproducing kernels. Further, we introduce and discuss a class of smoother RKHS that we call diffusion spaces. We illustrate the general results with a number of detailed examples. While connections between Sobolev spaces, differential operators and RKHS are well known in the Euclidean setting, here we present a self-contained study of analogous connections for Riemannian manifolds. By collecting a number of results in a unified way, we think our study can be useful for researchers interested in the topic.
- Published
- 2019
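A discrete surrogate makes the diffusion-space idea of entry 12 concrete: replace the Laplace-Beltrami operator by a graph Laplacian built from samples of the manifold and take a heat kernel as the reproducing kernel. This mirrors the continuous construction in spirit only; the circle data, affinity bandwidth, and diffusion time are assumptions:

```python
import numpy as np
from scipy.linalg import expm

# Heat (diffusion) kernel on a point-cloud surrogate of a manifold:
# K_t = exp(-t L), with L a graph Laplacian standing in for Laplace-Beltrami.
rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, 80)
pts = np.c_[np.cos(theta), np.sin(theta)]        # samples of the circle S^1

d2 = ((pts[:, None] - pts[None]) ** 2).sum(-1)
W = np.exp(-d2 / 0.1)                            # Gaussian affinities
L = np.diag(W.sum(axis=1)) - W                   # unnormalized graph Laplacian

t = 0.05
K_heat = expm(-t * L)   # symmetric positive definite: a diffusion-type kernel
```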
13. Kernel methods for the approximation of discrete-time linear autonomous and control systems
- Author
-
Boumediene Hamzi and Fritz Colonius
- Subjects
Identification, Topological entropy, Computer science, General Chemical Engineering, General Physics and Astronomy, Control, Parameter estimation, Linear discrete-time equations, State space, Applied mathematics, General Materials Science, ddc:510, Reproducing Kernel Hilbert spaces, General Environmental Science, Science & Technology, Series (mathematics), General Engineering, Linear control systems, Metric spaces, Multidisciplinary Sciences, Kernel method, Function approximation, Discrete time and continuous time, Control system, Science & Technology - Other Topics, General Earth and Planetary Sciences, Autonomous system (mathematics), Riccati equations
- Abstract
Methods from learning theory are used in the state space of linear dynamical and control systems in order to estimate relevant matrices and quantities such as the topological entropy. An application to stabilization via algebraic Riccati equations is included by viewing a control system as an autonomous system in an extended space of states and control inputs. Kernel methods are the main techniques used in this paper, and the approach is illustrated via a series of numerical examples. The advantage of using kernel methods is that they allow function approximation from data and, as illustrated in this paper, make it possible to approximate linear discrete-time autonomous and control systems from data.
- Published
- 2019
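For the linear discrete-time setting of entry 13, kernel ridge regression with a linear kernel reduces to regularized least squares for the system matrix; the sketch below recovers A from one noisy trajectory (the system, noise level, and regularization are illustrative):

```python
import numpy as np

# Estimate x_{t+1} = A x_t from trajectory data by kernel ridge regression
# with the linear kernel k(x, x') = x^T x'; the estimate is
# A_hat = X1^T (G + lam*I)^{-1} X0, with G the Gram matrix of the inputs.
rng = np.random.default_rng(6)
A_true = np.array([[0.9, 0.2], [-0.1, 0.8]])
x = rng.standard_normal(2)
traj = [x]
for _ in range(199):
    x = A_true @ x + 0.01 * rng.standard_normal(2)
    traj.append(x)
traj = np.array(traj)
X0, X1 = traj[:-1], traj[1:]                 # snapshot pairs

lam = 1e-8
G = X0 @ X0.T                                # linear-kernel Gram matrix
A_hat = X1.T @ np.linalg.solve(G + lam * np.eye(len(X0)), X0)
print(np.round(A_hat, 3))                    # close to A_true
```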
14. Worst-case optimal approximation with increasingly flat Gaussian kernels
- Author
-
Toni Karvonen and Simo Särkkä
- Subjects
Gaussian kernel, Gaussian quadrature, Gaussian function, FOS: Mathematics, Mathematics - Numerical Analysis, Numerical Analysis (math.NA), Mathematics - Functional Analysis, Functional Analysis (math.FA), Applied Mathematics, Computational Mathematics, Degree of a polynomial, Hilbert space, Kernel (statistics), Worst-case analysis, Reproducing kernel Hilbert spaces, Interpolation, Mathematics
- Abstract
We study worst-case optimal approximation of positive linear functionals in reproducing kernel Hilbert spaces induced by increasingly flat Gaussian kernels. This provides a new perspective and some generalisations to the problem of interpolation with increasingly flat radial basis functions. When the evaluation points are fixed and unisolvent, we show that the worst-case optimal method converges to a polynomial method. In an additional one-dimensional extension, we also allow the points to be selected optimally and show that in this case convergence is to the unique Gaussian quadrature-type method that achieves the maximal polynomial degree of exactness. The proofs are based on an explicit characterisation of the reproducing kernel Hilbert space of the Gaussian kernel in terms of exponentially damped polynomials.
- Published
- 2019
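For fixed nodes, the worst-case optimal rule studied in entry 14 has explicit weights: solve the kernel system against the kernel mean embedding of the integration measure. For the Gaussian kernel and the standard normal measure the embedding has a closed form, which the sketch uses (the nodes and length-scale are arbitrary; the paper's focus is the flat limit and optimal nodes):

```python
import numpy as np

# Worst-case optimal integration weights in the Gaussian RKHS: K w = z,
# where z_i = E[k(x_i, X)] for X ~ N(0, 1). For the Gaussian kernel
# k(x, y) = exp(-(x - y)^2 / (2 ell^2)) the embedding is
# z_i = ell / sqrt(ell^2 + 1) * exp(-x_i^2 / (2 (ell^2 + 1))).
ell = 1.0
nodes = np.linspace(-3, 3, 9)
K = np.exp(-np.subtract.outer(nodes, nodes) ** 2 / (2 * ell**2))
z = ell / np.sqrt(ell**2 + 1) * np.exp(-nodes**2 / (2 * (ell**2 + 1)))
w = np.linalg.solve(K, z)                 # worst-case optimal weights

f = np.cos                                # test integrand
print(w @ f(nodes))                       # approximates E[cos(X)] = e^{-1/2}
```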
15. Reproducing kernel Hilbert space compactification of unitary evolution groups
- Author
-
Joanna Slawinska, Suddhasattwa Das, and Dimitrios Giannakis
- Subjects
Pure mathematics, Spectral theory, Koopman operators, Perron-Frobenius operators, Ergodic dynamical systems, Reproducing kernel Hilbert spaces, Mathematics - Dynamical Systems, Dynamical Systems (math.DS), FOS: Mathematics, 111 Mathematics, 113 Computer and information sciences, Borel functional calculus, Ergodic theory, Orthonormal basis, Pointwise, Applied Mathematics, Spectrum (functional analysis), Compact operator, 37M99, 37M25, 37A30, 47A60, 28D10, 37A10, 47A35, 37M10, SPECTRAL PROPERTIES, DYNAMIC-MODE DECOMPOSITION, SYSTEMS, CONVERGENCE, MAPS, REDUCTION, LORENZ ATTRACTOR, APPROXIMATION, KOOPMAN, Mathematics
- Abstract
A framework for coherent pattern extraction and prediction of observables of measure-preserving, ergodic dynamical systems with both atomic and continuous spectral components is developed. This framework is based on an approximation of the generator of the system by a compact operator W_τ on a reproducing kernel Hilbert space (RKHS). A key element of this approach is that W_τ is skew-adjoint (unlike regularization approaches based on the addition of diffusion), and thus can be characterized by a unique projection-valued measure, discrete by compactness, and an associated orthonormal basis of eigenfunctions. These eigenfunctions can be ordered in terms of a Dirichlet energy on the RKHS, and provide a notion of coherent observables under the dynamics akin to the Koopman eigenfunctions associated with the atomic part of the spectrum. In addition, the regularized generator has a well-defined Borel functional calculus allowing the construction of a unitary evolution group {e^{t W_τ}}_{t ∈ R} on the RKHS, which approximates the unitary Koopman evolution group of the original system. We establish convergence results for the spectrum and Borel functional calculus of the regularized generator to those of the original system in the limit τ → 0+. Convergence results are also established for a data-driven formulation, where these operators are approximated using finite-rank operators obtained from observed time series. An advantage of working in spaces of observables with an RKHS structure is that one can perform pointwise evaluation and interpolation through bounded linear operators, which is not possible in L^p spaces. This enables the evaluation of data-approximated eigenfunctions on previously unseen states, as well as data-driven forecasts initialized with pointwise initial data (as opposed to probability densities in L^p). The pattern extraction and prediction framework is numerically applied to ergodic dynamical systems with atomic and continuous spectra, namely a quasiperiodic torus rotation, the Lorenz 63 system, and the Rössler system.
- Published
- 2018
16. Just Interpolate: Kernel 'Ridgeless' Regression Can Generalize
- Author
-
Alexander Rakhlin and Tengyuan Liang
- Subjects
Statistics and Probability, FOS: Computer and information sciences, Computer Science - Machine Learning, Machine Learning (cs.LG), Statistics - Machine Learning, Machine Learning (stat.ML), Mathematics - Statistics Theory, Statistics Theory (math.ST), FOS: Mathematics, 68Q32, 62G08, spectral decay, Upper and lower bounds, Regularization (mathematics), kernel methods, implicit regularization, Applied mathematics, Eigenvalues and eigenvectors, Minimum-norm interpolation, Covariance, data-dependent bounds, Kernel method, reproducing kernel Hilbert spaces, Kernel (statistics), Statistics, Probability and Uncertainty, MNIST database, Test data, high dimensionality, Mathematics
- Abstract
In the absence of explicit regularization, Kernel "Ridgeless" Regression with nonlinear kernels has the potential to fit the training data perfectly. It has been observed empirically, however, that such interpolated solutions can still generalize well on test data. We isolate a phenomenon of implicit regularization for minimum-norm interpolated solutions which is due to a combination of high dimensionality of the input data, curvature of the kernel function, and favorable geometric properties of the data such as an eigenvalue decay of the empirical covariance and kernel matrices. In addition to deriving a data-dependent upper bound on the out-of-sample error, we present experimental evidence suggesting that the phenomenon occurs in the MNIST dataset.
- Published
- 2018
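The interpolating estimator of entry 16 is easy to state: with the ridge set to zero, take the minimum-norm solution of the kernel interpolation system. The sketch below uses a pseudoinverse and synthetic high-dimensional data purely for shape; it does not reproduce the paper's experiments:

```python
import numpy as np

# Kernel "ridgeless" regression: interpolate the training data exactly with
# the minimum-norm coefficient vector alpha = K^+ y (no explicit ridge).
rng = np.random.default_rng(7)
d, n = 50, 200                                   # high-dimensional inputs
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = np.sin(X.sum(axis=1)) + 0.1 * rng.standard_normal(n)

def kern(A, B):
    d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-d2)

alpha = np.linalg.pinv(kern(X, X)) @ y           # min-norm interpolant
X_test = rng.standard_normal((5, d)) / np.sqrt(d)
pred = kern(X_test, X) @ alpha                   # out-of-sample predictions
```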
17. Ensemble and Multiple Kernel Regressors: Which Is Better?
- Author
-
Akira Tanaka, Hideyuki Imai, Hirofumi Takebayashi, Mineichi Kudo, and Ichigaku Takigawa
- Subjects
Applied Mathematics, generalization error, Pattern recognition, ensemble kernel regressor, multiple kernel regressor, kernel regression, Kernel method, Kernel principal component analysis, Kernel embedding of distributions, Variable kernel density estimation, Polynomial kernel, Radial basis function kernel, Kernel smoother, reproducing kernel Hilbert spaces, Signal Processing, Computer Graphics and Computer-Aided Design, Artificial intelligence, Electrical and Electronic Engineering, Mathematics
- Abstract
For the last few decades, learning with multiple kernels, represented by the ensemble kernel regressor and the multiple kernel regressor, has attracted much attention in the field of kernel-based machine learning. Although their efficacy has been investigated numerically in many works, their theoretical grounding has not been investigated sufficiently, since a framework for evaluating them has been lacking. In this paper, we introduce a unified framework for evaluating kernel regressors with multiple kernels. On the basis of this framework, we analyze the generalization errors of the ensemble kernel regressor and the multiple kernel regressor, and we give a sufficient condition for the ensemble kernel regressor to outperform the multiple kernel regressor in terms of the generalization error in the noise-free case. We also show by examples that, when the sufficient condition does not hold, each kernel regressor can be better than the other, which supports the importance of the sufficient condition.
- Published
- 2015
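The two estimators compared in entry 17 differ only in where the combination happens, which a few lines make explicit. This is a schematic contrast under assumed kernels, data, and regularization, not the paper's analytical framework:

```python
import numpy as np

# Ensemble kernel regressor: average the fits of single-kernel ridge
# regressions. Multiple kernel regressor: one ridge fit with the combined
# (averaged) kernel. The paper gives a sufficient condition for the former
# to have smaller generalization error in the noise-free case.
rng = np.random.default_rng(8)
x = np.sort(rng.uniform(-1, 1, 40))
y = np.sign(x) * x**2 + 0.05 * rng.standard_normal(40)
xs = np.linspace(-1, 1, 200)
lam = 1e-3

kernels = [lambda a, b, s=s: np.exp(-np.subtract.outer(a, b) ** 2 / s)
           for s in (0.01, 0.1, 1.0)]

def ridge_fit(k):
    coef = np.linalg.solve(k(x, x) + lam * np.eye(len(x)), y)
    return k(xs, x) @ coef

ensemble = np.mean([ridge_fit(k) for k in kernels], axis=0)
k_avg = lambda a, b: sum(k(a, b) for k in kernels) / len(kernels)
multiple = ridge_fit(k_avg)
```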
18. Kernel Metrics on Normal Cycles and Application to Curve Matching
- Author
-
Pierre Roussillon and Joan Alexis Glaunès
- Subjects
Curvature Measures, Normal Cycles, normal cycle, currents, varifolds, curve registration, curve matching, Image registration, Shape Registration, Computational anatomy, Brain Imaging, Morphometry, geometric measure theory, Curvature, Measure (mathematics), Euclidean space, Reproducing Kernel Hilbert Spaces, LDDMM, Large deformation diffeomorphic metric mapping, Diffeomorphic Mapping, diffeomorphic models, Kernel (image processing), General Mathematics, Applied Mathematics, Discrete mathematics, Algebra, Mathematics, AMS subject classifications 53C65, 49Q15, 28A75, 37K65, 46E22, 65D18, 62H35, 68U10, 68U05, [SDV.IB.IMA] Life Sciences [q-bio]/Bioengineering/Imaging, [INFO.INFO-TI] Computer Science [cs]/Image Processing [eess.IV], [INFO.INFO-BI] Computer Science [cs]/Bioinformatics [q-bio.QM], [MATH.MATH-DG] Mathematics [math]/Differential Geometry [math.DG], [SCCO.NEUR] Cognitive science/Neuroscience, [SDV.NEU] Life Sciences [q-bio]/Neurons and Cognition [q-bio.NC]
- Abstract
In this work we introduce a new dissimilarity measure for shape registration using the notion of normal cycles, a concept from geometric measure theory which allows one to generalize curvature for non-smooth subsets of Euclidean space. Our construction is based on the definition of kernel metrics on the space of normal cycles which take explicit expressions in a discrete setting. This approach is closely related to previous works based on currents and varifolds [26, 9]. We derive the computational setting for discrete curves in R^3, using the Large Deformation Diffeomorphic Metric Mapping framework as the model for deformations. We present synthetic and real data experiments and compare with the currents and varifolds approaches.
- Published
- 2016
19. Reproducing kernel Hilbert spaces and variable metric algorithms in PDE constrained shape optimisation
- Author
-
Martin Eigel and Kevin Sturm
- Subjects
Control and Optimization, 35J15, 46E22, 49Q10, 49K20, 49K40, radial kernels, Domain (mathematical analysis), FOS: Mathematics, Mathematics - Optimization and Control, Optimization and Control (math.OC), shape optimization, gradient method, variable metric, Applied Mathematics, Hilbert space, Expression (mathematics), reproducing kernel Hilbert spaces, Kernel (statistics), Metric (mathematics), Mathematik, Algorithm, Software, Mathematics
- Abstract
In this paper we investigate and compare different gradient algorithms designed for the domain expression of the shape derivative. Our main focus is to examine the usefulness of reproducing kernel Hilbert spaces for PDE constrained shape optimisation problems. We show that radial kernels provide convenient formulas for the shape gradient that can be efficiently used in numerical simulations. The shape gradients associated with radial kernels depend on a so-called smoothing parameter that allows a smoothness adjustment of the shape during the optimisation process. Moreover, this smoothing parameter can be used to modify the movement of the shape. The theoretical findings are verified in a number of numerical experiments.
- Published
- 2016
20. Regularized linear system identification using atomic, nuclear and kernel-based norms: The role of the stability constraint
- Author
-
Gianluigi Pillonetto, Tianshi Chen, Alessandro Chiuso, Giuseppe De Nicolao, and Lennart Ljung
- Subjects
FOS: Computer and information sciences, Mathematical optimization, Bayesian probability, Atomic and nuclear norms, Bayesian interpretation of regularization, Gaussian processes, Gaussian process, Hankel operator, Kernel-based regularization, Lasso, Linear system identification, Reproducing kernel Hilbert spaces, Control and Systems Engineering, Electrical and Electronic Engineering, Systems and Control (eess.SY), Computer Science - Systems and Control, Computer Science - Learning, Machine Learning (cs.LG), Regularization (mathematics), Oracle, FOS: Electrical engineering, electronic engineering, information engineering, Applied mathematics, Impulse response, Estimator, Control Engineering, Spline (mathematics), Mathematics
- Abstract
Inspired by ideas taken from the machine learning literature, new regularization techniques have been recently introduced in linear system identification. In particular, all the adopted estimators solve a regularized least squares problem, differing in the nature of the penalty term assigned to the impulse response. Popular choices include atomic and nuclear norms (applied to Hankel matrices) as well as norms induced by the so-called stable spline kernels. In this paper, a comparative study of estimators based on these different types of regularizers is reported. Our findings reveal that stable spline kernels outperform approaches based on atomic and nuclear norms since they suitably embed information on impulse response stability and smoothness. This point is illustrated using the Bayesian interpretation of regularization. We also design a new class of regularizers defined by "integral" versions of stable spline/TC kernels. Under quite realistic experimental conditions, the new estimators outperform classical prediction error methods also when the latter are equipped with an oracle for model order selection.
- Published
- 2016
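A minimal version of the kernel-based route of entry 20 estimates a FIR impulse response with the TC kernel, one member of the stable spline family named in the abstract; the system, hyperparameters, and noise level are illustrative:

```python
import numpy as np

# Regularized FIR estimation with the TC (stable spline) kernel
# K(i, j) = c * lambda**max(i, j): the estimate is the regularized
# least-squares solution g_hat = K Phi^T (Phi K Phi^T + sig2*I)^{-1} y.
rng = np.random.default_rng(9)
n, T = 50, 400                                   # FIR length, data length
g_true = 0.8 ** np.arange(n) * np.sin(0.5 * np.arange(n))
u = rng.standard_normal(T)                       # input signal
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)]
                for t in range(T)])              # Toeplitz regression matrix
y = Phi @ g_true + 0.1 * rng.standard_normal(T)  # noisy output

c, lmb, sig2 = 1.0, 0.9, 0.01
idx = np.arange(n)
K = c * lmb ** np.maximum.outer(idx, idx)        # TC kernel: encodes stability
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sig2 * np.eye(T), y)
```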
21. Learning gradients via an early stopping gradient descent method
- Author
-
Xin Guo
- Subjects
Numerical Analysis, Mathematical optimization, Early stopping, Function space, Applied Mathematics, General Mathematics, Gradient learning, Approximation error, Tikhonov regularization, Dimension (vector space), Ranking, Partial derivative, Gradient descent, Reproducing kernel Hilbert spaces, Analysis, Mathematics
- Abstract
We propose an early stopping algorithm for learning gradients. The motivation is to choose "useful" or "relevant" variables by a ranking method according to norms of partial derivatives in some function spaces. In the algorithm, we use an early stopping technique, instead of the classical Tikhonov regularization, to avoid over-fitting. After stating dimension-dependent learning rates valid for any dimension of the input space, we present a novel error bound when the dimension is large. Our novelty is that the power index of the learning rates is independent of the dimension of the input space.
- Published
- 2010
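Entry 21 replaces the Tikhonov penalty with a stopping time; in an RKHS the resulting iteration is plain gradient descent on the least-squares loss in the kernel coefficient parametrization. A sketch under assumed kernel, data, and stopping time:

```python
import numpy as np

# Early-stopping regularization in an RKHS: run functional gradient descent
# alpha <- alpha - eta * (K alpha - y) and stop after t_star steps; the
# stopping time plays the role of an inverse regularization parameter.
rng = np.random.default_rng(10)
x = np.sort(rng.uniform(0, 1, 60))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(60)
K = np.exp(-np.subtract.outer(x, x) ** 2 / 0.02)

eta = 1.0 / np.linalg.eigvalsh(K)[-1]        # step size from top eigenvalue
alpha = np.zeros_like(y)
t_star = 200                                 # early stopping time
for _ in range(t_star):
    alpha -= eta * (K @ alpha - y)           # residuals drive the update

f_hat = K @ alpha                            # fitted values at training points
```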
22. Derivative reproducing properties for kernel methods in learning theory
- Author
-
Ding-Xuan Zhou
- Subjects
Derivative reproducing, Representer theorem, Learning theory, Hermite learning and semi-supervised learning, Applied Mathematics, Mathematical analysis, Computational Mathematics, Kernel method, Kernel principal component analysis, Kernel embedding of distributions, Polynomial kernel, Radial basis function kernel, Kernel (statistics), Reproducing kernel Hilbert spaces, Mathematics
- Abstract
The regularity of functions from reproducing kernel Hilbert spaces (RKHSs) is studied in the setting of learning theory. We provide a reproducing property for partial derivatives up to order s when the Mercer kernel is C^{2s}. For such a kernel on a general domain we show that the RKHS can be embedded into the function space C^s. These observations yield a representer theorem for regularized learning algorithms involving data for function values and gradients. Examples of Hermite learning and semi-supervised learning penalized by gradients on data are considered.
- Published
- 2008
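The key identity behind entry 22 can be stated in one line; this is a paraphrase of the reproducing property described in the abstract, in standard notation:

```latex
% For a Mercer kernel K of smoothness C^{2s} with RKHS H_K, partial
% derivatives up to order s are reproduced by differentiated kernel sections:
\[
  (\partial^{\alpha} f)(x)
  \;=\;
  \bigl\langle f,\; \partial_{x}^{\alpha} K(x, \cdot) \bigr\rangle_{H_K},
  \qquad |\alpha| \le s, \quad f \in H_K .
\]
% Since the right-hand side is a bounded linear functional of f, the RKHS
% embeds into C^s, and a representer theorem follows for learning schemes
% that use both function values and gradients.
```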
23. Fully online classification by regularization
- Author
-
Gui-Bo Ye and Ding-Xuan Zhou
- Subjects
Mathematical optimization, Early stopping, Applied Mathematics, Semi-supervised learning, Classification algorithm, Tikhonov regularization, Online learning, Error analysis, Regularization, Hinge loss, Proximal gradient methods for learning, Reproducing kernel Hilbert spaces, Algorithm, Mathematics
- Abstract
In this paper we consider fully online learning algorithms for classification generated from Tikhonov regularization schemes associated with general convex loss functions and reproducing kernel Hilbert spaces. For such a fully online algorithm, the regularization parameter changes in each learning step. This is the essential difference from the partially online algorithm, which uses a fixed regularization parameter. We first present a novel approach to the drift error incurred by the change of the regularization parameter. Then we estimate the error of the learning process for the strong approximation in the reproducing kernel Hilbert space. Finally, learning rates are derived from decays of the regularization error. The convexity of the loss function plays an important role in our analysis. Concrete learning rates are given for the hinge loss and the support vector machine q-norm loss.
- Published
- 2007
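A toy version of a fully online kernel classifier in the spirit of entry 23: stochastic subgradient steps on a regularized hinge loss, with a regularization parameter that changes at every step. The decay schedules, kernel, and data stream are assumptions made for illustration, not the paper's scheme:

```python
import numpy as np

# Fully online kernel classification: at step t, draw (x_t, y_t), evaluate
# the current expansion f_t, shrink old coefficients (regularization with a
# step-dependent lambda_t), and add a new kernel section on hinge-loss
# violations.
rng = np.random.default_rng(11)
kern = lambda a, b: np.exp(-(a - b) ** 2 / 0.1)

centers, coefs = [], []
for t in range(1, 301):
    x_t = rng.uniform(-1, 1)
    y_t = 1.0 if x_t > 0 else -1.0               # toy labels
    lam_t = 1.0 / np.sqrt(t)                     # step-varying regularization
    eta_t = 1.0 / (lam_t * t)                    # step size
    f_t = sum(c * kern(x_t, s) for s, c in zip(centers, coefs))
    coefs = [(1 - eta_t * lam_t) * c for c in coefs]  # shrink: penalty term
    if y_t * f_t < 1:                            # hinge-loss subgradient step
        centers.append(x_t)
        coefs.append(eta_t * y_t)
```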
24. Multivariate integration in weighted Hilbert spaces based on Walsh functions and weighted Sobolev spaces
- Author
-
Josef Dick and Friedrich Pillichshammer
- Subjects
Statistics and Probability, Discrete mathematics, Numerical Analysis, Control and Optimization, Algebra and Number Theory, Applied Mathematics, General Mathematics, Hilbert space, Digital nets, Multivariate integration, Sobolev space, Sobolev inequality, Interpolation space, Kernel principal component analysis, Kernel embedding of distributions, Kernel (statistics), Reproducing Kernel Hilbert spaces, Mathematics
- Abstract
We introduce a weighted reproducing kernel Hilbert space which is based on Walsh functions. The worst-case error for integration in this space is studied, especially with regard to (t, m, s)-nets. It is found that there exists a digital (t, m, s)-net which achieves a strong tractability worst-case error bound under a certain condition on the weights. We also investigate the worst-case error of integration in weighted Sobolev spaces. As the main tool we define a digital shift invariant kernel associated with the kernel of the weighted Sobolev space. This allows us to study the mean square worst-case error of randomly digitally shifted digital (t, m, s)-nets. As this digital shift invariant kernel is almost the same as the kernel for the Hilbert space based on Walsh functions, we can derive results for the weighted Sobolev space based on the analysis of the Walsh function space. We show that there exists a (t, m, s)-net which achieves the best possible convergence order for integration in weighted Sobolev spaces and is strongly tractable under the same condition on the weights as for lattice rules.
- Published
- 2005
25. On analytic sampling theory
- Author
-
Lance L. Littlejohn and Antonio G. García
- Subjects
Analytic space, Sampling series, Function space, Applied Mathematics, Analytic Hilbert space-valued functions, Mathematical analysis, Hilbert space, Field (mathematics), Function (mathematics), Combinatorics, Computational Mathematics, Bounded function, Orthonormal basis, Reproducing kernel Hilbert spaces, Mathematics
- Abstract
Let (H, ⟨·,·⟩_H) be a complex, separable Hilbert space with orthonormal basis {x_n}_{n=1}^∞, and let Ω be a domain in C, the field of complex numbers. Suppose K is an H-valued function defined on Ω. For each x ∈ H, define f_x(z) = ⟨K(z), x⟩_H, and let H denote the collection of all such functions f_x. In this paper, we endow H with the structure of a reproducing kernel Hilbert space. Furthermore, we show that each element of H is analytic on Ω if and only if K is analytic on Ω or, equivalently, if and only if ⟨K(z), x_n⟩_H is analytic for each n ∈ N and ‖K(·)‖_H is bounded on all compact subsets of Ω. In this setting, an abstract version of the analytic Kramer theorem is exhibited. Some examples considering different H spaces are given to illustrate these new results.
- Published
- 2004
26. Hybrid wavelet-support vector classification of waveforms
- Author
- Gabriele Steidl and Daniel J. Strauss
- Subjects
Support vector machines ,business.industry ,Applied Mathematics ,Pattern recognition ,Wavelets ,Filter bank ,Waveform recognition ,Novelty detection ,Hybrid algorithm ,Support vector machine ,Radial basis functions ,Frames ,Computational Mathematics ,ComputingMethodologies_PATTERNRECOGNITION ,Wavelet ,Adapted filter banks ,Margin (machine learning) ,Preprocessor ,Radial basis function ,Artificial intelligence ,business ,Reproducing kernel Hilbert spaces ,Algorithm ,Mathematics - Abstract
The support vector machine (SVM) represents a new and very promising technique for machine learning tasks involving classification, regression, or novelty detection. Improvements of its generalization ability can be achieved by incorporating prior knowledge of the task at hand. We propose a new hybrid algorithm consisting of signal-adapted wavelet decompositions and hard margin SVMs for waveform classification. The adaptation of the wavelet decompositions is tailored to hard margin SV classifiers with radial basis functions as kernels. It allows the representation of the data to be optimized before the SVM is trained and does not suffer from computationally expensive validation techniques. We assess the performance of our algorithm against the background of current concerns in medical diagnostics, namely the classification of endocardial electrograms and the detection of otoacoustic emissions. Here the performance of hard margin SVMs can be significantly improved by our adapted preprocessing step.
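A rough sketch of such a hybrid pipeline, with a fixed db4 wavelet standing in for the signal-adapted decomposition and a large C approximating the hard margin (PyWavelets and scikit-learn are assumed; this is not the authors' implementation):

import numpy as np
import pywt                               # PyWavelets
from sklearn.svm import SVC

def wavelet_features(signals, wavelet="db4", level=4):
    """Energy of each subband of a discrete wavelet decomposition; the
    paper adapts the filter bank to the classifier, while here a fixed
    db4 wavelet is used for brevity."""
    feats = []
    for s in signals:
        coeffs = pywt.wavedec(s, wavelet, level=level)
        feats.append([np.sum(c ** 2) for c in coeffs])
    return np.array(feats)

def train_hybrid(X, y):
    """X: (n_signals, signal_len) waveforms, y: class labels."""
    F = wavelet_features(X)
    # a very large C approximates a hard-margin SVM with an RBF kernel
    return SVC(kernel="rbf", C=1e6, gamma="scale").fit(F, y)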
- Published
- 2002
27. Shape deformation analysis from the optimal control viewpoint
- Author
- Laurent Younes, Sylvain Arguillère, Alain Trouvé, Emmanuel Trélat, Laboratoire Jacques-Louis Lions (LJLL), Université Pierre et Marie Curie - Paris 6 (UPMC)-Université Paris Diderot - Paris 7 (UPD7)-Centre National de la Recherche Scientifique (CNRS), Centre de Mathématiques et de Leurs Applications (CMLA), École normale supérieure - Cachan (ENS Cachan)-Centre National de la Recherche Scientifique (CNRS), Center for Imaging Science (CIS), and Johns Hopkins University (JHU)
- Subjects
General Mathematics ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Pontryagin's minimum principle ,symbols.namesake ,optimal control ,Pontryagin maximum principle ,shape deformation analysis ,0202 electrical engineering, electronic engineering, information engineering ,FOS: Mathematics ,Applied mathematics ,58E99, 49Q10, 46E22, 49J15, 62H35, 53C22, 58D05 ,0101 mathematics ,Mathematics - Optimization and Control ,Mathematics ,ComputingMethodologies_COMPUTERGRAPHICS ,Applied Mathematics ,Numerical analysis ,Hilbert space ,Optimal control ,reproducing kernel Hilbert spaces ,Optimization and Control (math.OC) ,Norm (mathematics) ,symbols ,020201 artificial intelligence & image processing ,Vector field ,[MATH.MATH-OC]Mathematics [math]/Optimization and Control [math.OC] ,Solving the geodesic equations ,geodesic equations ,Shape analysis (digital geometry) - Abstract
A crucial problem in shape deformation analysis is to determine a deformation of a given shape into another one which is optimal for a certain cost. This problem has a number of applications, in particular in medical imaging. In this article we provide a new general approach to shape deformation analysis, within the framework of optimal control theory, in which a deformation is represented as the flow of diffeomorphisms generated by time-dependent vector fields. Using reproducing kernel Hilbert spaces of vector fields, the general shape deformation analysis problem is formulated as an infinite-dimensional optimal control problem with state and control constraints. In this problem, the states are diffeomorphisms and the controls are vector fields, both of them being subject to constraints. The functional to be minimized is the sum of a first term, defined as a geometric norm of the control (the kinetic energy of the deformation), and a data-attachment term providing a geometric distance to the target shape. This point of view has several advantages. First, it allows one to model general constrained shape analysis problems, which opens new issues in this field. Second, using an extension of the Pontryagin maximum principle, one can characterize the optimal solutions of the shape deformation problem in a very general way as the solutions of constrained geodesic equations. Finally, recasting general algorithms of optimal control in terms of shape analysis yields new efficient numerical methods in shape deformation analysis. Overall, the optimal control point of view unifies and generalizes different theoretical and numerical approaches to shape deformation problems, and also allows us to design new approaches. The optimal control problems that result from this construction are infinite-dimensional, involve constraints, and are thus nonstandard. In this article we also provide a rigorous and complete analysis of the infinite-dimensional shape space problem with constraints and of its finite-dimensional approximations.
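A minimal sketch of the unconstrained landmark case, where the geodesic equations reduce to Hamiltonian ODEs driven by a Gaussian reproducing kernel (all parameters and the Euler integrator are illustrative assumptions, not the paper's numerical method):

import numpy as np

def geodesic_shooting(q0, p0, sigma=1.0, steps=100, dt=0.01):
    """Euler integration of the landmark geodesic equations for the
    Hamiltonian H = 0.5 * sum_ij (p_i . p_j) k(q_i, q_j) with a Gaussian
    kernel k; q holds landmark positions, p the momenta."""
    q, p = q0.copy(), p0.copy()
    for _ in range(steps):
        diff = q[:, None, :] - q[None, :, :]           # pairwise q_i - q_j
        K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))
        dq = K @ p                                     # dq_i/dt = sum_j k_ij p_j
        pp = p @ p.T                                   # inner products p_i . p_j
        # dp_i/dt = -dH/dq_i = sum_j (p_i.p_j) (q_i - q_j) k_ij / sigma^2
        dp = (pp * K / sigma ** 2)[:, :, None] * diff
        q += dt * dq
        p += dt * dp.sum(axis=1)
    return q, p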
- Published
- 2014
- Full Text
- View/download PDF
28. Uniform Distribution, Discrepancy, and Reproducing Kernel Hilbert Spaces
- Author
- Clemens Amstler and Peter Zinterhof
- Subjects
Statistics and Probability ,Numerical Analysis ,Control and Optimization ,Algebra and Number Theory ,Hilbert manifold ,Representer theorem ,Applied Mathematics ,General Mathematics ,Mathematical analysis ,Hilbert space ,abstract uniform distribution ,symbols.namesake ,reproducing kernel Hilbert spaces ,Kernel embedding of distributions ,Unit cube ,Kernel (statistics) ,discrepancy ,numerical integration ,symbols ,Reproducing kernel Hilbert space ,Mathematics ,Bergman kernel - Abstract
In this paper we define a notion of uniform distribution and discrepancy of sequences in an abstract set E through reproducing kernel Hilbert spaces of functions on E. In the case of the finite-dimensional unit cube these discrepancies are very closely related to the worst-case error of numerical integration of functions in a reproducing kernel Hilbert space. In the compact case we show that the discrepancy tends to zero if and only if the sequence is uniformly distributed in our sense. Next we prove an existence theorem for such uniformly distributed sequences and investigate the relation to the classical notion of uniform distribution. Some examples conclude the paper.
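In the classical unit-cube case this kernel discrepancy is available in closed form; a sketch for the RKHS with kernel K(x, y) = ∏_j min(1 − x_j, 1 − y_j), where it reduces to the L2-star discrepancy via Warnock's formula (our illustration, not code from the paper):

import numpy as np

def kernel_star_discrepancy(P):
    """L2-star discrepancy of points P (an N x s array in [0,1]^s), equal
    to the worst-case integration error in the RKHS with kernel
    K(x, y) = prod_j min(1 - x_j, 1 - y_j)  (Warnock's formula)."""
    N, s = P.shape
    a = 1.0 - P
    term1 = 3.0 ** (-s)
    term2 = (2.0 / N) * np.sum(np.prod((1.0 - P ** 2) / 2.0, axis=1))
    m = np.prod(np.minimum(a[:, None, :], a[None, :, :]), axis=-1)
    term3 = np.sum(m) / N ** 2
    return np.sqrt(term1 - term2 + term3)

rng = np.random.default_rng(0)
print(kernel_star_discrepancy(rng.random((256, 2))))   # ~ N**-0.5 decay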
- Published
- 2001
29. An extension of Mercer theorem to matrix-valued measurable kernels
- Author
- Veronica Umanità, Silvia Villa, and Ernesto De Vito
- Subjects
Discrete mathematics ,Hilbert manifold ,Representer theorem ,Applied Mathematics ,Hilbert space ,Compact operator ,Compact operator on Hilbert space ,symbols.namesake ,reproducing kernel Hilbert spaces ,Mercer theorem ,symbols ,Projection-valued measure ,Mathematics ,Reproducing kernel Hilbert space ,Bergman kernel - Abstract
We extend the classical Mercer theorem to reproducing kernel Hilbert spaces whose elements are functions from a measurable space X into ℂⁿ. Given a finite measure μ on X, we represent the reproducing kernel K as a convergent series in terms of the eigenfunctions of a suitable compact operator depending on K and μ. Our result holds under the mild assumption that K is measurable and the associated Hilbert space is separable. Furthermore, we show that X has a natural second countable topology with respect to which the eigenfunctions are continuous and such that the series representing K converges uniformly to K on compact subsets of X × X, provided that the support of μ is X.
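Schematically, the representation obtained has the familiar Mercer form

K(x, y) = Σ_j σ_j e_j(x) e_j(y)^*,

where (σ_j, e_j) are the eigenvalues and (ℂⁿ-valued) eigenfunctions of the integral operator (L f)(x) = ∫_X K(x, y) f(y) dμ(y); in the scalar case n = 1 the factor e_j(y)^* is simply the complex conjugate \overline{e_j(y)}.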
- Published
- 2013
30. The Kramer sampling theorem revisited
- Author
- M. A. Hernández-Medina, Antonio G. García, María José Muñoz-Bouzo, and Ministerio de Ciencia e Innovación (España)
- Subjects
Semi-inner products ,Matemáticas ,Duality (mathematics) ,01 natural sciences ,Domain (mathematical analysis) ,Lagrange-type interpolation series ,Sampling formulas ,Reproducing distributions ,Nyquist–Shannon sampling theorem ,0101 mathematics ,Differential (infinitesimal) ,Reproducing kernel Hilbert spaces ,Mathematics ,Discrete mathematics ,Telecomunicaciones ,Partial differential equation ,Kramer kernels ,Applied Mathematics ,010102 general mathematics ,Zero-removing property ,Sampling (statistics) ,Reproducing kernel Banach spaces ,010101 applied mathematics ,Kernel (algebra) ,Reproducing kernel Hilbert space - Abstract
The classical Kramer sampling theorem provides a method for obtaining orthogonal sampling formulas. Moreover, it has been the cornerstone of a significant mathematical literature on sampling theorems associated with differential and difference problems. In this work we provide, in a unified way, new and old generalizations of this result corresponding to various settings; all these generalizations are illustrated with examples. All the situations considered in the paper share a basic approach: the functions to be sampled are obtained by duality in a separable Hilbert space ℋ, through an ℋ-valued kernel K defined on an appropriate domain. This work has been supported by the grant MTM2009-08345 from the Spanish Ministerio de Ciencia e Innovación (MICINN).
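For orientation, the classical statement being generalized reads as follows: if {K(·, t_n)}_n is a complete orthogonal set in L²(I), then every function of the form f(t) = ∫_I F(x) K(x, t) dx with F ∈ L²(I) admits the orthogonal sampling expansion

f(t) = Σ_n f(t_n) S_n(t),   S_n(t) = ( ∫_I K(x, t) \overline{K(x, t_n)} dx ) / ∫_I |K(x, t_n)|² dx.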
- Published
- 2013
31. Optimal randomized multilevel algorithms for infinite-dimensional integration on function spaces with ANOVA-type decomposition
- Author
- Jan Baldeaux and Michael Gnewuch
- Subjects
Numerical Analysis ,multilevel algorithms ,ANOVA decomposition ,randomized algorithms ,numerical integration ,reproducing kernel Hilbert spaces ,scrambled polynomial lattice rules ,Function space ,Applied Mathematics ,Hilbert space ,Numerical Analysis (math.NA) ,Integration problem ,Randomized algorithm ,Sobolev space ,Computational Mathematics ,symbols.namesake ,Lattice (order) ,symbols ,FOS: Mathematics ,Mathematics - Numerical Analysis ,Algorithm ,Mathematics - Abstract
In this paper, we consider the infinite-dimensional integration problem on weighted reproducing kernel Hilbert spaces with norms induced by an underlying function space decomposition of ANOVA type. The weights model the relative importance of different groups of variables. We present new randomized multilevel algorithms to tackle this integration problem and prove upper bounds for their randomized error. Furthermore, we provide in this setting the first non-trivial lower error bounds for general randomized algorithms, which, in particular, may be adaptive or non-linear. These lower bounds show that our multilevel algorithms are optimal. Our analysis refines and extends the analysis provided in [F. J. Hickernell, T. Müller-Gronbach, B. Niu, K. Ritter, J. Complexity 26 (2010), 229-254], and our error bounds improve substantially on the error bounds presented there. As an illustrative example, we discuss the unanchored Sobolev space and employ randomized quasi-Monte Carlo multilevel algorithms based on scrambled polynomial lattice rules.
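A generic sketch of the multilevel structure (not the paper's concrete algorithm, which uses scrambled polynomial lattice rules at each level; the toy levels below are invented for illustration):

import numpy as np

def mlmc(sample_diff, n_per_level, rng=None):
    """Multilevel Monte Carlo estimator
        sum_l (1/n_l) sum_i (F_l - F_{l-1})(omega_i),  with F_{-1} = 0,
    where sample_diff(l, n, rng) returns n i.i.d. samples of the coupled
    difference F_l - F_{l-1} (the same randomness drives both levels, which
    keeps the variance of each difference small)."""
    rng = rng or np.random.default_rng()
    return sum(np.mean(sample_diff(l, n, rng))
               for l, n in enumerate(n_per_level))

def coupled_diff(l, n, rng):
    # toy levels F_l(U) = U + 2**-l; the common U cancels in F_l - F_{l-1}
    return (rng.random(n) + 2.0 ** (-l)) if l == 0 else np.full(n, -2.0 ** (-l))

print(mlmc(coupled_diff, [4096, 64, 64, 64]))   # approx E[U] + 2**-3 = 0.625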
- Published
- 2012
32. Self-adjoint, unitary, and normal weighted composition operators in several variables
- Author
- Trieu Le
- Subjects
Nuclear operator ,Applied Mathematics ,Mathematical analysis ,Spectral theorem ,Operator theory ,Fourier integral operator ,Compact operator on Hilbert space ,Quasinormal operator ,Self-adjoint operators ,Functional Analysis (math.FA) ,Mathematics - Functional Analysis ,Normal operators ,Hermitian adjoint ,FOS: Mathematics ,47B38, 47B15, 47B33 ,Weighted composition operators ,Operator norm ,Reproducing kernel Hilbert spaces ,Analysis ,Mathematics - Abstract
We study weighted composition operators on Hilbert spaces of analytic functions on the unit ball with kernels of the form (1 − ⟨z, w⟩)^{−γ} for γ > 0. We find necessary and sufficient conditions for the adjoint of a weighted composition operator to be a weighted composition operator or the inverse of a weighted composition operator. We then obtain characterizations of self-adjoint and unitary weighted composition operators. Normality of these operators is also investigated.
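Such characterizations typically rest on the standard action of a weighted composition operator W_{ψ,φ} f = ψ · (f ∘ φ) on the kernel functions K_w(z) = (1 − ⟨z, w⟩)^{−γ}, namely

W_{ψ,φ}^* K_w = \overline{ψ(w)} K_{φ(w)},

so self-adjointness or unitarity can be tested by comparing both sides on kernels (our summary of a standard identity, not a quotation from the paper).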
- Published
- 2012
- Full Text
- View/download PDF
33. Quaternionic Hilbert spaces and a von Neumann inequality
- Author
- H. Turgay Kaptanoğlu and Daniel Alpay
- Subjects
Pure mathematics ,Hilbert manifold ,47S10 ,Tensor product of Hilbert spaces ,Drury-Arveson space ,Primary 47A60 ,Von Neumann's theorem ,symbols.namesake ,Reproducing kernel Hilbert spaces ,Mathematics ,Numerical Analysis ,Tensor products ,Mathematics::Operator Algebras ,Applied Mathematics ,Topological tensor product ,Mathematical analysis ,Quaternionic Hilbert spaces ,Hilbert space ,Von Neumann inequality ,Rigged Hilbert space ,Computational Mathematics ,47B32 ,symbols ,Abelian von Neumann algebra ,Secondary 46A32 ,Analysis ,Reproducing kernel Hilbert space - Abstract
We show that Drury's proof of the generalisation of the von Neumann inequality to the case of contractive rows of N-tuples of commuting operators still holds in the quaternionic case. The arguments require a seemingly new result on tensor products of quaternionic Hilbert spaces.
- Published
- 2012
34. Large deviation estimates of the crossing probability for pinned Gaussian processes
- Author
- Lucia Caramellino and Barbara Pacchiarotti
- Subjects
Statistics and Probability ,Monte Carlo method ,60F10, 60G15, 65C05 ,01 natural sciences ,symbols.namesake ,010104 statistics & probability ,FOS: Mathematics ,0101 mathematics ,Gaussian process ,Reproducing kernel Hilbert spaces ,Conditioned Gaussian process ,Mathematics ,Fractional Brownian motion ,Computer Science::Information Retrieval ,Applied Mathematics ,Probability (math.PR) ,Mathematical analysis ,010102 general mathematics ,Brownian excursion ,Brownian bridge ,Exit time probability ,Large deviations ,Settore MAT/06 - Probabilita' e Statistica Matematica ,Reflected Brownian motion ,Diffusion process ,symbols ,Large deviations theory ,Mathematics - Probability - Abstract
The paper deals with the asymptotic behavior of the bridge of a Gaussian process conditioned to pass through n fixed points at n fixed past instants. In particular, functional large deviation results are stated for small time. Several examples are considered: fractional Brownian motion (integrated or not) and m-fold integrated Brownian motion. As an application, the asymptotic behavior of the exit probability is studied and used for the practical purpose of computing numerically, via Monte Carlo methods, the hitting probability up to a given time.
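As an illustration of the Monte Carlo computation described above, a sketch for the simplest pinned process, the Brownian bridge, where an exact crossing probability is available for comparison (all parameters are assumptions of the example):

import numpy as np

def bridge_crossing_mc(b, x, T=1.0, n_steps=1000, n_paths=20000, rng=None):
    """Monte Carlo estimate of P(max_{[0,T]} W_t >= b | W_0 = 0, W_T = x)
    for a Brownian bridge, by simulating discretized bridge paths.  The
    discrete maximum slightly underestimates the true one; the exact value
    exp(-2 b (b - x) / T) (valid for b >= max(0, x)) serves as a check."""
    rng = rng or np.random.default_rng()
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
    # pin the endpoint: bridge_t = W_t - (t/T) * (W_T - x)
    bridge = W - np.outer(np.ones(n_paths), t / T) * (W[:, -1:] - x)
    return np.mean(bridge.max(axis=1) >= b)

b, x = 1.0, 0.2
print(bridge_crossing_mc(b, x), np.exp(-2 * b * (b - x)))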
- Published
- 2008
35. On a class of generalized integrands
- Author
- Marzia De Donno
- Subjects
Statistics and Probability ,Convergence of semimartingales ,Generalized integrands ,Infinite dimensional stochastic integration ,Measure-valued integrands ,Reproducing kernel Hilbert spaces ,Stochastic process ,Applied Mathematics ,Mathematical analysis ,Special class ,Settore SECS-S/06 - METODI MATEMATICI DELL'ECONOMIA E DELLE SCIENZE ATTUARIALI E FINANZIARIE ,Stochastic integral ,INFINITE-DIMENSIONAL STOCHASTIC INTEGRATION ,Stochastic integration ,CONVERGENCE OF SEMIMARTINGALES ,GENERALIZED INTEGRANDS ,MEASURE-VALUED INTEGRANDS ,REPRODUCING KERNEL HILBERT SPACES ,Semimartingale ,Square-integrable function ,Applied mathematics ,Continuous parameter ,Statistics, Probability and Uncertainty ,Martingale (probability theory) ,Mathematics - Abstract
In the framework of the theory of stochastic integration with respect to a family of semimartingales depending on a continuous parameter, introduced by De Donno and Pratelli as a mathematical background to the theory of bond markets, we analyze a special class of integrands that preserve some nice properties of the finite-dimensional stochastic integral. In particular, we focus our attention on the class of processes considered by Mikulevicius and Rozovskii for the case of a locally square integrable cylindrical martingale and which includes an appropriate set of measure-valued processes.
- Published
- 2007
36. Frames, Riesz bases, and sampling expansions in Banach spaces via semi-inner products
- Author
- Jun Zhang and Haizhang Zhang
- Subjects
Pure mathematics ,Semi-inner products ,Approximation property ,Eberlein–Šmulian theorem ,Banach space ,010103 numerical & computational mathematics ,Banach manifold ,01 natural sciences ,Shannonʼs sampling expansions ,0101 mathematics ,C0-semigroup ,Lp space ,Reproducing kernel Hilbert spaces ,Mathematics ,Discrete mathematics ,Mathematics::Functional Analysis ,Riesz–Fischer sequences ,Applied Mathematics ,010102 general mathematics ,Infinite-dimensional vector function ,Riesz bases ,Reproducing kernel Banach spaces ,Gaussian kernels ,Frames ,Banach spaces ,Interpolation space ,Duality mappings ,Bessel sequences - Abstract
Frames in a Banach space B have been defined in some recent references as sequences in its dual space B*. We propose instead to define them as collections of elements in B by making use of semi-inner products. Classical theory on frames and Riesz bases is generalized under this new perspective. We then aim at establishing the Shannon sampling theorem in Banach spaces. The existence of such expansions in translation-invariant reproducing kernel Hilbert and Banach spaces is discussed.
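Schematically, with a compatible semi-inner product [·,·] on B, a family {g_n} ⊂ B is a frame when there are constants 0 < A ≤ C with

A ‖f‖_B² ≤ Σ_n |[f, g_n]|² ≤ C ‖f‖_B²   for all f ∈ B,

so the frame coefficients are formed inside B itself rather than in B*. (The ℓ²-type bounds are shown only for illustration; the precise definition may involve more general sequence-space norms.)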
- Full Text
- View/download PDF
37. Hermite learning with gradient data
- Author
- Ding-Xuan Zhou, Lei Shi, and Xin Guo
- Subjects
Representer theorem ,Hermite polynomials ,Learning theory ,Applied Mathematics ,Mathematical analysis ,Hilbert space ,MathematicsofComputing_NUMERICALANALYSIS ,Integral operator ,Block matrix ,Operator theory ,Computational Mathematics ,symbols.namesake ,Hermite learning ,Approximation error ,Kernel (statistics) ,symbols ,Applied mathematics ,Coefficient matrix ,Sampling operator ,Reproducing kernel Hilbert spaces ,Mathematics - Abstract
The problem of learning from data involving function values and gradients is considered in a framework of least-squares regularized regression in reproducing kernel Hilbert spaces. The algorithm is implemented by a linear system whose coefficient matrix involves block matrices for generating both graph Laplacians and Hessians. The additional data on function gradients improve the learning performance of the algorithm. Error analysis is carried out by means of sampling operators for the sample error and integral operators in Sobolev spaces for the approximation error.
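A one-dimensional sketch of regression from function values plus gradient data, assuming a Gaussian kernel; the block linear system below is a simplified stand-in for the paper's coefficient matrix, not its exact construction:

import numpy as np

def hermite_krr(x, y, dy, sigma=1.0, lam=1e-6):
    """Fit values y and derivatives dy at points x with a Gaussian kernel.
    The block Gram matrix pairs the kernel with its first and mixed second
    derivatives; solving the regularized system gives coefficients for the
    representers k(x_i, .) and (d/ds) k(s, .)|_{s = x_i}."""
    d = x[:, None] - x[None, :]
    K = np.exp(-d ** 2 / (2 * sigma ** 2))
    Ks = -d / sigma ** 2 * K                              # d/ds k(s, t)
    Kst = (1 / sigma ** 2 - d ** 2 / sigma ** 4) * K      # d2/(ds dt) k
    A = np.block([[K, -Ks], [Ks, Kst]])                   # d/dt k = -d/ds k
    coef = np.linalg.solve(A + lam * np.eye(2 * len(x)),
                           np.concatenate([y, dy]))
    def f(t):
        dd = x[:, None] - np.atleast_1d(t)[None, :]
        k = np.exp(-dd ** 2 / (2 * sigma ** 2))
        ks = -dd / sigma ** 2 * k
        return coef[:len(x)] @ k + coef[len(x):] @ ks
    return f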
- Full Text
- View/download PDF
38. Tractability of quasilinear problems I: General results
- Author
- Henryk Woźniakowski and Arthur G. Werschulz
- Subjects
Mathematics(all) ,Finite-order weights ,General Mathematics ,Tractability ,Mathematics::Analysis of PDEs ,Quasi-linear problems ,Dirichlet distribution ,Domain (mathematical analysis) ,symbols.namesake ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,Argument (linguistics) ,Reproducing kernel Hilbert spaces ,Mathematics ,High-dimensional problems ,Numerical Analysis ,Series (mathematics) ,Applied Mathematics ,Complexity ,Lipschitz continuity ,Computer science ,Algebra ,Product (mathematics) ,symbols ,Computational problem ,Analysis ,Schrödinger's cat - Abstract
The tractability of multivariate problems has usually been studied only for the approximation of linear operators. In this paper we study the tractability of quasilinear multivariate problems. That is, we wish to approximate nonlinear operators S_d(·,·) that depend linearly on the first argument and satisfy a Lipschitz condition with respect to both arguments. Here, both arguments are functions of d variables. Many computational problems of practical importance have this form. Examples include the solution of specific Dirichlet, Neumann, and Schrödinger problems. We show, under appropriate assumptions, that quasilinear problems whose domain spaces are equipped with product or finite-order weights are tractable or strongly tractable in the worst case setting. This paper is the first in a series of papers. Here, we present tractability results for quasilinear problems under general assumptions on quasilinear operators and weights. In future papers, we shall verify these assumptions for quasilinear problems such as the solution of specific Dirichlet, Neumann, and Schrödinger problems.
- Full Text
- View/download PDF
39. Learning rates for regularized classifiers using multivariate polynomial kernels
- Author
- Lizhong Peng, Hongzhi Tong, and Di-Rong Chen
- Subjects
Statistics and Probability ,Numerical Analysis ,Algebra and Number Theory ,Control and Optimization ,business.industry ,Representer theorem ,Bernstein–Durrmeyer polynomials ,General Mathematics ,Applied Mathematics ,Regularization perspectives on support vector machines ,Pattern recognition ,Kernel principal component analysis ,Learning rates ,Tikhonov regularization ,Polynomial kernel ,Kernel embedding of distributions ,Kernel (statistics) ,Polynomial kernels ,Regularized classifiers ,Artificial intelligence ,business ,Algorithm ,Reproducing kernel Hilbert spaces ,Mathematics ,Reproducing kernel Hilbert space - Abstract
Regularized classifiers (a leading example being the support vector machine) are known to be a class of kernel-based classification methods generated from Tikhonov regularization schemes, and polynomial kernels are the original and probably the most important kernels used in them. In this paper, we provide an error analysis for regularized classifiers using multivariate polynomial kernels. We introduce Bernstein–Durrmeyer polynomials, whose reproducing kernel Hilbert space norms and approximation properties in the L1 space play a key role in the analysis of the regularization error. We also present the standard estimation of the sample error, and derive explicit learning rates for these algorithms.
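For concreteness, a minimal sketch of such a classifier with the multivariate polynomial kernel K(x, y) = (1 + ⟨x, y⟩)^d via scikit-learn (an assumption for illustration; the SVC parameter C corresponds roughly to the inverse of the Tikhonov regularization parameter):

import numpy as np
from sklearn.svm import SVC

def polynomial_classifier(X, y, degree=3, C=1.0):
    # gamma=1.0 and coef0=1.0 give exactly the kernel (1 + <x, y>)**degree
    return SVC(kernel="poly", degree=degree, gamma=1.0, coef0=1.0, C=C).fit(X, y)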
- Full Text
- View/download PDF
40. Orthogonality from disjoint support in reproducing kernel Hilbert spaces
- Author
- Haizhang Zhang
- Subjects
Computer Science::Machine Learning ,Pure mathematics ,Representer theorem ,Applied Mathematics ,Mathematical analysis ,Disjoint support ,Hilbert space ,Disjoint sets ,Reproducing kernels ,Kernel principal component analysis ,Sobolev space ,symbols.namesake ,Statistics::Machine Learning ,Kernel embedding of distributions ,Translation invariant kernels ,Sobolev spaces ,symbols ,Invariant (mathematics) ,Orthogonality ,Reproducing kernel Hilbert spaces ,Analysis ,Reproducing kernel Hilbert space ,Mathematics - Abstract
We investigate reproducing kernel Hilbert spaces (RKHS) in which two functions are orthogonal whenever they have disjoint support. Necessary and sufficient conditions, in terms of feature maps for the reproducing kernel, are established. We also present concrete examples of finite-dimensional RKHS and of RKHS with a translation-invariant reproducing kernel. In particular, it is shown that a Sobolev space has the orthogonality from disjoint support property if and only if it is of integer index.
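A minimal example consistent with this result (integer index one): the Sobolev space H¹(ℝ) with inner product

⟨f, g⟩_{H¹} = ∫_ℝ ( f(x) \overline{g(x)} + f′(x) \overline{g′(x)} ) dx

is an RKHS with kernel k(x, y) = ½ e^{−|x−y|}, and since the inner product is an integral of pointwise products, functions with disjoint supports are automatically orthogonal.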
- Full Text
- View/download PDF