124 results for "Linear projection"
Search Results
2. Feature screening for ultrahigh-dimensional binary classification via linear projection
- Author
-
Peng Lai, Mingyue Wang, Fengli Song, and Yanqiu Zhou
- Subjects
ultrahigh-dimensional data, linear projection, marginal score test, feature screening, sure screening property, Mathematics, QA1-939 - Abstract
Linear discriminant analysis (LDA) is one of the most widely used methods in discriminant classification and pattern recognition. However, with the rapid development of information science and technology, collected data are often high- or even ultrahigh-dimensional, which causes LDA to fail. To address this issue, a feature screening procedure based on Fisher's linear projection and the marginal score test is proposed for the ultrahigh-dimensional binary classification problem. The sure screening property is established to ensure that important features are retained and irrelevant predictors are eliminated. The finite-sample properties of the proposed procedure are assessed by Monte Carlo simulation studies and a real-life data example.
- Published
- 2023
- Full Text
- View/download PDF
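The screening idea described in this abstract, scoring each feature marginally and keeping only the strongest before classification, can be sketched as follows. This is a generic illustration: the two-sample t-type score and the name `marginal_screen` are stand-ins, not the paper's actual marginal score test.

```python
import numpy as np

def marginal_screen(X, y, k):
    """Rank features by a two-sample t-type statistic and keep the top k.

    A generic stand-in for marginal feature screening; the paper's
    marginal score test statistic would replace `score` below.
    """
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    v0, v1 = X0.var(axis=0, ddof=1), X1.var(axis=0, ddof=1)
    score = np.abs(m1 - m0) / np.sqrt(v0 / len(X0) + v1 / len(X1) + 1e-12)
    return np.argsort(score)[::-1][:k]  # indices of the k strongest features

rng = np.random.default_rng(0)
n, p = 200, 1000                 # n observations, ultrahigh dimension p
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, p))
X[:, 0] += 2.0 * y               # only feature 0 carries class signal
keep = marginal_screen(X, y, 10)
print(0 in keep)                 # the informative feature survives screening
```

The retained subset would then be passed to LDA or another classifier, which is feasible once the dimension has been cut from p to k.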
4. Linear projection‐based chemical exchange saturation transfer parameter estimation.
- Author
-
Glang, Felix, Fabian, Moritz S., German, Alexander, Khakzar, Katrin M., Mennecke, Angelika, Liebert, Andrzej, Herz, Kai, Liebig, Patrick, Kasper, Burkhard S., Schmidt, Manuel, Zuazua, Enrique, Nagel, Armin M., Laun, Frederik B., Dörfler, Arnd, Scheffler, Klaus, and Zaiss, Moritz
- Subjects
MAGNETIZATION transfer, PARAMETER estimation, FEATURE selection, CURVE fitting, BRAIN tumors - Abstract
Isolated evaluation of multiparametric in vivo chemical exchange saturation transfer (CEST) MRI often requires complex computational processing for both correction of B0 and B1 inhomogeneity and contrast generation. For that, sufficiently densely sampled Z‐spectra need to be acquired. The list of acquired frequency offsets largely determines the total CEST acquisition time, while potentially representing redundant information. In this work, a linear projection‐based multiparametric CEST evaluation method is introduced that offers fast B0 and B1 inhomogeneity correction, contrast generation and feature selection for CEST data, enabling reduction of the overall measurement time. To that end, CEST data acquired at 7 T in six healthy subjects and in one brain tumor patient were conventionally evaluated by interpolation‐based inhomogeneity correction and Lorentzian curve fitting. Linear regression was used to obtain coefficient vectors that directly map uncorrected data to corrected Lorentzian target parameters. L1‐regularization was applied to find subsets of the originally acquired CEST measurements that still allow for such a linear projection mapping. The linear projection method allows fast and interpretable mapping from acquired raw data to contrast parameters of interest, generalizing from healthy subject training data to unseen healthy test data and to the tumor patient dataset. The L1‐regularization method shows that a fraction of the acquired CEST measurements is sufficient to preserve tissue contrasts, offering up to a 2.8‐fold reduction of scan time. Similar observations as for the 7‐T data can be made for data from a clinical 3‐T scanner. Being a fast and interpretable computation step, the proposed method is complementary to neural networks that have recently been employed for similar purposes. 
The scan time acceleration offered by the L1‐regularization ("CEST‐LASSO") constitutes a step towards better applicability of multiparametric CEST protocols in a clinical context. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
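The core "linear projection" step described above, learning coefficients that map uncorrected raw spectra directly to fitted target parameters, amounts to a regularized least-squares fit. A minimal numpy sketch on synthetic data; all array names, sizes, and the ridge penalty are hypothetical, not the paper's configuration:

```python
import numpy as np

# Hypothetical setup: each row of Z is an uncorrected Z-spectrum sampled at
# 40 offsets; each row of P holds the fitted Lorentzian target parameters.
rng = np.random.default_rng(1)
n_train, n_offsets, n_params = 300, 40, 4
true_map = rng.standard_normal((n_offsets, n_params))
Z = rng.standard_normal((n_train, n_offsets))
P = Z @ true_map + 0.01 * rng.standard_normal((n_train, n_params))

# Ridge-regularized least squares: coefficients that project raw spectra
# directly onto the target parameters (the linear projection step).
lam = 1e-3
B = np.linalg.solve(Z.T @ Z + lam * np.eye(n_offsets), Z.T @ P)

# Applying B to unseen spectra is a single matrix product -- fast and
# interpretable, unlike per-voxel Lorentzian curve fitting.
Z_test = rng.standard_normal((50, n_offsets))
P_pred = Z_test @ B
err = np.abs(B - true_map).max()
print(err < 0.1)
```

L1-regularizing the columns of B (as in the paper's "CEST-LASSO") would additionally zero out coefficients of redundant frequency offsets, which is what enables the reported scan-time reduction.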
5. JUAN DE BORGOÑA AND THE BEGINNINGS OF CLASSICISM IN TOLEDO CATHEDRAL.
- Author
-
Blanco Mozo, Juan Luis
- Subjects
CLASSICISM, FORMAL languages, ART critics, RENAISSANCE, DIOCESES - Abstract
Copyright of Espacio, Tiempo y Forma. Serie VII, Historia del Arte is the property of Editorial UNED and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
6. On Resolution Matrices.
- Author
-
An, Meijian
- Subjects
MATRIX inversion, STOCHASTIC matrices, LINEAR operators, MATRICES (Mathematics), TIKHONOV regularization - Abstract
Solution appraisal, realized through projections from the true medium onto the solution, is an essential procedure in practical studies, especially in computed tomography. The projection operator in a linear problem, or its linear approximation in a nonlinear problem, is the resolution matrix for the solution (or model). Practical applications of a resolution matrix can be used to quantitatively retrieve the resolvability of the medium, the constrainability of the solution parameters, and the relationship between the solution and the factors in the study system. For a given solution parameter, the corresponding row vector of the matrix can be used to quantify its resolvability, its deviation from expectation, and its difference from a neighboring parameter via the main-diagonal element, the row-vector sum, and the differences between neighboring elements of the row vector, respectively. The resolution length of a solution parameter should be estimated from the row vector, although this may be unreliable when the vector is unstable (e.g., due to errors). Comparatively, the resolution lengths estimated from the column vectors of the observation-constrained parameters are reliable in this instance. Previous studies have generally employed either the direct resolution matrix or the hybrid resolution matrix as the model resolution matrix. The direct and hybrid resolution matrices in an inversion with damping (or general Tikhonov regularization) are Gramian (and hence symmetric). The hybrid resolution matrix in an inversion using zero-row-sum regularization matrices (e.g., higher-order Tikhonov regularizations) has unit row sums but is not a stochastic matrix. When the two resolution matrices appear in iterative nonlinear inversions, they are not a projection of the solution, but rather the gradient of the projection or a projection of the solution improvement immediately after a given iteration. 
Regardless, their resultant resolution lengths in iterative nonlinear inversions of surface-wave dispersion remain similar to those from the projection of the solution. The solution is influenced by various factors in the study, but the direct resolution matrix is derived only from the observation matrix, whereas the hybrid resolution matrix is derived from the observation and regularization matrices. These limitations imply that the appropriateness of using the two resolution matrices in practical applications may be questionable. Here we propose a new complete resolution matrix that overcomes these limitations and can incorporate all of the factors (e.g., errors) in linear or nonlinear (inverse or non-inverse) studies. Insights into all of the above are essential for a reliable and appropriate application of the resolution matrix to appraise the model/solution and to understand the relationship between the solution and all of the factors in the study system, which is also important for improving the system. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
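The abstract's claim that the direct resolution matrix of a damped (zeroth-order Tikhonov) inversion is Gramian, and hence symmetric, is easy to verify numerically. A small sketch, with a random observation matrix standing in for a real tomography kernel:

```python
import numpy as np

# Direct resolution matrix of a damped least-squares inversion:
# m_est = (G^T G + lam*I)^{-1} G^T d, so in the noise-free case
# m_est = R m_true with R = (G^T G + lam*I)^{-1} G^T G.
rng = np.random.default_rng(2)
G = rng.standard_normal((30, 10))    # observation (forward) matrix
lam = 0.5                            # damping (zeroth-order Tikhonov)
R = np.linalg.solve(G.T @ G + lam * np.eye(10), G.T @ G)

# R is symmetric for damped inversion, as stated in the abstract.
print(np.allclose(R, R.T))

# Diagonal elements quantify how well each parameter is resolved
# (1 = perfectly resolved, 0 = unresolved).
print(np.all((np.diag(R) >= 0.0) & (np.diag(R) <= 1.0)))
```

The eigenvalues of R are mu/(mu + lam) for each eigenvalue mu of G^T G, which is why every diagonal element lies between 0 and 1.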
7. Linear dimensionality reduction method based on topological properties.
- Author
-
Yao, Yuqin, Meng, Hua, Gao, Yang, Long, Zhiguo, and Li, Tianrui
- Subjects
TOPOLOGICAL property, MACHINE learning, DATA mining - Abstract
Dimensionality reduction is an important data preprocessing technique that has been extensively studied in machine learning and data mining. Locality Preserving Projection (LPP) is a widely used linear unsupervised dimensionality reduction method, which maps high-dimensional data into low-dimensional subspace through linear transformation. Although various variants of LPP have been proposed to tackle different drawbacks of LPP, it is identified in this article that LPP does not possess the important topological property of translation invariance, that is, the linear transformation given by LPP is strongly related to the relative position between the data and the origin of the coordinate system. In this article, we theoretically analyze the reason why this drawback exists in LPP and propose to resolve it by introducing a kind of centralization to the model. Moreover, as topological properties are prominent information to characterize the structure of the data, this article proposes a further improvement of LPP to maintain topological connectivity of data after dimensionality reduction. Experiments on multiple synthetic and real-world datasets show that the new model incorporating topological properties outperforms not only the original LPP model but also several other classic linear or non-linear dimensionality reduction methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
8. A zero-estimator approach for estimating the signal level in a high-dimensional model-free setting.
- Author
-
Livne, Ilan, Azriel, David, and Goldberg, Yair
- Subjects
CONDITIONAL expectations, U-statistics, PERFORMANCE theory, SIGNALS & signaling, NOISE - Abstract
We study a high-dimensional regression setting under the assumption of known covariate distribution. We aim at estimating the amount of explained variation in the response by the best linear function of the covariates (the signal level). In our setting, neither sparsity of the coefficient vector, nor normality of the covariates, nor linearity of the conditional expectation is assumed. We present an unbiased and consistent estimator and then improve it by using a zero-estimator approach, where a zero-estimator is a statistic whose expected value is zero. More generally, we present an algorithm based on the zero-estimator approach that in principle can improve any given estimator. We study some asymptotic properties of the proposed estimators and demonstrate their finite-sample performance in a simulation study.
• High-dimensional regression setting under the assumption of known covariate distribution.
• No assumptions regarding the sparsity of the coefficient vector, normality of the covariates, or linearity of the conditional expectation.
• Estimating the amount of explained variation in the response (the signal level) by the best linear function of the covariates.
• Zero-estimators are used to improve initial estimators of the signal and noise levels. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
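The zero-estimator idea, subtracting a multiple of a statistic whose expectation is known to be exactly zero to reduce variance without introducing bias, can be illustrated on a toy problem. The setup and names below are hypothetical stand-ins, not the paper's signal-level estimator:

```python
import numpy as np

# Given an initial estimator T and a statistic Z with known expectation
# zero (available here because the covariate distribution is known),
# T - c*Z is unbiased for every c, and c = Cov(T, Z)/Var(Z) minimizes
# the variance.
rng = np.random.default_rng(4)

def estimates(n=100):
    x = rng.standard_normal(n)           # covariates with known N(0,1) law
    y = 2.0 * x + rng.standard_normal(n)
    T = y.mean()                         # initial estimator of E[Y] = 0
    Z = x.mean()                         # zero-estimator: E[Z] = 0 exactly
    return T, Z

reps = np.array([estimates() for _ in range(2000)])
T, Z = reps[:, 0], reps[:, 1]
cv = np.cov(T, Z)
c = cv[0, 1] / cv[1, 1]                  # estimated optimal coefficient
improved = T - c * Z
print(improved.var() < T.var())          # variance strictly reduced
```

Here Var(T) is about 0.05 while the improved estimator's variance drops to about 0.01, since the component of T explained by the covariates is removed.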
9. Improving nonnegative matrix factorization with advanced graph regularization.
- Author
-
Zhang, Xiaoxia, Chen, Degang, Yu, Hong, Wang, Guoyin, Tang, Houjun, and Wu, Kesheng
- Subjects
NONNEGATIVE matrices, MATRIX decomposition, SPARSE matrices, RECOMMENDER systems, COST functions, MATRICES (Mathematics) - Abstract
• A new regularizer is proposed based on a linear projection.
• Two iterative update procedures are developed for minimizing the new objective function.
• Various experiments verify the superiority of the proposed algorithm.
Nonnegative Matrix Factorization (NMF) produces interpretable solutions for many applications including collaborative filtering. Typically, regularization is needed to address issues such as overfitting and interpretability, especially for collaborative filtering, where the rating matrices are sparse. However, existing regularizers are typically constructed from the factorization results instead of the rating matrices. Intuitively, we regard these existing regularizers as representing either user factors or item factors and anticipate that a more holistic regularizer could improve the effectiveness of NMF. To this end, we propose a graph regularizer based on a linear projection of the rating matrix and call the resulting method Linear Projection and Graph Regularized Nonnegative Matrix Factorization (LPGNMF). We develop two iterative methods to minimize the cost function and derive two update rules, named LPGNMF and F-LPGNMF. Additionally, we prove that the value of the objective function decreases with LPGNMF and converges to a fixed point with F-LPGNMF. Finally, we test these methods against a number of NMF algorithms on different data sets and show that both LPGNMF and F-LPGNMF always achieve smaller errors based on two different error measures. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
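For context, the baseline that LPGNMF extends is plain NMF with Lee-Seung multiplicative updates, which keep both factors nonnegative and monotonically decrease the Frobenius reconstruction error. The paper's linear-projection graph regularizer is omitted in this sketch:

```python
import numpy as np

# Baseline NMF: factor V (nonnegative) as W @ H with W, H >= 0,
# minimizing ||V - W H||_F^2 by multiplicative updates.
rng = np.random.default_rng(5)
V = rng.random((20, 15))
r = 4                                    # factorization rank
W = rng.random((20, r)) + 0.1
H = rng.random((r, 15)) + 0.1

eps = 1e-10                              # guards against division by zero
err = [np.linalg.norm(V - W @ H)]
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
    err.append(np.linalg.norm(V - W @ H))

print(err[-1] < err[0])                  # the objective decreased
print(W.min() >= 0 and H.min() >= 0)     # nonnegativity is preserved
```

A graph regularizer would add a penalty term to the objective and a corresponding extra factor in each multiplicative update, leaving the nonnegativity argument intact.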
10. Covariate Assisted Principal regression for covariance matrix outcomes.
- Author
-
Zhao, Yi, Wang, Bingkai, Mostofsky, Stewart H, Caffo, Brian S, and Luo, Xi
- Subjects
COVARIANCE matrices, FUNCTIONAL magnetic resonance imaging, COMPUTER simulation, BRAIN, MAGNETIC resonance imaging, REGRESSION analysis, RESEARCH funding, ALGORITHMS - Abstract
In this study, we consider the problem of regressing covariance matrices on associated covariates. Our goal is to use covariates to explain variation in covariance matrices across units. As such, we introduce Covariate Assisted Principal (CAP) regression, an optimization-based method for identifying components associated with the covariates using a generalized linear model approach. We develop computationally efficient algorithms to jointly search for common linear projections of the covariance matrices, as well as the regression coefficients. Under the assumption that all the covariance matrices share identical eigencomponents, we establish the asymptotic properties. In simulation studies, our CAP method shows higher accuracy and robustness in coefficient estimation over competing methods. In an example resting-state functional magnetic resonance imaging study of healthy adults, CAP identifies human brain network changes associated with subject demographics. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. The number of irreducible components of the locus of nonbirational projection centers.
- Author
-
Noma, Atsushi
- Subjects
LOCUS (Mathematics) - Abstract
We work over an algebraically closed field of characteristic zero. For a nondegenerate projective variety X ⊆ P^N, the locus of points from which X is projected nonbirationally onto its image is called the Segre locus of X. The purpose here is to give an upper bound on the number of irreducible components of the Segre locus of a projective variety in terms of its invariants. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Prototype Discriminative Learning for Face Image Set Classification
- Author
-
Wang, Wen, Wang, Ruiping, Shan, Shiguang, Chen, Xilin, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Lai, Shang-Hong, editor, Lepetit, Vincent, editor, Nishino, Ko, editor, and Sato, Yoichi, editor
- Published
- 2017
- Full Text
- View/download PDF
13. Linear projection grayscale conversion algorithm based on structural information similarity.
- Author
-
陈广秋, 王冰雪, 刘美, and 刘广文
- Subjects
VISUAL perception, ALGORITHMS, CANNING & preserving, PROBLEM solving, PIXELS - Abstract
Copyright of Journal of Jilin University (Science Edition) / Jilin Daxue Xuebao (Lixue Ban) is the property of Zhongguo Xue shu qi Kan (Guang Pan Ban) Dian zi Za zhi She and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2020
- Full Text
- View/download PDF
14. Uncertainty-Aware Principal Component Analysis.
- Author
-
Gortler, Jochen, Spinner, Thilo, Streeb, Dirk, Weiskopf, Daniel, and Deussen, Oliver
- Subjects
PRINCIPAL components analysis, MULTIVARIATE analysis, DIMENSION reduction (Statistics), COVARIANCE matrices, DATA reduction, GAUSSIAN distribution - Abstract
We present a technique to perform dimensionality reduction on data that is subject to uncertainty. Our method is a generalization of traditional principal component analysis (PCA) to multivariate probability distributions. In comparison to non-linear methods, linear dimensionality reduction techniques have the advantage that the characteristics of such probability distributions remain intact after projection. We derive a representation of the PCA sample covariance matrix that respects potential uncertainty in each of the inputs, building the mathematical foundation of our new method: uncertainty-aware PCA. In addition to the accuracy and performance gained by our approach over sampling-based strategies, our formulation allows us to perform sensitivity analysis with regard to the uncertainty in the data. For this, we propose factor traces as a novel visualization that enables better understanding of the influence of uncertainty on the chosen principal components. We provide multiple examples of our technique using real-world datasets. As a special case, we show how to propagate multivariate normal distributions through PCA in closed form. Furthermore, we discuss extensions and limitations of our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
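The closed-form propagation of multivariate normals through PCA mentioned at the end of the abstract follows from linearity: if x ~ N(mu, Sigma) and y = W^T x, then y ~ N(W^T mu, W^T Sigma W). A quick Monte Carlo check of this special case:

```python
import numpy as np

rng = np.random.default_rng(6)
mu = np.array([1.0, -2.0, 0.5])
A = rng.standard_normal((3, 3))
Sigma = A @ A.T + np.eye(3)            # a valid (positive definite) covariance

# Projection onto the top-2 principal components of Sigma.
vals, vecs = np.linalg.eigh(Sigma)
W = vecs[:, ::-1][:, :2]

# Closed-form parameters of the projected distribution.
mu_y = W.T @ mu
Sigma_y = W.T @ Sigma @ W

# Monte Carlo check of the closed form.
x = rng.multivariate_normal(mu, Sigma, size=200_000)
y = x @ W
print(np.allclose(y.mean(0), mu_y, atol=0.05))
print(np.allclose(np.cov(y.T), Sigma_y, atol=0.15))
```

Because W holds eigenvectors of Sigma, the projected covariance Sigma_y is (numerically) diagonal with the two largest eigenvalues on the diagonal, which is exactly the usual PCA picture.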
15. Visualizing Document Similarity
- Author
-
Cao, Nan, Cui, Weiwei, Christensen, Henrik, Series editor, Nebel, Bernhard, Series editor, Yang, Qiang, Series editor, Cao, Nan, and Cui, Weiwei
- Published
- 2016
- Full Text
- View/download PDF
16. Enhance Fuzzy Vault Security Using Nonrandom Chaff Point Generator
- Author
-
Nguyen, Minh Tan, Truong, Quang Hai, Dang, Tran Khanh, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Kobsa, Alfred, Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Dang, Tran Khanh, editor, Wagner, Roland, editor, Neuhold, Erich, editor, Takizawa, Makoto, editor, Küng, Josef, editor, and Thoai, Nam, editor
- Published
- 2014
- Full Text
- View/download PDF
17. Local Vein Texton Learning for Finger Vein Recognition
- Author
-
Yang, Lu, Yang, Gongping, Yin, Yilong, Dong, Lumei, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Kobsa, Alfred, Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Sun, Zhenan, editor, Shan, Shiguang, editor, Sang, Haifeng, editor, Zhou, Jie, editor, Wang, Yunhong, editor, and Yuan, Weiqi, editor
- Published
- 2014
- Full Text
- View/download PDF
18. A Unified Framework for Over-Clocking Linear Projections on FPGAs under PVT Variation
- Author
-
Duarte, Rui Policarpo, Bouganis, Christos-Savvas, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Kobsa, Alfred, editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Weikum, Gerhard, editor, Goehringer, Diana, editor, Santambrogio, Marco Domenico, editor, Cardoso, João M. P., editor, and Bertels, Koen, editor
- Published
- 2014
- Full Text
- View/download PDF
19. An accelerator for support vector machines based on the local geometrical information and data partition.
- Author
-
Song, Yunsheng, Liang, Jiye, and Wang, Feng
- Abstract
The support vector machine (SVM) handles large datasets poorly because of its low training efficiency. One important line of solutions divides the whole dataset into smaller subsets via data partition and combines the results of the classifiers trained over the divided subsets. However, traditional data partition approaches struggle to preserve the class boundary of the dataset or to control the size of the divided subsets, so their performance suffers. To overcome this difficulty, we propose an accelerator for the SVM algorithm based on local geometrical information. In this algorithm, the feature space is divided by linear projection into several regions containing approximately equal numbers of training instances, and each SVM classifier trained over an extended region predicts only the unlabeled instances within the corresponding original region. The proposed algorithm preserves the decision boundary of the raw data and saves substantial execution time when implemented in a parallel environment. Furthermore, the number of instances within each divided region can be effectively controlled, which makes it easy to match the computational load of each processor. Experiments show that the classification performance of the proposed algorithm compares favorably with four state-of-the-art algorithms while requiring the least training time. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
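The partition step, splitting the feature space along a linear projection into regions holding approximately equal numbers of training instances, can be sketched as follows. Projecting onto the leading principal axis is an illustrative choice here, not necessarily the authors' projection:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 5))

# Project instances onto a single linear direction (leading principal axis).
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ Vt[0]                       # scores along the leading axis

# Cut at quantiles so every region holds roughly the same number of points.
k = 4                                   # number of regions
edges = np.quantile(proj, np.linspace(0, 1, k + 1))
labels = np.clip(np.searchsorted(edges, proj, side="right") - 1, 0, k - 1)

sizes = np.bincount(labels, minlength=k)
print(sizes)                            # four regions of ~250 points each
```

Each region (slightly extended to overlap its neighbors, per the abstract) would then train its own SVM, and a query point is routed to the classifier of the region its projection falls into.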
20. Representation of Compact Linear Operators
- Author
-
Edmunds, David E., Evans, W. Desmond, Ball, Joseph A., Series editor, Dym, Harry, Series editor, Kaashoek, Marinus A., Series editor, Langer, Heinz, Series editor, Tretter, Christiane, Series editor, Edmunds, David E., and Evans, W. Desmond
- Published
- 2013
- Full Text
- View/download PDF
21. Exclusive Visual Descriptor Quantization
- Author
-
Zhang, Yu, Wu, Jianxin, Lin, Weiyao, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Lee, Kyoung Mu, editor, Matsushita, Yasuyuki, editor, Rehg, James M., editor, and Hu, Zhanyi, editor
- Published
- 2013
- Full Text
- View/download PDF
22. Linear Projection Method Based on Information Theoretic Learning
- Author
-
Vera, Pablo A., Estévez, Pablo A., Principe, Jose C., Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Diamantaras, Konstantinos, editor, Duch, Wlodek, editor, and Iliadis, Lazaros S., editor
- Published
- 2010
- Full Text
- View/download PDF
23. Lens distortion correction method based on straight-line projection features.
- Author
-
杨麒, 李天伟, 黄谦, and 李伟
- Abstract
Copyright of Computer Measurement & Control is the property of Magazine Agency of Computer Measurement & Control and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2018
- Full Text
- View/download PDF
24. Non-greedy Max–min Large Margin based on L1-norm.
- Author
-
Chen, Si-Bao, Zuo, Chong, Ding, Chris, and Luo, Bin
- Subjects
LINEAR systems, DIMENSIONAL reduction algorithms, ROBUST control, ITERATIVE methods (Mathematics), MATHEMATICAL optimization - Abstract
In recent years, several L1-norm-based linear projection methods and max–min-based dimensionality reduction methods have been proposed; they are robust to outliers and noise and yield a large margin for discrimination. In this paper, we propose an L1-norm-based max–min large margin (MLM-L1) method for linear projection-based dimensionality reduction. It exploits the robustness of the L1-norm to outliers and noise and the max–min idea for a large margin. A non-greedy iterative algorithm (NMLM-L1) is proposed to solve the optimization problem of MLM-L1. Experiments on several face image databases show that the proposed method achieves better classification performance than closely related methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
25. Stochastic discriminant analysis for linear supervised dimension reduction.
- Author
-
Juuti, Mika, Corona, Francesco, and Karhunen, Juha
- Subjects
STOCHASTIC analysis, DIMENSION reduction (Statistics), BIG data, PATTERN perception, DIGITAL images - Abstract
In this paper, we consider a linear supervised dimension reduction method for classification settings: stochastic discriminant analysis (SDA). This method matches similarities between points in the projection space with those in a response space. The similarities are represented by transforming distances between points into joint probabilities using a transformation that resembles Student's t-distribution. The matching is done by minimizing the Kullback–Leibler divergence between the two probability distributions. We compare the performance of our SDA method against several state-of-the-art methods for supervised linear dimension reduction. In our experiments, we found that the performance of the SDA method is often better than, and typically at least equal to, that of the compared methods. We have run experiments with various types of data sets having low, medium, or high dimensions and quite different numbers of samples, and with both sparse and dense data sets. If there are several classes in the studied data set, the low-dimensional projections computed using our SDA method often provide higher classification accuracies than the compared methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
26. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
- Author
-
Ran Li, Xiaomeng Duan, Xu Li, Wei He, and Yanling Li
- Abstract
Aiming at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme that provides a compressive encoder and a real-time decoder based on Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparsity, and the real-time decoder linearly reconstructs each image block through a projection matrix learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods but also provides good objective and subjective reconstruction quality. In particular, it achieves better time-distortion performance than JPEG. The proposed compressive image coding is therefore a potential energy-efficient scheme for the Green IoT. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
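The real-time linear decoder described above is an instance of linear MMSE estimation: for measurements y = Ax + n with a known signal covariance, reconstruction is a single precomputed projection matrix applied to y. A toy 1-D sketch; the paper learns its matrix from training data, whereas here the signal covariance is assumed known:

```python
import numpy as np

# Linear MMSE decoder: for y = A x + n with x ~ (0, Sigma_x) and noise
# variance sig2, the best linear estimate is
#   x_hat = Sigma_x A^T (A Sigma_x A^T + sig2 I)^{-1} y,
# i.e., one fixed projection matrix P applied per block.
rng = np.random.default_rng(8)
n, m = 64, 32                          # block length, number of measurements

# Smooth signals: covariance with strong neighbor correlation.
idx = np.arange(n)
Sigma_x = 0.95 ** np.abs(idx[:, None] - idx[None, :])

A = rng.standard_normal((m, n)) / np.sqrt(m)   # CS measurement matrix
sig2 = 1e-3
P = Sigma_x @ A.T @ np.linalg.inv(A @ Sigma_x @ A.T + sig2 * np.eye(m))

x = rng.multivariate_normal(np.zeros(n), Sigma_x)
y = A @ x + np.sqrt(sig2) * rng.standard_normal(m)
x_hat = P @ y                          # decoding is one matrix-vector product

rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(rel_err < 1.0)                   # MMSE beats the trivial zero estimate
```

Because P is computed once and decoding is a single matrix-vector product, the decoder's per-block cost is O(n·m), which is what makes the scheme attractive for energy-constrained IoT devices.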
27. PROJECTIVE VARIETIES WITH NONBIRATIONAL LINEAR PROJECTIONS AND APPLICATIONS.
- Author
-
ATSUSHI NOMA
- Subjects
ZERO (The number), ALGEBRA, MATHEMATICAL mappings, LINEAR operators, ASYMPTOTES - Abstract
We work over an algebraically closed field of characteristic zero. The purpose of this paper is to characterize a nondegenerate projective variety X with a linear projection which induces a nonbirational map to its image. As an application, for smooth X of degree d and codimension e, we prove the "semiampleness" of the (d - e + 1)th twist of the ideal sheaf. This improves a linear bound of the regularity of smooth projective varieties by Bayer-Mumford-Bertram-Ein-Lazarsfeld, and gives an asymptotic regularity bound. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
28. Visualization by Linear Projections as Information Retrieval
- Author
-
Peltonen, Jaakko, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Príncipe, José C., editor, and Miikkulainen, Risto, editor
- Published
- 2009
- Full Text
- View/download PDF
29. Case Examples of Imaging
- Author
-
Scherzer, Otmar, Grasmair, Markus, Grossauer, Harald, Haltmeier, Markus, Lenzen, Frank, Antman, S.S., editor, Marsden, J.E., editor, Sirovich, L., editor, Scherzer, Otmar, Grasmair, Markus, Grossauer, Harald, Haltmeier, Markus, and Lenzen, Frank
- Published
- 2009
- Full Text
- View/download PDF
30. LDR-LLE: LLE with Low-Dimensional Neighborhood Representation
- Author
-
Goldberg, Yair, Ritov, Ya’acov, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Bebis, George, editor, Boyle, Richard, editor, Parvin, Bahram, editor, Koracin, Darko, editor, Remagnino, Paolo, editor, Porikli, Fatih, editor, Peters, Jörg, editor, Klosowski, James, editor, Arns, Laura, editor, Chun, Yu Ka, editor, Rhyne, Theresa-Marie, editor, and Monroe, Laura, editor
- Published
- 2008
- Full Text
- View/download PDF
31. Projection Pursuit Constructive Neural Networks Based on Quality of Projected Clusters
- Author
-
Grochowski, Marek, Duch, Włodzisław, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Kůrková, Véra, editor, Neruda, Roman, editor, and Koutník, Jan, editor
- Published
- 2008
- Full Text
- View/download PDF
32. SODA-Boosting and Its Application to Gender Recognition
- Author
-
Xu, Xun, Huang, Thomas S., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Zhou, S. Kevin, editor, Zhao, Wenyi, editor, Tang, Xiaoou, editor, and Gong, Shaogang, editor
- Published
- 2007
- Full Text
- View/download PDF
33. K-Separability
- Author
-
Duch, Włodzisław, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Kollias, Stefanos D., editor, Stafylopatis, Andreas, editor, Duch, Włodzisław, editor, and Oja, Erkki, editor
- Published
- 2006
- Full Text
- View/download PDF
34. The transform of Laplace, orthogonal transformations, moving fields
- Author
-
Andrei V. Pavlov
- Subjects
linear projection ,orthogonal transformations ,moving fields ,emergence of periodicity of analytic functions ,transform of Laplace ,inversion of the Laplace transform ,optimal linear approximation - Abstract
It is proved in the article that, from the viewpoint of a new scalar product, the diagonals of an arbitrary rhombus can be regarded as equal, as the result of an orthogonal transformation taking the sides to the diagonals, when the lengths of the vectors are measured in the same units of length as on the sides. The second part of the article presents a class of functions whose values can be recovered from the known positive values of the Laplace transforms of these functions alone. The third part gives examples in which mappings of points of the complex plane become periodic with an arbitrary period, independently of the original analytic mapping (from the viewpoint of introducing two coordinate systems).
- Published
- 2022
35. A Method for Visual Cluster Validation
- Author
-
Hennig, Christian, Bock, H.-H., editor, Gaul, W., editor, Vichi, M., editor, Arabie, Ph., editor, Baier, D., editor, Critchley, F., editor, Decker, R., editor, Diday, E., editor, Greenacre, M., editor, Lauro, C., editor, Meulman, J., editor, Monari, P., editor, Nishisato, S., editor, Ohsumi, N., editor, Opitz, O., editor, Ritter, G., editor, Schader, M., editor, Weihs, C., editor, Weihs, Claus, editor, and Gaul, Wolfgang, editor
- Published
- 2005
- Full Text
- View/download PDF
36. Self-organizing Map Initialization
- Author
-
Attik, Mohammed, Bougrain, Laurent, Alexandre, Frédéric, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Duch, Włodzisław, editor, Kacprzyk, Janusz, editor, Oja, Erkki, editor, and Zadrożny, Sławomir, editor
- Published
- 2005
- Full Text
- View/download PDF
37. Riesz Space and Fuzzy Upcrossing Theorems
- Author
-
Kuo, Wen-Chi, Labuschagne, Coenraad C. A., Watson, Bruce A., Kacprzyk, Janusz, editor, López-Díaz, Miguel, Gil, María Á., Grzegorzewski, Przemysław, Hryniewicz, Olgierd, and Lawry, Jonathan
- Published
- 2004
- Full Text
- View/download PDF
38. Spatial Representation of Dissimilarity Data via Lower-Complexity Linear and Nonlinear Mappings
- Author
-
Pekalska, Elżbieta, Duin, Robert P. W., Goos, G., editor, Hartmanis, J., editor, van Leeuwen, J., editor, Caelli, Terry, editor, Amin, Adnan, editor, Duin, Robert P. W., editor, de Ridder, Dick, editor, and Kamel, Mohamed, editor
- Published
- 2002
- Full Text
- View/download PDF
39. Omnidirectional Vision for Appearance-Based Robot Localization
- Author
-
Kröse, B. J. A., Vlassis, N., Bunschoten, R., Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Hager, Gregory D., editor, Christensen, Henrik Iskov, editor, Bunke, Horst, editor, and Klein, Rolf, editor
- Published
- 2002
- Full Text
- View/download PDF
40. Two-Dimensional Discriminant Locality Preserving Projection Based on ℓ1-norm Maximization.
- Author
-
Chen, Si-Bao, Wang, Jing, Liu, Cai-Yin, and Luo, Bin
- Subjects
- *
DISCRIMINANT analysis , *MAXIMUM entropy method , *DIMENSION reduction (Statistics) , *IMAGE processing , *OUTLIERS (Statistics) , *FEATURE extraction - Abstract
In this paper, a new linear dimensionality reduction method named Two-Dimensional Discriminant Locality Preserving Projection based on ℓ1-norm Maximization (2DDLPP-L1) is proposed for the preprocessing of image data. 2DDLPP-L1 makes full use of the robustness of the ℓ1-norm to noise and outliers. Furthermore, 2DDLPP-L1 is a 2D-based method which extracts image features directly from image matrices, avoiding the instability and high complexity of matrix computation. Two graphs, a separation graph and a cohesiveness graph, are constructed with feature vectors as vertices to represent inter-class separation and intra-class cohesiveness. An iterative algorithm with a proof of convergence is proposed to solve for the optimal projection matrix. Experiments on several face image databases demonstrate that the performance and robustness of 2DDLPP-L1 are better than those of related methods. [ABSTRACT FROM AUTHOR]
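The ℓ1-norm maximization at the heart of such methods is typically solved by a sign-flipping fixed-point iteration (as in Kwak-style PCA-L1). The minimal sketch below, written for plain vectors rather than the paper's 2D image matrices, finds one projection direction maximizing the ℓ1 dispersion; the function name and iteration cap are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def l1_max_direction(X, n_iter=100, seed=0):
    """Find a unit vector w maximizing sum_i |w . x_i| (the l1 dispersion)
    via the sign-flipping fixed-point iteration. X: (n_samples, n_features),
    assumed centered."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)
        s[s == 0] = 1.0            # avoid zero signs stalling the update
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):  # reached a fixed point
            break
        w = w_new
    return w
```

Each update provably does not decrease the ℓ1 objective, which is why the iteration converges without any gradient step size; outliers influence the direction only through their sign, which is the source of the robustness the abstract emphasizes.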
- Published
- 2017
- Full Text
- View/download PDF
41. Learning Linear Representation of Space Partitioning Trees Based on Unsupervised Kernel Dimension Reduction.
- Author
-
Shan, Hongming, Zhang, Junping, and Kruger, Uwe
- Abstract
Space partitioning trees, which sequentially divide and subdivide a space into disjoint subsets using splitting hyperplanes, play a key role in accelerating the querying of samples in the cybernetics and computer vision domains. Associated methods, however, suffer from the curse of dimensionality or stringent assumptions on the data distribution. This paper presents a new concept, termed the kernel dimension reduction tree (KDR-tree), that relies on linear projections computed with an unsupervised kernel dimension reduction approach. The proposed concept does not rely on any assumption about the data distribution and can capture higher-order statistical information encapsulated within the data. The paper then develops two variants of the KDR-tree concept: 1) to handle residual data [i.e., the residual-based KDR-tree (rKDR-tree) algorithm] and 2) to cope with larger datasets [i.e., the sampling-based KDR-tree (sKDR-tree) algorithm]. By directly comparing the KDR-tree concept to competitive techniques on several benchmark datasets, the paper shows that the sKDR-tree yields better performance for non-Gaussian distributed datasets. Based on the analysis of three datasets, the paper highlights, experimentally, that the rKDR-tree has the potential to discover the intrinsic dimension. The paper also provides a theoretical analysis of the KDR-tree concept to outline why it outperforms existing techniques when the data distribution is non-Gaussian. [ABSTRACT FROM PUBLISHER]
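As a rough illustration of a space partitioning tree driven by linear projections, the sketch below splits each node at the median of a data-driven direction. The top principal direction stands in for the paper's unsupervised kernel-dimension-reduction direction, and all names here are illustrative assumptions, not the KDR-tree implementation.

```python
import numpy as np

def build_projection_tree(X, leaf_size=16):
    """Recursively partition samples by a hyperplane orthogonal to a
    data-driven projection direction (top principal direction here)."""
    if len(X) <= leaf_size:
        return {"leaf": True, "points": X}
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    w = Vt[0]                      # dominant projection direction
    proj = X @ w
    thr = np.median(proj)
    left, right = X[proj <= thr], X[proj > thr]
    if len(left) == 0 or len(right) == 0:  # degenerate split: stop
        return {"leaf": True, "points": X}
    return {"leaf": False, "w": w, "thr": thr,
            "left": build_projection_tree(left, leaf_size),
            "right": build_projection_tree(right, leaf_size)}

def query_leaf(tree, x):
    """Descend to the leaf cell containing the query point x."""
    while not tree["leaf"]:
        tree = tree["left"] if x @ tree["w"] <= tree["thr"] else tree["right"]
    return tree["points"]
```

Queries reuse the stored direction and threshold at each node, so locating a cell costs one dot product per level, which is the accelerated-query property the abstract refers to.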
- Published
- 2016
- Full Text
- View/download PDF
42. Further Applications
- Author
-
Flenner, Hubert, O’Carroll, Liam, Vogel, Wolfgang, Flenner, Hubert, O’Carroll, Liam, and Vogel, Wolfgang
- Published
- 1999
- Full Text
- View/download PDF
43. Linear regression based projections for dimensionality reduction.
- Author
-
Chen, Si-Bao, Luo, Bin, and Ding, Chris H.Q.
- Subjects
- *
REGRESSION analysis , *K-nearest neighbor classification , *IMAGE databases , *PATTERN recognition systems , *INFORMATION science - Abstract
Highlights: • Linear Regression based Projections (LRP) is proposed for dimensionality reduction. • LRP does not require manually choosing the neighborhood size when constructing the graph. • A discriminative L2-graph is computed using the label information of the training data. • Two types of weights are investigated to construct the discriminative L2-graph. • LRP is much faster since it computes edge weights using class-specific samples. Abstract: In graph embedding based dimensionality reduction methods, the number K of nearest neighbors usually has to be chosen manually in the high-dimensional space. The constructed graph changes dramatically with different choices of K, which seriously affects the performance of graph embedding based dimensionality reduction, so automatic graph construction is important. In this paper, first, a discriminative L2-graph is investigated: it computes the edge weights using class-specific samples and weighted ridge regression, avoiding the manual choice of K-nearest neighbors in traditional graph construction. Second, a discriminative L2-graph based dimensionality reduction method is proposed, named Linear Regression based Projections (LRP). LRP minimizes the ratio between the local compactness information and the total separability information to seek the optimal projection matrix. LRP is much faster than its counterparts, Sparsity Preserving Projections (SPP) and Collaborative Representation based Projections (CRP), since LRP is supervised and computes edge weights using class-specific samples, while SPP and CRP are unsupervised and compute edge weights using all samples. Experimental results on benchmark face image databases show that the proposed LRP outperforms many existing representative linear dimensionality reduction methods. [ABSTRACT FROM AUTHOR]
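The L2-graph construction described above can be sketched as follows: each sample is reconstructed by ridge regression on the other samples of its own class, so the edge weights come from a closed-form solve and no neighborhood size K is needed. The function name, regularization value, and exact weighting are illustrative assumptions rather than the paper's precise formulation.

```python
import numpy as np

def l2_graph_weights(X, y, lam=0.1):
    """Class-specific L2-graph: row i holds the ridge-regression
    coefficients reconstructing X[i] from the other same-class samples.
    X: (n, d) samples; y: (n,) integer class labels."""
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.flatnonzero((y == y[i]) & (np.arange(n) != i))
        if idx.size == 0:
            continue
        D = X[idx].T                         # dictionary of same-class samples
        G = D.T @ D + lam * np.eye(idx.size)
        W[i, idx] = np.linalg.solve(G, D.T @ X[i])  # (D'D + lam I)^-1 D' x_i
    return W
```

Because each solve uses only the samples of one class, the per-sample system is small, which matches the abstract's claim that class-specific weights make LRP much faster than all-sample methods such as SPP and CRP.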
- Published
- 2018
- Full Text
- View/download PDF
44. ON LINEAR PROJECTIONS OF QUADRATIC VARIETIES.
- Author
-
BRODMANN, MARKUS and PARK, EUISUNG
- Subjects
- *
QUADRICS , *LOCUS (Mathematics) , *SECANT function , *MORPHISMS (Mathematics) , *QUADRATIC equations - Abstract
We study simple outer linear projections of projective varieties whose homogeneous vanishing ideal is defined by quadrics which satisfy the condition K2. We extend results on simple outer linear projections of rational normal scrolls. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
45. Nilpotent Approximations and Optimal Trajectories
- Author
-
Bressan, Alberto, Byrnes, Christopher I., editor, Amari, S.-I., editor, Anderson, B. D. O., editor, Åström, Karl Johan, editor, Aubin, Jean-Pierre, editor, Banks, H. T., editor, Baras, John S., editor, Bensoussan, A., editor, Burns, John, editor, Chen, Han-Fu, editor, Davis, M. H. A., editor, Fleming, Wendell, editor, Fliess, Michel, editor, Glover, Keith, editor, Hinrichsen, Diederich, editor, Isidori, Alberto, editor, Jakubczyk, B., editor, Kimura, Hidenori, editor, Krener, Arthur J., editor, Kunita, H., editor, Kurzhansky, Alexandre, editor, Kushner, Harold J., editor, Lindquist, Anders, editor, Manitius, Andrzej, editor, Martin, Clyde F., editor, Mitter, Sanjoy, editor, Picci, Giorgio, editor, Pshenichnyj, Boris, editor, Sussmann, H. J., editor, Tarn, T. J., editor, Tikhomirov, V. M., editor, Varaiya, Pravin P., editor, Willems, Jan C., editor, Wonham, W. M., editor, Bonnard, Bernard, editor, Bride, Bernard, editor, Gauthier, Jean-Paul, editor, and Kupka, Ivan, editor
- Published
- 1991
- Full Text
- View/download PDF
46. Costs of AIDS in a Developing Area: Indirect and Direct Costs of AIDS in Puerto Rico
- Author
-
Shepard, D. S., Davis, K., editor, van Eimeren, W., editor, Schwefel, Detlef, editor, Leidl, Reiner, editor, Rovira, Joan, editor, and Drummond, Michael F., editor
- Published
- 1990
- Full Text
- View/download PDF
47. Fire in the Tropical Rain Forest of the Amazon Basin
- Author
-
Fearnside, P. M., Billings, W. D., editor, Golley, F., editor, Lange, O. L., editor, Olson, J. S., editor, Remmert, H., editor, and Goldammer, Johann Georg, editor
- Published
- 1990
- Full Text
- View/download PDF
48. A linear projection approach to environment modeling for robust speech recognition.
- Author
-
Tsao, Yu, Huang, Chien-Lin, Matsuda, Shigeki, Hori, Chiori, and Kashioka, Hideki
- Abstract
Use of a linear projection (LP) function to transform multiple sets of acoustic models into a single set of acoustic models is proposed for characterizing testing environments in robust automatic speech recognition. The LP function extends the linear regression (LR) function used in maximum likelihood linear regression (MLLR) and maximum a posteriori linear regression (MAPLR) by incorporating local information in the ensemble acoustic space to enhance the environment modeling capacity. To estimate the nuisance parameters of the LP function, we developed maximum likelihood LP (MLLP) and maximum a posteriori LP (MAPLP) and derived a set of integrated prior (IP) densities for MAPLP. The IP densities integrate multiple knowledge sources from the training set, previously seen speech data, the current utterance, and a prepared tree structure. We evaluated the proposed MLLP and MAPLP on the Aurora-2 database in an unsupervised model adaptation manner. Experimental results show that the LP function outperforms the LR function with both ML- and MAP-based estimates over different test conditions. Moreover, because the MAP-based estimate handles over-fitting well, MAPLP shows clear improvements over MLLP. Compared to the baseline result, MAPLP provides a significant 10.99% word error rate reduction. [ABSTRACT FROM PUBLISHER]
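The LR function that LP generalizes is the MLLR-style affine transform of Gaussian means, mu' = A mu + b. Under simplifying assumptions (identity covariances, hard frame-to-Gaussian assignments) its ML estimate reduces to ordinary least squares, as the sketch below shows; the function name and data layout are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def estimate_mllr_transform(means, frames, assign):
    """Least-squares estimate of an affine transform mu' = A mu + b that
    moves model means toward adaptation frames (MLLR mean update with
    identity covariances). means: (K, d) Gaussian means; frames: (T, d)
    adaptation observations; assign: (T,) frame-to-Gaussian indices."""
    K, d = means.shape
    # extended means [mu, 1] so A and b are solved jointly
    ext = np.hstack([means[assign], np.ones((len(frames), 1))])  # (T, d+1)
    # solve frames ≈ ext @ W for W = [A^T; b^T]
    W, *_ = np.linalg.lstsq(ext, frames, rcond=None)
    A, b = W[:d].T, W[d]
    return A, b
```

The LP extension in the paper replaces this single global regression with projections that weight local regions of the ensemble acoustic space; the least-squares core stays the same shape.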
- Published
- 2012
- Full Text
- View/download PDF
49. Linear projections of joint symmetry and independence applied to exact testing treatment effects based on multidimensional outcomes.
- Author
-
Vexler, Albert and Zou, Li
- Subjects
- *
DISTRIBUTION (Probability theory) , *TREATMENT effectiveness , *SYMMETRY , *DATA distribution , *ABSOLUTE value , *RANDOM variables - Abstract
The growing need to analyze multivariate aspects of joint data distributions is reinforced by a diversity of experiments based on dependent outcomes. In this sense, different contexts of joint symmetry of data distributions have been dealt with extensively in both theory and practice. Univariate characterizations of properties of multivariate distributions can reduce the original problem to a substantially simpler one. We focus on research scenarios in which the vectors x and Ax are identically distributed, where A is a diagonal matrix whose elements have absolute value one. It is shown that these scenarios are attractive in new characterizations of joint or mutual independence between random variables. We establish projections of joint symmetry and independence via the one-dimensional symmetry of linear combinations of x's components and their interactions. These projections are the most revealing of the multivariate data distribution. The usefulness of the linear projections is exemplified by constructing an efficient nonparametric exact test for joint treatment effects, and an algorithm for implementing linear projection-based tests is proven in this framework. Numerical studies based on generated vectors and a real dataset show that the proposed test can exhibit high and stable power characteristics. The present method can also be used for testing independence between symmetric random vectors. [ABSTRACT FROM AUTHOR]
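A one-dimensional instance of the linear-projection idea is an exact sign-flip test: when x and Ax are identically distributed with A = -I, every sign pattern of the projected values w·x_i is equally likely under the null, so a Monte Carlo p-value can be computed by random sign flips. This is a minimal sketch under that special case, not the paper's full multivariate procedure.

```python
import numpy as np

def sign_flip_test(X, w, n_perm=999, seed=0):
    """Exact sign-flip test of symmetry about zero for the projection
    t_i = w . x_i. Returns a Monte Carlo p-value for H0: x =d -x."""
    rng = np.random.default_rng(seed)
    t = X @ w
    stat = abs(t.mean())
    # under H0, flipping signs of t leaves its distribution unchanged
    flips = rng.choice([-1.0, 1.0], size=(n_perm, len(t)))
    null = np.abs((flips * t).mean(axis=1))
    # add-one correction keeps the p-value exact and strictly positive
    return (1 + np.sum(null >= stat)) / (n_perm + 1)
```

Because the null distribution is generated by the symmetry itself, the test is distribution-free and exact at any sample size, which is the property the abstract exploits for testing treatment effects.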
- Published
- 2022
- Full Text
- View/download PDF
50. GENERIC INNER PROJECTIONS OF PROJECTIVE VARIETIES AND AN APPLICATION TO THE POSITIVITY OF DOUBLE POINT DIVISORS.
- Author
-
NOMA, ATSUSHI
- Subjects
- *
VERONESE surfaces , *GRASSMANN manifolds , *ZARISKI surfaces , *METRIC projections , *ALGEBRAIC equations - Abstract
Let X ⊆ ℙ^N be a smooth nondegenerate projective variety of dimension n ≥ 2, codimension e and degree d, with canonical line bundle ω_X, defined over an algebraically closed field of characteristic zero. The purpose here is to prove that the base locus of |O_X(d - n - e - 1) ⊗ ω_X^∨| is at most a finite set, except in a few cases. To describe the exceptional cases, we classify (not necessarily smooth) projective varieties whose generic inner projections have exceptional divisors. As applications, we prove the (d - e)-regularity of O_X, Property (N_{k-d+e}) for O_X(k), and inequalities for the delta and sectional genera. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF