5 results for "She, Yiyuan"
Search Results
2. Reinforced Robust Principal Component Pursuit.
- Author
- Brahma, Pratik Prabhanjan, She, Yiyuan, Li, Shijie, Li, Jiade, and Wu, Dapeng
- Subjects
- *MULTIPLE correspondence analysis (Statistics), *HUMAN facial recognition software, *MANIFOLDS (Mathematics), *SUBSPACE identification (Mathematics), *BIG data
- Abstract
High-dimensional real-world data are often corrupted by noise and gross outliers, and principal component analysis (PCA) fails to learn the true low-dimensional subspace in such cases. This is why robust versions of PCA, which penalize arbitrarily large outlying entries, are preferred for dimension reduction. In this paper, we argue that it is necessary to study the presence of outliers not only in the observed data matrix but also in the orthogonal complement of the authentic principal subspace; the latter can seriously skew the estimation of the principal components. A reinforced robustification of principal component pursuit is designed to identify both types of outliers and eliminate their influence on the final subspace estimate. Simulation results under different design situations clearly show the superiority of the proposed method over other popular implementations of robust PCA. The paper also showcases applications of the method in challenging face recognition and video background subtraction scenarios. Along with approximating a usable low-dimensional subspace from real-world data sets, the technique can capture semantically meaningful outliers. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
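The principal component pursuit framework that this paper reinforces decomposes an observed matrix M into a low-rank part L and a sparse outlier part S. As a rough illustration only, here is a minimal NumPy sketch of a classical alternating-shrinkage heuristic for plain principal component pursuit — not the reinforced method of the paper — where the λ = 1/√max(m, n) weight and the shrinkage level are conventional assumptions:

```python
import numpy as np

def soft_threshold(X, tau):
    # Entrywise soft thresholding: shrinks toward zero, promoting sparsity.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp(M, lam=None, mu=None, n_iter=200):
    # Alternating-shrinkage heuristic for principal component pursuit:
    # split M into a low-rank part L and a sparse outlier part S.
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))  # conventional PCP weight (assumption)
    if mu is None:
        mu = 0.25 * np.abs(M).mean()    # shrinkage level (heuristic)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Singular-value thresholding of M - S yields the low-rank estimate.
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U @ np.diag(soft_threshold(sig, mu)) @ Vt
        # Entrywise shrinkage of the residual yields the sparse outliers.
        S = soft_threshold(M - L, lam * mu)
    return L, S
```

By the soft-thresholding rule, the final residual M - L - S is bounded entrywise by lam * mu, so the two parts always account for essentially all of M.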
3. Why Deep Learning Works: A Manifold Disentanglement Perspective.
- Author
- Brahma, Pratik Prabhanjan, Wu, Dapeng, and She, Yiyuan
- Subjects
- MACHINE learning, ARTIFICIAL neural networks, REPRESENTATION theory, DATA mining, PERFORMANCE evaluation
- Abstract
Deep hierarchical representations of the data have been found to provide more informative features for several machine learning applications. In addition, multilayer neural networks surprisingly tend to achieve better performance when subjected to unsupervised pretraining. The success of deep learning has motivated researchers to identify the factors that contribute to it. One proposed explanation is the flattening of manifold-shaped data in the higher layers of neural networks. However, it is not clear how to measure the flattening of such manifold-shaped data or how much flattening a deep neural network can achieve. This paper provides, for the first time, quantitative evidence for the flattening hypothesis. To this end, we propose several quantities for measuring manifold entanglement under certain assumptions and conduct experiments with both synthetic and real-world data. Our experimental results validate the proposition and yield new insights into deep learning. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
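The paper proposes its own entanglement measures; as a generic, hypothetical proxy for the "flatness" of a representation, one can ask what fraction of a point cloud's variance is captured by the best d-dimensional linear subspace (via PCA) — a fully flattened manifold scores near 1:

```python
import numpy as np

def linear_capture_ratio(X, d):
    # Fraction of total variance captured by the best d-dimensional linear
    # subspace of the (centered) point cloud X, computed from the singular
    # value spectrum. Values near 1 indicate a nearly flat point cloud.
    Xc = X - X.mean(axis=0)
    sig = np.linalg.svd(Xc, compute_uv=False)
    energy = sig ** 2
    return energy[:d].sum() / energy.sum()
```

For example, a unit circle in the plane scores exactly 0.5 for d = 1 (its variance splits evenly across two directions), while a straight segment scores 1 — so "unrolling" a curled manifold raises the ratio.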
4. Learning Topology and Dynamics of Large Recurrent Neural Networks.
- Author
- She, Yiyuan, He, Yuejia, and Wu, Dapeng
- Subjects
- *ARTIFICIAL neural networks, *LYAPUNOV functions, *DYNAMICAL systems, *TOPOLOGY, *STOCHASTIC convergence
- Abstract
Large-scale recurrent networks have drawn increasing attention recently because of their capability to model a large variety of real-world phenomena and physical mechanisms. This paper studies how to identify all authentic connections and estimate the system parameters of a recurrent network, given a sequence of node observations. This task becomes extremely challenging in modern network applications because the available observations are usually very noisy and limited, and the associated dynamical system is strongly nonlinear. By formulating the problem as multivariate sparse sigmoidal regression, we develop simple-to-implement network learning algorithms, with rigorous theoretical convergence guarantees, for a variety of sparsity-promoting penalty forms. A quantile variant of progressive recurrent network screening is proposed for efficient computation; it allows direct cardinality control of the network topology during estimation. Moreover, we investigate recurrent network stability conditions in Lyapunov's sense and integrate such stability constraints into sparse network learning. Experiments show excellent performance of the proposed algorithms in network topology identification and forecasting. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
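The abstract's "direct cardinality control of network topology" amounts to keeping only a fixed number of the largest-magnitude connections. Here is a hypothetical sketch of such a quantile-style screening step — the helper name and tie handling are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def quantile_screen(W, k):
    # Keep only the k largest-magnitude entries of the weight matrix W
    # (direct cardinality control of the network topology); zero the rest.
    # Ties at the threshold magnitude may retain slightly more than k entries.
    flat = np.abs(W).ravel()
    if k >= flat.size:
        return W.copy()
    thresh = np.partition(flat, -k)[-k]  # k-th largest magnitude
    return np.where(np.abs(W) >= thresh, W, 0.0)
```

Applied after each estimation sweep, a step like this enforces a sparse topology without tuning a penalty weight to hit a target edge count.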
5. The Group Square-Root Lasso: Theoretical Properties and Fast Algorithms.
- Author
- Bunea, Florentina, Lederer, Johannes, and She, Yiyuan
- Subjects
- SQUARE root, COMPUTER algorithms, PARAMETER estimation, REGRESSION analysis, PREDICTION theory, STOCHASTIC convergence
- Abstract
We introduce and study the group square-root lasso (GSRL) method for estimation in high-dimensional sparse regression models with group structure. The new estimator minimizes the square root of the residual sum of squares plus a penalty term proportional to the sum of the Euclidean norms of groups of the regression parameter vector. The net advantage of the method over existing group lasso-type procedures lies in the form of the proportionality factor used in the penalty term, which for GSRL is independent of the variance of the error terms. This is of crucial importance in models with more parameters than the sample size, where estimating the noise variance becomes as difficult as the original problem. We show that the GSRL estimator adapts to the unknown sparsity of the regression vector and has the same optimal estimation and prediction accuracy as group lasso (GL) estimators, under the same minimal conditions on the model. This extends the results recently established for the square-root lasso for sparse regression without group structure. Moreover, as a new type of result for square-root lasso methods, with or without groups, we study correct pattern recovery and show that it can be achieved under conditions similar to those needed by lasso- or group-lasso-type methods, but with a simplified tuning strategy. We implement our method via a new algorithm, with proved convergence properties, which, unlike existing methods, scales well with the dimension of the problem. Our simulation studies strongly support our theoretical findings. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
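The GSRL objective described above — the square root of the residual sum of squares plus a penalty proportional to the sum of group Euclidean norms — can be written down directly. A minimal sketch, assuming a 1/√n normalization of the loss term (the paper's exact scaling may differ):

```python
import numpy as np

def gsrl_objective(beta, X, y, groups, lam):
    # Group square-root lasso objective:
    #   ||y - X beta||_2 / sqrt(n) + lam * sum_g ||beta_g||_2.
    # Because the loss is the *root* of the RSS, the tuning parameter lam
    # can be chosen without knowing the noise variance (the paper's key point).
    n = len(y)
    loss = np.linalg.norm(y - X @ beta) / np.sqrt(n)
    penalty = sum(np.linalg.norm(beta[g]) for g in groups)
    return loss + lam * penalty
```

Here `groups` is a list of index lists partitioning the coefficients; minimizing this objective (e.g. by a proximal method) yields the GSRL estimator.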
Discovery Service for Jio Institute Digital Library