1,971 results for "Cichocki, Andrzej"
Search Results
52. Cross-Modal Attention Preservation with Self-Contrastive Learning for Composed Query-Based Image Retrieval
- Author
-
Li, Shenshen, primary, Xu, Xing, additional, Jiang, Xun, additional, Shen, Fumin, additional, Sun, Zhe, additional, and Cichocki, Andrzej, additional
- Published
- 2024
- Full Text
- View/download PDF
53. Corrections to “Randomized Algorithms for Computation of Tucker Decomposition and Higher Order SVD (HOSVD)”
- Author
-
Ahmadi-Asl, Salman, primary, Abukhovich, Stanislav, additional, Asante-Mensah, Maame G., additional, Cichocki, Andrzej, additional, Phan, Anh Huy, additional, Tanaka, Toshihisa, additional, and Oseledets, Ivan, additional
- Published
- 2024
- Full Text
- View/download PDF
54. Leveraging Spatio-Temporal Estimation for Online Adaptive Steady-State Visual Evoked Potential Recognition
- Author
-
Jin, Jing, primary, He, Xinjie, additional, Allison, Brendan Z., additional, Qin, Ke, additional, Wang, Xingyu, additional, and Cichocki, Andrzej, additional
- Published
- 2024
- Full Text
- View/download PDF
55. A Time-Local Weighted Transformation Recognition Framework for Steady-State Visual Evoked Potential-Based Brain-Computer Interfaces
- Author
-
Qin, Ke, primary, Xu, Ren, additional, Li, Shurui, additional, Wang, Xingyu, additional, Cichocki, Andrzej, additional, and Jin, Jing, additional
- Published
- 2024
- Full Text
- View/download PDF
56. MOCNN: A Multiscale Deep Convolutional Neural Network for ERP-Based Brain-Computer Interfaces
- Author
-
Jin, Jing, primary, Xu, Ruitian, additional, Daly, Ian, additional, Zhao, Xueqing, additional, Wang, Xingyu, additional, and Cichocki, Andrzej, additional
- Published
- 2024
- Full Text
- View/download PDF
57. A Survey of Neurodynamic Optimization
- Author
-
Xia, Youshen, primary, Liu, Qingshan, additional, Wang, Jun, additional, and Cichocki, Andrzej, additional
- Published
- 2024
- Full Text
- View/download PDF
58. On the robustness of EEG tensor completion methods
- Author
-
Duan, Feng, Jia, Hao, Zhang, ZhiWen, Feng, Fan, Tan, Ying, Dai, YangYang, Cichocki, Andrzej, Yang, ZhengLu, Caiafa, Cesar F., Sun, Zhe, and Solé-Casals, Jordi
- Published
- 2021
- Full Text
- View/download PDF
59. Tensor Networks for Latent Variable Analysis. Part I: Algorithms for Tensor Train Decomposition
- Author
-
Phan, Anh-Huy, Cichocki, Andrzej, Uschmajew, Andre, Tichavsky, Petr, Luta, George, and Mandic, Danilo
- Subjects
Computer Science - Numerical Analysis, Mathematics - Optimization and Control
- Abstract
Decompositions of tensors into factor matrices, which interact through a core tensor, have found numerous applications in signal processing and machine learning. A more general tensor model which represents data as an ordered network of sub-tensors of order-2 or order-3 has, so far, not been widely considered in these fields, although this so-called tensor network decomposition has been long studied in quantum physics and scientific computing. In this study, we present novel algorithms and applications of tensor network decompositions, with a particular focus on the tensor train decomposition and its variants. The novel algorithms developed for the tensor train decomposition update, in an alternating way, one or several core tensors at each iteration, and exhibit enhanced mathematical tractability and scalability to exceedingly large-scale data tensors. The proposed algorithms are tested in classic paradigms of blind source separation from a single mixture, denoising, and feature extraction, and achieve superior performance over the widely used truncated algorithms for tensor train decomposition.
- Published
- 2016
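As context for the entry above: the tensor train format it builds on is conventionally computed by the truncated TT-SVD (sequential SVDs of unfoldings), which is also the baseline the abstract compares against. The sketch below is a minimal, illustrative NumPy version of that standard procedure, not the authors' alternating algorithm; the function names `tt_svd` and `tt_full` are our own.

```python
import numpy as np

def tt_svd(x, eps=1e-10):
    """Truncated TT-SVD: sequential SVDs of the unfoldings of x.

    Returns cores G_k of shape (r_{k-1}, I_k, r_k), with r_0 = r_N = 1.
    """
    dims, cores, r_prev = x.shape, [], 1
    rest = x
    for k in range(len(dims) - 1):
        mat = rest.reshape(r_prev * dims[k], -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))  # drop negligible singular values
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        rest = s[:r, None] * vt[:r]              # carry the remainder forward
        r_prev = r
    cores.append(rest.reshape(r_prev, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into a dense tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

With `eps` small enough that nothing is truncated, `tt_full(tt_svd(x))` reproduces `x` to machine precision; raising `eps` trades accuracy for lower TT-ranks.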
60. Optimized motor imagery paradigm based on imagining Chinese characters writing movement
- Author
-
Qiu, Zhaoyang, Allison, Brendan Z., Jin, Jing, Zhang, Yu, Wang, Xingyu, Li, Wei, and Cichocki, Andrzej
- Subjects
Computer Science - Human-Computer Interaction, Quantitative Biology - Neurons and Cognition
- Abstract
Motor imagery (MI) is a mental representation of motor behavior that has been widely used as a control method for a brain-computer interface (BCI), allowing communication for the physically impaired. The performance of an MI-based BCI mainly depends on the subject's ability to self-modulate EEG signals. Proper training can help naive subjects learn to modulate brain activity proficiently. However, training subjects typically involves abstract motor tasks and is time-consuming. To improve the performance of naive subjects during motor imagery, a novel paradigm was presented that guides naive subjects to modulate brain activity effectively. In this new paradigm, pictures of the left or right hand were used as cues for subjects to perform the motor imagery task. Fourteen healthy subjects (11 male, aged 22-25 years, mean 23.6+/-1.16) participated in this study. The task was to imagine writing a Chinese character. Specifically, subjects could imagine hand movements following the sequence of writing strokes in the Chinese character. This paradigm was meant to find an effective and familiar action for most Chinese people, providing them with a specific, extensively practiced task to help them modulate brain activity. Results showed that the writing task paradigm yielded significantly better performance than the traditional arrow paradigm (p<0.001). Questionnaire replies indicated that most subjects found the new paradigm easier and more comfortable. The proposed motor imagery paradigm can thus guide subjects to modulate brain activity effectively, with significant improvements over the traditional paradigm in both classification accuracy and usability.
- Published
- 2016
61. Tensor Ring Decomposition
- Author
-
Zhao, Qibin, Zhou, Guoxu, Xie, Shengli, Zhang, Liqing, and Cichocki, Andrzej
- Subjects
Computer Science - Numerical Analysis, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Data Structures and Algorithms
- Abstract
Tensor networks have in recent years emerged as powerful tools for solving large-scale optimization problems. One of the most popular tensor networks is the tensor train (TT) decomposition, which acts as a building block for more complicated tensor networks. However, the TT decomposition depends strongly on permutations of the tensor dimensions, owing to its strictly sequential multilinear products over latent cores, which makes finding the optimal TT representation difficult. In this paper, we introduce a fundamental tensor decomposition model that represents a large-dimensional tensor by circular multilinear products over a sequence of low-dimensional cores. It can be graphically interpreted as a cyclic interconnection of 3rd-order tensors and is thus termed the tensor ring (TR) decomposition. The key advantage of the TR model is its circular dimensional permutation invariance, which is gained by employing the trace operation and treating the latent cores equivalently. The TR model can be viewed as a linear combination of TT decompositions, thereby obtaining powerful and generalized representation ability. For optimization of the latent cores, we present four different algorithms based on sequential SVDs, the ALS scheme, and block-wise ALS techniques. Furthermore, we investigate the mathematical properties of the TR model, showing that basic multilinear algebra can be performed efficiently using TR representations and that classical tensor decompositions can be conveniently transformed into the TR representation. Finally, experiments on both synthetic signals and real-world datasets were conducted to evaluate the performance of the different algorithms.
- Published
- 2016
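The circular permutation invariance claimed in the abstract above follows from the cyclic invariance of the trace, and can be checked numerically. The contraction below is an illustrative sketch (our own helper `tr_full`), not one of the paper's four fitting algorithms:

```python
import numpy as np

def tr_full(cores):
    """Contract tensor ring cores G_k of shape (r_{k-1}, I_k, r_k) into the
    full tensor X[i1,...,iN] = trace(G_1[:, i1, :] @ ... @ G_N[:, iN, :])."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return np.trace(out, axis1=0, axis2=-1)  # the trace closes the ring

rng = np.random.default_rng(0)
dims, ranks = [4, 5, 6], [2, 3, 2]  # ranks wrap around: cores[0] starts at r_N
cores = [rng.standard_normal((ranks[k - 1], dims[k], ranks[k])) for k in range(3)]
x = tr_full(cores)
# Tr(ABC) = Tr(BCA): cyclically shifting the cores cyclically permutes the modes
assert np.allclose(tr_full(cores[1:] + cores[:1]), np.transpose(x, (1, 2, 0)))
```

In a plain TT decomposition the boundary ranks are pinned to 1, so the analogous shift is not available; the trace is exactly what buys the invariance.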
62. Numerical CP Decomposition of Some Difficult Tensors
- Author
-
Tichavsky, Petr, Phan, Anh Huy, and Cichocki, Andrzej
- Subjects
Mathematics - Numerical Analysis, Computer Science - Numerical Analysis, Statistics - Computation
- Abstract
In this paper, a numerical method is proposed for canonical polyadic (CP) decomposition of small-size tensors. The focus is primarily on decomposition of tensors that correspond to small matrix multiplications. Here, the rank of the tensors is equal to the smallest number of scalar multiplications that are necessary to accomplish the matrix multiplication. The proposed method is based on a constrained Levenberg-Marquardt optimization. Numerical results indicate the rank and border ranks of tensors that correspond to multiplication of matrices of sizes 2x3 and 3x2, 3x3 and 3x2, 3x3 and 3x3, and 3x4 and 4x3. The ranks are 11, 15, 23 and 29, respectively. In particular, a novel algorithm for multiplying matrices of sizes 3x3 and 3x2 with 15 multiplications is presented.
- Published
- 2016
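The connection between CP rank and multiplication count used in the entry above is easiest to see in the classic 2x2 case (not one of the sizes studied in the paper), where the rank is 7, realized by Strassen's algorithm. The sketch below builds the 2x2 matrix multiplication tensor and checks Strassen's rank-7 CP factors against it; the arrays `U`, `V`, `W` are transcribed from Strassen's well-known formulas:

```python
import numpy as np

# Multiplication tensor <2,2,2>: T[a, b, c] = 1 iff vec(A)[a] * vec(B)[b]
# contributes to vec(C)[c] in C = A @ B (row-major vec: A[i, j] -> a = 2*i + j).
T = np.zeros((4, 4, 4))
for i in range(2):
    for j in range(2):
        for k in range(2):
            T[2 * i + j, 2 * j + k, 2 * i + k] = 1

# Strassen's rank-7 decomposition: column r gives the coefficients of A and B
# in the product M_r, and of M_r in the entries of C.
U = np.array([[1, 0, 1, 0, 1, -1, 0],   # A11
              [0, 0, 0, 0, 1, 0, 1],    # A12
              [0, 1, 0, 0, 0, 1, 0],    # A21
              [1, 1, 0, 1, 0, 0, -1]])  # A22
V = np.array([[1, 1, 0, -1, 0, 1, 0],   # B11
              [0, 0, 1, 0, 0, 1, 0],    # B12
              [0, 0, 0, 1, 0, 0, 1],    # B21
              [1, 0, -1, 0, 1, 0, 1]])  # B22
W = np.array([[1, 0, 0, 1, -1, 0, 1],   # C11
              [0, 0, 1, 0, 1, 0, 0],    # C12
              [0, 1, 0, 1, 0, 0, 0],    # C21
              [1, -1, 1, 0, 0, 1, 0]])  # C22

# the sum of 7 rank-1 terms reproduces the multiplication tensor exactly
assert np.allclose(np.einsum('ar,br,cr->abc', U, V, W), T)
```

The paper's Levenberg-Marquardt search looks for exactly this kind of exact rank-R factorization for the larger tensors it studies.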
63. MERACLE: Constructive Layer-Wise Conversion of a Tensor Train into a MERA
- Author
-
Batselier, Kim, Cichocki, Andrzej, and Wong, Ngai
- Published
- 2021
- Full Text
- View/download PDF
64. Linked Component Analysis from Matrices to High Order Tensors: Applications to Biomedical Data
- Author
-
Zhou, Guoxu, Zhao, Qibin, Zhang, Yu, Adalı, Tülay, Xie, Shengli, and Cichocki, Andrzej
- Subjects
Computer Science - Computational Engineering, Finance, and Science, Computer Science - Learning, Computer Science - Numerical Analysis
- Abstract
With the increasing availability of various sensor technologies, we now have access to large amounts of multi-block (also called multi-set, multi-relational, or multi-view) data that need to be jointly analyzed to explore their latent connections. Various component analysis methods have played an increasingly important role in the analysis of such coupled data. In this paper, we first provide a brief review of existing matrix-based (two-way) component analysis methods for the joint analysis of such data with a focus on biomedical applications. Then, we discuss their important extensions and generalization to multi-block multiway (tensor) data. We show how constrained multi-block tensor decomposition methods are able to extract similar or statistically dependent common features that are shared by all blocks, by incorporating the multiway nature of data. Special emphasis is given to the flexible common and individual feature analysis of multi-block data, with the aim of simultaneously extracting common and individual latent components with desired properties and types of diversity. Illustrative examples are given to demonstrate their effectiveness for biomedical data analysis., Comment: 20 pages, 11 figures, Proceedings of the IEEE, 2015
- Published
- 2015
- Full Text
- View/download PDF
65. Tensor Deflation for CANDECOMP/PARAFAC. Part 3: Rank Splitting
- Author
-
Phan, Anh-Huy, Tichavsky, Petr, and Cichocki, Andrzej
- Subjects
Computer Science - Numerical Analysis, Mathematics - Optimization and Control
- Abstract
CANDECOMP/PARAFAC decomposition (CPD) approximates multiway data by a sum of rank-1 tensors. Our recent study presented a method for rank-1 tensor deflation, i.e., sequential extraction of the rank-1 components. In this paper, we extend the method to the block deflation problem. When at least two factor matrices have full column rank, one can extract two rank-1 tensors simultaneously, and the rank of the data tensor is reduced by 2. For decomposition of order-3 tensors of size R x R x R and rank R, the block deflation has a complexity of O(R^3) per iteration, which is lower than the cost O(R^4) of the ALS algorithm for the overall CPD.
- Published
- 2015
66. Regularized Computation of Approximate Pseudoinverse of Large Matrices Using Low-Rank Tensor Train Decompositions
- Author
-
Lee, Namgil and Cichocki, Andrzej
- Subjects
Mathematics - Numerical Analysis, Computer Science - Numerical Analysis, 15A09, 65F08, 65F20, 65F22
- Abstract
We propose a new method for low-rank approximation of Moore-Penrose pseudoinverses (MPPs) of large-scale matrices using tensor networks. The computed pseudoinverses can be useful for solving or preconditioning large-scale overdetermined or underdetermined systems of linear equations. The computation is performed efficiently and stably based on the modified alternating least squares (MALS) scheme using low-rank tensor train (TT) decompositions and tensor network contractions. The formulated large-scale optimization problem is reduced to sequential smaller-scale problems for which any standard and stable algorithms can be applied. A regularization technique is incorporated in order to alleviate ill-posedness and obtain robust low-rank approximations. Numerical simulation results illustrate that the regularized pseudoinverses of a wide class of non-square or nonsymmetric matrices admit good approximate low-rank TT representations. Moreover, we demonstrate that the computational cost of the proposed method is only logarithmic in the matrix size, given that the TT-ranks of a data matrix and its approximate pseudoinverse are bounded. It is illustrated that a strongly nonsymmetric convection-diffusion problem can be efficiently solved by using the preconditioners computed by the proposed method., Comment: 28 pages
- Published
- 2015
- Full Text
- View/download PDF
67. Smooth PARAFAC Decomposition for Tensor Completion
- Author
-
Yokota, Tatsuya, Zhao, Qibin, and Cichocki, Andrzej
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
In recent years, low-rank based tensor completion, which is a higher-order extension of matrix completion, has received considerable attention. However, the low-rank assumption is not sufficient for the recovery of visual data, such as color and 3D images, where the ratio of missing data is extremely high. In this paper, we consider "smoothness" constraints as well as low-rank approximations, and propose an efficient algorithm for performing tensor completion that is particularly powerful regarding visual data. The proposed method admits significant advantages, owing to the integration of smooth PARAFAC decomposition for incomplete tensors and the efficient selection of models in order to minimize the tensor rank. Thus, our proposed method is termed as "smooth PARAFAC tensor completion (SPC)." In order to impose the smoothness constraints, we employ two strategies, total variation (SPC-TV) and quadratic variation (SPC-QV), and invoke the corresponding algorithms for model learning. Extensive experimental evaluations on both synthetic and real-world visual data illustrate the significant improvements of our method, in terms of both prediction performance and efficiency, compared with many state-of-the-art tensor completion methods., Comment: 13 pages, 9 figures
- Published
- 2015
- Full Text
- View/download PDF
68. Bayesian Sparse Tucker Models for Dimension Reduction and Tensor Completion
- Author
-
Zhao, Qibin, Zhang, Liqing, and Cichocki, Andrzej
- Subjects
Computer Science - Learning, Computer Science - Numerical Analysis, Statistics - Machine Learning
- Abstract
Tucker decomposition is a cornerstone of modern machine learning for tensorial data analysis, and has attracted considerable attention for multiway feature extraction, compressive sensing, and tensor completion. The most challenging problem is the determination of model complexity (i.e., multilinear rank), especially when noise and missing data are present. In addition, existing methods cannot take into account uncertainty information of latent factors, resulting in low generalization performance. To address these issues, we present a class of probabilistic generative Tucker models for tensor decomposition and completion with structural sparsity over the multilinear latent space. To exploit structural sparse modeling, we introduce two group-sparsity-inducing priors via hierarchical representations of Laplace and Student-t distributions, which facilitates full posterior inference. For model learning, we derive variational Bayesian inference over all model (hyper)parameters, and develop efficient and scalable algorithms based on multilinear operations. Our methods can automatically adapt model complexity and infer an optimal multilinear rank by the principle of maximizing the lower bound of the model evidence. Experimental results and comparisons on synthetic, chemometrics, and neuroimaging data demonstrate the remarkable performance of our models in recovering the ground-truth multilinear rank and missing entries.
- Published
- 2015
69. Total Variation Regularized Tensor RPCA for Background Subtraction from Compressive Measurements
- Author
-
Cao, Wenfei, Wang, Yao, Sun, Jian, Meng, Deyu, Yang, Can, Cichocki, Andrzej, and Xu, Zongben
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Background subtraction has been a fundamental and widely studied task in video analysis, with a wide range of applications in video surveillance, teleconferencing and 3D modeling. Recently, motivated by compressive imaging, background subtraction from compressive measurements (BSCM) is becoming an active research task in video surveillance. In this paper, we propose a novel tensor-based robust PCA (TenRPCA) approach for BSCM by decomposing video frames into backgrounds with spatial-temporal correlations and foregrounds with spatio-temporal continuity in a tensor framework. In this approach, we use 3D total variation (TV) to enhance the spatio-temporal continuity of foregrounds, and Tucker decomposition to model the spatio-temporal correlations of video background. Based on this idea, we design a basic tensor RPCA model over the video frames, dubbed as the holistic TenRPCA model (H-TenRPCA). To characterize the correlations among the groups of similar 3D patches of video background, we further design a patch-group-based tensor RPCA model (PG-TenRPCA) by joint tensor Tucker decompositions of 3D patch groups for modeling the video background. Efficient algorithms using alternating direction method of multipliers (ADMM) are developed to solve the proposed models. Extensive experiments on simulated and real-world videos demonstrate the superiority of the proposed approaches over the existing state-of-the-art approaches., Comment: To appear in IEEE TIP
- Published
- 2015
- Full Text
- View/download PDF
70. Log-Determinant Divergences Revisited: Alpha--Beta and Gamma Log-Det Divergences
- Author
-
Cichocki, Andrzej, Cruces, Sergio, and Amari, Shun-Ichi
- Subjects
Statistics - Computation, Computer Science - Information Theory
- Abstract
In this paper, we review and extend a family of log-det divergences for symmetric positive definite (SPD) matrices and discuss their fundamental properties. We show how to generate from the parameterized Alpha-Beta (AB) and Gamma log-det divergences many well-known divergences, for example, Stein's loss, the S-divergence (also called the Jensen-Bregman LogDet (JBLD) divergence), the LogDet Zero (Bhattacharyya) divergence, and the Affine Invariant Riemannian Metric (AIRM), as well as some new divergences. Moreover, we establish links and correspondences among many log-det divergences and display them on the alpha-beta plane for various sets of parameters. Furthermore, this paper bridges these divergences and also shows their links to divergences of multivariate and multiway Gaussian distributions. Closed-form formulas are derived for gamma divergences of two multivariate Gaussian densities, including as special cases the Kullback-Leibler, Bhattacharyya, R\'enyi, and Cauchy-Schwarz divergences. Symmetrized versions of the log-det divergences are also discussed and reviewed. A class of divergences is extended to multiway divergences for separable covariance (precision) matrices., Comment: 35 pages, 4 figures
- Published
- 2014
- Full Text
- View/download PDF
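Two members of the family reviewed in the entry above, Stein's loss and the JBLD (S-) divergence, are easy to evaluate for SPD matrices. The snippet below is a sketch using their standard closed forms (our own helper names, not code from the paper):

```python
import numpy as np

def stein_loss(p, q):
    """Stein's (Burg-type) log-det divergence: tr(PQ^{-1}) - log det(PQ^{-1}) - n."""
    m = np.linalg.solve(q, p)            # Q^{-1} P, avoids an explicit inverse
    return np.trace(m) - np.linalg.slogdet(m)[1] - p.shape[0]

def jbld(p, q):
    """Jensen-Bregman LogDet (S-)divergence: log det((P+Q)/2) - (1/2) log det(PQ)."""
    ld = lambda a: np.linalg.slogdet(a)[1]
    return ld((p + q) / 2) - 0.5 * (ld(p) + ld(q))

rng = np.random.default_rng(0)
a, b = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
p, q = a @ a.T + np.eye(5), b @ b.T + np.eye(5)   # two SPD matrices
assert abs(stein_loss(p, p)) < 1e-10 and abs(jbld(p, p)) < 1e-10
assert stein_loss(p, q) > 0 and jbld(p, q) > 0    # nonnegative, zero iff P = Q
assert np.isclose(jbld(p, q), jbld(q, p))         # JBLD is symmetric
```

Using `slogdet` rather than `det` keeps the computation stable for large or badly scaled matrices, which matters for the covariance matrices these divergences are typically applied to.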
71. Decomposition of Big Tensors With Low Multilinear Rank
- Author
-
Zhou, Guoxu, Cichocki, Andrzej, and Xie, Shengli
- Subjects
Computer Science - Numerical Analysis, Computer Science - Distributed, Parallel, and Cluster Computing
- Abstract
Tensor decompositions are promising tools for big data analytics as they bring multiple modes and aspects of data into a unified framework, which allows us to discover complex internal structures and correlations of data. Unfortunately, most existing approaches are not designed to meet the major challenges posed by big data analytics. This paper attempts to improve the scalability of tensor decompositions and provides two contributions: a flexible and fast algorithm for the CP decomposition (FFCP) of tensors based on their Tucker compression, and a distributed randomized Tucker decomposition approach for arbitrarily big tensors with relatively low multilinear rank. These two algorithms can deal with huge tensors, even dense ones. Extensive simulations provide empirical evidence of the validity and efficiency of the proposed algorithms.
- Published
- 2014
72. Very Large-Scale Singular Value Decomposition Using Tensor Train Networks
- Author
-
Lee, Namgil and Cichocki, Andrzej
- Subjects
Mathematics - Numerical Analysis, Computer Science - Numerical Analysis, 15A18, 65F15, 65F30
- Abstract
We propose new algorithms for singular value decomposition (SVD) of very large-scale matrices based on a low-rank tensor approximation technique called the tensor train (TT) format. The proposed algorithms can compute several dominant singular values and corresponding singular vectors for large-scale structured matrices given in a TT format. The computational complexity of the proposed methods scales logarithmically with the matrix size under the assumption that both the matrix and the singular vectors admit low-rank TT decompositions. The proposed methods, which are called the alternating least squares for SVD (ALS-SVD) and modified alternating least squares for SVD (MALS-SVD), compute the left and right singular vectors approximately through block TT decompositions. The very large-scale optimization problem is reduced to sequential small-scale optimization problems, and each core tensor of the block TT decompositions can be updated by applying any standard optimization methods. The optimal ranks of the block TT decompositions are determined adaptively during the iteration process, so that high approximation accuracy can be achieved. Extensive numerical simulations are conducted for several types of TT-structured matrices, such as the Hilbert matrix, Toeplitz matrices, random matrices with prescribed singular values, and tridiagonal matrices. The simulation results demonstrate the effectiveness of the proposed methods compared with standard SVD algorithms and TT-based algorithms developed for symmetric eigenvalue decomposition.
- Published
- 2014
- Full Text
- View/download PDF
73. Canonical Polyadic Decomposition with Auxiliary Information for Brain Computer Interface
- Author
-
Li, Junhua, Li, Chao, and Cichocki, Andrzej
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Physiological signals are often organized along multiple dimensions (e.g., channel, time, task, and 3D voxel), so it is better to preserve the original organizational structure during processing. Unlike vector-based methods that destroy data structure, Canonical Polyadic Decomposition (CPD) processes physiological signals in the form of a multi-way array, which considers relationships between dimensions and preserves the structural information contained in the physiological signal. Currently, CPD is utilized as an unsupervised method for feature extraction in classification problems; a classifier, such as a support vector machine, is then required to classify those features. In this manner, the classification task is achieved in two isolated steps. We propose supervised Canonical Polyadic Decomposition, which directly incorporates auxiliary label information during decomposition, so that a classification task can be achieved without an extra classifier-training step. The proposed method merges decomposition and classifier learning, simplifying the classification procedure compared with performing decomposition and classification separately. To evaluate the performance of the proposed method, three different kinds of signals were used: synthetic signals, EEG signals, and MEG signals. The results of evaluations on synthetic and real signals demonstrate that the proposed method is effective and efficient.
- Published
- 2014
- Full Text
- View/download PDF
74. Bayesian Robust Tensor Factorization for Incomplete Multiway Data
- Author
-
Zhao, Qibin, Zhou, Guoxu, Zhang, Liqing, Cichocki, Andrzej, and Amari, Shun-ichi
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Learning
- Abstract
We propose a generative model for robust tensor factorization in the presence of both missing data and outliers. The objective is to explicitly infer the underlying low-CP-rank tensor capturing the global information and a sparse tensor capturing the local information (also considered as outliers), thus providing a robust predictive distribution over missing entries. The low-CP-rank tensor is modeled by multilinear interactions between multiple latent factors on which column sparsity is enforced by a hierarchical prior, while the sparse tensor is modeled by a hierarchical view of the Student-$t$ distribution that associates an individual hyperparameter with each element independently. For model learning, we develop an efficient closed-form variational inference under a fully Bayesian treatment, which can effectively prevent the overfitting problem and scales linearly with data size. In contrast to existing related works, our method can perform model selection automatically and implicitly, without the need to tune parameters. More specifically, it can discover the ground-truth CP rank and automatically adapt the sparsity-inducing priors to various types of outliers. In addition, the tradeoff between the low-rank approximation and the sparse representation can be optimized in the sense of maximum model evidence. Extensive experiments and comparisons with many state-of-the-art algorithms on both synthetic and real-world datasets demonstrate the superiority of our method from several perspectives., Comment: in IEEE Transactions on Neural Networks and Learning Systems, 2015
- Published
- 2014
- Full Text
- View/download PDF
75. Feature Learning from Incomplete EEG with Denoising Autoencoder
- Author
-
Li, Junhua, Struzik, Zbigniew, Zhang, Liqing, and Cichocki, Andrzej
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Quantitative Biology - Neurons and Cognition
- Abstract
An alternative pathway for the human brain to communicate with the outside world is by means of a brain computer interface (BCI). A BCI can decode electroencephalogram (EEG) signals of brain activities, and then send a command or an intent to an external interactive device, such as a wheelchair. The effectiveness of the BCI depends on the performance in decoding the EEG. Usually, the EEG is contaminated by different kinds of artefacts (e.g., electromyogram (EMG), background activity), which leads to a low decoding performance. A number of filtering methods can be utilized to remove or weaken the effects of artefacts, but they generally fail when the EEG contains extreme artefacts. In such cases, the most common approach is to discard the whole data segment containing extreme artefacts. This causes the fatal drawback that the BCI cannot output decoding results during that time. In order to solve this problem, we employ the Lomb-Scargle periodogram to estimate the spectral power from incomplete EEG (after removing only parts contaminated by artefacts), and Denoising Autoencoder (DAE) for learning. The proposed method is evaluated with motor imagery EEG data. The results show that our method can successfully decode incomplete EEG to good effect., Comment: The paper was accepted for publication by Neurocomputing
- Published
- 2014
- Full Text
- View/download PDF
76. A joint optimization framework to semi-supervised RVFL and ELM networks for efficient data classification
- Author
-
Peng, Yong, Li, Qingxi, Kong, Wanzeng, Qin, Feiwei, Zhang, Jianhai, and Cichocki, Andrzej
- Published
- 2020
- Full Text
- View/download PDF
77. EEG-based approach for recognizing human social emotion perception
- Author
-
Zhu, Li, Su, Chongwei, Zhang, Jianhai, Cui, Gaochao, Cichocki, Andrzej, Zhou, Changle, and Li, Junhua
- Published
- 2020
- Full Text
- View/download PDF
78. Efficient representations of EEG signals for SSVEP frequency recognition based on deep multiset CCA
- Author
-
Liu, Qianqian, Jiao, Yong, Miao, Yangyang, Zuo, Cili, Wang, Xingyu, Cichocki, Andrzej, and Jin, Jing
- Published
- 2020
- Full Text
- View/download PDF
79. International Federation of Clinical Neurophysiology (IFCN) – EEG research workgroup: Recommendations on frequency and topographic analysis of resting state EEG rhythms. Part 1: Applications in clinical research studies
- Author
-
Babiloni, Claudio, Barry, Robert J., Başar, Erol, Blinowska, Katarzyna J., Cichocki, Andrzej, Drinkenburg, Wilhelmus H.I.M., Klimesch, Wolfgang, Knight, Robert T., Lopes da Silva, Fernando, Nunez, Paul, Oostenveld, Robert, Jeong, Jaeseung, Pascual-Marqui, Roberto, Valdes-Sosa, Pedro, and Hallett, Mark
- Published
- 2020
- Full Text
- View/download PDF
80. A Randomized Algorithm for Tensor Singular Value Decomposition Using an Arbitrary Number of Passes
- Author
-
Ahmadi-Asl, Salman, primary, Phan, Anh-Huy, additional, and Cichocki, Andrzej, additional
- Published
- 2023
- Full Text
- View/download PDF
81. Variance characteristic preserving common spatial pattern for motor imagery BCI
- Author
-
Liang, Wei, primary, Jin, Jing, additional, Xu, Ren, additional, Wang, Xingyu, additional, and Cichocki, Andrzej, additional
- Published
- 2023
- Full Text
- View/download PDF
82. Image reconstruction using superpixel clustering and tensor completion
- Author
-
Asante-Mensah, Maame G., primary, Phan, Anh Huy, additional, Ahmadi-Asl, Salman, additional, Aghbari, Zaher Al, additional, and Cichocki, Andrzej, additional
- Published
- 2023
- Full Text
- View/download PDF
83. Multi-tensor Completion for Estimating Missing Values in Video Data
- Author
-
Li, Chao, Guo, Lili, and Cichocki, Andrzej
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Many tensor-based data completion methods aim to solve image and video in-painting problems, but all of them were developed for a single dataset. In most real applications, we can usually obtain more than one dataset reflecting the same phenomenon, and all the datasets are mutually related in some sense. This raises the question of whether such relationships can improve the performance of data completion. In this paper, we propose a novel and efficient method that exploits the relationships among datasets for multi-video data completion. Numerical results show that the proposed method significantly improves the performance of video in-painting, particularly in the case of very high missing-data percentages.
- Published
- 2014
84. Tensor Networks for Big Data Analytics and Large-Scale Optimization Problems
- Author
-
Cichocki, Andrzej
- Subjects
Computer Science - Numerical Analysis, Mathematics - Numerical Analysis
- Abstract
In this paper we review basic and emerging models and associated algorithms for large-scale tensor networks, especially Tensor Train (TT) decompositions, using novel mathematical and graphical representations. We discuss the concept of tensorization (i.e., creating very high-order tensors from lower-order original data) and super-compression of data achieved via quantized tensor train (QTT) networks. The purpose of tensorization and quantization is to achieve, via low-rank tensor approximations, "super" compression and a meaningful, compact representation of structured data. The main objective of this paper is to show how tensor networks can be used to solve a wide class of big data optimization problems (that are far from tractable by classical numerical methods) by applying tensorization, performing all operations using relatively small-size matrices and tensors, and applying iteratively optimized and approximate tensor contractions. Keywords: tensor networks, tensor train (TT) decompositions, matrix product states (MPS), matrix product operators (MPO), basic tensor operations, tensorization, distributed representation of data, optimization problems for very large-scale problems: generalized eigenvalue decomposition (GEVD), PCA/SVD, canonical correlation analysis (CCA)., Comment: arXiv admin note: text overlap with arXiv:1403.2048
- Published
- 2014
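The tensorization and TT compression ideas described in the abstract above can be illustrated with a minimal TT-SVD sketch in NumPy. This is a generic sketch of the standard TT-SVD procedure, not the paper's own implementation; the function names and the `max_rank` truncation parameter are illustrative choices. A long vector is reshaped (tensorized) into a high-order array, which is then compressed into a chain of small 3-way cores by sequential truncated SVDs.

```python
import numpy as np

def tt_svd(x, max_rank):
    """Decompose a d-way array into TT cores via sequential truncated SVDs."""
    shape = x.shape
    d = x.ndim
    cores, r = [], 1
    rest = x.reshape(1, -1)
    for k in range(d - 1):
        rest = rest.reshape(r * shape[k], -1)
        U, S, Vt = np.linalg.svd(rest, full_matrices=False)
        rk = min(max_rank, S.size)
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        rest = S[:rk, None] * Vt[:rk]  # carry the remainder on to the next mode
        r = rk
    cores.append(rest.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into a full array (for verification only)."""
    full = cores[0]
    for c in cores[1:]:
        full = np.tensordot(full, c, axes=([-1], [0]))
    return full.reshape([c.shape[1] for c in cores])

# Tensorization/quantization: a length-256 signal viewed as an order-8 tensor
# of mode sizes 2 (the QTT setting); a linear ramp has QTT ranks 2, so
# rank-2 truncation is lossless while storing far fewer parameters.
signal = np.arange(256, dtype=float)
tensorized = signal.reshape([2] * 8)
cores = tt_svd(tensorized, max_rank=2)
```

For this smooth (linear) signal, the rank-2 TT cores hold 56 numbers instead of the original 256, while the reconstruction is exact up to floating-point error; this is the "super compression" effect the abstract refers to, in miniature.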
85. Fundamental Tensor Operations for Large-Scale Data Analysis in Tensor Train Formats
- Author
-
Lee, Namgil and Cichocki, Andrzej
- Subjects
Mathematics - Numerical Analysis ,Computer Science - Emerging Technologies ,15A63, 15A69, 65F25, 65F30 - Abstract
We discuss extended definitions of linear and multilinear operations such as Kronecker, Hadamard, and contracted products, and establish links between them for tensor calculus. Then we introduce effective low-rank tensor approximation techniques including Candecomp/Parafac (CP), Tucker, and tensor train (TT) decompositions with a number of mathematical and graphical representations. We also provide a brief review of mathematical properties of the TT decomposition as a low-rank approximation technique. With the aim of breaking the curse-of-dimensionality in large-scale numerical analysis, we describe basic operations on large-scale vectors, matrices, and high-order tensors represented by TT decomposition. The proposed representations can be used for describing numerical methods based on TT decomposition for solving large-scale optimization problems such as systems of linear equations and symmetric eigenvalue problems., Comment: 36 pages; Several improvements and corrected references
- Published
- 2014
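One of the basic TT operations surveyed above, addition of two tensors in TT format, can be sketched as follows. This is an illustrative NumPy sketch under the usual TT core conventions (cores of shape rank-by-mode-by-rank), not code from the paper: the summands' cores are stacked block-diagonally, so the TT ranks add while no full tensor is ever formed.

```python
import numpy as np

def tt_add(a_cores, b_cores):
    """Add two tensors in TT format by block-stacking their cores."""
    d = len(a_cores)
    out = []
    for k, (a, b) in enumerate(zip(a_cores, b_cores)):
        ra0, n, ra1 = a.shape
        rb0, _, rb1 = b.shape
        if k == 0:                        # first core: stack along right rank
            out.append(np.concatenate([a, b], axis=2))
        elif k == d - 1:                  # last core: stack along left rank
            out.append(np.concatenate([a, b], axis=0))
        else:                             # interior core: block-diagonal embedding
            c = np.zeros((ra0 + rb0, n, ra1 + rb1))
            c[:ra0, :, :ra1] = a
            c[ra0:, :, ra1:] = b
            out.append(c)
    return out

def tt_full(cores):
    """Contract TT cores back into a full array (for verification only)."""
    full = cores[0]
    for c in cores[1:]:
        full = np.tensordot(full, c, axes=([-1], [0]))
    return full.reshape([c.shape[1] for c in cores])

# Two rank-1 TT tensors (outer products u1∘u2∘u3 and v1∘v2∘v3)
rng = np.random.default_rng(0)
u = [rng.standard_normal(n) for n in (3, 4, 5)]
v = [rng.standard_normal(n) for n in (3, 4, 5)]
a = [x.reshape(1, -1, 1) for x in u]
b = [x.reshape(1, -1, 1) for x in v]
s = tt_add(a, b)
```

The sum has TT ranks (2, 2), i.e., the ranks of the summands added; in practice such an addition is typically followed by a rounding (re-truncation) step to keep ranks from growing.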
86. Stable, Robust and Super Fast Reconstruction of Tensors Using Multi-Way Projections
- Author
-
Caiafa, Cesar F. and Cichocki, Andrzej
- Subjects
Computer Science - Information Theory ,Computer Science - Data Structures and Algorithms - Abstract
In the framework of multidimensional Compressed Sensing (CS), we introduce an analytical reconstruction formula that allows one to recover an $N$th-order $(I_1\times I_2\times \cdots \times I_N)$ data tensor $\underline{\mathbf{X}}$ from a reduced set of multi-way compressive measurements by exploiting its low multilinear-rank structure. Moreover, we show that an interesting property of multi-way measurements allows us to build the reconstruction based on compressive linear measurements taken only in two selected modes, independently of the tensor order $N$. In addition, it is proved that, in the matrix case and in a particular case with $3$rd-order tensors where the same 2D sensor operator is applied to all mode-3 slices, the proposed reconstruction $\underline{\mathbf{X}}_\tau$ is stable in the sense that the approximation error is comparable to the one provided by the best low-multilinear-rank approximation, where $\tau$ is a threshold parameter that controls the approximation error. Through the analysis of the upper bound of the approximation error we show that, in the 2D case, an optimal value for the threshold parameter $\tau=\tau_0 > 0$ exists, which is confirmed by our simulation results. On the other hand, our experiments on 3D datasets show that very good reconstructions are obtained using $\tau=0$, which means that this parameter does not need to be tuned. Our extensive simulation results demonstrate the stability and robustness of the method when applied to real-world 2D and 3D signals. A comparison with state-of-the-art sparsity-based CS methods specialized for multidimensional signals is also included. A very attractive characteristic of the proposed method is that it provides a direct computation, i.e., it is non-iterative, in contrast to all existing sparsity-based CS algorithms, thus providing super fast computations even for large datasets., Comment: Submitted to IEEE Transactions on Signal Processing
- Published
- 2014
- Full Text
- View/download PDF
87. Efficient Nonnegative Tucker Decompositions: Algorithms and Uniqueness
- Author
-
Zhou, Guoxu, Cichocki, Andrzej, Zhao, Qibin, and Xie, Shengli
- Subjects
Computer Science - Learning ,Computer Science - Computer Vision and Pattern Recognition ,Statistics - Machine Learning - Abstract
Nonnegative Tucker decomposition (NTD) is a powerful tool for the extraction of nonnegative parts-based and physically meaningful latent components from high-dimensional tensor data while preserving the natural multilinear structure of the data. However, as the data tensor often has multiple modes and is large-scale, existing NTD algorithms suffer from very high computational complexity in terms of both storage and computation time, which has been one major obstacle for practical applications of NTD. To overcome these disadvantages, we show how low (multilinear) rank approximation (LRA) of tensors is able to significantly simplify the computation of the gradients of the cost function, upon which a family of efficient first-order NTD algorithms is developed. Besides dramatically reducing the storage complexity and running time, the new algorithms are quite flexible and robust to noise, because any well-established LRA approach can be applied. We also show how nonnegativity, together with sparsity, substantially improves the uniqueness property and partially alleviates the curse of dimensionality of Tucker decompositions. Simulation results on synthetic and real-world data justify the validity and high efficiency of the proposed NTD algorithms., Comment: appears in IEEE Transactions on Image Processing, 2015
- Published
- 2014
- Full Text
- View/download PDF
88. Era of Big Data Processing: A New Approach via Tensor Networks and Tensor Decompositions
- Author
-
Cichocki, Andrzej
- Subjects
Computer Science - Emerging Technologies - Abstract
Many problems in computational neuroscience, neuroinformatics, pattern/image recognition, signal processing and machine learning generate massive amounts of multidimensional data with multiple aspects and high dimensionality. Tensors (i.e., multi-way arrays) often provide a natural and compact representation for such massive multidimensional data via suitable low-rank approximations. Big data analytics requires novel technologies to efficiently process huge datasets within tolerable elapsed times. One such emerging technology for multidimensional big data is multiway analysis via tensor networks (TNs) and tensor decompositions (TDs), which represent tensors by sets of factor (component) matrices and lower-order (core) tensors. Dynamic tensor analysis allows us to discover meaningful hidden structures of complex data and to perform generalizations by capturing multi-linear and multi-aspect relationships. We will discuss some fundamental TN models, their mathematical and graphical descriptions and associated learning algorithms for large-scale TDs and TNs, with many potential applications including: anomaly detection, feature extraction, classification, cluster analysis, data fusion and integration, pattern recognition, predictive modeling, regression, time series analysis and multiway component analysis. Keywords: Large-scale HOSVD, Tensor decompositions, CPD, Tucker models, Hierarchical Tucker (HT) decomposition, low-rank tensor approximations (LRA), Tensorization/Quantization, tensor train (TT/QTT) - Matrix Product States (MPS), Matrix Product Operator (MPO), DMRG, Strong Kronecker Product (SKP)., Comment: Part of this work was presented at the International Workshop on Smart Info-Media Systems in Asia (invited talk - SISA-2013), Sept. 30--Oct. 2, 2013, Nagoya, Japan
- Published
- 2014
89. Non-Orthogonal Tensor Diagonalization
- Author
-
Tichavsky, Petr, Phan, Anh Huy, and Cichocki, Andrzej
- Subjects
Computer Science - Numerical Analysis ,Statistics - Other Statistics - Abstract
Tensor diagonalization means transforming a given tensor to an exactly or nearly diagonal form through multiplying the tensor by non-orthogonal invertible matrices along selected dimensions of the tensor. It is a generalization of approximate joint diagonalization (AJD) of a set of matrices. In particular, we derive (1) a new algorithm for symmetric AJD, which is called two-sided symmetric diagonalization of an order-3 tensor, (2) a similar algorithm for non-symmetric AJD, also called general two-sided diagonalization of an order-3 tensor, and (3) an algorithm for three-sided diagonalization of order-3 or order-4 tensors. The latter two algorithms may serve for canonical polyadic (CP) tensor decomposition, and they can outperform other CP tensor decomposition methods in terms of computational speed under the restriction that the tensor rank does not exceed the tensor multilinear rank. Finally, we propose (4) similar algorithms for tensor block diagonalization, which is related to the tensor block-term decomposition., Comment: The manuscript was revised deeply, but the main idea is the same. The algorithm has changed significantly
- Published
- 2014
90. Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination
- Author
-
Zhao, Qibin, Zhang, Liqing, and Cichocki, Andrzej
- Subjects
Computer Science - Learning ,Computer Science - Computer Vision and Pattern Recognition ,Statistics - Machine Learning - Abstract
CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. Existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem, especially for CP rank. In addition, existing approaches do not take into account uncertainty information of latent factors or missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent overfitting, even when a large number of entries are missing. Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
- Published
- 2014
- Full Text
- View/download PDF
91. Correlation-based channel selection and regularized feature optimization for MI-based BCI
- Author
-
Jin, Jing, Miao, Yangyang, Daly, Ian, Zuo, Cili, Hu, Dewen, and Cichocki, Andrzej
- Published
- 2019
- Full Text
- View/download PDF
92. Quadratic programming over ellipsoids with applications to constrained linear regression and tensor decomposition
- Author
-
Phan, Anh-Huy, Yamagishi, Masao, Mandic, Danilo, and Cichocki, Andrzej
- Published
- 2020
- Full Text
- View/download PDF
93. Novel hybrid brain–computer interface system based on motor imagery and P300
- Author
-
Zuo, Cili, Jin, Jing, Yin, Erwei, Saab, Rami, Miao, Yangyang, Wang, Xingyu, Hu, Dewen, and Cichocki, Andrzej
- Published
- 2020
- Full Text
- View/download PDF
94. Frequency Recognition in SSVEP-based BCI using Multiset Canonical Correlation Analysis
- Author
-
Zhang, Yu, Zhou, Guoxu, Jin, Jing, Wang, Xingyu, and Cichocki, Andrzej
- Subjects
Statistics - Machine Learning - Abstract
Canonical correlation analysis (CCA) has been one of the most popular methods for frequency recognition in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs). Despite its efficiency, a potential problem is that using pre-constructed sine-cosine waves as the required reference signals in the CCA method often does not result in optimal recognition accuracy, due to their lack of features from the real EEG data. To address this problem, this study proposes a novel method based on multiset canonical correlation analysis (MsetCCA) to optimize the reference signals used in the CCA method for SSVEP frequency recognition. The MsetCCA method learns multiple linear transforms that implement joint spatial filtering to maximize the overall correlation among canonical variates, and hence extracts SSVEP common features from multiple sets of EEG data recorded at the same stimulus frequency. The optimized reference signals are formed by combination of the common features and are completely based on training data. An experimental study with EEG data from ten healthy subjects demonstrates that the MsetCCA method improves the recognition accuracy of SSVEP frequency in comparison with the CCA method and two other competing methods (multiway CCA (MwayCCA) and phase constrained CCA (PCCA)), especially for a small number of channels and a short time window length. This superiority indicates that the proposed MsetCCA method is a promising new candidate for frequency recognition in SSVEP-based BCIs.
- Published
- 2013
- Full Text
- View/download PDF
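The baseline CCA frequency-recognition procedure that MsetCCA improves upon can be sketched as follows. This is an illustrative NumPy sketch of the standard sine-cosine-reference CCA method, not the authors' code, and names such as `n_harmonics` are made up for the example: each candidate stimulus frequency gets a bank of sine-cosine references, and the frequency whose references have the largest canonical correlation with the EEG segment is selected.

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # Canonical correlations are the singular values of Qx^T Qy
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_frequency(eeg, freqs, fs, n_harmonics=2):
    """Pick the candidate frequency whose sine-cosine reference set
    correlates best canonically with the multichannel EEG segment."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        ref = np.column_stack(
            [w(2 * np.pi * f * h * t)
             for h in range(1, n_harmonics + 1)
             for w in (np.sin, np.cos)])
        scores.append(cca_corr(eeg, ref))
    return freqs[int(np.argmax(scores))]

# Toy demo: two noisy channels flickering at 10 Hz, sampled at 250 Hz for 1 s
rng = np.random.default_rng(0)
t = np.arange(250) / 250.0
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t),
                       0.8 * np.sin(2 * np.pi * 10 * t + 0.3)])
eeg += 0.3 * rng.standard_normal(eeg.shape)
detected = ssvep_frequency(eeg, [8.0, 10.0, 12.0], fs=250)
```

MsetCCA replaces the fixed sine-cosine `ref` matrix above with reference signals learned jointly from multiple training trials at the same stimulus frequency, which is where its reported accuracy gain comes from.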
95. Tensor Decompositions: A New Concept in Brain Data Analysis?
- Author
-
Cichocki, Andrzej
- Subjects
Computer Science - Numerical Analysis ,Computer Science - Learning ,Quantitative Biology - Neurons and Cognition ,Statistics - Machine Learning - Abstract
Matrix factorizations and their extensions to tensor factorizations and decompositions have become prominent techniques for linear and multilinear blind source separation (BSS), especially multiway Independent Component Analysis (ICA), Nonnegative Matrix and Tensor Factorization (NMF/NTF), Smooth Component Analysis (SmoCA) and Sparse Component Analysis (SCA). Moreover, tensor decompositions have many other potential applications beyond multilinear BSS, especially feature extraction, classification, dimensionality reduction and multiway clustering. In this paper, we briefly overview new and emerging models and approaches for tensor decompositions in applications to group and linked multiway BSS/ICA, feature extraction, classification and Multiway Partial Least Squares (MPLS) regression problems. Keywords: Multilinear BSS, linked multiway BSS/ICA, tensor factorizations and decompositions, constrained Tucker and CP models, Penalized Tensor Decompositions (PTD), feature extraction, classification, multiway PLS and CCA.
- Published
- 2013
96. Tensor Networks for Dimensionality Reduction, Big Data and Deep Learning
- Author
-
Cichocki, Andrzej, Kacprzyk, Janusz, Series editor, Gawęda, Adam E, editor, Rutkowski, Leszek, editor, and Yen, Gary G., editor
- Published
- 2018
- Full Text
- View/download PDF
97. Group Component Analysis for Multiblock Data: Common and Individual Feature Extraction
- Author
-
Zhou, Guoxu, Cichocki, Andrzej, Zhang, Yu, and Mandic, Danilo
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Learning - Abstract
Very often the data we encounter in practice is a collection of matrices rather than a single matrix. These multi-block data are naturally linked and hence often share some common features, while at the same time having their own individual features due to the background in which they are measured and collected. In this study we propose a new scheme of common and individual feature analysis (CIFA) that processes multi-block data in a linked way, aiming to discover and separate their common and individual features. Depending on whether the number of common features is given or not, two efficient algorithms are proposed to extract the common basis shared by all data. Feature extraction is then performed on the common and individual spaces separately, incorporating techniques such as dimensionality reduction and blind source separation. We also discuss how the proposed CIFA can significantly improve the performance of classification and clustering tasks by exploiting the common and individual features of samples. Our experimental results show some encouraging features of the proposed methods in comparison to state-of-the-art methods on synthetic and real data., Comment: 13 pages, 11 figures
- Published
- 2012
- Full Text
- View/download PDF
98. CANDECOMP/PARAFAC Decomposition of High-order Tensors Through Tensor Reshaping
- Author
-
Phan, Anh Huy, Tichavsky, Petr, and Cichocki, Andrzej
- Subjects
Mathematics - Numerical Analysis ,Computer Science - Numerical Analysis ,Mathematics - Optimization and Control - Abstract
In general, algorithms for order-3 CANDECOMP/PARAFAC (CP), also coined canonical polyadic decomposition (CPD), are easy to implement and can be extended to higher order CPD. Unfortunately, the algorithms become computationally demanding, and they are often not applicable to higher order and relatively large scale tensors. In this paper, by exploiting the uniqueness of CPD and the relation between a tensor in Kruskal form and its unfolded tensor, we propose a fast approach to deal with this problem. Instead of directly factorizing the high order data tensor, the method decomposes an unfolded tensor of lower order, e.g., an order-3 tensor. On the basis of the estimated order-3 tensor, a structured Kruskal tensor of the same dimension as the data tensor is then generated and decomposed to find the final solution, using fast algorithms for the structured CPD. In addition, strategies to unfold tensors are suggested and practically verified in the paper.
- Published
- 2012
- Full Text
- View/download PDF
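The key identity behind the reshaping approach described above — grouping two modes of a Kruskal (CP) tensor turns the corresponding pair of factor matrices into a single Khatri-Rao product factor — can be checked numerically. This is an illustrative NumPy sketch of that identity, not the authors' implementation; the rank and mode sizes are arbitrary choices.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum('ir,jr->ijr', A, B).reshape(A.shape[0] * B.shape[0], -1)

rng = np.random.default_rng(0)
R = 3                                    # CP rank
A, B, C, D = (rng.standard_normal((n, R)) for n in (2, 3, 4, 5))

# Order-4 Kruskal tensor x4[i,j,k,l] = sum_r A[i,r] B[j,r] C[k,r] D[l,r]
x4 = np.einsum('ir,jr,kr,lr->ijkl', A, B, C, D)

# Unfold to order 3 by merging modes 1 and 2 (C-order reshape: m = i*3 + j)
x3 = x4.reshape(2 * 3, 4, 5)

# The unfolded tensor is again Kruskal, with factor khatri_rao(A, B)
x3_from_factors = np.einsum('mr,kr,lr->mkl', khatri_rao(A, B), C, D)
```

This is why decomposing the order-3 unfolding (cheaply) and then recovering the original factors from the structured Khatri-Rao factor can replace a direct, expensive order-N CPD.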
99. Accelerated Canonical Polyadic Decomposition by Using Mode Reduction
- Author
-
Zhou, Guoxu, Cichocki, Andrzej, and Xie, Shengli
- Subjects
Computer Science - Numerical Analysis ,Computer Science - Learning ,Mathematics - Numerical Analysis - Abstract
Canonical Polyadic (or CANDECOMP/PARAFAC, CP) decompositions (CPD) are widely applied to analyze high order tensors. Existing CPD methods use alternating least squares (ALS) iterations and hence need to unfold tensors to each of the $N$ modes frequently, which is one major efficiency bottleneck for large-scale data, especially when $N$ is large. To overcome this problem, in this paper we propose a new CPD method which first converts the original $N$th ($N>3$) order tensor to a 3rd-order tensor. The full CPD is then realized by decomposing this mode-reduced tensor, followed by a Khatri-Rao product projection procedure. This approach is quite efficient, as unfolding to each of the $N$ modes is avoided, and dimensionality reduction can also be easily incorporated to further improve efficiency. We show that, under mild conditions, any $N$th-order CPD can be converted into a 3rd-order case without destroying the essential uniqueness, and theoretically gives the same results as direct $N$-way CPD methods. Simulations show that, compared with state-of-the-art CPD methods, the proposed method is more efficient and escapes from local solutions more easily., Comment: 12 pages. Accepted by TNNLS
- Published
- 2012
- Full Text
- View/download PDF
100. Higher-Order Partial Least Squares (HOPLS): A Generalized Multi-Linear Regression Method
- Author
-
Zhao, Qibin, Caiafa, Cesar F., Mandic, Danilo P., Chao, Zenas C., Nagasaka, Yasuo, Fujii, Naotaka, Zhang, Liqing, and Cichocki, Andrzej
- Subjects
Computer Science - Artificial Intelligence - Abstract
A new generalized multilinear regression model, termed Higher-Order Partial Least Squares (HOPLS), is introduced with the aim of predicting a tensor (multiway array) $\underline{\mathbf{Y}}$ from a tensor $\underline{\mathbf{X}}$ by projecting the data onto a latent space and performing regression on the corresponding latent variables. HOPLS differs substantially from other regression models in that it explains the data by a sum of orthogonal Tucker tensors, while the number of orthogonal loadings serves as a parameter to control model complexity and prevent overfitting. The low dimensional latent space is optimized sequentially via a deflation operation, yielding the best joint subspace approximation for both $\underline{\mathbf{X}}$ and $\underline{\mathbf{Y}}$. Instead of decomposing $\underline{\mathbf{X}}$ and $\underline{\mathbf{Y}}$ individually, a higher order singular value decomposition on a newly defined generalized cross-covariance tensor is employed to optimize the orthogonal loadings. A systematic comparison on both synthetic data and real-world decoding of 3D movement trajectories from electrocorticogram (ECoG) signals demonstrates the advantages of HOPLS over existing methods in terms of better predictive ability, suitability for handling small sample sizes, and robustness to noise.
- Published
- 2012
- Full Text
- View/download PDF