
Learning Mixtures of Sparse Linear Regressions Using Sparse Graph Codes.

Authors :
Yin, Dong
Pedarsani, Ramtin
Chen, Yudong
Ramchandran, Kannan
Source :
IEEE Transactions on Information Theory; March 2019, Vol. 65, Issue 3, p1430-1451, 22p
Publication Year :
2019

Abstract

In this paper, we consider the mixture of sparse linear regressions model. Let $\boldsymbol{\beta}^{(1)}, \ldots, \boldsymbol{\beta}^{(L)} \in \mathbb{C}^{n}$ be $L$ unknown sparse parameter vectors with a total of $K$ non-zero elements. Noisy linear measurements are obtained in the form $y_{i} = \boldsymbol{x}_{i}^{\mathrm{H}} \boldsymbol{\beta}^{(\ell_{i})} + w_{i}$, each of which is generated randomly from one of the sparse vectors with the label $\ell_{i}$ unknown. The goal is to estimate the parameter vectors efficiently with low sample and computational costs. This problem presents significant challenges, as one needs to simultaneously solve the demixing problem of recovering the labels $\ell_{i}$ as well as the estimation problem of recovering the sparse vectors $\boldsymbol{\beta}^{(\ell)}$. Our solution leverages the connection between modern coding theory and statistical inference. We introduce a new algorithm, Mixed-Coloring, which samples the mixture strategically using query vectors $\boldsymbol{x}_{i}$ constructed based on ideas from sparse graph codes. Our novel code design allows for both efficient demixing and parameter estimation. Since finding $K$ non-zero elements requires at least $\Theta(K)$ measurements, the sample and time complexities are both at least $\Theta(K)$. In the noiseless setting, for a constant number of sparse parameter vectors, our algorithm achieves the order-optimal sample and time complexities of $\Theta(K)$. In the presence of Gaussian noise, for the problem with two parameter vectors (i.e., $L = 2$), we show that the Robust Mixed-Coloring algorithm achieves near-optimal $\Theta(K \operatorname{polylog}(n))$ sample and time complexities. When $K = \mathcal{O}(n^{\alpha})$ for some constant $\alpha \in (0,1)$ (i.e., $K$ is sublinear in $n$), both the sample and time complexities are sublinear in the ambient dimension. In one of our experiments, recovering a mixture of two regressions with dimension $n = 500$ and sparsity $K = 50$, our algorithm is more than 300 times faster than the EM algorithm, with about one third of its sample cost. The proposed algorithm also works when the noise is non-Gaussian, but guarantees on the sample and time complexities are then difficult to obtain. [ABSTRACT FROM AUTHOR]
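
The measurement model in the abstract, $y_{i} = \boldsymbol{x}_{i}^{\mathrm{H}} \boldsymbol{\beta}^{(\ell_{i})} + w_{i}$, is simple to simulate. The Python sketch below generates data from such a mixture; it is only an illustration of the problem setup, not the paper's Mixed-Coloring recovery algorithm, and the sample size, noise level, and i.i.d. complex Gaussian query vectors are illustrative assumptions (the paper's query vectors are instead structured via sparse graph codes).

```python
import numpy as np

# Illustrative sketch of the mixture-of-sparse-linear-regressions model only;
# the Mixed-Coloring algorithm itself is not implemented here. Names and
# parameter choices below are assumptions, not from the paper's code.

rng = np.random.default_rng(0)

n, L, K = 500, 2, 50   # ambient dimension, number of vectors, total sparsity
m = 2000               # number of measurements (illustrative choice)
sigma = 0.1            # noise level (illustrative choice)

# Build L sparse parameter vectors with K non-zero entries in total.
betas = np.zeros((L, n), dtype=complex)
support = rng.choice(n * L, size=K, replace=False)
for s in support:
    betas[s // n, s % n] = rng.standard_normal() + 1j * rng.standard_normal()

# Each measurement comes from one of the vectors; the label l_i is hidden.
labels = rng.integers(L, size=m)
X = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
y = np.einsum('ij,ij->i', X.conj(), betas[labels])      # x_i^H beta^{(l_i)}
y += sigma * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# A recovery algorithm observes only (X, y); labels and betas are unknown.
print(y.shape, np.count_nonzero(betas))
```

The difficulty the abstract describes is visible here: given only `X` and `y`, an estimator must jointly infer the hidden `labels` and the sparse vectors `betas`, which is what the Mixed-Coloring code design is built to do efficiently.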

Details

Language :
English
ISSN :
0018-9448
Volume :
65
Issue :
3
Database :
Complementary Index
Journal :
IEEE Transactions on Information Theory
Publication Type :
Academic Journal
Accession number :
134886967
Full Text :
https://doi.org/10.1109/TIT.2018.2864276