
Linear mixture model

In a general linear mixture model, the measurement random vector $ \textbf{y} \in \mathbb{R}^{m\times 1}$ can be described as

$\displaystyle \textbf{y} = \textbf{A}\textbf{s} + \textbf{n}$ (2)

where $ \textbf{s} \in \mathbb{R}^{n\times 1}$ is an independent random vector representing the sources, $ \textbf{A} \in
\mathbb{R}^{m\times n}$ is the unknown mixing matrix, and $ \textbf{n}
\in \mathbb{R}^{m\times 1}$ is a random vector of additive white Gaussian noise, independent of the sources, representing sensor noise.
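As a minimal numerical sketch of model (2), the snippet below draws sparse sources, mixes them through a random matrix, and adds sensor noise. The Bernoulli-Gaussian activation with probability `p_active` is a hypothetical stand-in for the sparse-source model of (1); the dimensions and noise level are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, T = 2, 3, 1000  # sensors, sources, number of samples
p_active = 0.1        # assumed probability that a source is active at a given sample

# Sparse sources: each sample is zero with probability 1 - p_active
# (hypothetical stand-in for the sparse-source model of (1)).
s = rng.standard_normal((n, T)) * (rng.random((n, T)) < p_active)

A = rng.standard_normal((m, n))             # unknown mixing matrix
noise = 0.01 * rng.standard_normal((m, T))  # additive white Gaussian sensor noise

y = A @ s + noise                           # linear mixture model (2)
print(y.shape)  # (2, 1000)
```

Note that with $m = 2 < n = 3$ this is exactly the underdetermined case discussed next.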

For $ m \geq n$, several algorithms exist that estimate an unmixing matrix $ \textbf{W}$ (for $ m = n$, $ \textbf{W} = \textbf{A}^{-1}$) sufficiently well [4]. For $ m <
n$ the mixing matrix has no left inverse and the problem cannot be solved without additional information about the sources. In the absence of noise, if only source $ i$ is active, the output vector $ \textbf{y}$ will be aligned with the $ i$-th column of $ \textbf{A}$, the $ i$-th ``basis vector'' [7]. Therefore, if the sources are sparse according to the model described in (1), most of the output samples $ \textbf{y}$ will be aligned with one of the basis vectors (see Fig. 2(a)).
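The alignment property can be checked directly: a noise-free sample in which only source $ i$ is active is a scalar multiple of the $ i$-th column of $ \textbf{A}$, so the cosine of the angle between them is $\pm 1$. The matrix and amplitude below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 3
A = rng.standard_normal((m, n))  # arbitrary mixing matrix for illustration

# Noise-free sample in which only source i = 1 is active:
s = np.zeros(n)
s[1] = 2.5
y = A @ s

# y is a scalar multiple of the 1st column of A, so the two directions
# coincide up to sign: |cos(angle)| = 1.
cos_angle = y @ A[:, 1] / (np.linalg.norm(y) * np.linalg.norm(A[:, 1]))
print(abs(cos_angle))  # 1.0 up to floating-point error
```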

Using this geometrical insight, a large number of estimators of the mixing matrix have been proposed, among them a technique based on overcomplete representations [6], a line-spectrum estimation method [14] and a number of geometric algorithms [15,16]. Once the mixing matrix has been estimated, the original sources can be recovered with the shortest-path algorithm introduced in [7].
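To make the geometric idea concrete, the sketch below implements one very crude estimator of this kind for $ m = 2$, $ n = 3$ (it is not any of the specific algorithms of [6,14,15,16]): fold the direction of each mixture sample into $[0, \pi)$ and read the column angles of $ \textbf{A}$ off the dominant peaks of a direction histogram, since most sparse samples have a single active source. The ground-truth angles, sparsity level and thresholds are all assumed values for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, T = 2, 3, 5000
p_active = 0.05  # assumed sparsity level

# Columns of A at well-separated angles (assumed ground truth for the sketch).
angles_true = np.array([0.3, 1.2, 2.4])
A = np.vstack([np.cos(angles_true), np.sin(angles_true)])

s = rng.standard_normal((n, T)) * (rng.random((n, T)) < p_active)
y = A @ s  # noise-free sparse mixture

# Keep samples with appreciable energy and fold their directions into [0, pi).
keep = np.linalg.norm(y, axis=0) > 0.5
theta = np.mod(np.arctan2(y[1, keep], y[0, keep]), np.pi)

# Crude geometric estimator: histogram the sample directions and read off
# the n dominant, well-separated peaks -- samples with a single active
# source pile up at the column angles of A.
hist, edges = np.histogram(theta, bins=180, range=(0, np.pi))
est = []
for b in np.argsort(hist)[::-1]:
    c = 0.5 * (edges[b] + edges[b + 1])
    if all(abs(c - e) > 0.2 for e in est):  # suppress neighbouring bins
        est.append(c)
    if len(est) == n:
        break
print(np.sort(est))  # close to [0.3, 1.2, 2.4]
```

The histogram peaks give only the column directions, not their scaling; for source separation that ambiguity is inherent to the model, since any scaling of a column of $ \textbf{A}$ can be absorbed into the corresponding source.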

Steven Van Vaerenbergh
Last modified: 2006-04-05