

Simulation results

Monte-Carlo simulations were performed for signals with different sparsity and SNR levels. The source signals were generated according to (3) with a normal distribution $f_{S_1}(s_i)$ with zero mean and variance $10$. For each sparsity and SNR level, $20$ different mixing matrices were generated randomly by choosing the amplitudes of the basis vectors uniformly from $[0.1,1]$ and the angles uniformly from $[-\pi,\pi]$, with a minimum angle of $\pi/10$ between every pair of basis vectors to avoid cluster overlap. The number of samples in each case was $2500/(1-\nu_1-\nu_2)$, so that the clustering was restricted to $2500$ samples. After mixing by $\textbf{A}$, the mixtures were transformed by the nonlinear functions $f_j(x) = \tanh(x)$. Finally, Gaussian white noise was added to reach the specified SNR level.
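
As an illustration, the following Python sketch mirrors this data-generation procedure. It is a minimal sketch under stated assumptions: Eq. (3) is not reproduced in this section, so the sparse activation pattern below (a simple Bernoulli mask) is a placeholder for the actual source model, the basis-vector construction is written out for the 2-measurement case only, and the function name and its defaults are hypothetical.

    import numpy as np

    def generate_pnl_data(n=3, nu=(0.3, 0.3), snr_db=20, seed=0):
        """Sketch of the Monte-Carlo data generation (2-measurement case)."""
        rng = np.random.default_rng(seed)
        # Sample count chosen so that roughly 2500 samples remain for clustering.
        n_samples = int(2500 / (1 - nu[0] - nu[1]))

        # Sparse sources: zero-mean Gaussian with variance 10, randomly
        # deactivated (placeholder for the sparse model of Eq. (3)).
        s = rng.normal(0.0, np.sqrt(10.0), (n, n_samples))
        s *= rng.random((n, n_samples)) > 0.5

        # Basis vectors: amplitudes uniform in [0.1, 1], angles uniform in
        # [-pi, pi], redrawn until adjacent angles are at least pi/10 apart
        # (the wrap-around pair is not checked in this sketch).
        while True:
            angles = np.sort(rng.uniform(-np.pi, np.pi, n))
            if np.diff(angles).min() >= np.pi / 10:
                break
        amps = rng.uniform(0.1, 1.0, n)
        A = amps * np.vstack([np.cos(angles), np.sin(angles)])  # shape (2, n)

        # Post-nonlinear mixing x_j = tanh(a_j^T s), then additive white
        # Gaussian noise scaled per channel to the requested SNR.
        x = np.tanh(A @ s)
        noise_std = np.sqrt(x.var(axis=1, keepdims=True) / 10 ** (snr_db / 10))
        x += rng.normal(0.0, 1.0, x.shape) * noise_std
        return s, A, x

Under these assumptions, s holds the sparse sources, A the mixing matrix, and x the noisy post-nonlinear mixtures that serve as input to the clustering.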

A 2-measurement scenario with $3$ sources ($m=2$, $n=3$) as well as a 3-measurement scenario with $5$ sources ($m=3$, $n=5$) was simulated. Fine-tuning spectral clustering was applied, and $m$ MLPs with $r = 15$ hidden neurons were trained to estimate the $m$ inverse nonlinearities, with a learning rate of $\mu=0.01$ and a maximum of $1000$ epochs. An illustration of the different steps of the algorithm is shown in Fig. 2.
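
The Python sketch below shows one such MLP in an assumed form: a single hidden layer of $r = 15$ tanh units with a linear scalar output. The cost function of the previous section is not reproduced here, so its gradient with respect to the MLP output enters only through the placeholder grad_wrt_output; the class name and the initialization scale are hypothetical.

    import numpy as np

    class InverseNonlinearityMLP:
        """Sketch of a scalar MLP with r tanh hidden units."""
        def __init__(self, r=15, seed=0):
            rng = np.random.default_rng(seed)
            self.w1 = rng.normal(0, 0.1, (r, 1))  # input -> hidden weights
            self.b1 = np.zeros((r, 1))
            self.w2 = rng.normal(0, 0.1, (1, r))  # hidden -> output weights
            self.b2 = np.zeros((1, 1))

        def forward(self, x):
            # x has shape (1, N): one mixture channel, N samples.
            self.h = np.tanh(self.w1 @ x + self.b1)
            return self.w2 @ self.h + self.b2

        def backward_step(self, x, grad_wrt_output, mu=0.01):
            # Standard backpropagation; assumes forward(x) was just called.
            # grad_wrt_output, shape (1, N), stands in for the gradient of
            # the clustering-based cost of the previous section.
            d2 = grad_wrt_output
            self.w2 -= mu * d2 @ self.h.T
            self.b2 -= mu * d2.sum(axis=1, keepdims=True)
            d1 = (self.w2.T @ d2) * (1 - self.h ** 2)  # tanh derivative
            self.w1 -= mu * d1 @ x.T
            self.b1 -= mu * d1.sum(axis=1, keepdims=True)

A training loop would evaluate the cost on the forward outputs of all $m$ networks and call backward_step once per epoch, for at most $1000$ epochs.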

Figure 3: MSE values for varying sparsity and SNR levels, for the 2-measurement ($m=2$, left) and 3-measurement ($m=3$, right) cases.

After training, the basis vectors were estimated from $K_{j,k}^i$ and the source signals were recovered by applying the shortest-path algorithm from [7]. The results are shown in Fig. 3. Since no measures were taken to reduce the sensor noise, the obtained mean square errors (MSEs) depend strongly on the SNR level. Although in most cases the estimated inverse nonlinearities ``linearize'' the clusters sufficiently well (see for instance Fig. 2(d)), only a modest MSE value was obtained even in the noiseless case ($\mathrm{SNR} = \infty$ dB). This is due to the strong nonlinearity used and to the fact that the MLPs represent the inverse nonlinear functions well only for input points within the training range. Points outside of it, such as the ``non-sparse'' samples, are estimated with larger error and therefore account for the main contribution to the MSE.
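
As a rough illustration of the recovery step, the sketch below implements a shortest-path-style decomposition for the 2-measurement case, under the assumption (following the usual formulation in [7]) that each linearized observation is decomposed onto the two angularly adjacent signed basis vectors enclosing its direction, with all other sources set to zero. The function name and interface are hypothetical, and degenerate cases are not handled.

    import numpy as np

    def shortest_path_sources(y, A):
        """Sketch of shortest-path recovery for m = 2: y in R^2, A of shape (2, n)."""
        n = A.shape[1]
        B = np.hstack([A, -A])                   # signed basis, shape (2, 2n)
        angles = np.arctan2(B[1], B[0])
        order = np.argsort(angles)
        theta = np.arctan2(y[1], y[0])
        # Pair of angularly adjacent signed basis vectors enclosing y.
        k = np.searchsorted(angles[order], theta) % (2 * n)
        i, j = order[k - 1], order[k]
        coef = np.linalg.solve(B[:, [i, j]], y)  # 2x2 linear system
        s = np.zeros(n)
        s[i % n] += coef[0] * (1 if i < n else -1)
        s[j % n] += coef[1] * (1 if j < n else -1)
        return s

Applying such a decomposition to every linearized sample and comparing with the true sources yields the MSE values reported in Fig. 3.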

