Adaptive identification of a time-varying channel

We now evaluate the tracking capabilities of the different algorithms. For this experiment we use the setup described in [7]. Specifically, we consider the problem of online identification of a communication channel in which an abrupt change (switch) is triggered at some point. A signal $ x_t \sim \mathcal{N}(0,1)$ is fed into a nonlinear channel consisting of a linear finite impulse response (FIR) filter followed by the nonlinearity $ y = \tanh(z)$, where $ z$ is the output of the linear filter. During the first $ 500$ iterations the impulse response of the linear channel is $ \mathbf{h}_1 = [1, 0.0668, -0.4764, 0.8070]$, and at iteration $ 501$ it is switched to $ \mathbf{h}_2 = [1, -0.4326, -0.6656, 0.7153]$. Finally, white Gaussian noise is added to the channel output at $ 20$ dB SNR.
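As a concrete illustration, the data generation above can be sketched in Python/NumPy as follows. The variable names are ours, and both the handling of the channel memory at the switch and the noise scaling for a $ 20$ dB SNR are assumptions, not part of the original setup description.

```python
import numpy as np

rng = np.random.default_rng(0)

# FIR impulse responses before and after the switch (from the text).
h1 = np.array([1, 0.0668, -0.4764, 0.8070])
h2 = np.array([1, -0.4326, -0.6656, 0.7153])

N = 1500                      # total number of iterations
x = rng.standard_normal(N)    # input signal x_t ~ N(0, 1)

# Linear FIR channel: h1 for the first 500 samples, h2 afterwards.
# (How the filter memory is handled exactly at the switch is our choice.)
z = np.empty(N)
z[:500] = np.convolve(x, h1)[:500]
z[500:] = np.convolve(x, h2)[500:N]

y_clean = np.tanh(z)          # memoryless channel nonlinearity

# Additive white Gaussian noise at 20 dB SNR (our reading of
# "20 dB of Gaussian white noise added to the channel output").
snr_db = 20
noise_power = np.mean(y_clean**2) / 10**(snr_db / 10)
y = y_clean + np.sqrt(noise_power) * rng.standard_normal(N)
```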

Figure 2: MSE performance of different tracking algorithms on a communications channel that shows an abrupt change at iteration $ 500$.
\includegraphics[width=\linewidth]{fig/channel_ID_comparison5c}

We perform an online identification experiment in which, at each time instant, the algorithms are given one input datum (with a time embedding of $ 4$ taps) and one output sample. At each step, the MSE performance is measured on a set of $ 100$ data points generated with the current channel model. In this comparison we include results for the Naive Online $ R_{reg}$ Minimization Algorithm (NORMA), which is a kernel-based implementation of leaky LMS [2], and extended KRLS (EX-KRLS) from [9], which is a straightforward kernelized version of the classic extended RLS algorithm [3].
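The per-step protocol can be sketched as follows, building on the data generated above. Here `learner` stands for any of the compared algorithms with hypothetical `train` and `predict` methods, and `make_channel_data` is an assumed helper that draws fresh input/output pairs from the channel model active at time $ t$ ($ \mathbf{h}_1$ before the switch, $ \mathbf{h}_2$ after); neither name comes from the original experiment code.

```python
import numpy as np

EMBEDDING = 4   # time embedding: each input is the last 4 channel inputs

def embed(x, t, taps=EMBEDDING):
    """Tapped-delay-line input vector [x_t, x_{t-1}, ..., x_{t-taps+1}],
    zero-padded at the start of the signal."""
    window = x[max(0, t - taps + 1):t + 1]
    return np.pad(window, (taps - len(window), 0))[::-1]

mse = np.zeros(N)
for t in range(N):
    # One online step: a single input vector and output sample.
    learner.train(embed(x, t), y[t])

    # Evaluate on 100 fresh points from the *current* channel model.
    X_test, y_test = make_channel_data(n=100, h=h1 if t < 500 else h2)
    y_pred = learner.predict(X_test)
    mse[t] = np.mean((y_test - y_pred)**2)
```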

An RBF kernel $ k(\mathbf{x},\mathbf{x}') = \exp({-\Vert\mathbf{x}-\mathbf{x}'\Vert^2/2\ell})$ is used in all algorithms, with length-scale $ \ell=1$. The regularization is set to match the true noise-to-signal ratio, $ 0.01$. Regarding memory, SW-KRLS and KRLS-T are given a dictionary size of $ M=50$. No memory limit is imposed on NORMA and EX-KRLS (i.e. $ M=1500$), since the former would perform very poorly with only $ 50$ bases (being LMS-based), and the latter can only be applied to tracking with an ever-growing dictionary. The adaptation rates are chosen as follows: NORMA uses learning rate $ \eta=0.1$; EX-KRLS has parameters $ \alpha=0.999$, $ q=0.1$ and forgetting factor $ \lambda=0.99$; and KRLS-T uses $ \lambda=0.999$. Note that the same value of $ \lambda$ does not necessarily correspond to the same convergence rate in different algorithms.
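For reference, a direct transcription of this kernel; we follow the formula as written above, where the length-scale enters as $ 2\ell$ rather than $ 2\ell^2$.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0):
    """RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale)),
    as defined in the text, with lengthscale = 1 in all experiments."""
    return np.exp(-np.sum((x1 - x2)**2) / (2 * lengthscale))
```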

The identification results, averaged over $ 25$ simulations, are shown in Fig. 2. EX-KRLS initially obtains acceptable tracking results, but later starts to diverge due to numerical problems. SW-KRLS obtains very reasonable results, but, since it gives the same importance to all samples in its window, its convergence speed is limited by its window size $ M$. The best performance, both in terms of convergence rate and final MSE, is obtained by the proposed KRLS-T algorithm, which gives more weight to recent data. The influence of its forgetting factor is illustrated in Fig. 3. In the limiting case $ \lambda=1$, KRLS-T does not perform tracking, which makes it fair to compare its performance to ALD-KRLS (which is not a tracker). We applied ALD-KRLS with $ \nu=0.003$, which leads to a final dictionary of $ M=50$ bases (we also verified that the performance was hardly affected by changing $ \nu$). As in the previous example, KRLS-T obtains superior results.

Figure 3: MSE performance of KRLS-T and ALD-KRLS on a communications channel that shows an abrupt change at iteration $ 500$.
\includegraphics[width=\linewidth]{fig/channel_ID_comparison6b}
