We now evaluate the tracking capabilities of the different algorithms. For this experiment we use the setup described in [7]. Specifically, we consider the problem of online identification of a communication channel in which an abrupt change (switch) is triggered at some point. Here, a signal $s(t)$ is fed into a nonlinear channel that consists of a linear finite impulse response (FIR) channel followed by a memoryless nonlinearity $y = f(x)$, where $x$ is the output of the linear channel. During the first $T$ iterations the impulse response of the linear channel is chosen as $H_1(z)$, and at iteration $T+1$ it is switched to a second response, $H_2(z)$. Finally, Gaussian white noise is added to the channel output at a fixed SNR.
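To make the setup concrete, the following is a minimal sketch of how data from such a switching channel could be generated. The impulse-response coefficients, the tanh nonlinearity, and the SNR below are illustrative placeholders, not the values used in [7]:

```python
import numpy as np

def switching_channel(n_samples=1000, switch_at=500, snr_db=20.0, rng=None):
    """Generate (input, output) pairs from a switching nonlinear channel.

    All concrete values here (impulse responses, tanh nonlinearity,
    SNR) are hypothetical stand-ins for the setup of [7].
    """
    rng = np.random.default_rng() if rng is None else rng
    h1 = np.array([1.0, 0.5, -0.3, 0.1])    # placeholder for H_1(z)
    h2 = np.array([0.4, -0.8, 0.2, 0.6])    # placeholder for H_2(z)

    s = rng.standard_normal(n_samples)       # channel input s(t)
    x = np.empty(n_samples)                  # linear FIR channel output
    for t in range(n_samples):
        h = h1 if t < switch_at else h2      # abrupt switch at t = switch_at
        past = s[max(0, t - len(h) + 1):t + 1][::-1]
        x[t] = np.dot(h[:len(past)], past)

    y = np.tanh(x)                           # memoryless nonlinearity f
    noise_var = np.var(y) / 10 ** (snr_db / 10)
    y = y + np.sqrt(noise_var) * rng.standard_normal(n_samples)
    return s, y
```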
We perform an online identification experiment, in which the algorithms are given one input datum (a time-embedded vector of the most recent input samples) and one output sample at each time instant. At each step, the MSE performance is measured on a separate set of data points generated with the current channel model. In this comparison we include results for the Naive Online $R_{\mathrm{reg}}$ Minimization Algorithm (NORMA), which is a kernel-based implementation of leaky LMS [2], and extended KRLS (EX-KRLS) from [9], which is a straightforward kernelized version of classic extended RLS [3].
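As a rough illustration of this evaluation protocol (not the authors' code), the loop below assumes a hypothetical learner object exposing `update(x, d)` and `predict(X)` methods, and a hypothetical `make_test_set(t, n)` helper that draws test pairs from the channel model active at time `t`:

```python
import numpy as np

def run_online_identification(learner, s, y, make_test_set,
                              n_taps=4, test_size=200):
    """Online identification: one sample per step, MSE on fresh test data.

    `learner`, `make_test_set`, `n_taps` and `test_size` are
    hypothetical; the protocol mirrors the description in the text.
    """
    mse_curve = []
    for t in range(n_taps - 1, len(s)):
        x_t = s[t - n_taps + 1:t + 1][::-1]       # time-embedded input vector
        learner.update(x_t, y[t])                 # one input/output pair per step
        X_test, y_test = make_test_set(t, test_size)
        residual = y_test - learner.predict(X_test)
        mse_curve.append(np.mean(residual ** 2))  # MSE under current channel
    return np.array(mse_curve)
```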
An RBF kernel $k(\mathbf{x}, \mathbf{x}') = \exp\!\left(-\|\mathbf{x} - \mathbf{x}'\|^2 / (2\sigma^2)\right)$ is used in all algorithms, with length-scale $\sigma$. The regularization is set to match the true value of the noise-to-signal ratio.
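For reference, a standard vectorized implementation of this kernel (the default length-scale is a placeholder, not the experimental value):

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    """Compute k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) for all pairs.

    X: (n, d) array, Y: (m, d) array; returns an (n, m) Gram matrix.
    `lengthscale` is an illustrative default, not the experimental value.
    """
    sq_dists = (np.sum(X ** 2, axis=1)[:, None]
                + np.sum(Y ** 2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * lengthscale ** 2))
```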
Regarding memory, SW-KRLS and KRLS-T are given a fixed dictionary size $M$. No memory limit is imposed on NORMA and EX-KRLS (i.e., $M = \infty$), given that the former would perform very poorly with only $M$ bases (being LMS-based), and the latter can only be applied with an ever-growing dictionary when tracking. The adaptation rates are chosen as follows: NORMA uses a learning rate $\eta$; EX-KRLS uses its state-transition and process-noise parameters together with a forgetting factor; and KRLS-T uses a forgetting factor $\lambda$. Note that the same value of $\lambda$ does not necessarily correspond to the same convergence rate in different algorithms.
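As a rough intuition for this remark, in a standard exponentially weighted RLS scheme (which does not necessarily match the exact weighting applied by each algorithm here) a sample observed $n$ steps in the past carries weight $\lambda^n$, so the effective memory length is approximately

$$\sum_{n=0}^{\infty} \lambda^{n} = \frac{1}{1-\lambda},$$

e.g., $\lambda = 0.99$ corresponds to an effective window of roughly $100$ samples. Algorithms that apply forgetting in different ways can therefore exhibit different effective memories, and hence different convergence rates, for the same nominal $\lambda$.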
The identification results, averaged over independent simulations, can be found in Fig. 2. EX-KRLS initially obtains acceptable tracking results, but later starts to diverge due to numerical problems. SW-KRLS obtains very reasonable results, but, since it gives the same importance to all samples in its window, its speed of convergence is limited by its window size $M$. The best performance, both in terms of convergence rate and final MSE, is obtained by the proposed KRLS-T algorithm, which gives more importance to more recent data. The influence of its forgetting factor $\lambda$ is illustrated in Fig. 3. In the limiting case $\lambda = 1$, KRLS-T does not perform tracking, and it is then fair to compare its performance to ALD-KRLS (which is not a tracker). We applied ALD-KRLS with a fixed ALD threshold $\nu$, which determines the size of the final dictionary (we also verified that the performance was hardly affected by changing $\nu$). As in the previous example, KRLS-T obtains superior results.
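For context, ALD-KRLS grows its dictionary via an approximate-linear-dependence test: a new input is admitted only if its feature-space image cannot be well approximated by the current dictionary. A minimal sketch of this criterion (the incremental bookkeeping of the inverse kernel matrix is omitted, and the function name is ours):

```python
import numpy as np

def ald_test(K_inv, k_vec, k_tt, nu):
    """Approximate linear dependence (ALD) test.

    K_inv : inverse Gram matrix of the current dictionary
    k_vec : kernel values between the new input and each dictionary element
    k_tt  : kernel of the new input with itself
    nu    : ALD threshold; larger values yield smaller dictionaries

    Returns True if the new input should be added to the dictionary.
    """
    a = K_inv @ k_vec              # coefficients of the best approximation
    delta = k_tt - k_vec @ a       # squared residual in feature space
    return delta > nu
```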