Equalization is performed by the proposed online K-CCA method, for
which a Gaussian kernel with
and the regularization
constant
were used. The filter
has length
and the RLS forgetting factor is
. For comparison, two other
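As a rough illustration of this kernel setup, a regularized Gaussian Gram matrix can be sketched as follows. This is a generic sketch, not the paper's implementation; the kernel width `sigma` and regularization constant `c` are placeholders, since the values used in the experiments are not reproduced here.

```python
import numpy as np

def gaussian_gram(X, sigma=0.5, c=1e-3):
    """Regularized Gaussian (RBF) Gram matrix of the rows of X.

    sigma and c are illustrative placeholders, not the values
    used in the paper's experiments.
    """
    # Pairwise squared Euclidean distances between the rows of X.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / (2.0 * sigma**2))
    # Regularization: shift the diagonal to keep the matrix well conditioned.
    return K + c * np.eye(len(X))
```

The diagonal shift plays the role of the regularization constant: it guarantees the Gram matrix stays invertible even when training samples are nearly identical.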
equalization methods are also included. The first is the
gradient identification method proposed in
[6], to which we added the same RLS
equalization block as in the presented K-CCA method. The
second is a time-delay MLP with
inputs for its
time-delay (i.e., equal to the equalizer's length),
neurons in
its hidden layer and
. The MLP does not take
the system structure into account; its equalization
results are included only to show the advantage of the other two
methods, which do exploit the Wiener system structure.
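Both the K-CCA method and the gradient identification method rely on an RLS block with a forgetting factor for the equalization stage. A minimal sketch of an exponentially weighted RLS update is given below; the forgetting factor `lam` and the initialization constant `delta` are illustrative choices, not the paper's values.

```python
import numpy as np

class RLS:
    """Exponentially weighted recursive least-squares filter.

    lam (forgetting factor) and delta (initialization) are
    typical illustrative values, not those used in the paper.
    """
    def __init__(self, n, lam=0.99, delta=100.0):
        self.w = np.zeros(n)            # filter coefficients
        self.P = delta * np.eye(n)      # inverse correlation matrix estimate
        self.lam = lam

    def update(self, x, d):
        # Gain vector.
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)
        # A priori error and coefficient update.
        e = d - self.w @ x
        self.w += k * e
        # Riccati update of the inverse correlation matrix.
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e
```

A forgetting factor below one exponentially down-weights old samples, which is what makes the block usable for the adaptive (online) training described next.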
All three methods were trained adaptively on a training data set,
and at every iteration the equalization
performance of each method was tested on a separate test data
set generated by the same Wiener system. Fig.
4 compares the mean square error (MSE) curves of
these three methods, averaged over
Monte-Carlo simulations.
Fig. 5 shows the coefficients estimated online by the
K-CCA algorithm after processing samples for the
given example when the length of the linear filter is
overestimated with respect to its correct value. Fig. 6
compares the MSE curves obtained for different assumed filter
lengths when
the correct value is
. Note that the effect of overestimating this length
on the algorithm's performance is minimal.
A parameter with a stronger influence on the performance of the
K-CCA algorithm is the length of the sliding window. Fig.
7 shows MSE curves for different window lengths
for the given setup. A longer window corresponds to a larger
kernel matrix, which in turn yields a better representation of the
inverse nonlinearity
and hence a lower equalization error.
The curves were averaged over
simulations.
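The window-length trade-off can be illustrated with a toy sketch that maintains the Gram matrix of the most recent samples; the window length and kernel width below are arbitrary illustrative values, and the samples are taken to be scalar for simplicity.

```python
import numpy as np
from collections import deque

def sliding_window_gram(stream, window=50, sigma=0.5):
    """Yield the Gaussian Gram matrix of the most recent samples.

    window and sigma are illustrative values; the paper compares
    several window lengths, which are not reproduced here.
    """
    buf = deque(maxlen=window)   # sliding window of recent scalar samples
    for x in stream:
        buf.append(x)
        X = np.asarray(buf)
        # Gram matrix grows with the buffer up to window x window.
        d2 = (X[:, None] - X[None, :])**2
        yield np.exp(-d2 / (2.0 * sigma**2))
```

The sketch makes the cost of a longer window explicit: the kernel matrix handled at each step scales quadratically with the window length, which is the price paid for the richer representation of the inverse nonlinearity.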
In a second setup, the same Wiener system is used with a BPSK
input signal. After training the K-CCA algorithm online with
symbols, its bit error rate (BER) was calculated on a test
data set. The BER curve is shown in Fig. 8.
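For reference, hard-decision BER estimation for BPSK amounts to sign detection on the equalizer output. The function below is a generic sketch with hypothetical argument names, not the paper's evaluation code.

```python
import numpy as np

def bpsk_ber(equalized, bits):
    """Bit error rate of hard-decision BPSK detection.

    equalized: soft equalizer outputs; bits: transmitted +/-1 symbols.
    Names are illustrative, not from the paper.
    """
    decisions = np.sign(equalized)          # hard decision per symbol
    return np.mean(decisions != bits)       # fraction of wrong decisions
```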