Prediction of a Steady-State Nonlinear Time-Series

In a first experiment we perform one-step prediction of the nonlinear Mackey-Glass time-series with a number of online algorithms. The data are corrupted by zero-mean Gaussian noise with variance $0.001$. The algorithms are trained online on $500$ points of this series, and at each iteration the MSE is calculated on a test set of $100$ points. A time-embedding of $7$ is chosen, i.e. ${\mathbf x}_n = [x_{n}, \dots, x_{n-6}]^T$, and the desired output is $y_n = x_{n+1}$.
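
To make this setup concrete, below is a minimal Python sketch of the data generation and time-embedding. It is not the original experiment code: the Mackey-Glass parameters ($\tau = 30$, $\beta = 0.2$, $\gamma = 0.1$), the Euler discretization and the random seeds are assumptions, and the function names are hypothetical.

\begin{verbatim}
import numpy as np

def mackey_glass(n, tau=30, beta=0.2, gamma=0.1, seed=0):
    # Euler integration (dt = 1) of the Mackey-Glass delay equation:
    # dx/dt = beta * x(t - tau) / (1 + x(t - tau)^10) - gamma * x(t)
    rng = np.random.default_rng(seed)
    x = np.zeros(n + tau)
    x[:tau] = 1.2 + 0.1 * rng.standard_normal(tau)  # arbitrary warm-up history
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + beta * x[t - tau] / (1 + x[t - tau]**10) - gamma * x[t]
    return x[tau:]

def embed(x, d=7):
    # Inputs x_n = [x_n, ..., x_{n-d+1}]^T, desired outputs y_n = x_{n+1}
    X = np.array([x[n - d + 1:n + 1][::-1] for n in range(d - 1, len(x) - 1)])
    return X, x[d:]

series = mackey_glass(620)
series += np.sqrt(0.001) * np.random.default_rng(1).standard_normal(620)
X, Y = embed(series, d=7)
X_train, y_train = X[:500], Y[:500]      # 500 points for online training
X_test, y_test = X[500:600], Y[500:600]  # 100 points for the test MSE
\end{verbatim}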

The learning curves of the different algorithms are shown in Fig. 1. For the kernel-based algorithms, a Gaussian kernel with $\sigma = 1$ and regularization $\lambda = 0.1$ is chosen. As a lower bound for the MSE, the results of the ALD-KRLS algorithm with $\nu = 0$ are included; this algorithm uses a growing memory and has complexity $O(n^2)$. For SW-KRLS and FB-KRLS the memory size is fixed to $50$ patterns. Remarkably, FB-KRLS obtains results that are very close to this lower bound. With $\nu = 0.43$, ALD-KRLS stores $53$ patterns in memory, a budget comparable to that of FB-KRLS; nevertheless, it performs worse in this case.
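
To illustrate the fixed-memory setting, the following is a naive Python sketch of kernel regression over a sliding window of $M$ patterns, using the Gaussian kernel and regularization above. For clarity it re-solves the regularized system from scratch at each step, at $O(M^3)$ cost; the actual SW-KRLS algorithm instead updates the inverse kernel matrix incrementally at $O(M^2)$ per step, and FB-KRLS differs further in discarding the least relevant pattern rather than the oldest. The class and function names are hypothetical.

\begin{verbatim}
import numpy as np

def gauss_kernel(A, B, sigma=1.0):
    # Gaussian kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

class NaiveSlidingWindowKRR:
    def __init__(self, M=50, sigma=1.0, lam=0.1):
        self.M, self.sigma, self.lam = M, sigma, lam
        self.X, self.y = [], []
        self.alpha = None

    def update(self, x, y):
        self.X.append(x)
        self.y.append(y)
        if len(self.X) > self.M:  # fixed budget: discard the oldest pattern
            self.X.pop(0)
            self.y.pop(0)
        Xm = np.array(self.X)
        K = gauss_kernel(Xm, Xm, self.sigma)
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(Xm)),
                                     np.array(self.y))

    def predict(self, Xq):
        # f(x) = sum_i alpha_i k(x_i, x) over the patterns in memory
        return gauss_kernel(np.asarray(Xq), np.array(self.X),
                            self.sigma) @ self.alpha

# Online training and final test MSE (reusing the data sketched above):
model = NaiveSlidingWindowKRR(M=50, sigma=1.0, lam=0.1)
for xn, yn in zip(X_train, y_train):
    model.update(xn, yn)
mse = np.mean((model.predict(X_test) - y_test)**2)
\end{verbatim}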

Figure 1: Top: Learning curves for one-step prediction on the Mackey-Glass time-series. Bottom: indices of the patterns stored in memory by ALD-KRLS ($\nu = 0.43$) and FB-KRLS. Note that the final memory of FB-KRLS consists of patterns selected over the whole series.
\includegraphics[width=\linewidth]{fig/mg30}
