In this paper we have introduced a Bayesian framework that unifies existing KRLS theory and provides additional insight by explicitly handling uncertainty. This approach allows us to define the concept of ``forgetting'' in a natural manner in KRLS, and it introduces regularization into KRLS in a rigorous way. Finally, we have combined these ideas into a concrete algorithm, the KRLS Tracker, which operates with fixed memory and computational requirements per time step and allows for simple, practical implementation and usage.
We have presented several numerical experiments showing that the proposed algorithm outperforms existing online kernel methods not only in the non-stationary scenarios for which it was designed, but also in stationary scenarios (by setting its forgetting factor to $1$), due to its more rigorous approach to regularization.
The described Bayesian framework opens the door to many interesting future research lines. New forgetting strategies can be developed using the general setup of Section 3.1. These strategies can be combined with a linear kernel to produce new linear adaptive filtering algorithms and to gain insight into existing ones. For instance, the KRLS Tracker with the forgetting updates
$$\boldsymbol{\mu} \leftarrow \boldsymbol{\mu}, \qquad \boldsymbol{\Sigma} \leftarrow \lambda^{-1}\boldsymbol{\Sigma},$$
yields exactly the exponentially weighted RLS algorithm when a linear kernel is used. Furthermore, it would be interesting to study how the proposed KRLS Tracker can be extended to a full kernel Kalman filter.
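This equivalence with exponentially weighted RLS can be checked numerically. The following minimal sketch (not part of the paper; it assumes unit observation-noise variance and an identity-matrix prior covariance) runs the standard EW-RLS recursion alongside a Bayesian linear-regression recursion in which the weight covariance is inflated by $\lambda^{-1}$ before each observation update; the two recursions produce identical weights and covariances on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, lam = 3, 50, 0.98  # dimension, number of samples, forgetting factor

# Exponentially weighted RLS state: weights w, inverse correlation matrix P
w = np.zeros(d)
P = np.eye(d)

# Bayesian linear model with forgetting: posterior mean mu, covariance Sigma
# (unit noise variance assumed, so Sigma plays the role of P)
mu = np.zeros(d)
Sigma = np.eye(d)

for _ in range(T):
    x = rng.standard_normal(d)
    y = rng.standard_normal()

    # --- exponentially weighted RLS recursion ---
    k = P @ x / (lam + x @ P @ x)        # gain vector
    w = w + k * (y - w @ x)              # weight update
    P = (P - np.outer(k, x @ P)) / lam   # inverse-correlation update

    # --- Bayesian recursion: covariance inflation, then posterior update ---
    Sigma = Sigma / lam                       # forgetting: Sigma <- Sigma / lam
    g = Sigma @ x / (1.0 + x @ Sigma @ x)    # Kalman-style gain (noise var = 1)
    mu = mu + g * (y - mu @ x)               # posterior mean update
    Sigma = Sigma - np.outer(g, x) @ Sigma   # posterior covariance update

print(np.allclose(w, mu), np.allclose(P, Sigma))
```

The agreement follows because inflating the covariance first gives the gain $\Sigma x / (\lambda + x^\top \Sigma x)$, which is exactly the EW-RLS gain, and the resulting posterior covariance equals the EW-RLS inverse-correlation update.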