
Learning for Control: \mathcal{L}_{1}-Error Bounds for Kernel-Based Regression


Bibliographic Details
Published in: IEEE Transactions on Automatic Control, 2024-10, Vol. 69 (10), p. 6530-6545
Main Authors: Bisiacco, Mauro, Pillonetto, Gianluigi
Format: Article
Language:English
Summary: We consider functional regression models with noisy outputs resulting from linear transformations. In the setting of regularization theory in reproducing kernel Hilbert spaces (RKHSs), much work has been devoted to building uncertainty bounds around kernel-based estimates, hence characterizing their convergence rates. Such results are typically formulated using either the average squared prediction loss or the RKHS norm. However, in signal processing and in emerging areas such as learning for control, measuring the estimation error through the \mathcal{L}_{1} norm is often more advantageous. For example, it provides insight on the convergence rate in the Laplace/Fourier domain, whose role is crucial in the analysis of dynamical systems. For this reason, we consider all the RKHSs \mathcal{H} associated with Lebesgue measurable positive-definite kernels that induce subspaces of \mathcal{L}_{1}, also known in the literature as stable RKHSs. The inclusion \mathcal{H} \subset \mathcal{L}_{1} is then characterized. This permits converting all the error bounds that depend on the RKHS norm into bounds in terms of the \mathcal{L}_{1} norm. We also show that our result is optimal: there does not exist any better reformulation of the bounds in \mathcal{L}_{1} than the one presented here.
ISSN: 0018-9286, 1558-2523
DOI: 10.1109/TAC.2024.3372882
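
To make the setting concrete, below is a minimal numerical sketch (not taken from the paper) of kernel-based regression with a stable kernel, written in Python with NumPy. The TC kernel, the signal sizes, and all hyperparameters (c, lam, sigma2) are illustrative assumptions; the sketch merely discretizes an impulse-response estimation problem and reports both the RKHS norm and the discretized \mathcal{L}_{1} norm of the estimate, i.e., the two quantities whose bounds the paper relates.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (assumptions, not from the paper).
n, T = 100, 200              # impulse-response length, number of output samples
t = np.arange(1, n + 1)

# TC ("tuned/correlated") kernel, a standard example of a stable kernel:
# K(s, t) = c * min(lam**s, lam**t).
c, lam = 1.0, 0.9
K = c * np.minimum(lam ** t[:, None], lam ** t[None, :])

# Simulated data y = Phi g0 + e, where Phi is the convolution (Toeplitz)
# matrix of a white-noise input and g0 is a decaying "true" impulse response.
g0 = 0.5 * lam ** t * np.sin(0.3 * t)
u = rng.standard_normal(T + n - 1)
Phi = np.stack([u[k : k + n][::-1] for k in range(T)])
y = Phi @ g0 + 0.1 * rng.standard_normal(T)

# Kernel-based (regularized least-squares) estimate:
# ghat = K Phi^T (Phi K Phi^T + sigma2 I)^{-1} y.
sigma2 = 0.01
alpha = np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(T), y)
a = Phi.T @ alpha
ghat = K @ a

# RKHS norm of the estimate (||ghat||_H^2 = a^T K a) and its discretized
# L1 norm; for stable RKHSs the paper converts bounds in the former into
# bounds in the latter.
rkhs_norm = np.sqrt(a @ K @ a)
l1_norm = np.abs(ghat).sum()
print(f"||ghat||_H  = {rkhs_norm:.3f}")
print(f"||ghat||_L1 = {l1_norm:.3f}")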