Robustly Learning a Single Neuron via Sharpness
| Published in: | arXiv.org 2023-06 |
|---|---|
| Main Authors: | |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Summary: | We study the problem of learning a single neuron with respect to the \(L_2^2\)-loss in the presence of adversarial label noise. We give an efficient algorithm that, for a broad family of activations including ReLUs, approximates the optimal \(L_2^2\)-error within a constant factor. Our algorithm applies under much milder distributional assumptions compared to prior work. The key ingredient enabling our results is a novel connection to local error bounds from optimization theory. |
| ISSN: | 2331-8422 |
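
The summary above concerns the squared \(L_2^2\) error of a single neuron \(\mathbf{x} \mapsto \sigma(\mathbf{w}\cdot\mathbf{x})\) when a bounded fraction of labels may be adversarially corrupted. As a rough illustration of that setup only, and not of the paper's sharpness-based algorithm, the sketch below fits a ReLU neuron by plain gradient descent on the empirical squared loss over synthetic data with a corrupted label fraction; every name, distribution choice, and constant in it is an assumption made for this example.

```python
# Illustrative sketch only -- NOT the paper's algorithm.  It sets up the
# problem from the abstract: fit a single ReLU neuron under the squared
# (L_2^2) loss when a small fraction of labels is adversarially corrupted,
# here using plain gradient descent on the empirical loss.  The data
# distribution, corruption model, and constants are assumptions for the demo.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic clean data: Gaussian features, labels from a ground-truth ReLU neuron.
n, d = 5000, 10
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)
y = np.maximum(X @ w_star, 0.0)

# Adversarial label noise: overwrite an eps-fraction of labels arbitrarily.
eps = 0.05
bad = rng.choice(n, size=int(eps * n), replace=False)
y[bad] = 10.0 * rng.standard_normal(bad.size)

def loss_and_grad(w):
    """Empirical L_2^2 loss of the neuron x -> max(w.x, 0) and its gradient."""
    margins = X @ w
    resid = np.maximum(margins, 0.0) - y
    loss = np.mean(resid ** 2)
    grad = 2.0 * X.T @ (resid * (margins > 0)) / n
    return loss, grad

# Plain gradient descent from a small random initialization.
w = 0.1 * rng.standard_normal(d)
for _ in range(500):
    _, g = loss_and_grad(w)
    w -= 0.2 * g

final_loss, _ = loss_and_grad(w)
print(f"empirical L_2^2 loss: {final_loss:.4f}")
print(f"parameter distance ||w - w*||: {np.linalg.norm(w - w_star):.4f}")
```

Gaussian features and a 5% corruption rate are arbitrary demo choices; the paper's guarantees concern a broad family of activations and much milder distributional assumptions.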