
Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces and ReLU Regression under Gaussian Marginals


Bibliographic Details
Published in: arXiv.org, 2023-02
Main Authors: Diakonikolas, Ilias; Kane, Daniel M.; Ren, Lisheng
Format: Article
Language: English
Description
Summary: We study the task of agnostically learning halfspaces under the Gaussian distribution. Specifically, given labeled examples \((\mathbf{x},y)\) from an unknown distribution on \(\mathbb{R}^n \times \{ \pm 1\}\), whose marginal distribution on \(\mathbf{x}\) is the standard Gaussian and the labels \(y\) can be arbitrary, the goal is to output a hypothesis with 0-1 loss \(\mathrm{OPT}+\epsilon\), where \(\mathrm{OPT}\) is the 0-1 loss of the best-fitting halfspace. We prove a near-optimal computational hardness result for this task, under the widely believed sub-exponential time hardness of the Learning with Errors (LWE) problem. Prior hardness results are either qualitatively suboptimal or apply to restricted families of algorithms. Our techniques extend to yield near-optimal lower bounds for related problems, including ReLU regression.
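For concreteness, the agnostic-learning objective stated in the summary can be written out as follows. These are the standard textbook definitions of the halfspace class and the 0-1 loss, not taken verbatim from the paper itself:

```latex
% Hypothesis class of halfspaces (linear threshold functions) on R^n:
\[
  \mathcal{H} \;=\; \bigl\{\, \mathbf{x} \mapsto \mathrm{sign}(\langle \mathbf{w}, \mathbf{x}\rangle - t)
    \;:\; \mathbf{w} \in \mathbb{R}^n,\ t \in \mathbb{R} \,\bigr\}
\]
% 0-1 loss of the best-fitting halfspace under the unknown distribution D
% (whose x-marginal is the standard Gaussian on R^n):
\[
  \mathrm{OPT} \;=\; \min_{h \in \mathcal{H}}\;
    \Pr_{(\mathbf{x},y)\sim \mathcal{D}}\bigl[\, h(\mathbf{x}) \neq y \,\bigr]
\]
% Agnostic-learning goal: output a hypothesis \(\hat{h}\) (not necessarily a
% halfspace) whose 0-1 loss nearly matches that benchmark:
\[
  \Pr_{(\mathbf{x},y)\sim \mathcal{D}}\bigl[\, \hat{h}(\mathbf{x}) \neq y \,\bigr]
    \;\le\; \mathrm{OPT} + \epsilon .
\]
```

The hardness result concerns the computational cost of achieving this guarantee, not its information-theoretic feasibility.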
ISSN:2331-8422