Most Neural Networks Are Almost Learnable
Published in: | arXiv.org 2023-10 |
---|---|
Format: | Article |
Language: | English |
ISSN: | 2331-8422 |
Summary: We present a PTAS for learning random constant-depth networks. We show that for any fixed \(\epsilon>0\) and depth \(i\), there is a poly-time algorithm that, for any distribution on \(\sqrt{d} \cdot \mathbb{S}^{d-1}\), learns random Xavier networks of depth \(i\) up to an additive error of \(\epsilon\). The algorithm runs in time and sample complexity of \((\bar{d})^{\mathrm{poly}(\epsilon^{-1})}\), where \(\bar{d}\) is the size of the network. For some cases of sigmoid and ReLU-like activations, the bound can be improved to \((\bar{d})^{\mathrm{polylog}(\epsilon^{-1})}\), resulting in a quasi-poly-time algorithm for learning constant-depth random networks.
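
As a concrete illustration of the setting described in the summary, the sketch below samples a random Xavier network of fixed depth and evaluates it on an input drawn uniformly from \(\sqrt{d} \cdot \mathbb{S}^{d-1}\). This is not the paper's learning algorithm; the depth, layer widths, ReLU activation, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_layer(fan_in, fan_out):
    # Xavier initialization: Gaussian weights with variance 1 / fan_in.
    return rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_out, fan_in))

def random_network(d, widths):
    # Random network: hidden layers of the given widths, scalar output.
    dims = [d] + list(widths) + [1]
    return [xavier_layer(dims[j], dims[j + 1]) for j in range(len(dims) - 1)]

def forward(weights, x, activation=lambda z: np.maximum(z, 0.0)):
    # Apply the activation after every layer except the last (ReLU here,
    # chosen only for illustration).
    for W in weights[:-1]:
        x = activation(W @ x)
    return (weights[-1] @ x).item()

def sample_input(d):
    # Uniform on sqrt(d) * S^{d-1}: normalize a Gaussian vector and rescale.
    g = rng.normal(size=d)
    return np.sqrt(d) * g / np.linalg.norm(g)

d = 100
net = random_network(d, widths=[64, 64])  # a depth-3 network, for illustration
x = sample_input(d)
print(forward(net, x))
```

Under this model, the learner in the paper is given samples \((x, f(x))\) with \(x\) drawn from an arbitrary distribution on the scaled sphere and \(f\) a random network of the above form, and must output a predictor within additive error \(\epsilon\).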