
An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks

Bibliographic Details
Published in: arXiv.org 2021-06
Main Authors: Rajput, Shashank, Sreenivasan, Kartik, Papailiopoulos, Dimitris, Karbasi, Amin
Format: Article
Language: English
Description
Summary: It is well known that modern deep neural networks are powerful enough to memorize datasets even when the labels have been randomized. Recently, Vershynin (2020) settled a long-standing question of Baum (1988), proving that \emph{deep threshold} networks can memorize \(n\) points in \(d\) dimensions using \(\widetilde{\mathcal{O}}(e^{1/\delta^2}+\sqrt{n})\) neurons and \(\widetilde{\mathcal{O}}(e^{1/\delta^2}(d+\sqrt{n})+n)\) weights, where \(\delta\) is the minimum distance between the points. In this work, we improve the dependence on \(\delta\) from exponential to almost linear, proving that \(\widetilde{\mathcal{O}}(\frac{1}{\delta}+\sqrt{n})\) neurons and \(\widetilde{\mathcal{O}}(\frac{d}{\delta}+n)\) weights are sufficient. Our construction uses Gaussian random weights only in the first layer, while all the subsequent layers use binary or integer weights. We also prove new lower bounds by connecting memorization in neural networks to the purely geometric problem of separating \(n\) points on a sphere using hyperplanes.
ISSN: 2331-8422
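The scale of the claimed improvement can be illustrated numerically. The sketch below is not from the paper; it simply compares the \(\delta\)-dependent factor in the two neuron bounds quoted in the abstract, \(e^{1/\delta^2}\) (Vershynin 2020) versus \(1/\delta\) (this work), ignoring the \(\sqrt{n}\) term and the constants and polylog factors hidden by the \(\widetilde{\mathcal{O}}\) notation.

```python
import math

def prior_delta_factor(delta):
    """Delta-dependent factor in the prior bound: e^(1/delta^2)."""
    return math.exp(1.0 / delta ** 2)

def improved_delta_factor(delta):
    """Delta-dependent factor in this paper's bound: 1/delta."""
    return 1.0 / delta

# As the minimum distance delta between points shrinks, the prior
# factor blows up exponentially while the new one grows only linearly.
for delta in (0.5, 0.3, 0.2):
    print(f"delta={delta}: prior ~ {prior_delta_factor(delta):.3e}, "
          f"improved ~ {improved_delta_factor(delta):.1f}")
```

For example, at \(\delta = 0.2\) the prior factor is \(e^{25} \approx 7 \times 10^{10}\), while the improved factor is just \(5\), which is what "exponential to almost linear" means in practice.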