Geometric compression of invariant manifolds in neural networks
Published in: | Journal of Statistical Mechanics: Theory and Experiment, 2021-04, Vol. 2021 (4), p. 044001 |
Main Authors: | Jonas Paccolat, Leonardo Petrini, Mario Geiger, Kevin Tyloo, Matthieu Wyart |
Format: | Article |
Language: | English |
Summary: | We study how neural networks compress uninformative input space in models where data lie in d dimensions, but whose labels only vary within a linear manifold of dimension d∥ < d. We show that for a one-hidden-layer network initialized with infinitesimal weights (i.e. in the feature learning regime) trained with gradient descent, the first layer of weights evolves to become nearly insensitive to the d⊥ = d − d∥ uninformative directions. These are effectively compressed by a factor λ ∼ √p, where p is the size of the training set. We quantify the benefit of such a compression on the test error ϵ. For large initialization of the weights (the lazy training regime), no compression occurs, and for regular boundaries separating labels we find that ϵ ∼ p^(−β), with β_Lazy = d/(3d − 2). Compression improves the learning curves, so that β_Feature = (2d − 1)/(3d − 2) if d∥ = 1 and β_Feature = (d + d⊥/2)/(3d − 2) if d∥ > 1. We test these predictions for a stripe model where boundaries are parallel interfaces (d∥ = 1) as well as for a cylindrical boundary (d∥ = 2). Next, we show that compression shapes the evolution of the neural tangent kernel (NTK) in time, so that its top eigenvectors become more informative and display a larger projection onto the labels. Consequently, kernel learning with the frozen NTK at the end of training outperforms kernel learning with the initial NTK. We confirm these predictions both for a one-hidden-layer fully connected network trained on the stripe model and for a 16-layer convolutional neural network trained on the Modified National Institute of Standards and Technology database (MNIST), for which we also find β_Feature > β_Lazy. The great similarities found in these two cases support the idea that compression is central to the training of MNIST, and put forward kernel principal component analysis on the evolving NTK as a useful diagnostic of compression in deep networks. |
ISSN: | 1742-5468 |
DOI: | 10.1088/1742-5468/abf1f3 |
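The diagnostics described in the abstract are easy to reproduce on a toy version of the stripe model. The sketch below is a minimal NumPy illustration, not the authors' code: it builds a one-hidden-layer ReLU network on data whose labels depend only on the first of d coordinates, trains it by gradient descent from a small initialization, and reports two quantities before and after training: the anisotropy of the first-layer weights between the informative direction and the d⊥ = d − 1 uninformative ones (the compression factor λ), and the fraction of the label vector captured by the top eigenvectors of the empirical NTK (a kernel-PCA view of the kernel's evolution). The network size, hinge loss, learning rate, and step count are assumptions chosen for the demo, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stripe-model data: labels depend only on the first coordinate (d_par = 1),
# here the simplest case of a single interface at x_1 = 0. The remaining
# d_perp = d - 1 directions carry no label information.
d, p, h = 5, 256, 100            # input dimension, training-set size, hidden width
X = rng.standard_normal((p, d))
y = np.sign(X[:, 0])             # labels in {-1, +1}

# One-hidden-layer ReLU network, f(x) = (1/h) * sum_j a_j ReLU(w_j . x),
# with small ("feature learning") initialization.
scale = 1e-3
W = scale * rng.standard_normal((h, d))   # first-layer weights
a = scale * rng.standard_normal(h)        # output weights


def forward(W, a, X):
    pre = X @ W.T                  # (p, h) pre-activations
    act = np.maximum(pre, 0.0)     # ReLU
    return act @ a / h, pre, act


def empirical_ntk(W, a, X):
    """Gram matrix K_ik = <grad_theta f(x_i), grad_theta f(x_k)> at the current weights."""
    _, pre, act = forward(W, a, X)
    mask = (pre > 0).astype(float)              # ReLU derivative
    K = act @ act.T / h**2                      # gradients w.r.t. output weights a
    G = (mask * a) @ (mask * a).T               # sum_j a_j^2 mask_ij mask_kj
    K += G * (X @ X.T) / h**2                   # gradients w.r.t. first-layer weights W
    return K


def top_eigvec_label_fraction(K, y, k=5):
    """Fraction of the label vector captured by the top-k eigenvectors of K (kernel PCA view)."""
    vals, vecs = np.linalg.eigh(K)
    top = vecs[:, np.argsort(vals)[::-1][:k]]
    return np.sum((top.T @ y) ** 2) / np.sum(y**2)


def weight_anisotropy(W):
    """Mean squared first-layer weight along the informative vs. uninformative directions."""
    return np.mean(W[:, 0] ** 2) / np.mean(W[:, 1:] ** 2)


print("before training: top-NTK label fraction =", top_eigvec_label_fraction(empirical_ntk(W, a, X), y))
print("before training: weight anisotropy      =", weight_anisotropy(W))

# Plain gradient descent on the hinge loss (illustrative hyper-parameters).
lr, steps = 2.0, 4000
for _ in range(steps):
    f, pre, act = forward(W, a, X)
    g = -y * (y * f < 1)                        # dL/df for the hinge loss, up to the 1/p factor
    grad_a = act.T @ g / (h * p)
    grad_W = ((g[:, None] * (pre > 0)) * a).T @ X / (h * p)
    a -= lr * grad_a
    W -= lr * grad_W

f, _, _ = forward(W, a, X)
print("training accuracy                       =", np.mean(np.sign(f) == y))
print("after training:  top-NTK label fraction =", top_eigvec_label_fraction(empirical_ntk(W, a, X), y))
print("after training:  weight anisotropy      =", weight_anisotropy(W))

# Learning-curve exponents quoted in the abstract, evaluated for this geometry (d_par = 1).
beta_lazy = d / (3 * d - 2)
beta_feature = (2 * d - 1) / (3 * d - 2)
print(f"predicted exponents: beta_lazy = {beta_lazy:.3f}, beta_feature = {beta_feature:.3f}")
```

With the small initialization used here, the weight anisotropy should grow well above 1 and the top-NTK label fraction should increase, in line with the compression picture sketched in the abstract; rerunning with a large initialization scale (the lazy regime) should leave both quantities close to their initial values, since the weights barely move.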