On the Universality of Invariant Networks

Bibliographic Details
Published in: arXiv.org 2019-05
Main Authors: Maron, Haggai; Fetaya, Ethan; Segol, Nimrod; Lipman, Yaron
Format: Article
Language: English
Description
Summary: Constraining linear layers in neural networks to respect symmetry transformations from a group \(G\) is a common design principle for invariant networks that has found many applications in machine learning. In this paper, we consider a fundamental question that has received little attention to date: Can these networks approximate any (continuous) invariant function? We tackle the rather general case of \(G\leq S_n\), an arbitrary subgroup of the symmetric group acting on \(\mathbb{R}^n\) by permuting coordinates. This setting includes several recent popular invariant networks. We present two main results: First, \(G\)-invariant networks are universal if high-order tensors are allowed. Second, there are groups \(G\) for which higher-order tensors are unavoidable for obtaining universality. \(G\)-invariant networks consisting of only first-order tensors are of special interest due to their practical value. We conclude the paper by proving a necessary condition for the universality of \(G\)-invariant networks that incorporate only first-order tensors.
ISSN: 2331-8422
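
To make the setting concrete, below is a minimal sketch of a first-order invariant network for the special case \(G = S_n\) (the full symmetric group); the paper treats arbitrary subgroups \(G\leq S_n\), and the layer count and names here are illustrative assumptions, not taken from the paper. For \(G = S_n\), every permutation-equivariant linear map \(\mathbb{R}^n\to\mathbb{R}^n\) has the two-parameter form \(\lambda I + \gamma\,\mathbf{1}\mathbf{1}^T\) (the DeepSets layer of Zaheer et al.), so stacking such layers with a pointwise nonlinearity and finishing with an invariant pooling (e.g. sum) yields a permutation-invariant function built from first-order tensors only.

# Sketch of a first-order S_n-invariant network (assumed example, not the
# paper's implementation). Each layer is permutation-equivariant; the final
# sum pooling makes the overall function permutation-invariant.
import numpy as np

rng = np.random.default_rng(0)

class EquivariantLinear:
    """Permutation-equivariant map R^n -> R^n: x -> lam*x + gam*sum(x)*1."""
    def __init__(self):
        self.lam = rng.normal()
        self.gam = rng.normal()

    def __call__(self, x):
        return self.lam * x + self.gam * x.sum() * np.ones_like(x)

def invariant_net(x, layers):
    h = x
    for layer in layers:
        h = np.tanh(layer(h))  # pointwise nonlinearity preserves equivariance
    return h.sum()             # sum pooling: output is invariant under S_n

n = 5
layers = [EquivariantLinear() for _ in range(3)]
x = rng.normal(size=n)
perm = rng.permutation(n)
# Invariance check: permuting the input coordinates leaves the output unchanged.
assert np.isclose(invariant_net(x, layers), invariant_net(x[perm], layers))

The paper's first main result says that allowing higher-order tensor layers (not just the vector-valued layers above) restores universality for any \(G\), while its second result shows that for some groups such higher-order layers cannot be avoided.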