
On the relative expressiveness of Bayesian and neural networks

Bibliographic Details
Published in:International Journal of Approximate Reasoning, 2019-10, Vol.113, p.303-323
Main Authors: Choi, Arthur, Wang, Ruocheng, Darwiche, Adnan
Format: Article
Language:English
Summary:A neural network computes a function. A central property of neural networks is that they are “universal approximators”: for a given continuous function, there exists a neural network that can approximate it arbitrarily well, given enough neurons (and some additional assumptions). In contrast, a Bayesian network is a model, but each of its queries can be viewed as computing a function. In this paper, we identify some key distinctions between the functions computed by neural networks and those computed by marginal Bayesian network queries, showing that the former are more expressive than the latter. Moreover, we propose a simple augmentation to Bayesian networks (a testing operator), which enables their marginal queries to become “universal approximators.”
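
To make the abstract's point concrete, here is a minimal Python sketch of how a marginal Bayesian network query can be viewed as computing a function of its evidence. The network structure, the function name marginal_query, and all parameter values below are invented for illustration and are not taken from the paper.

    # A minimal sketch (hypothetical; the two-node network X -> Y and all
    # numbers below are invented for illustration, not from the paper).
    # It views one marginal query as a function of the strength lam of
    # soft (virtual) evidence on X = 1.

    def marginal_query(lam: float) -> float:
        """Return P(Y=1 | soft evidence lam on X=1) for a fixed X -> Y network."""
        p_x1 = 0.3      # P(X=1)         -- assumed value
        p_y1_x1 = 0.9   # P(Y=1 | X=1)   -- assumed value
        p_y1_x0 = 0.2   # P(Y=1 | X=0)   -- assumed value

        # Virtual evidence multiplies the weight of X=1 by lam (X=0 by 1).
        w_x1 = lam * p_x1
        w_x0 = 1.0 - p_x1

        # The query value is a ratio of expressions linear in lam, and it is
        # monotone in lam, so no choice of parameters lets it trace a
        # non-monotone target on this input.
        return (w_x1 * p_y1_x1 + w_x0 * p_y1_x0) / (w_x1 + w_x0)

    for lam in (0.0, 0.5, 1.0, 2.0, 10.0):
        print(f"lambda = {lam:4.1f}  ->  P(Y=1 | evidence) = {marginal_query(lam):.4f}")

A neural network taking the same single input lam could, by the universal approximation property the abstract cites, fit such targets (monotone or not) arbitrarily well given enough neurons; this is a small instance of the expressiveness gap the paper formalizes, and the gap its proposed testing operator is designed to close.
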
ISSN:0888-613X
EISSN:1873-4731
DOI:10.1016/j.ijar.2019.07.008