Change-Based Inference in Attractor Nets: Linear Analysis
Published in: Neural Computation, 2010-12, Vol. 22 (12), pp. 3036-3061
Main Authors:
Format: Article
Language: English
Summary: One standard interpretation of networks of cortical neurons is that they form dynamical attractors. Computations such as stimulus estimation are performed by mapping inputs to points on the networks' attractive manifolds. These points represent population codes for the stimulus values. However, this standard interpretation is hard to reconcile with the observation that the firing rates of such neurons constantly change following presentation of stimuli. We have recently suggested an alternative interpretation according to which computations are realized by systematic changes in the states of such networks over time. This way of performing computations is fast, accurate, readily learnable, and robust to various forms of noise. Here we analyze the computation of stimulus discrimination in this change-based setting, relating it directly to the computation of stimulus estimation in the conventional attractor-based view. We use a common linear approximation to compare the two methods and show that perfect performance at estimation implies chance performance at discrimination.
ISSN: 0899-7667, 1530-888X
DOI: 10.1162/NECO_a_00051
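
The summary above contrasts attractor-based estimation (reading out a population code from the state the network settles into) with change-based readouts (reading out from how the state evolves over time). The following Python sketch is only a toy illustration of that linear line-attractor setting, under assumed parameters, and is not the model analyzed in the paper: states relax onto a one-dimensional attractive manifold, the asymptotic projection onto that manifold plays the role of the estimate, and the early state change lies off the manifold.

```python
# Minimal sketch (assumed toy model, not the authors' network):
# linear dynamics r(t+1) = W r(t) with W = u u^T + lam * (I - u u^T), 0 < lam < 1.
# Components off the line spanned by u contract by lam per step, so span(u)
# acts as a line attractor; the projection onto u is the population-code estimate.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                  # number of model neurons (arbitrary choice)
u = rng.standard_normal(N)
u /= np.linalg.norm(u)                  # attractor direction (unit vector)
P = np.outer(u, u)                      # projector onto the attractor line
lam = 0.8                               # assumed contraction rate off the manifold
W = P + lam * (np.eye(N) - P)           # linear recurrent weight matrix

def run(r0, T=200):
    """Iterate the linear dynamics and return the state trajectory."""
    traj = [r0]
    for _ in range(T):
        traj.append(W @ traj[-1])
    return np.array(traj)

# A stimulus sets the initial state: a component s_true along u plus noise off it.
s_true = 1.3
r0 = s_true * u + 0.5 * rng.standard_normal(N)
traj = run(r0)

# Attractor-based readout: project the (near) fixed point onto the manifold direction.
estimate_attractor = traj[-1] @ u
# Change-based quantity: the early change in state, which here lies entirely
# off the manifold, since (W - I) r0 = (lam - 1) (I - P) r0.
early_change = traj[1] - traj[0]

print("asymptotic estimate of s:", estimate_attractor)
print("norm of first-step change:", np.linalg.norm(early_change))
```

In this toy setup the asymptotic readout recovers the stimulus component along the manifold, while the transient change depends only on the off-manifold part of the initial state; the paper's linear analysis of how such changes relate to estimation and discrimination is considerably more detailed than this sketch.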