
Initial state randomness improves sequence learning in a model hippocampal network

Bibliographic Details
Published in: Physical Review E, Statistical, Nonlinear, and Soft Matter Physics, 2002-03, Vol. 65 (3 Pt 1), p. 031914
Main Authors: Shon, A P, Wu, X B, Sullivan, D W, Levy, W B
Format: Article
Language: English
Description
Summary: Randomness can be a useful component of computation. Using a computationally minimal, but still biologically based model of the hippocampus, we evaluate the effects of initial state randomization on learning a cognitive problem that requires this brain structure. Greater randomness of initial states leads to more robust performance in simulations of the cognitive task called transverse patterning, a context-dependent discrimination task that we code as a sequence prediction problem. At the conclusion of training, greater initial randomness during training trials also correlates with increased, repetitive firing of select individual neurons, previously named local context neurons. In essence, such repetitively firing neurons recognize subsequences, and previously their presence has been correlated with solving the transverse patterning problem. A more detailed analysis of the simulations across training trials reveals more about initial state randomization. The beneficial effects of initial state randomization derive from enhanced variation, across training trials, of the sequential states of a network. This greater variation is not uniformly present during training; it is largely restricted to the beginning of training and when novel sequences are introduced. Little such variation occurs after extensive or even moderate amounts of training. We explain why variation is high early in training, but not later. This automatic modulation of the initial-state-driven random variation through state space is reminiscent of simulated annealing, where modulated randomization encourages a selectively broad search through state space. In contrast to an annealing schedule, the selective occurrence of such a random search here is an emergent property, and the critical randomization occurs during training rather than testing.
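The core mechanism the abstract describes — random initial states producing varied state-space trajectories across trials, whereas a fixed initial state yields the same trajectory every time — can be illustrated with a toy deterministic binary recurrent network. This is a minimal sketch only: the network size, Gaussian weights, threshold update, and cue pattern below are illustrative assumptions, not the paper's hippocampal (CA3-based) model.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20        # neurons in the toy network
K = 3         # neurons driven by the external cue
T = 5         # time steps per trial
TRIALS = 50

# Fixed random recurrent weights (stand-in for recurrent connectivity;
# no synaptic modification here, unlike the actual model).
W = rng.normal(size=(N, N))
theta = 0.0

cue = np.zeros(N)
cue[:K] = 1.0  # the same external input starts every trial

def run_trial(init_state):
    """Iterate the deterministic threshold network from init_state."""
    s = init_state.astype(float)
    states = []
    for _ in range(T):
        s = ((W @ s + cue) > theta).astype(float)
        states.append(tuple(s))
    return tuple(states)

# Condition 1: every trial begins in the same quiescent state.
fixed_trajs = {run_trial(np.zeros(N)) for _ in range(TRIALS)}

# Condition 2: every trial begins in a random binary state.
random_trajs = {run_trial(rng.integers(0, 2, N)) for _ in range(TRIALS)}

print(len(fixed_trajs))   # one trajectory, repeated identically
print(len(random_trajs))  # many distinct trajectories through state space
```

Because the update is deterministic, all of the trial-to-trial variation in the second condition comes from the initial state alone; this is the "enhanced variation, across training trials, of the sequential states of a network" that the abstract credits for the broader search through state space.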
ISSN: 1539-3755
DOI: 10.1103/PhysRevE.65.031914