The cognitive mechanisms of optimal sampling
Published in: Behavioural Processes, February 2012, Vol. 89(2), pp. 77–85
Format: Article
Language: English
Summary:
► Behaviour of a number of learning models was simulated in a “two-armed bandit” situation.
► All models found the better arm more quickly when the payoff difference was greater.
► Only a backwards programming model absorbed on one arm sooner in shorter sessions.
► Some other models showed such an effect if motivation carried over between sessions.
► The most successful model used a rule of thumb specific to the precise situation.
How can animals learn the prey densities available in an environment that changes unpredictably from day to day, and how much effort should they devote to doing so, rather than exploiting what they already know? Using a two-armed bandit situation, we simulated several processes that might explain the trade-off between exploring and exploiting. They included an optimising model, dynamic backward sampling; a dynamic version of the matching law; the Rescorla–Wagner model; a neural network model; and ɛ-greedy and rule-of-thumb models derived from the study of reinforcement learning in artificial intelligence. Under conditions like those used in published studies of birds’ performance on two-armed bandits, all models usually identified the more profitable source of reward, and did so more quickly when the reward probability differential was greater. Only the dynamic programming model switched from exploring to exploiting more quickly when available time in the situation was less. With sessions of equal length presented in blocks, a session-length effect was induced in some of the models by allowing motivational, but not memory, carry-over from one session to the next. The rule-of-thumb model was the most successful overall, though the neural network model also performed better than the remaining models.
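To illustrate the kind of trial-by-trial process the abstract describes, the minimal sketch below pairs an ɛ-greedy choice rule with a Rescorla–Wagner-style (delta-rule) value update on a simulated two-armed bandit. It is not the authors' implementation: the parameter values, reward probabilities, and session length are assumptions chosen only for illustration.

```python
import random

# Illustrative sketch only: an epsilon-greedy chooser with a
# Rescorla-Wagner-style value update on a two-armed bandit.
# EPSILON, ALPHA, N_TRIALS and the reward probabilities are assumed
# values for illustration, not the settings used in the paper.

EPSILON = 0.1    # probability of exploring a random arm (assumed)
ALPHA = 0.2      # learning rate for the value update (assumed)
N_TRIALS = 200   # length of one simulated "session" (assumed)

def run_session(p_reward=(0.8, 0.4), seed=None):
    """Simulate one session; return the learned value of each arm and
    the proportion of choices allocated to the more profitable arm."""
    rng = random.Random(seed)
    values = [0.0, 0.0]  # current reward estimates for the two arms
    better_arm = max(range(2), key=lambda i: p_reward[i])
    better_choices = 0

    for _ in range(N_TRIALS):
        # epsilon-greedy choice: usually exploit, occasionally explore
        if rng.random() < EPSILON:
            arm = rng.randrange(2)
        else:
            arm = max(range(2), key=lambda i: values[i])

        reward = 1.0 if rng.random() < p_reward[arm] else 0.0

        # Rescorla-Wagner / delta-rule update: move the estimate toward
        # the obtained reward in proportion to the prediction error
        values[arm] += ALPHA * (reward - values[arm])

        if arm == better_arm:
            better_choices += 1

    return values, better_choices / N_TRIALS

if __name__ == "__main__":
    vals, prop_better = run_session(seed=1)
    print(f"learned values: {vals}, proportion on better arm: {prop_better:.2f}")
```

Under this kind of sketch, a larger gap between the two reward probabilities makes the value estimates diverge sooner, so choice concentrates on the better arm more quickly, which is the pattern the abstract reports for all of the simulated models.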
ISSN: 0376-6357, 1872-8308
DOI: 10.1016/j.beproc.2011.10.004