
STOCHASTIC GRADIENT LEARNING AND INSTABILITY: AN EXAMPLE

Bibliographic Details
Published in: Macroeconomic Dynamics, 2016-04, Vol. 20 (3), p. 777-790
Main Authors: Slobodyan, Sergey, Bogomolova, Anna, Kolyuzhnov, Dmitri
Format: Article
Language:English
Description
Summary: In this paper, we investigate the real-time behavior of constant-gain stochastic gradient (SG) learning, using the Phelps model of monetary policy as a testing ground. We find that whereas the self-confirming equilibrium is stable under the mean dynamics in a very large region, real-time learning diverges for all but the very smallest gain values. We employ a stochastic Lyapunov function approach to demonstrate that the SG mean dynamics is easily destabilized by the noise associated with real-time learning, because its Jacobian contains stable but very small eigenvalues. We also caution against the use of perpetual learning algorithms with such small eigenvalues, as the real-time dynamics might diverge from an equilibrium that is stable under the mean dynamics.
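As context for the summary, the following is a minimal sketch (Python) of a constant-gain stochastic gradient learning recursion for a linear forecasting rule. It illustrates the general algorithm class the paper studies, not its Phelps-model specification; the gain value, the data-generating parameters, and all variable names are hypothetical choices made for this example.

import numpy as np

# Constant-gain SG learning for a linear forecasting rule
#   y_t = theta' x_t + noise.
# Each period the agent moves beliefs theta along the gradient of the
# squared forecast error, scaled by a fixed gain. Unlike recursive least
# squares, there is no second-moment normalization of the step.

rng = np.random.default_rng(0)

theta_true = np.array([1.0, -0.5])  # hypothetical true coefficients
gain = 0.05                         # constant gain
theta = np.zeros(2)                 # agent's initial beliefs

for t in range(10_000):
    x = np.array([1.0, rng.normal()])        # regressors: constant + shock
    y = theta_true @ x + 0.1 * rng.normal()  # observed outcome
    forecast_error = y - theta @ x
    theta = theta + gain * forecast_error * x  # SG update step

print(theta)  # with a small gain, beliefs hover near theta_true

Raising the gain in this toy example simply makes beliefs more volatile; the paper's finding is that in the Phelps model such perpetual-learning noise can push the real-time SG dynamics away from an equilibrium that is stable under the mean dynamics.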
ISSN: 1365-1005, 1469-8056
DOI: 10.1017/S1365100514000583