On learning dynamics underlying the evolution of learning rules.

Details

Resource 1 Download: BIB_E6628D11E4D8.P001.pdf (1631.94 KB)
State: Public
Version: Final published version
Serval ID
serval:BIB_E6628D11E4D8
Type
Article: article from journal or magazine.
Collection
Publications
Institution
Title
On learning dynamics underlying the evolution of learning rules.
Journal
Theoretical Population Biology
Author(s)
Dridi S., Lehmann L.
ISSN
1096-0325 (Electronic)
ISSN-L
0040-5809
Publication state
Published
Issued date
2014
Peer-reviewed
Yes
Volume
91
Pages
20-36
Language
English
Abstract
In order to understand the development of non-genetically encoded actions during an animal's lifespan, it is necessary to analyze the dynamics and evolution of learning rules producing behavior. Owing to the intrinsic stochastic and frequency-dependent nature of learning dynamics, these rules are often studied in evolutionary biology via agent-based computer simulations. In this paper, we show that stochastic approximation theory can help to qualitatively understand learning dynamics and formulate analytical models for the evolution of learning rules. We consider a population of individuals repeatedly interacting during their lifespan, and where the stage game faced by the individuals fluctuates according to an environmental stochastic process. Individuals adjust their behavioral actions according to learning rules belonging to the class of experience-weighted attraction learning mechanisms, which includes standard reinforcement and Bayesian learning as special cases. We use stochastic approximation theory in order to derive differential equations governing action play probabilities, which turn out to have qualitative features of mutator-selection equations. We then perform agent-based simulations to find the conditions where the deterministic approximation is closest to the original stochastic learning process for standard 2-action 2-player fluctuating games, where interaction between learning rules and preference reversal may occur. Finally, we analyze a simplified model for the evolution of learning in a producer-scrounger game, which shows that the exploration rate can interact in a non-intuitive way with other features of co-evolving learning rules. Overall, our analyses illustrate the usefulness of applying stochastic approximation theory in the study of animal learning.
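Illustrative sketch
The abstract refers to the experience-weighted attraction (EWA) class of learning rules, which contains reinforcement learning and belief-based (Bayesian/fictitious-play-like) learning as special cases. The following Python snippet is a minimal sketch of a standard EWA update with a logit choice rule, using conventional EWA parameter names (phi, delta, rho, lambda); it is an assumed, generic formulation for illustration, not the exact model analyzed in the paper.

```python
import numpy as np

def ewa_update(A, N, payoffs, chosen, phi=0.9, delta=0.5, rho=0.9):
    """One experience-weighted attraction (EWA) update (generic textbook form).

    A       : array of current attractions, one entry per action
    N       : scalar experience weight
    payoffs : payoff each action would have earned this round
    chosen  : index of the action actually played
    phi     : decay of past attractions
    delta   : weight on forgone payoffs (delta = 0 resembles reinforcement
              learning; delta = 1 resembles belief-based/fictitious play)
    rho     : decay of the experience weight
    """
    N_new = rho * N + 1.0
    weights = delta * np.ones_like(A)
    weights[chosen] = 1.0  # the realized payoff is always fully reinforced
    A_new = (phi * N * A + weights * payoffs) / N_new
    return A_new, N_new

def choice_probs(A, lam=1.0):
    """Logit (softmax) choice rule; lam tunes exploitation vs exploration."""
    z = lam * (A - A.max())  # subtract the max for numerical stability
    expz = np.exp(z)
    return expz / expz.sum()

# Example: one round of a 2-action game
A, N = np.zeros(2), 1.0
A, N = ewa_update(A, N, payoffs=np.array([1.0, 0.3]), chosen=0)
print(choice_probs(A, lam=2.0))
```

Stochastic approximation theory, as described in the abstract, studies the long-run behavior of such discrete-time stochastic updates through the differential equations obtained by averaging the expected change in attractions or choice probabilities.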
Keywords
Fluctuating environments, Evolutionary game theory, Stochastic approximation, Reinforcement learning, Fictitious play, Producer-scrounger game
Create date
19/05/2013 12:42
Last modification date
20/08/2019 17:09