Contribution of the idiothetic and the allothetic information to the hippocampal place code
Spatial navigation, Hippocampus, Optimal coding, Multisensory integration, Place cells
Sobolev, Andrey
2021
English
Universitätsbibliothek der Ludwig-Maximilians-Universität München
Sobolev, Andrey (2021): Contribution of the idiothetic and the allothetic information to the hippocampal place code. Dissertation, LMU München: Graduate School of Systemic Neurosciences (GSN)
PDF: Sobolev_Andrey.pdf (12MB)

Abstract

Hippocampal cells are preferentially active at specific places in a familiar environment, enabling them to encode a representation of space within the brain at the population level (J. O’Keefe and Dostrovsky 1971). These cells rely on external sensory inputs and self-motion cues; however, it is still not known exactly how these inputs interact to build a stable representation of a particular location (a “place field”). Existing studies suggest that proprioceptive and other idiothetic types of information are continuously integrated to update the estimate of self-position (implementing “path integration”), while stable external sensory cues provide references that update the allocentric position of self and correct for accumulated integration errors. Both allocentric and idiothetic information have been shown to influence positional cell firing; in most studies, however, these inputs were firmly coupled. Virtual reality setups (Thurley and Ayaz 2016) made it possible to separate the influence of vision and proprioception, at the price of unnatural conditions: the animal is usually head- or body-fixed (Hölscher et al. 2005; Ravassard A. 2013; Jayakumar et al. 2018a; Haas et al. 2019), which introduces vestibular, motor, and visual conflicts that bias space encoding. Here we use the novel CAVE Virtual Reality system for freely moving rodents (Del Grosso 2018), which allows us to investigate the effect of visual and positional (vestibular) manipulations on the hippocampal space code under natural behavioral conditions. In this study, we focus on the dynamic representation of space when the visual-cue-defined and physical-boundary-defined reference frames are in conflict. We confirm that, at the level of place fields, one reference frame dominates the other when information about the other frame is absent (Gothard et al. 2001). We show that hippocampal cells form distinct categories according to their input preference: surprisingly, they are driven not only by visual/allocentric information or by the distance to physical boundaries and path integration, but also by specific combinations of both. We found a large category of units that integrate inputs from both the allocentric and idiothetic pathways and can represent an intermediate position between the two reference frames when these are in conflict. This experimental evidence suggests that most place cells are involved in representing both reference frames using a weighted combination of sensory inputs. In line with studies showing dominance of the more reliable sensory modality (Kathryn J. Jeffery and J. M. O’Keefe 1999; Gothard et al. 2001), our data are consistent (although not conclusive) with CA1 cells implementing an optimal Bayesian code over the idiothetic and allocentric inputs, with each input weighted according to its availability and reliability, as proposed for other sensory systems (Kate J. Jeffery, Page, and Simon M. Stringer 2016). This mechanism of weighted sensory integration, consistent with recent dynamic loop models of the hippocampal-entorhinal network (Li, Arleo, and Sheynikhovich 2020), can contribute to a physiological explanation of Bayesian inference and the optimal combination of spatial cues for localization (Cheng et al. 2007).
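
The weighted combination described above matches the standard formulation of optimal Bayesian cue integration, in which each cue is weighted by its inverse variance, so that conflicting reference frames yield an intermediate position estimate shifted toward the more reliable cue. The following minimal Python sketch illustrates this scheme in one dimension; the variable names and noise levels are illustrative assumptions, not values taken from the dissertation.

import numpy as np

rng = np.random.default_rng(0)

# True 1D position of the animal along a track (arbitrary units).
true_position = 0.7

# Idiothetic estimate: path integration accumulates error over time,
# so it is modeled here with the larger noise level (assumed value).
sigma_idio = 0.15
x_idio = true_position + rng.normal(0.0, sigma_idio)

# Allothetic estimate: visual cues give a more reliable positional fix
# (assumed value).
sigma_allo = 0.05
x_allo = true_position + rng.normal(0.0, sigma_allo)

# Optimal Bayesian combination: weight each cue by its inverse variance.
w_idio = (1 / sigma_idio**2) / (1 / sigma_idio**2 + 1 / sigma_allo**2)
w_allo = 1.0 - w_idio
x_hat = w_idio * x_idio + w_allo * x_allo

# The combined estimate is more precise than either cue alone.
sigma_hat = (1 / (1 / sigma_idio**2 + 1 / sigma_allo**2)) ** 0.5

print(f"idiothetic: {x_idio:.3f}  allothetic: {x_allo:.3f}")
print(f"combined:   {x_hat:.3f}  (sd {sigma_hat:.3f})")

Under a cue conflict (for example, shifting the visual cues while the physical boundaries stay in place), this weighting predicts a represented position lying between the two reference frames, closer to the frame with the lower-variance input, which corresponds to the intermediate place-field positions reported above.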