Abstract

The rapid processing of visual information is essential for many human behaviors, such as playing sports and avoiding road hazards. These behaviors require processing incoming sensory stimuli and selecting an appropriate response within a fraction of a second. To study the neural mechanisms underlying rapid decision-making, researchers have used the compelled saccade paradigm, in which monkeys rapidly identify incoming sensory stimuli and report their decision with a saccade. However, previous studies have focused on relatively simple perceptual decisions, such as color matching or motion discrimination. Furthermore, while numerous studies have investigated the roles of visual and oculomotor brain areas during rapid decisions, these areas have traditionally been studied in isolation, limiting our understanding of how they interact during this behavior. In the first part of my thesis, we developed a novel recording methodology to overcome the limitations of traditional single-neuron electrophysiological techniques. This methodology allows dense, simultaneous recordings from the lateral intraparietal cortex (LIP), frontal eye fields (FEF), and superior colliculus (SC) using linear arrays. This approach enabled simultaneous recording from multiple areas within the same animal during behavior, with an average yield of approximately 50 single neurons per area. To our knowledge, this is the first study to directly compare these three areas during behavior. We further advanced this methodology by making it compatible with reversible inactivation protocols, in which a pharmacological agent is infused into targeted brain areas with high precision, producing robust effects in causal experiments. Alongside this neurophysiological methodology, we developed a novel rapid categorization task in which monkeys must report the learned category of motion stimuli under severely limited viewing times. This task is significantly more complex and demanding than previous color matching and motion discrimination tasks, requiring longer stimulus viewing times for accurate behavior. Simultaneous neural recordings from LIP, FEF, and SC revealed that FEF populations lead in both encoding stimulus category and developing saccadic motor plans, with significantly earlier latencies than SC and LIP populations. In contrast, LIP populations encode bottom-up sensory information, such as specific motion directions and target colors, significantly earlier than FEF and SC. Reversible inactivation experiments in FEF further demonstrated that categorization is significantly impaired and delayed when either the stimulus or a saccade target falls in the response field of the inactivated FEF neurons. These results support a model in which LIP contributes early sensory information to other areas, while FEF is central to transforming this sensory information into behaviorally meaningful categories and mapping them onto specific motor plans. FEF then broadcasts the result of this computation both downstream to SC and as feedback to LIP. This model also explains recent findings of earlier and stronger category representations in SC than in LIP in a different motion categorization task. Altogether, these results enhance our understanding of how rapid categorization is achieved through the coordinated activity of distinct oculomotor regions, highlighting their unique roles during this behavior.
