Searched for: person:heb02 in-biosketch:yes
Total Results: 57


Cortical and subcortical signatures of conscious object recognition

Levinson, Max; Podvalny, Ella; Baete, Steven H; He, Biyu J
The neural mechanisms underlying conscious recognition remain unclear, particularly the roles played by the prefrontal cortex, deactivated brain areas and subcortical regions. We investigated neural activity during conscious object recognition using 7 Tesla fMRI while human participants viewed object images presented at liminal contrasts. Here, we show both recognized and unrecognized images recruit widely distributed cortical and subcortical regions; however, recognized images elicit enhanced activation of visual, frontoparietal, and subcortical networks and stronger deactivation of the default-mode network. For recognized images, object category information can be decoded from all of the involved cortical networks but not from subcortical regions. Phase-scrambled images trigger strong involvement of inferior frontal junction, anterior cingulate cortex and default-mode network, implicating these regions in inferential processing under increased uncertainty. Our results indicate that content-specific activity in both activated and deactivated cortical networks and non-content-specific subcortical activity support conscious recognition.
PMID: 34006884
ISSN: 2041-1723
CID: 4877132

Neural integration underlying naturalistic prediction flexibly adapts to varying sensory input rate

Baumgarten, Thomas J; Maniscalco, Brian; Lee, Jennifer L; Flounders, Matthew W; Abry, Patrice; He, Biyu J
Prediction of future sensory input based on past sensory information is essential for organisms to effectively adapt their behavior in dynamic environments. Humans successfully predict future stimuli in various natural settings. Yet, it remains elusive how the brain achieves effective prediction despite enormous variations in sensory input rate, which directly affect how fast sensory information can accumulate. We presented participants with acoustic sequences capturing temporal statistical regularities prevalent in nature and investigated neural mechanisms underlying predictive computation using MEG. By parametrically manipulating sequence presentation speed, we tested two hypotheses: neural prediction relies on integrating past sensory information over fixed time periods or fixed amounts of information. We demonstrate that across halved and doubled presentation speeds, predictive information in neural activity stems from integration over fixed amounts of information. Our findings reveal the neural mechanisms enabling humans to robustly predict dynamic stimuli in natural environments despite large sensory input rate variations.
PMCID: PMC8113607
PMID: 33976118
ISSN: 2041-1723
CID: 4868192

One-trial perceptual learning in the absence of conscious remembering and independent of the medial temporal lobe

Squire, Larry R; Frascino, Jennifer C; Rivera, Charlotte S; Heyworth, Nadine C; He, Biyu J
A degraded, black-and-white image of an object, which appears meaningless on first presentation, is easily identified after a single exposure to the original, intact image. This striking example of perceptual learning reflects a rapid (one-trial) change in performance, but the kind of learning that is involved is not known. We asked whether this learning depends on conscious (hippocampus-dependent) memory for the images that have been presented or on an unconscious (hippocampus-independent) change in the perception of images, independently of the ability to remember them. We tested five memory-impaired patients with hippocampal lesions or larger medial temporal lobe (MTL) lesions. In comparison to volunteers, the patients were fully intact at perceptual learning, and their improvement persisted without decrement from 1 d to more than 5 mo. Yet, the patients were impaired at remembering the test format and, even after 1 d, were impaired at remembering the images themselves. To compare perceptual learning and remembering directly, at 7 d after seeing degraded images and their solutions, patients and volunteers took either a naming test or a recognition memory test with these images. The patients improved as much as the volunteers at identifying the degraded images but were severely impaired at remembering them. Notably, the patient with the most severe memory impairment and the largest MTL lesions performed worse than the other patients on the memory tests but was the best at perceptual learning. The findings show that one-trial, long-lasting perceptual learning relies on hippocampus-independent (nondeclarative) memory, independent of any requirement to consciously remember.
PMID: 33952702
ISSN: 1091-6490
CID: 4868162

A Gradient of Sharpening Effects by Perceptual Prior across the Human Cortical Hierarchy

González-García, Carlos; He, Biyu Jade
Prior knowledge profoundly influences perceptual processing. Previous studies have revealed consistent suppression of predicted stimulus information in sensory areas, but how prior knowledge modulates processing higher up in the cortical hierarchy remains poorly understood. In addition, the mechanism leading to suppression of predicted sensory information remains unclear, and studies thus far have revealed a mixed pattern of results in support of either the 'sharpening' or 'dampening' model. Here, using 7T fMRI in humans (both sexes), we observed that prior knowledge acquired from fast, one-shot perceptual learning sharpens neural representation throughout the ventral visual stream, generating suppressed sensory responses. In contrast, the frontoparietal (FPN) and default-mode (DMN) networks exhibit similar sharpening of content-specific neural representation but in the context of unchanged and enhanced activity magnitudes, respectively, a pattern we refer to as 'selective enhancement'. Together, these results reveal a heretofore unknown macroscopic gradient of prior knowledge's sharpening effect on neural representations across the cortical hierarchy.

SIGNIFICANCE STATEMENT: A fundamental question in neuroscience is how prior knowledge shapes perceptual processing. Perception is constantly informed by internal priors in the brain acquired from past experiences, but the neural mechanisms underlying this process are poorly understood. To date, research on this question has focused on early visual regions, reporting a consistent downregulation when predicted stimuli are encountered. Here, using a dramatic one-shot perceptual learning paradigm, we observed that prior knowledge results in sharper neural representations across the cortical hierarchy of the human brain through a gradient of mechanisms. In visual regions, neural responses tuned away from internal predictions are suppressed. In frontoparietal regions, neural activity consistent with priors is selectively enhanced. These results deepen our understanding of how prior knowledge informs perception.
PMID: 33208472
ISSN: 1529-2401
CID: 4673592

Spontaneous perception: a framework for task-free, self-paced perception

Baror, Shira; He, Biyu J
Flipping through social media feeds, viewing exhibitions in a museum, or walking through the botanical gardens, people consistently choose to engage with and disengage from visual content. Yet, in most laboratory settings, the visual stimuli, their presentation duration, and the task at hand are all controlled by the researcher. Such settings largely overlook the spontaneous nature of human visual experience, in which perception takes place independently from specific task constraints and its time course is determined by the observer as a self-governing agent. Currently, much remains unknown about how spontaneous perceptual experiences unfold in the brain. Are all perceptual categories extracted during spontaneous perception? Does spontaneous perception inherently involve volition? Is spontaneous perception segmented into discrete episodes? How do different neural networks interact over time during spontaneous perception? These questions are imperative to understand our conscious visual experience in daily life. In this article we propose a framework for spontaneous perception. We first define spontaneous perception as a task-free and self-paced experience. We propose that spontaneous perception is guided by four organizing principles that grant it temporal and spatial structures. These principles include coarse-to-fine processing, continuity and segmentation, agency and volition, and associative processing. We provide key suggestions illustrating how these principles may interact with one another in guiding the multifaceted experience of spontaneous perception. We point to testable predictions derived from this framework, including (but not limited to) the roles of the default-mode network and slow cortical potentials in underlying spontaneous perception. We conclude by suggesting several outstanding questions for future research, extending the relevance of this framework to consciousness and spontaneous brain activity. In conclusion, the spontaneous perception framework proposed herein integrates components in human perception and cognition, which have been traditionally studied in isolation, and opens the door to understand how visual perception unfolds in its most natural context.
PMCID: PMC8333690
PMID: 34377535
ISSN: 2057-2107
CID: 4995762

Task-evoked activity quenches neural correlations and variability across cortical areas

Ito, Takuya; Brincat, Scott L; Siegel, Markus; Mill, Ravi D; He, Biyu J; Miller, Earl K; Rotstein, Horacio G; Cole, Michael W
Many large-scale functional connectivity studies have emphasized the importance of communication through increased inter-region correlations during task states. In contrast, local circuit studies have demonstrated that task states primarily reduce correlations among pairs of neurons, likely enhancing their information coding by suppressing shared spontaneous activity. Here we sought to adjudicate between these conflicting perspectives, assessing whether co-active brain regions during task states tend to increase or decrease their correlations. We found that variability and correlations primarily decrease across a variety of cortical regions in two highly distinct data sets: non-human primate spiking data and human functional magnetic resonance imaging data. Moreover, this observed variability and correlation reduction was accompanied by an overall increase in dimensionality (reflecting less information redundancy) during task states, suggesting that decreased correlations increased information coding capacity. We further found in both spiking and neural mass computational models that task-evoked activity increased the stability around a stable attractor, globally quenching neural variability and correlations. Together, our results provide an integrative mechanistic account that encompasses measures of large-scale neural activity, variability, and correlations during resting and task states.
PMCID: PMC7425988
PMID: 32745096
ISSN: 1553-7358
CID: 4590322
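The dimensionality increase reported in the entry above is often quantified as the participation ratio of the covariance eigenvalue spectrum, PR = (sum of eigenvalues)^2 / (sum of squared eigenvalues); whether this is the exact measure used in the study is not stated here, so the sketch below is purely illustrative, with made-up spectra:

```python
def participation_ratio(eigenvalues):
    """Participation ratio of a covariance spectrum: ranges from 1
    (all variance along one dimension) to len(eigenvalues) (variance
    spread evenly), so a flatter spectrum means higher dimensionality."""
    total = sum(eigenvalues)
    return total ** 2 / sum(e * e for e in eigenvalues)

# Hypothetical spectra: shared (correlated) spontaneous variability at rest
# concentrates variance in a few dimensions; quenching correlations during
# a task flattens the spectrum and raises the effective dimensionality.
rest_spectrum = [8.0, 2.0, 1.0, 0.5, 0.5]
task_spectrum = [3.0, 2.5, 2.5, 2.0, 2.0]

print(participation_ratio(rest_spectrum))  # lower-dimensional
print(participation_ratio(task_spectrum))  # higher-dimensional
```

On this toy example the task spectrum yields a clearly higher participation ratio than the rest spectrum, matching the abstract's link between reduced correlations and increased dimensionality.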

Neuromodulation of Brain State and Behavior

McCormick, David A; Nestvogel, Dennis B; He, Biyu J
Neural activity and behavior are both notoriously variable, with responses differing widely between repeated presentation of identical stimuli or trials. Recent results in humans and animals reveal that these variations are not random in their nature, but may in fact be due in large part to rapid shifts in neural, cognitive, and behavioral states. Here we review recent advances in the understanding of rapid variations in the waking state, how variations are generated, and how they modulate neural and behavioral responses in both mice and humans. We propose that the brain has an identifiable set of states through which it wanders continuously in a nonrandom fashion, owing to the activity of both ascending modulatory and fast-acting corticocortical and subcortical-cortical neural pathways. These state variations provide the backdrop upon which the brain operates, and understanding them is critical to making progress in revealing the neural mechanisms underlying cognition and behavior. Expected final online publication date for the Annual Review of Neuroscience, Volume 43 is July 8, 2020.
PMID: 32250724
ISSN: 1545-4126
CID: 4378742

A dual role of prestimulus spontaneous neural activity in visual object recognition

Podvalny, Ella; Flounders, Matthew W; King, Leana E; Holroyd, Tom; He, Biyu J
Vision relies on both specific knowledge of visual attributes, such as object categories, and general brain states, such as those reflecting arousal. We hypothesized that these phenomena independently influence recognition of forthcoming stimuli through distinct processes reflected in spontaneous neural activity. Here, we recorded magnetoencephalographic (MEG) activity in participants (N = 24) who viewed images of objects presented at recognition threshold. Using multivariate analysis applied to sensor-level activity patterns recorded before stimulus presentation, we identified two neural processes influencing subsequent subjective recognition: a general process, which disregards stimulus category and correlates with pupil size, and a specific process, which facilitates category-specific recognition. The two processes are doubly-dissociable: the general process correlates with changes in criterion but not in sensitivity, whereas the specific process correlates with changes in sensitivity but not in criterion. Our findings reveal distinct mechanisms of how spontaneous neural activity influences perception and provide a framework to integrate previous findings.
PMCID: PMC6718405
PMID: 31477706
ISSN: 2041-1723
CID: 4068992
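The double dissociation in the entry above is stated in signal detection theory terms. As a reminder of how criterion and sensitivity are separated, here is a minimal sketch using the standard d' and c formulas with illustrative, made-up hit and false-alarm rates (not data from the study):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2,
    per standard signal detection theory."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

# A "general" state shift moves hit and false-alarm rates together:
# the criterion becomes more liberal while d' stays roughly constant.
d0, c0 = sdt_measures(0.80, 0.20)  # baseline
d1, c1 = sdt_measures(0.90, 0.35)  # shifted state (illustrative numbers)
print(round(d0, 2), round(c0, 2))
print(round(d1, 2), round(c1, 2))
```

A process that changed sensitivity instead would pull the hit and false-alarm rates apart, raising d' while leaving c near its baseline value.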

State-aware detection of sensory stimuli in the cortex of the awake mouse

Sederberg, Audrey J; Pala, Aurélie; Zheng, He J V; He, Biyu J; Stanley, Garrett B
Cortical responses to sensory inputs vary across repeated presentations of identical stimuli, but how this trial-to-trial variability impacts detection of sensory inputs is not fully understood. Using multi-channel local field potential (LFP) recordings in primary somatosensory cortex (S1) of the awake mouse, we optimized a data-driven cortical state classifier to predict single-trial sensory-evoked responses, based on features of the spontaneous, ongoing LFP recorded across cortical layers. Our findings show that, by utilizing an ongoing prediction of the sensory response generated by this state classifier, an ideal observer improves overall detection accuracy and generates robust detection of sensory inputs across various states of ongoing cortical activity in the awake brain, which could have implications for variability in the performance of detection tasks across brain states.
PMCID: PMC6561583
PMID: 31150385
ISSN: 1553-7358
CID: 3944992

Neural dynamics of visual ambiguity resolution by perceptual prior

Flounders, Matthew W; González-García, Carlos; Hardstone, Richard; He, Biyu J
Past experiences have enormous power in shaping our daily perception. Currently, dynamical neural mechanisms underlying this process remain mysterious. Exploiting a dramatic visual phenomenon, where a single experience of viewing a clear image allows instant recognition of a related degraded image, we investigated this question using MEG and 7 Tesla fMRI in humans. We observed that following the acquisition of perceptual priors, different degraded images are represented much more distinctly in neural dynamics starting from ~500 ms after stimulus onset. Content-specific neural activity related to stimulus-feature processing dominated within 300 ms after stimulus onset, while content-specific neural activity related to recognition processing dominated from 500 ms onward. Model-driven MEG-fMRI data fusion revealed the spatiotemporal evolution of neural activities involved in stimulus, attentional, and recognition processing. Together, these findings shed light on how experience shapes perceptual processing across space and time in the brain.
PMID: 30843519
ISSN: 2050-084X
CID: 3724112