Searched for: in-biosketch:yes
person:af137
GroupCDL: Interpretable Denoising and Compressed Sensing MRI via Learned Group-Sparsity and Circulant Attention
Janjušević, Nikola; Khalilian-Gourtani, Amirhossein; Flinker, Adeen; Feng, Li; Wang, Yao
Nonlocal self-similarity within images has become an increasingly popular prior in deep-learning models. Despite their successful image restoration performance, such models remain largely uninterpretable due to their black-box construction. Our previous studies have shown that interpretable construction of a fully convolutional denoiser (CDLNet), with performance on par with state-of-the-art black-box counterparts, is achievable by unrolling a convolutional dictionary learning algorithm. In this manuscript, we seek an interpretable construction of a convolutional network with a nonlocal self-similarity prior that performs on par with black-box nonlocal models. We show that such an architecture can be effectively achieved by upgrading the
PMCID:11928013
PMID: 40124211
ISSN: 2573-0436
CID: 5814622
A corollary discharge circuit in human speech
Khalilian-Gourtani, Amirhossein; Wang, Ran; Chen, Xupeng; Yu, Leyao; Dugan, Patricia; Friedman, Daniel; Doyle, Werner; Devinsky, Orrin; Wang, Yao; Flinker, Adeen
When we vocalize, our brain distinguishes self-generated sounds from external ones. A corollary discharge signal supports this function in animals; however, in humans, its exact origin and temporal dynamics remain unknown. We report electrocorticographic recordings in neurosurgical patients and a connectivity analysis framework based on Granger causality that reveals major neural communications. We find a reproducible source for corollary discharge across multiple speech production paradigms, localized to the ventral speech motor cortex before speech articulation. The uncovered discharge predicts the degree of auditory cortex suppression during speech, its well-documented consequence. These results reveal the human corollary discharge source and timing, with far-reaching implications for speech motor control as well as auditory hallucinations in human psychosis.
PMCID:11648673
PMID: 39625978
ISSN: 1091-6490
CID: 5780132
Scale matters: Large language models with billions (rather than millions) of parameters better match neural representations of natural language
Hong, Zhuoqiao; Wang, Haocheng; Zada, Zaid; Gazula, Harshvardhan; Turner, David; Aubrey, Bobbi; Niekerken, Leonard; Doyle, Werner; Devore, Sasha; Dugan, Patricia; Friedman, Daniel; Devinsky, Orrin; Flinker, Adeen; Hasson, Uri; Nastase, Samuel A; Goldstein, Ariel
Recent research has used large language models (LLMs) to study the neural basis of naturalistic language processing in the human brain. LLMs have rapidly grown in complexity, leading to improved language processing capabilities. However, neuroscience research has not kept pace with the rapid progress in LLM development. Here, we utilized several families of transformer-based LLMs to investigate the relationship between model size and their ability to capture linguistic information in the human brain. Crucially, a subset of LLMs were trained on a fixed training set, enabling us to dissociate model size from architecture and training set size. We used electrocorticography (ECoG) to measure neural activity in epilepsy patients while they listened to a 30-minute naturalistic audio story. We fit electrode-wise encoding models using contextual embeddings extracted from each hidden layer of the LLMs to predict word-level neural signals. In line with prior work, we found that larger LLMs better capture the structure of natural language and better predict neural activity. We also found a log-linear relationship where the encoding performance peaks in relatively earlier layers as model size increases. Finally, we observed variations in the best-performing layer across different brain regions, corresponding to an organized language processing hierarchy.
PMCID:11244877
PMID: 39005394
ISSN: 2692-8205
CID: 5676342
Author Correction: Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns
Goldstein, Ariel; Grinstein-Dabush, Avigail; Schain, Mariano; Wang, Haocheng; Hong, Zhuoqiao; Aubrey, Bobbi; Nastase, Samuel A; Zada, Zaid; Ham, Eric; Feder, Amir; Gazula, Harshvardhan; Buchnik, Eliav; Doyle, Werner; Devore, Sasha; Dugan, Patricia; Reichart, Roi; Friedman, Daniel; Brenner, Michael; Hassidim, Avinatan; Devinsky, Orrin; Flinker, Adeen; Hasson, Uri
PMID: 39353920
ISSN: 2041-1723
CID: 5739352
A shared model-based linguistic space for transmitting our thoughts from brain to brain in natural conversations
Zada, Zaid; Goldstein, Ariel; Michelmann, Sebastian; Simony, Erez; Price, Amy; Hasenfratz, Liat; Barham, Emily; Zadbood, Asieh; Doyle, Werner; Friedman, Daniel; Dugan, Patricia; Melloni, Lucia; Devore, Sasha; Flinker, Adeen; Devinsky, Orrin; Nastase, Samuel A; Hasson, Uri
Effective communication hinges on a mutual understanding of word meaning in different contexts. We recorded brain activity using electrocorticography during spontaneous, face-to-face conversations in five pairs of epilepsy patients. We developed a model-based coupling framework that aligns brain activity in both speaker and listener to a shared embedding space from a large language model (LLM). The context-sensitive LLM embeddings allow us to track the exchange of linguistic information, word by word, from one brain to another in natural conversations. Linguistic content emerges in the speaker's brain before word articulation and rapidly re-emerges in the listener's brain after word articulation. The contextual embeddings better capture word-by-word neural alignment between speaker and listener than syntactic and articulatory models. Our findings indicate that the contextual embeddings learned by LLMs can serve as an explicit numerical model of the shared, context-rich meaning space humans use to communicate their thoughts to one another.
PMID: 39096896
ISSN: 1097-4199
CID: 5696672
Subject-Agnostic Transformer-Based Neural Speech Decoding from Surface and Depth Electrode Signals
Chen, Junbo; Chen, Xupeng; Wang, Ran; Le, Chenqian; Khalilian-Gourtani, Amirhossein; Jensen, Erika; Dugan, Patricia; Doyle, Werner; Devinsky, Orrin; Friedman, Daniel; Flinker, Adeen; Wang, Yao
OBJECTIVE: This study investigates speech decoding from neural signals captured by intracranial electrodes. Most prior works can only work with electrodes on a 2D grid (i.e., Electrocorticographic or ECoG array) and data from a single patient. We aim to design a deep-learning model architecture that can accommodate both surface (ECoG) and depth (stereotactic EEG or sEEG) electrodes. The architecture should allow training on data from multiple participants with large variability in electrode placements, and the trained model should perform well on participants unseen during training. APPROACH: We propose a novel transformer-based model architecture named SwinTW that can work with arbitrarily positioned electrodes by leveraging their 3D locations on the cortex rather than their positions on a 2D grid. We train both subject-specific models using data from a single participant and multi-patient models exploiting data from multiple participants. MAIN RESULTS: The subject-specific models using only low-density 8×8 ECoG data achieved high decoding Pearson Correlation Coefficient with the ground truth spectrogram (PCC=0.817) over N=43 participants, outperforming our prior convolutional ResNet model and the 3D Swin transformer model. Incorporating additional strip, depth, and grid electrodes available in each participant (N=39) led to further improvement (PCC=0.838). For participants with only sEEG electrodes (N=9), subject-specific models still achieved comparable performance, with an average PCC=0.798. The multi-subject models achieved high performance on unseen participants, with an average PCC=0.765 in leave-one-out cross-validation. SIGNIFICANCE: The proposed SwinTW decoder enables future speech neuroprostheses to utilize any electrode placement that is clinically optimal or feasible for a particular participant, including using only depth electrodes, which are more routinely implanted in chronic neurosurgical procedures. Importantly, the generalizability of the multi-patient models suggests the exciting possibility of developing speech neuroprostheses for people with speech disability without relying on their own neural data for training, which is not always feasible.
PMCID:10980022
PMID: 38559163
ISSN: 2692-8205
CID: 5676302
Temporal integration in human auditory cortex is predominantly yoked to absolute time, not structure duration
Norman-Haignere, Sam V; Keshishian, Menoua K; Devinsky, Orrin; Doyle, Werner; McKhann, Guy M; Schevon, Catherine A; Flinker, Adeen; Mesgarani, Nima
Sound structures such as phonemes and words have highly variable durations. Thus, there is a fundamental difference between integrating across absolute time (e.g., 100 ms) vs. sound structure (e.g., phonemes). Auditory and cognitive models have traditionally cast neural integration in terms of time and structure, respectively, but the extent to which cortical computations reflect time or structure remains unknown. To answer this question, we rescaled the duration of all speech structures using time stretching/compression and measured integration windows in the human auditory cortex using a new experimental/computational method applied to spatiotemporally precise intracranial recordings. We observed significantly longer integration windows for stretched speech, but this lengthening was very small (~5%) relative to the change in structure durations, even in non-primary regions strongly implicated in speech-specific processing. These findings demonstrate that time-yoked computations dominate throughout the human auditory cortex, placing important constraints on neurocomputational models of structure processing.
PMCID:11463558
PMID: 39386565
ISSN: 2692-8205
CID: 5751762
Temporal dynamics of short-term neural adaptation across human visual cortex
Brands, Amber Marijn; Devore, Sasha; Devinsky, Orrin; Doyle, Werner; Flinker, Adeen; Friedman, Daniel; Dugan, Patricia; Winawer, Jonathan; Groen, Iris Isabelle Anna
Neural responses in visual cortex adapt to prolonged and repeated stimuli. While adaptation occurs across the visual cortex, it is unclear how adaptation patterns and computational mechanisms differ across the visual hierarchy. Here we characterize two signatures of short-term neural adaptation in time-varying intracranial electroencephalography (iEEG) data collected while participants viewed naturalistic image categories varying in duration and repetition interval. Ventral- and lateral-occipitotemporal cortex exhibit slower and prolonged adaptation to single stimuli and slower recovery from adaptation to repeated stimuli compared to V1-V3. For category-selective electrodes, recovery from adaptation is slower for preferred than non-preferred stimuli. To model neural adaptation we augment our delayed divisive normalization (DN) model by scaling the input strength as a function of stimulus category, enabling the model to accurately predict neural responses across multiple image categories. The model fits suggest that differences in adaptation patterns arise from slower normalization dynamics in higher visual areas interacting with differences in input strength resulting from category selectivity. Our results reveal systematic differences in temporal adaptation of neural population responses between lower and higher visual brain areas and show that a single computational model of history-dependent normalization dynamics, fit with area-specific parameters, accounts for these differences.
PMID: 38815000
ISSN: 1553-7358
CID: 5663772
Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns
Goldstein, Ariel; Grinstein-Dabush, Avigail; Schain, Mariano; Wang, Haocheng; Hong, Zhuoqiao; Aubrey, Bobbi; Nastase, Samuel A; Zada, Zaid; Ham, Eric; Feder, Amir; Gazula, Harshvardhan; Buchnik, Eliav; Doyle, Werner; Devore, Sasha; Dugan, Patricia; Reichart, Roi; Friedman, Daniel; Brenner, Michael; Hassidim, Avinatan; Devinsky, Orrin; Flinker, Adeen; Hasson, Uri
Contextual embeddings, derived from deep language models (DLMs), provide a continuous vectorial representation of language. This embedding space differs fundamentally from the symbolic representations posited by traditional psycholinguistics. We hypothesize that language areas in the human brain, similar to DLMs, rely on a continuous embedding space to represent language. To test this hypothesis, we densely record the neural activity patterns in the inferior frontal gyrus (IFG) of three participants using dense intracranial arrays while they listened to a 30-minute podcast. From these fine-grained spatiotemporal neural recordings, we derive a continuous vectorial representation for each word (i.e., a brain embedding) in each patient. Using stringent zero-shot mapping we demonstrate that brain embeddings in the IFG and the DLM contextual embedding space have common geometric patterns. The common geometric patterns allow us to predict the brain embedding in IFG of a given left-out word based solely on its geometrical relationship to other non-overlapping words in the podcast. Furthermore, we show that contextual embeddings capture the geometry of IFG embeddings better than static word embeddings. The continuous brain embedding space exposes a vector-based neural code for natural language processing in the human brain.
PMCID:10980748
PMID: 38553456
ISSN: 2041-1723
CID: 5645352
Timing and location of speech errors induced by direct cortical stimulation
Kabakoff, Heather; Yu, Leyao; Friedman, Daniel; Dugan, Patricia; Doyle, Werner K; Devinsky, Orrin; Flinker, Adeen
Cortical regions supporting speech production are commonly established using neuroimaging techniques in both research and clinical settings. However, for neurosurgical purposes, cortical function is routinely mapped peri-operatively using direct electrocortical stimulation. While this method is the gold standard for identification of eloquent cortical regions to preserve in neurosurgical patients, there is a lack of specificity regarding the actual underlying cognitive processes being interrupted. To address this, we propose mapping the temporal dynamics of speech arrest across peri-sylvian cortices by quantifying the latency between stimulation and speech deficits. In doing so, we are able to substantiate hypotheses about distinct region-specific functional roles (e.g. planning versus motor execution). In this retrospective observational study, we analysed 20 patients (12 female; age range 14-43) with refractory epilepsy who underwent continuous extra-operative intracranial EEG monitoring of an automatic speech task during clinical bedside language mapping. Latency to speech arrest was calculated as time from stimulation onset to speech arrest onset, controlling for individual speech rate. Most instances of motor-based arrest (87.5% of 96 instances) were in sensorimotor cortex, with mid-range latencies to speech arrest and a distributional peak at 0.47 s. Speech arrest occurred in numerous regions, with relatively short latencies in supramarginal gyrus (0.46 s), superior temporal gyrus (0.51 s) and middle temporal gyrus (0.54 s), followed by relatively long latencies in sensorimotor cortex (0.72 s) and especially long latencies in inferior frontal gyrus (0.95 s). Non-parametric testing for speech arrest revealed that region predicted latency; latencies in supramarginal gyrus and in superior temporal gyrus were shorter than in sensorimotor cortex and in inferior frontal gyrus. Sensorimotor cortex is primarily responsible for motor-based arrest. Latencies to speech arrest in supramarginal gyrus and superior temporal gyrus (and to a lesser extent middle temporal gyrus) align with latencies to motor-based arrest in sensorimotor cortex. This pattern of relatively quick cessation of speech suggests that stimulating these regions interferes with outgoing motor execution. In contrast, the latencies to speech arrest in inferior frontal gyrus and in ventral regions of sensorimotor cortex were significantly longer than those in temporoparietal regions. Longer latencies in the more frontal areas (including inferior frontal gyrus and ventral areas of precentral gyrus and postcentral gyrus) suggest that stimulating these areas interrupts a higher-level speech production process involved in planning. These results implicate the ventral specialization of sensorimotor cortex (including both precentral and postcentral gyri) in speech planning above and beyond motor execution.
PMCID:10948744
PMID: 38505231
ISSN: 2632-1297
CID: 5640502