A unified acoustic-to-speech-to-language embedding space captures the neural basis of natural language processing in everyday conversations
Goldstein, Ariel; Wang, Haocheng; Niekerken, Leonard; Schain, Mariano; Zada, Zaid; Aubrey, Bobbi; Sheffer, Tom; Nastase, Samuel A; Gazula, Harshvardhan; Singh, Aditi; Rao, Aditi; Choe, Gina; Kim, Catherine; Doyle, Werner; Friedman, Daniel; Devore, Sasha; Dugan, Patricia; Hassidim, Avinatan; Brenner, Michael; Matias, Yossi; Devinsky, Orrin; Flinker, Adeen; Hasson, Uri
This study introduces a unified computational framework connecting acoustic, speech and word-level linguistic structures to study the neural basis of everyday conversations in the human brain. We used electrocorticography to record neural signals across 100 h of speech production and comprehension as participants engaged in open-ended real-life conversations. We extracted low-level acoustic, mid-level speech and contextual word embeddings from a multimodal speech-to-text model (Whisper). We developed encoding models that linearly map these embeddings onto brain activity during speech production and comprehension. Remarkably, this model accurately predicts neural activity at each level of the language processing hierarchy across hours of new conversations not used in training the model. The internal processing hierarchy in the model is aligned with the cortical hierarchy for speech and language processing, where sensory and motor regions better align with the model's speech embeddings, and higher-level language areas better align with the model's language embeddings. The Whisper model captures the temporal sequence of language-to-speech encoding before word articulation (speech production) and speech-to-language encoding post articulation (speech comprehension). The embeddings learned by this model outperform symbolic models in capturing neural activity supporting natural speech and language. These findings support a paradigm shift towards unified computational models that capture the entire processing hierarchy for speech comprehension and production in real-world conversations.
PMID: 40055549
ISSN: 2397-3374
CID: 5807992
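The encoding models described in the abstract above linearly map model embeddings onto brain activity and are evaluated on held-out conversations. As a rough sketch (not the authors' code), this can be illustrated with ridge regression on synthetic data; all names, dimensions, and the regularization value are illustrative.

```python
import numpy as np

def fit_encoding_model(X_train, y_train, alpha=1.0):
    """Ridge regression: linearly map embeddings X to one electrode's signal y."""
    d = X_train.shape[1]
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'y
    return np.linalg.solve(X_train.T @ X_train + alpha * np.eye(d),
                           X_train.T @ y_train)

def evaluate(w, X_test, y_test):
    """Pearson correlation between predicted and actual neural activity."""
    y_pred = X_test @ w
    return float(np.corrcoef(y_pred, y_test)[0, 1])

# Synthetic example: 1000 words, 16-dim embeddings, one electrode.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 16))
true_w = rng.standard_normal(16)
y = X @ true_w + 0.5 * rng.standard_normal(1000)  # signal plus noise

w = fit_encoding_model(X[:800], y[:800])   # train on the first 800 words
r = evaluate(w, X[800:], y[800:])          # test on held-out words
```

In the study, one such model is fit per electrode and per embedding type (acoustic, speech, language), and the held-out correlation maps each embedding level onto the cortical hierarchy.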
A left-lateralized dorsolateral prefrontal network for naming
Yu, Leyao; Dugan, Patricia; Doyle, Werner; Devinsky, Orrin; Friedman, Daniel; Flinker, Adeen
The ability to connect the form and meaning of a concept, known as word retrieval, is fundamental to human communication. While various input modalities can lead to identical word retrieval, the exact neural dynamics supporting this convergence during everyday auditory discourse remain poorly understood. Here, we leveraged neurosurgical electrocorticographic (ECoG) recordings from 48 patients to dissociate two key language networks integral to word retrieval that overlap substantially in time and space. Using unsupervised temporal clustering techniques, we identified a semantic processing network located in the middle and inferior frontal gyri. This network was distinct from an articulatory planning network in the inferior frontal and precentral gyri, which was agnostic to input modality. Functionally, we confirmed that the semantic processing network encodes word surprisal during sentence perception. Our findings characterize how humans integrate ongoing auditory semantic information over time, a critical linguistic function spanning passive comprehension to daily discourse.
PMCID: PMC11118423
PMID: 38798614
ISSN: 2692-8205
CID: 5676322
Transformer-based neural speech decoding from surface and depth electrode signals
Chen, Junbo; Chen, Xupeng; Wang, Ran; Le, Chenqian; Khalilian-Gourtani, Amirhossein; Jensen, Erika; Dugan, Patricia; Doyle, Werner; Devinsky, Orrin; Friedman, Daniel; Flinker, Adeen; Wang, Yao
PMID: 39819752
ISSN: 1741-2552
CID: 5777232
A low-activity cortical network selectively encodes syntax
Morgan, Adam M; Devinsky, Orrin; Doyle, Werner K; Dugan, Patricia; Friedman, Daniel; Flinker, Adeen
Syntax, the abstract structure of language, is a hallmark of human cognition. Despite its importance, its neural underpinnings remain obscured by inherent limitations of non-invasive brain measures and a near-total focus on comprehension paradigms. Here, we address these limitations with high-resolution neurosurgical recordings (electrocorticography) and a controlled sentence production experiment. We uncover three syntactic networks that are broadly distributed across traditional language regions, but with focal concentrations in the middle and inferior frontal gyri. In contrast to previous findings from comprehension studies, these networks process syntax mostly to the exclusion of words and meaning, supporting a cognitive architecture with a distinct syntactic system. Most strikingly, our data reveal an unexpected property of syntax: it is encoded independently of neural activity levels. We propose that this "low-activity coding" scheme represents a novel mechanism for encoding information, reserved for higher-order cognition more broadly.
PMCID: PMC11212956
PMID: 38948730
ISSN: 2692-8205
CID: 5676332
From Single Words to Sentence Production: Shared Cortical Representations but Distinct Temporal Dynamics
Morgan, Adam M; Devinsky, Orrin; Doyle, Werner K; Dugan, Patricia; Friedman, Daniel; Flinker, Adeen
Sentence production is the uniquely human ability to transform complex thoughts into strings of words. Despite the importance of this process, language production research has primarily focused on single words. It remains an untested assumption that insights from this literature generalize to more naturalistic utterances like sentences. Here, we investigate this using high-resolution neurosurgical recordings (ECoG) and an overt production experiment where patients produce six words in isolation (picture naming) and in sentences (scene description). We trained machine learning models to identify the unique brain activity pattern for each word during picture naming, and used these patterns to decode which words patients were processing while they produced sentences. Our findings reveal that words share cortical representations across tasks. In sensorimotor cortex, words were consistently activated in the order in which they were said in the sentence. However, in inferior and middle frontal gyri (IFG and MFG), the order in which words were processed depended on the syntactic structure of the sentence. This dynamic interplay between sentence structure and word processing reveals that sentence production is not simply a sequence of single word production tasks, and highlights a regional division of labor within the language network. Finally, we argue that the dynamics of word processing in prefrontal cortex may impose a subtle pressure on language evolution, explaining why nearly all the world's languages position subjects before objects.
PMCID: PMC11565881
PMID: 39554006
ISSN: 2692-8205
CID: 5766162
A corollary discharge circuit in human speech
Khalilian-Gourtani, Amirhossein; Wang, Ran; Chen, Xupeng; Yu, Leyao; Dugan, Patricia; Friedman, Daniel; Doyle, Werner; Devinsky, Orrin; Wang, Yao; Flinker, Adeen
When we vocalize, our brain distinguishes self-generated sounds from external ones. A corollary discharge signal supports this function in animals; however, in humans, its exact origin and temporal dynamics remain unknown. We report electrocorticographic recordings in neurosurgical patients and a connectivity analysis framework, based on Granger causality, that reveals the major pathways of neural communication. We find a reproducible source of corollary discharge across multiple speech production paradigms, localized to the ventral speech motor cortex before speech articulation. The uncovered discharge predicts the degree of auditory cortex suppression during speech, its well-documented consequence. These results reveal the source and timing of the human corollary discharge, with far-reaching implications for speech motor control as well as for auditory hallucinations in human psychosis.
PMCID: PMC11648673
PMID: 39625978
ISSN: 1091-6490
CID: 5780132
Scale matters: Large language models with billions (rather than millions) of parameters better match neural representations of natural language
Hong, Zhuoqiao; Wang, Haocheng; Zada, Zaid; Gazula, Harshvardhan; Turner, David; Aubrey, Bobbi; Niekerken, Leonard; Doyle, Werner; Devore, Sasha; Dugan, Patricia; Friedman, Daniel; Devinsky, Orrin; Flinker, Adeen; Hasson, Uri; Nastase, Samuel A; Goldstein, Ariel
Recent research has used large language models (LLMs) to study the neural basis of naturalistic language processing in the human brain. LLMs have rapidly grown in complexity, leading to improved language processing capabilities. However, neuroscience research has not kept pace with the rapid progress in LLM development. Here, we utilized several families of transformer-based LLMs to investigate the relationship between model size and the models' ability to capture linguistic information in the human brain. Crucially, a subset of LLMs was trained on a fixed training set, enabling us to dissociate model size from architecture and training-set size. We used electrocorticography (ECoG) to measure neural activity in epilepsy patients while they listened to a 30-minute naturalistic audio story. We fit electrode-wise encoding models using contextual embeddings extracted from each hidden layer of the LLMs to predict word-level neural signals. In line with prior work, we found that larger LLMs better capture the structure of natural language and better predict neural activity. We also found a log-linear relationship in which encoding performance peaks in relatively earlier layers as model size increases. Finally, we observed variation in the best-performing layer across brain regions, corresponding to an organized language processing hierarchy.
PMCID: PMC11244877
PMID: 39005394
ISSN: 2692-8205
CID: 5676342
Author Correction: Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns
Goldstein, Ariel; Grinstein-Dabush, Avigail; Schain, Mariano; Wang, Haocheng; Hong, Zhuoqiao; Aubrey, Bobbi; Nastase, Samuel A; Zada, Zaid; Ham, Eric; Feder, Amir; Gazula, Harshvardhan; Buchnik, Eliav; Doyle, Werner; Devore, Sasha; Dugan, Patricia; Reichart, Roi; Friedman, Daniel; Brenner, Michael; Hassidim, Avinatan; Devinsky, Orrin; Flinker, Adeen; Hasson, Uri
PMID: 39353920
ISSN: 2041-1723
CID: 5739352
A shared model-based linguistic space for transmitting our thoughts from brain to brain in natural conversations
Zada, Zaid; Goldstein, Ariel; Michelmann, Sebastian; Simony, Erez; Price, Amy; Hasenfratz, Liat; Barham, Emily; Zadbood, Asieh; Doyle, Werner; Friedman, Daniel; Dugan, Patricia; Melloni, Lucia; Devore, Sasha; Flinker, Adeen; Devinsky, Orrin; Nastase, Samuel A; Hasson, Uri
Effective communication hinges on a mutual understanding of word meaning in different contexts. We recorded brain activity using electrocorticography during spontaneous, face-to-face conversations in five pairs of epilepsy patients. We developed a model-based coupling framework that aligns brain activity in both speaker and listener to a shared embedding space from a large language model (LLM). The context-sensitive LLM embeddings allow us to track the exchange of linguistic information, word by word, from one brain to another in natural conversations. Linguistic content emerges in the speaker's brain before word articulation and rapidly re-emerges in the listener's brain after word articulation. The contextual embeddings better capture word-by-word neural alignment between speaker and listener than syntactic and articulatory models. Our findings indicate that the contextual embeddings learned by LLMs can serve as an explicit numerical model of the shared, context-rich meaning space humans use to communicate their thoughts to one another.
PMID: 39096896
ISSN: 1097-4199
CID: 5696672
Subject-Agnostic Transformer-Based Neural Speech Decoding from Surface and Depth Electrode Signals
Chen, Junbo; Chen, Xupeng; Wang, Ran; Le, Chenqian; Khalilian-Gourtani, Amirhossein; Jensen, Erika; Dugan, Patricia; Doyle, Werner; Devinsky, Orrin; Friedman, Daniel; Flinker, Adeen; Wang, Yao
OBJECTIVE: This study investigates speech decoding from neural signals captured by intracranial electrodes. Most prior work handles only electrodes on a 2D grid (i.e., an electrocorticographic or ECoG array) and data from a single patient. We aim to design a deep-learning model architecture that can accommodate both surface (ECoG) and depth (stereotactic EEG, or sEEG) electrodes. The architecture should allow training on data from multiple participants with large variability in electrode placement, and the trained model should perform well on participants unseen during training. APPROACH: We propose a novel transformer-based model architecture named SwinTW that can work with arbitrarily positioned electrodes by leveraging their 3D locations on the cortex rather than their positions on a 2D grid. We train both subject-specific models using data from a single participant and multi-patient models exploiting data from multiple participants. MAIN RESULTS: Subject-specific models using only low-density 8x8 ECoG data achieved a high decoding Pearson correlation coefficient with the ground-truth spectrogram (PCC = 0.817) over N = 43 participants, outperforming our prior convolutional ResNet model and the 3D Swin transformer model. Incorporating the additional strip, depth, and grid electrodes available in each participant (N = 39) led to further improvement (PCC = 0.838). For participants with only sEEG electrodes (N = 9), subject-specific models achieved comparable performance, with an average PCC = 0.798. Multi-subject models achieved high performance on unseen participants, with an average PCC = 0.765 in leave-one-out cross-validation. SIGNIFICANCE: The proposed SwinTW decoder enables future speech neuroprostheses to utilize any electrode placement that is clinically optimal or feasible for a particular participant, including using only depth electrodes, which are more routinely implanted in chronic neurosurgical procedures. Importantly, the generalizability of the multi-patient models suggests the exciting possibility of developing speech neuroprostheses for people with speech disability without relying on their own neural data for training, which is not always feasible.
PMCID: PMC10980022
PMID: 38559163
ISSN: 2692-8205
CID: 5676302
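The Pearson correlation coefficient (PCC) used above to score decoded spectrograms against ground truth can be sketched as follows; this is an illustrative reimplementation on synthetic data, not the authors' evaluation code, and the array shapes are assumptions.

```python
import numpy as np

def pearson_cc(pred, target):
    """Pearson correlation between a decoded and a ground-truth spectrogram,
    computed over all time-frequency bins."""
    p, t = pred.ravel(), target.ravel()
    p = p - p.mean()
    t = t - t.mean()
    return float(p @ t / (np.linalg.norm(p) * np.linalg.norm(t)))

# Illustrative: a 100-frame x 128-bin spectrogram decoded with small error.
rng = np.random.default_rng(1)
truth = rng.random((100, 128))
decoded = truth + 0.1 * rng.standard_normal((100, 128))
pcc = pearson_cc(decoded, truth)
```

A PCC of 1.0 would indicate a perfect (up to scale and offset) reconstruction; the reported values around 0.8 reflect close but imperfect spectrogram recovery.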