Searched for: person:wkd1 in-biosketch:true
Total Results: 177


Scale matters: Large language models with billions (rather than millions) of parameters better match neural representations of natural language

Hong, Zhuoqiao; Wang, Haocheng; Zada, Zaid; Gazula, Harshvardhan; Turner, David; Aubrey, Bobbi; Niekerken, Leonard; Doyle, Werner; Devore, Sasha; Dugan, Patricia; Friedman, Daniel; Devinsky, Orrin; Flinker, Adeen; Hasson, Uri; Nastase, Samuel A; Goldstein, Ariel
Recent research has used large language models (LLMs) to study the neural basis of naturalistic language processing in the human brain. LLMs have rapidly grown in complexity, leading to improved language processing capabilities. However, neuroscience research has not kept pace with the rapid progress in LLM development. Here, we utilized several families of transformer-based LLMs to investigate the relationship between model size and their ability to capture linguistic information in the human brain. Crucially, a subset of LLMs were trained on a fixed training set, enabling us to dissociate model size from architecture and training set size. We used electrocorticography (ECoG) to measure neural activity in epilepsy patients while they listened to a 30-minute naturalistic audio story. We fit electrode-wise encoding models using contextual embeddings extracted from each hidden layer of the LLMs to predict word-level neural signals. In line with prior work, we found that larger LLMs better capture the structure of natural language and better predict neural activity. We also found a log-linear relationship where the encoding performance peaks in relatively earlier layers as model size increases. We further observed variations in the best-performing layer across different brain regions, corresponding to an organized language processing hierarchy.
PMCID:11244877
PMID: 39005394
ISSN: 2692-8205
CID: 5676342
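The electrode-wise encoding analysis described above can be sketched as a ridge regression from contextual embeddings to word-level neural signals. This is a minimal illustration, not the authors' pipeline: the closed-form solver, regularization strength, and absence of cross-validation are all simplifying assumptions.

```python
import numpy as np

def fit_encoding_model(embeddings, neural, alpha=1.0):
    """Ridge regression mapping word embeddings (n_words x n_dims)
    to a word-level neural signal (n_words,) for a single electrode."""
    X = np.asarray(embeddings, dtype=float)
    y = np.asarray(neural, dtype=float)
    n_dims = X.shape[1]
    # Closed-form ridge solution: w = (X^T X + alpha*I)^-1 X^T y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_dims), X.T @ y)

def encoding_performance(w, embeddings, neural):
    """Pearson correlation between predicted and observed signals,
    the usual figure of merit for encoding models."""
    pred = embeddings @ w
    return np.corrcoef(pred, neural)[0, 1]
```

In practice one would fit and evaluate on disjoint word sets; here the same synthetic data illustrates the mechanics.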

A low-activity cortical network selectively encodes syntax

Morgan, Adam M; Devinsky, Orrin; Doyle, Werner K; Dugan, Patricia; Friedman, Daniel; Flinker, Adeen
Syntax, the abstract structure of language, is a hallmark of human cognition. Despite its importance, its neural underpinnings remain obscured by inherent limitations of non-invasive brain measures and a near total focus on comprehension paradigms. Here, we address these limitations with high-resolution neurosurgical recordings (electrocorticography) and a controlled sentence production experiment. We uncover three syntactic networks that are broadly distributed across traditional language regions, but with focal concentrations in middle and inferior frontal gyri. In contrast to previous findings from comprehension studies, these networks process syntax mostly to the exclusion of words and meaning, supporting a cognitive architecture with a distinct syntactic system. Most strikingly, our data reveal an unexpected property of syntax: it is encoded independent of neural activity levels. We propose that this "low-activity coding" scheme represents a novel mechanism for encoding information, reserved for higher-order cognition more broadly.
PMCID:11212956
PMID: 38948730
ISSN: 2692-8205
CID: 5676332

Temporal dynamics of short-term neural adaptation across human visual cortex

Brands, Amber Marijn; Devore, Sasha; Devinsky, Orrin; Doyle, Werner; Flinker, Adeen; Friedman, Daniel; Dugan, Patricia; Winawer, Jonathan; Groen, Iris Isabelle Anna
Neural responses in visual cortex adapt to prolonged and repeated stimuli. While adaptation occurs across the visual cortex, it is unclear how adaptation patterns and computational mechanisms differ across the visual hierarchy. Here we characterize two signatures of short-term neural adaptation in time-varying intracranial electroencephalography (iEEG) data collected while participants viewed naturalistic image categories varying in duration and repetition interval. Ventral- and lateral-occipitotemporal cortex exhibit slower and prolonged adaptation to single stimuli and slower recovery from adaptation to repeated stimuli compared to V1-V3. For category-selective electrodes, recovery from adaptation is slower for preferred than non-preferred stimuli. To model neural adaptation we augment our delayed divisive normalization (DN) model by scaling the input strength as a function of stimulus category, enabling the model to accurately predict neural responses across multiple image categories. The model fits suggest that differences in adaptation patterns arise from slower normalization dynamics in higher visual areas interacting with differences in input strength resulting from category selectivity. Our results reveal systematic differences in temporal adaptation of neural population responses between lower and higher visual brain areas and show that a single computational model of history-dependent normalization dynamics, fit with area-specific parameters, accounts for these differences.
PMID: 38815000
ISSN: 1553-7358
CID: 5663772

A left-lateralized dorsolateral prefrontal network for naming

Yu, Leyao; Dugan, Patricia; Doyle, Werner; Devinsky, Orrin; Friedman, Daniel; Flinker, Adeen
The ability to connect the form and meaning of a concept, known as word retrieval, is fundamental to human communication. While various input modalities could lead to identical word retrieval, the exact neural dynamics supporting this convergence relevant to daily auditory discourse remain poorly understood. Here, we leveraged neurosurgical electrocorticographic (ECoG) recordings from 48 patients and dissociated two key language networks integral to word retrieval that overlap substantially in time and space. Using unsupervised temporal clustering techniques, we found a semantic processing network located in the middle and inferior frontal gyri. This network was distinct from an articulatory planning network in the inferior frontal and precentral gyri, which was agnostic to input modalities. Functionally, we confirmed that the semantic processing network encodes word surprisal during sentence perception. Our findings characterize how humans integrate ongoing auditory semantic information over time, a critical linguistic function from passive comprehension to daily discourse.
PMCID:11118423
PMID: 38798614
ISSN: 2692-8205
CID: 5676322
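Word surprisal, which the study above reports is encoded by the semantic processing network, is standardly defined as the negative log probability of a word given its context. A minimal sketch; the probabilities here are illustrative placeholders, whereas a study like this would derive them from a language model:

```python
import math

def surprisal_bits(prob):
    """Surprisal of a word: -log2 of its predicted probability in context.
    Expected (high-probability) words carry low surprisal."""
    if not 0.0 < prob <= 1.0:
        raise ValueError("probability must be in (0, 1]")
    return -math.log2(prob)

# Illustrative probabilities, not from a fitted model:
expected = surprisal_bits(0.5)        # a predictable continuation
surprising = surprisal_bits(0.015625) # an unexpected continuation
```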

Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns

Goldstein, Ariel; Grinstein-Dabush, Avigail; Schain, Mariano; Wang, Haocheng; Hong, Zhuoqiao; Aubrey, Bobbi; Nastase, Samuel A; Zada, Zaid; Ham, Eric; Feder, Amir; Gazula, Harshvardhan; Buchnik, Eliav; Doyle, Werner; Devore, Sasha; Dugan, Patricia; Reichart, Roi; Friedman, Daniel; Brenner, Michael; Hassidim, Avinatan; Devinsky, Orrin; Flinker, Adeen; Hasson, Uri
Contextual embeddings, derived from deep language models (DLMs), provide a continuous vectorial representation of language. This embedding space differs fundamentally from the symbolic representations posited by traditional psycholinguistics. We hypothesize that language areas in the human brain, similar to DLMs, rely on a continuous embedding space to represent language. To test this hypothesis, we densely record the neural activity patterns in the inferior frontal gyrus (IFG) of three participants using dense intracranial arrays while they listened to a 30-minute podcast. From these fine-grained spatiotemporal neural recordings, we derive a continuous vectorial representation for each word (i.e., a brain embedding) in each patient. Using stringent zero-shot mapping we demonstrate that brain embeddings in the IFG and the DLM contextual embedding space have common geometric patterns. The common geometric patterns allow us to predict the brain embedding in IFG of a given left-out word based solely on its geometrical relationship to other non-overlapping words in the podcast. Furthermore, we show that contextual embeddings capture the geometry of IFG embeddings better than static word embeddings. The continuous brain embedding space exposes a vector-based neural code for natural language processing in the human brain.
PMCID:10980748
PMID: 38553456
ISSN: 2041-1723
CID: 5645352
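The zero-shot mapping described above can be illustrated with a leave-one-out linear map between the two embedding spaces: fit on all other words, then predict the held-out word's brain embedding purely from its geometric relations. A minimal sketch on synthetic data; the simple ridge map and tiny regularizer are assumptions, not the study's procedure.

```python
import numpy as np

def zero_shot_predict(contextual, brain, test_idx, alpha=1e-6):
    """Hold out one word, fit a linear map contextual -> brain on the
    remaining words, and predict the held-out brain embedding."""
    n, d = contextual.shape
    train = [i for i in range(n) if i != test_idx]
    X, Y = contextual[train], brain[train]
    # Ridge-regularized least-squares map between the two spaces
    W = np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)
    return contextual[test_idx] @ W
```

Success is typically measured by whether the predicted embedding is closer to the true held-out embedding than to other words' embeddings.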

Subject-Agnostic Transformer-Based Neural Speech Decoding from Surface and Depth Electrode Signals

Chen, Junbo; Chen, Xupeng; Wang, Ran; Le, Chenqian; Khalilian-Gourtani, Amirhossein; Jensen, Erika; Dugan, Patricia; Doyle, Werner; Devinsky, Orrin; Friedman, Daniel; Flinker, Adeen; Wang, Yao
OBJECTIVE: This study investigates speech decoding from neural signals captured by intracranial electrodes. Most prior approaches work only with electrodes on a 2D grid (i.e., an electrocorticographic or ECoG array) and data from a single patient. We aim to design a deep-learning model architecture that can accommodate both surface (ECoG) and depth (stereotactic EEG or sEEG) electrodes. The architecture should allow training on data from multiple participants with large variability in electrode placements, and the trained model should perform well on participants unseen during training. APPROACH: We propose a novel transformer-based model architecture named SwinTW that can work with arbitrarily positioned electrodes by leveraging their 3D locations on the cortex rather than their positions on a 2D grid. We train both subject-specific models using data from a single participant and multi-patient models exploiting data from multiple participants. MAIN RESULTS: The subject-specific models using only low-density 8x8 ECoG data achieved a high decoding Pearson correlation coefficient with the ground-truth spectrogram (PCC=0.817) over N=43 participants, outperforming our prior convolutional ResNet model and the 3D Swin transformer model. Incorporating the additional strip, depth, and grid electrodes available in each participant (N=39) led to further improvement (PCC=0.838). For participants with only sEEG electrodes (N=9), subject-specific models still achieved comparable performance, with an average PCC=0.798. The multi-subject models achieved high performance on unseen participants, with an average PCC=0.765 in leave-one-out cross-validation. SIGNIFICANCE: The proposed SwinTW decoder enables future speech neuroprostheses to utilize any electrode placement that is clinically optimal or feasible for a particular participant, including using only depth electrodes, which are more routinely implanted in chronic neurosurgical procedures. Importantly, the generalizability of the multi-patient models suggests the exciting possibility of developing speech neuroprostheses for people with speech disability without relying on their own neural data for training, which is not always feasible.
PMCID:10980022
PMID: 38559163
ISSN: 2692-8205
CID: 5676302
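The PCC figures above compare predicted and ground-truth speech spectrograms. A minimal version of that metric, flattening both time-frequency arrays before correlating (one plausible convention, not necessarily the authors' exact computation):

```python
import numpy as np

def spectrogram_pcc(pred, target):
    """Pearson correlation between flattened predicted and
    ground-truth spectrograms (time x frequency arrays)."""
    p, t = np.ravel(pred).astype(float), np.ravel(target).astype(float)
    p -= p.mean()
    t -= t.mean()
    return float(p @ t / (np.linalg.norm(p) * np.linalg.norm(t)))
```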

Timing and location of speech errors induced by direct cortical stimulation

Kabakoff, Heather; Yu, Leyao; Friedman, Daniel; Dugan, Patricia; Doyle, Werner K; Devinsky, Orrin; Flinker, Adeen
Cortical regions supporting speech production are commonly established using neuroimaging techniques in both research and clinical settings. However, for neurosurgical purposes, cortical function is routinely mapped peri-operatively using direct electrocortical stimulation. While this method is the gold standard for identifying eloquent cortical regions to preserve in neurosurgical patients, it lacks specificity about which underlying cognitive processes are being interrupted. To address this, we propose mapping the temporal dynamics of speech arrest across peri-sylvian cortices by quantifying the latency between stimulation and speech deficits. In doing so, we are able to substantiate hypotheses about distinct region-specific functional roles (e.g. planning versus motor execution). In this retrospective observational study, we analysed 20 patients (12 female; age range 14-43) with refractory epilepsy who underwent continuous extra-operative intracranial EEG monitoring of an automatic speech task during clinical bedside language mapping. Latency to speech arrest was calculated as time from stimulation onset to speech arrest onset, controlling for individual speech rate. Most instances of motor-based arrest (87.5% of 96 instances) were in sensorimotor cortex with mid-range latencies to speech arrest with a distributional peak at 0.47 s. Speech arrest occurred in numerous regions, with relatively short latencies in supramarginal gyrus (0.46 s), superior temporal gyrus (0.51 s) and middle temporal gyrus (0.54 s), followed by relatively long latencies in sensorimotor cortex (0.72 s) and especially long latencies in inferior frontal gyrus (0.95 s). Non-parametric testing for speech arrest revealed that region predicted latency; latencies in supramarginal gyrus and in superior temporal gyrus were shorter than in sensorimotor cortex and in inferior frontal gyrus. Sensorimotor cortex is primarily responsible for motor-based arrest. Latencies to speech arrest in supramarginal gyrus and superior temporal gyrus (and to a lesser extent middle temporal gyrus) align with latencies to motor-based arrest in sensorimotor cortex. This pattern of relatively quick cessation of speech suggests that stimulating these regions interferes with outgoing motor execution. In contrast, the latencies to speech arrest in inferior frontal gyrus and in ventral regions of sensorimotor cortex were significantly longer than those in temporoparietal regions. Longer latencies in the more frontal areas (including inferior frontal gyrus and ventral areas of precentral gyrus and postcentral gyrus) suggest that stimulating these areas interrupts a higher-level speech production process involved in planning. These results implicate the ventral specialization of sensorimotor cortex (including both precentral and postcentral gyri) for speech planning above and beyond motor execution.
PMCID:10948744
PMID: 38505231
ISSN: 2632-1297
CID: 5640502

Flexible, high-resolution cortical arrays with large coverage capture microscale high-frequency oscillations in patients with epilepsy

Barth, Katrina J; Sun, James; Chiang, Chia-Han; Qiao, Shaoyu; Wang, Charles; Rahimpour, Shervin; Trumpis, Michael; Duraivel, Suseendrakumar; Dubey, Agrita; Wingel, Katie E; Voinas, Alex E; Ferrentino, Breonna; Doyle, Werner; Southwell, Derek G; Haglund, Michael M; Vestal, Matthew; Harward, Stephen C; Solzbacher, Florian; Devore, Sasha; Devinsky, Orrin; Friedman, Daniel; Pesaran, Bijan; Sinha, Saurabh R; Cogan, Gregory B; Blanco, Justin; Viventi, Jonathan
OBJECTIVE: Effective surgical treatment of drug-resistant epilepsy depends on accurate localization of the epileptogenic zone (EZ). High-frequency oscillations (HFOs) are potential biomarkers of the EZ. Previous research has shown that HFOs often occur within submillimeter areas of brain tissue and that the coarse spatial sampling of clinical intracranial electrode arrays may limit the accurate capture of HFO activity. In this study, we sought to characterize microscale HFO activity captured on thin, flexible microelectrocorticographic (μECoG) arrays, which provide high spatial resolution over large cortical surface areas. METHODS: We used novel liquid crystal polymer thin-film μECoG arrays (0.76-1.72-mm intercontact spacing) to capture HFOs in eight intraoperative recordings from seven patients with epilepsy. We identified ripple (80-250 Hz) and fast ripple (250-600 Hz) HFOs using a common energy thresholding detection algorithm along with two stages of artifact rejection. We visualized microscale subregions of HFO activity using spatial maps of HFO rate, signal-to-noise ratio, and mean peak frequency. We quantified the spatial extent of HFO events by measuring covariance between detected HFOs and surrounding activity. We also compared HFO detection rates on microcontacts to simulated macrocontacts by spatially averaging data. RESULTS: We found visually delineable subregions of elevated HFO activity within each μECoG recording. Forty-seven percent of HFOs occurred on single 200-μm-diameter recording contacts, with minimal high-frequency activity on surrounding contacts. Other HFO events occurred across multiple contacts simultaneously, with covarying activity most often limited to a 0.95-mm radius. Through spatial averaging, we estimated that macrocontacts with 2-3-mm diameter would only capture 44% of the HFOs detected in our μECoG recordings. SIGNIFICANCE: These results demonstrate that thin-film microcontact surface arrays with both high resolution and large coverage accurately capture microscale HFO activity and may improve the utility of HFOs for localizing the EZ in the treatment of drug-resistant epilepsy.
PMID: 37150937
ISSN: 1528-1167
CID: 5503242
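The "common energy thresholding detection algorithm" mentioned above can be sketched as a windowed-RMS detector run on band-passed data. The window length, the mean + 5 SD threshold, and the assumption that ripple-band filtering happened upstream are all illustrative choices, not the study's parameters.

```python
import numpy as np

def detect_hfo(bandpassed, fs, win_ms=10.0, n_sd=5.0):
    """Flag windows whose RMS energy exceeds mean + n_sd * SD of the
    recording's windowed RMS. `bandpassed` is assumed already filtered
    to the ripple band (e.g., 80-250 Hz). Returns the start sample of
    each flagged window."""
    win = int(fs * win_ms / 1000)
    n_win = len(bandpassed) // win
    segs = np.asarray(bandpassed[: n_win * win]).reshape(n_win, win)
    rms = np.sqrt((segs ** 2).mean(axis=1))
    thresh = rms.mean() + n_sd * rms.std()
    return np.flatnonzero(rms > thresh) * win
```

Real detectors add refinements the paper alludes to, such as multi-stage artifact rejection and per-band thresholds.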

The role of superficial and deep layers in the generation of high frequency oscillations and interictal epileptiform discharges in the human cortex

Fabo, Daniel; Bokodi, Virag; Szabó, Johanna-Petra; Tóth, Emilia; Salami, Pariya; Keller, Corey J; Hajnal, Boglárka; Thesen, Thomas; Devinsky, Orrin; Doyle, Werner; Mehta, Ashesh; Madsen, Joseph; Eskandar, Emad; Erőss, Lorand; Ulbert, István; Halgren, Eric; Cash, Sydney S
We describe the intracortical laminar organization of interictal epileptiform discharges (IEDs) and high-frequency oscillations (HFOs), also known as ripples, and define the frequency limits of slow and fast ripples. We recorded potential gradients with laminar multielectrode arrays (LME) for current source density (CSD) and multi-unit activity (MUA) analysis of IEDs and HFOs in the neocortex and mesial temporal lobe of focal epilepsy patients. IEDs were observed in 20/29 patients, while ripples were observed in only 9/29. Ripples were all detected within the seizure onset zone (SOZ). Compared to hippocampal HFOs, neocortical ripples proved to be longer, lower in frequency and amplitude, and presented non-uniform cycles. A subset of ripples (≈ 50%) co-occurred with IEDs, while IEDs were shown to contain variable high-frequency activity, even below the HFO detection threshold. The limit between slow and fast ripples was defined at 150 Hz, while IEDs' high-frequency components form clusters separated at 185 Hz. CSD analysis of IEDs and ripples revealed an alternating sink-source pair in the supragranular cortical layers, although fast ripple CSD appeared lower and engaged a wider cortical domain than slow ripples. MUA analysis suggested a possible role of infragranularly located neural populations in ripple and IED generation. Laminar distribution of peak frequencies derived from HFOs and IEDs, respectively, showed that supragranular layers were dominated by slower (< 150 Hz) components. Our findings suggest that cortical slow ripples are generated primarily in upper layers, while fast ripples and associated MUA arise in deeper layers. The dissociation of macro- and microdomains suggests that microelectrode recordings may be more selective for SOZ-linked ripples. We found a complex interplay between neural activity in the neocortical laminae during ripple and IED formation. We observed a potential leading role of cortical neurons in deeper layers, suggesting a refined utilization of LMEs in SOZ localization.
PMCID:10267175
PMID: 37316509
ISSN: 2045-2322
CID: 5539912
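Current source density analysis, used above to localize sinks and sources across laminae, is conventionally estimated as the negated, conductivity-scaled second spatial derivative of the laminar field potentials. A minimal sketch; the electrode spacing and conductivity values are illustrative, not the study's recording parameters.

```python
import numpy as np

def csd(lfp, spacing_mm=0.1, sigma=0.3):
    """Standard second-spatial-derivative CSD estimate,
    -sigma * d2(phi)/dz2, for laminar LFP data (channels x time).
    Loses one channel at each edge of the array."""
    d2 = lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]
    return -sigma * d2 / spacing_mm ** 2
```

A quadratic depth profile yields a constant CSD, which makes the estimator easy to sanity-check.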

Temporal dynamics of neural responses in human visual cortex

Groen, Iris I A; Piantoni, Giovanni; Montenegro, Stephanie; Flinker, Adeen; Devore, Sasha; Devinsky, Orrin; Doyle, Werner; Dugan, Patricia; Friedman, Daniel; Ramsey, Nick; Petridou, Natalia; Winawer, Jonathan
Neural responses to visual stimuli exhibit complex temporal dynamics, including sub-additive temporal summation, response reduction with repeated or sustained stimuli (adaptation), and slower dynamics at low contrast. These phenomena are often studied independently. Here, we demonstrate these phenomena within the same experiment and model the underlying neural computations with a single computational model. We extracted time-varying responses from electrocorticographic (ECoG) recordings from patients presented with stimuli that varied in contrast, duration, and inter-stimulus interval (ISI). Aggregating data across patients from both sexes yielded 98 electrodes with robust visual responses, covering both earlier (V1-V3) and higher-order (V3a/b, LO, TO, IPS) retinotopic maps. In all regions, the temporal dynamics of neural responses exhibit several non-linear features: peak response amplitude saturates with high contrast and longer stimulus durations; the response to a second stimulus is suppressed for short ISIs and recovers for longer ISIs; response latency decreases with increasing contrast. These features are accurately captured by a computational model composed of a small set of canonical neuronal operations: linear filtering, rectification, exponentiation, and a delayed divisive normalization. We find that an increased normalization term captures both contrast- and adaptation-related response reductions, suggesting potentially shared underlying mechanisms. We additionally demonstrate both changes and invariance in temporal response dynamics between earlier and higher-order visual areas. Together, our results reveal the presence of a wide range of temporal and contrast-dependent neuronal dynamics in the human visual cortex, and demonstrate that a simple model captures these dynamics at millisecond resolution. SIGNIFICANCE STATEMENT: Sensory inputs and neural responses change continuously over time. It is especially challenging to understand a system that has both dynamic inputs and outputs. Here we use a computational modeling approach that specifies computations to convert a time-varying input stimulus to a neural response time course, and use this to predict neural activity measured in the human visual cortex. We show that this computational model predicts a wide variety of complex neural response shapes that we induced experimentally by manipulating the duration, repetition and contrast of visual stimuli. By comparing data and model predictions, we uncover systematic properties of temporal dynamics of neural signals, allowing us to better understand how the brain processes dynamic sensory information.
PMID: 35999054
ISSN: 1529-2401
CID: 5338232
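The canonical operations named in the abstract above (linear filtering, rectification, exponentiation, delayed divisive normalization) can be chained in a few lines. This is a toy sketch: the exponential filter shapes and every parameter value are illustrative placeholders, not the fitted area-specific parameters from the study.

```python
import numpy as np

def dn_response(stimulus, tau=0.05, n=2.0, sigma=0.1, tau_norm=0.1, dt=0.001):
    """Toy delayed-divisive-normalization chain: linear filter ->
    rectification -> exponentiation -> division by a slower,
    delayed normalization pool. `stimulus` is sampled at `dt` seconds."""
    t = np.arange(0, 0.5, dt)
    irf = np.exp(-t / tau)
    irf /= irf.sum()                                   # linear filter
    drive = np.convolve(stimulus, irf)[: len(stimulus)]
    drive = np.maximum(drive, 0) ** n                  # rectify + exponentiate
    norm_irf = np.exp(-t / tau_norm)
    norm_irf /= norm_irf.sum()                         # slower normalization filter
    norm = np.convolve(drive, norm_irf)[: len(stimulus)]
    return drive / (sigma ** n + norm)                 # delayed divisive normalization
```

Because the normalization pool lags the drive, a sustained step stimulus produces the transient-then-decaying response shape characteristic of adaptation.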