Searched for: person:wkd1 in-biosketch:true
Total Results: 170


Flexible, high-resolution cortical arrays with large coverage capture microscale high-frequency oscillations in patients with epilepsy

Barth, Katrina J; Sun, James; Chiang, Chia-Han; Qiao, Shaoyu; Wang, Charles; Rahimpour, Shervin; Trumpis, Michael; Duraivel, Suseendrakumar; Dubey, Agrita; Wingel, Katie E; Voinas, Alex E; Ferrentino, Breonna; Doyle, Werner; Southwell, Derek G; Haglund, Michael M; Vestal, Matthew; Harward, Stephen C; Solzbacher, Florian; Devore, Sasha; Devinsky, Orrin; Friedman, Daniel; Pesaran, Bijan; Sinha, Saurabh R; Cogan, Gregory B; Blanco, Justin; Viventi, Jonathan
OBJECTIVE: Effective surgical treatment of drug-resistant epilepsy depends on accurate localization of the epileptogenic zone (EZ). High-frequency oscillations (HFOs) are potential biomarkers of the EZ. Previous research has shown that HFOs often occur within submillimeter areas of brain tissue and that the coarse spatial sampling of clinical intracranial electrode arrays may limit the accurate capture of HFO activity. In this study, we sought to characterize microscale HFO activity captured on thin, flexible microelectrocorticographic (μECoG) arrays, which provide high spatial resolution over large cortical surface areas. METHODS: We used novel liquid crystal polymer thin-film μECoG arrays (0.76-1.72-mm intercontact spacing) to capture HFOs in eight intraoperative recordings from seven patients with epilepsy. We identified ripple (80-250 Hz) and fast ripple (250-600 Hz) HFOs using a common energy thresholding detection algorithm along with two stages of artifact rejection. We visualized microscale subregions of HFO activity using spatial maps of HFO rate, signal-to-noise ratio, and mean peak frequency. We quantified the spatial extent of HFO events by measuring covariance between detected HFOs and surrounding activity. We also compared HFO detection rates on microcontacts to simulated macrocontacts by spatially averaging data. RESULTS: We found visually delineable subregions of elevated HFO activity within each μECoG recording. Forty-seven percent of HFOs occurred on single 200-μm-diameter recording contacts, with minimal high-frequency activity on surrounding contacts. Other HFO events occurred across multiple contacts simultaneously, with covarying activity most often limited to a 0.95-mm radius. Through spatial averaging, we estimated that macrocontacts with 2-3-mm diameter would only capture 44% of the HFOs detected in our μECoG recordings.
SIGNIFICANCE/CONCLUSIONS: These results demonstrate that thin-film microcontact surface arrays with both high resolution and large coverage accurately capture microscale HFO activity and may improve the utility of HFOs to localize the EZ for treatment of drug-resistant epilepsy.
PMID: 37150937
ISSN: 1528-1167
CID: 5503242
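The ripple/fast ripple bands and energy-thresholding detection named in the abstract can be sketched in a few lines. This is a generic illustration, not the study's pipeline: the FFT-mask band-pass, 10-ms windows, mean + 3 SD threshold, and synthetic data are all assumptions made for the demo.

```python
import numpy as np

def detect_hfo(signal, fs, band=(80, 250), win_ms=10, k=3.0):
    """Toy energy-threshold HFO detector (illustrative, not the study's algorithm)."""
    # crude band-pass via FFT masking; clinical pipelines use proper FIR/IIR filters
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spec = np.fft.rfft(signal)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0
    banded = np.fft.irfft(spec, n=len(signal))
    # RMS energy in non-overlapping windows, thresholded at mean + k*SD
    win = int(fs * win_ms / 1000)
    n_win = len(signal) // win
    rms = np.sqrt(np.mean(banded[:n_win * win].reshape(n_win, win) ** 2, axis=1))
    thresh = rms.mean() + k * rms.std()
    return [(i * win / fs, (i + 1) * win / fs) for i in np.flatnonzero(rms > thresh)]

# synthetic 1-s trace at 2 kHz with a 120 Hz ripple burst around 0.5 s
fs = 2000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(fs)
burst = (t > 0.5) & (t < 0.55)
x[burst] += np.sin(2 * np.pi * 120 * t[burst])
events = detect_hfo(x, fs)
print(events)  # windows inside the 0.50-0.55 s burst are flagged
```

The same detector run on spatially averaged channels would mimic the macrocontact simulation the authors describe.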

The role of superficial and deep layers in the generation of high frequency oscillations and interictal epileptiform discharges in the human cortex

Fabo, Daniel; Bokodi, Virag; Szabó, Johanna-Petra; Tóth, Emilia; Salami, Pariya; Keller, Corey J; Hajnal, Boglárka; Thesen, Thomas; Devinsky, Orrin; Doyle, Werner; Mehta, Ashesh; Madsen, Joseph; Eskandar, Emad; Erőss, Lorand; Ulbert, István; Halgren, Eric; Cash, Sydney S
We describe the intracortical laminar organization of interictal epileptiform discharges (IEDs) and high-frequency oscillations (HFOs), also known as ripples, and define the frequency limits of slow and fast ripples. We recorded potential gradients with laminar multielectrode arrays (LMEs) for current source density (CSD) and multi-unit activity (MUA) analysis of IEDs and HFOs in the neocortex and mesial temporal lobe of focal epilepsy patients. IEDs were observed in 20/29 patients, whereas ripples were observed in only 9/29. Ripples were all detected within the seizure onset zone (SOZ). Compared to hippocampal HFOs, neocortical ripples proved to be longer, lower in frequency and amplitude, and presented non-uniform cycles. A subset of ripples (≈50%) co-occurred with IEDs, while IEDs were shown to contain variable high-frequency activity, even below the HFO detection threshold. The limit between slow and fast ripples was defined at 150 Hz, while the high-frequency components of IEDs formed clusters separated at 185 Hz. CSD analysis of IEDs and ripples revealed an alternating sink-source pair in the supragranular cortical layers, although fast ripple CSD appeared lower and engaged a wider cortical domain than that of slow ripples. MUA analysis suggested a possible role of infragranularly located neural populations in ripple and IED generation. The laminar distribution of peak frequencies derived from HFOs and IEDs showed that supragranular layers were dominated by slower (<150 Hz) components. Our findings suggest that cortical slow ripples are generated primarily in upper layers, while fast ripples and associated MUA arise in deeper layers. The dissociation of macro- and microdomains suggests that microelectrode recordings may be more selective for SOZ-linked ripples. We found a complex interplay between neural activity in the neocortical laminae during ripple and IED formation. We observed a potential leading role of cortical neurons in deeper layers, suggesting a refined utilization of LMEs in SOZ localization.
PMCID:10267175
PMID: 37316509
ISSN: 2045-2322
CID: 5539912
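The CSD analysis mentioned above conventionally estimates current source density as the negative second spatial derivative of the laminar potential profile. A minimal sketch, with tissue conductivity set to 1 and a toy Gaussian potential dip standing in for real laminar data (both assumptions for illustration):

```python
import numpy as np

def csd(lfp, spacing_mm, conductivity=1.0):
    """Second-spatial-derivative CSD estimate: CSD = -sigma * d2(phi)/dz2.
    lfp has shape (n_contacts, n_samples) down the laminar probe."""
    return -conductivity * np.diff(lfp, n=2, axis=0) / spacing_mm ** 2

# toy laminar profile: a focal potential dip at mid-depth, constant over 5 samples
depth = np.linspace(0, 2, 21)                     # 21 contacts, 0.1 mm apart
profile = -np.exp(-((depth - 1.0) ** 2) / 0.05)   # Gaussian negativity
lfp = profile[:, None] * np.ones((1, 5))
out = csd(lfp, spacing_mm=0.1)
print(out.shape)      # (19, 5): two contacts are lost to differentiation
print(out[9, 0] < 0)  # the dip registers as negative CSD, conventionally a sink
```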

Temporal dynamics of neural responses in human visual cortex

Groen, Iris I A; Piantoni, Giovanni; Montenegro, Stephanie; Flinker, Adeen; Devore, Sasha; Devinsky, Orrin; Doyle, Werner; Dugan, Patricia; Friedman, Daniel; Ramsey, Nick; Petridou, Natalia; Winawer, Jonathan
Neural responses to visual stimuli exhibit complex temporal dynamics, including sub-additive temporal summation, response reduction with repeated or sustained stimuli (adaptation), and slower dynamics at low contrast. These phenomena are often studied independently. Here, we demonstrate these phenomena within the same experiment and model the underlying neural computations with a single computational model. We extracted time-varying responses from electrocorticographic (ECoG) recordings from patients presented with stimuli that varied in contrast, duration, and inter-stimulus interval (ISI). Aggregating data across patients from both sexes yielded 98 electrodes with robust visual responses, covering both earlier (V1-V3) and higher-order (V3a/b, LO, TO, IPS) retinotopic maps. In all regions, the temporal dynamics of neural responses exhibit several non-linear features: peak response amplitude saturates with high contrast and longer stimulus durations; the response to a second stimulus is suppressed for short ISIs and recovers for longer ISIs; response latency decreases with increasing contrast. These features are accurately captured by a computational model comprising a small set of canonical neuronal operations: linear filtering, rectification, exponentiation, and a delayed divisive normalization. We find that an increased normalization term captures both contrast- and adaptation-related response reductions, suggesting potentially shared underlying mechanisms. We additionally demonstrate both changes and invariance in temporal response dynamics between earlier and higher-order visual areas. Together, our results reveal the presence of a wide range of temporal and contrast-dependent neuronal dynamics in the human visual cortex, and demonstrate that a simple model captures these dynamics at millisecond resolution. SIGNIFICANCE STATEMENT: Sensory inputs and neural responses change continuously over time.
It is especially challenging to understand a system that has both dynamic inputs and outputs. Here we use a computational modeling approach that specifies computations to convert a time-varying input stimulus to a neural response time course, and use this to predict neural activity measured in the human visual cortex. We show that this computational model predicts a wide variety of complex neural response shapes that we induced experimentally by manipulating the duration, repetition and contrast of visual stimuli. By comparing data and model predictions, we uncover systematic properties of temporal dynamics of neural signals, allowing us to better understand how the brain processes dynamic sensory information.
PMID: 35999054
ISSN: 1529-2401
CID: 5338232
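The canonical operations the abstract names (linear filtering, rectification, exponentiation, delayed divisive normalization) can be sketched as a simple cascade. The kernel shapes and parameter values below are illustrative assumptions, not the fitted model:

```python
import numpy as np

def dn_model(stimulus, fs, tau=0.05, n=2.0, sigma=0.15, tau_norm=0.1):
    """Cascade: linear filter -> rectification -> exponentiation ->
    delayed divisive normalization. Parameters are illustrative, not fitted."""
    t = np.arange(0, 0.5, 1 / fs)
    irf = t * np.exp(-t / tau)                        # gamma-shaped temporal filter
    irf /= irf.sum()
    drive = np.convolve(stimulus, irf)[:len(stimulus)]
    drive = np.maximum(drive, 0) ** n                 # rectify, then exponentiate
    norm_irf = np.exp(-t / tau_norm)                  # slower normalization pool
    norm_irf /= norm_irf.sum()
    norm = np.convolve(drive, norm_irf)[:len(stimulus)]
    return drive / (sigma ** n + norm)                # divisive normalization

fs = 1000
stim = np.zeros(fs)
stim[100:400] = 1.0                                   # 300-ms contrast pulse
resp = dn_model(stim, fs)
# the response shows a transient peak shortly after onset, then a reduced
# (sub-additive) sustained level as the delayed normalization builds up
```

Because the normalization pool lags the drive, the same cascade reproduces onset transients and sustained-response compression, the qualitative behavior the abstract describes.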

Spatiotemporal dynamics of human high gamma discriminate naturalistic behavioral states

Alasfour, Abdulwahab; Gabriel, Paolo; Jiang, Xi; Shamie, Isaac; Melloni, Lucia; Thesen, Thomas; Dugan, Patricia; Friedman, Daniel; Doyle, Werner; Devinsky, Orrin; Gonda, David; Sattar, Shifteh; Wang, Sonya; Halgren, Eric; Gilja, Vikash
In analyzing the neural correlates of naturalistic and unstructured behaviors, features of neural activity that are ignored in a trial-based experimental paradigm can be more fully studied and investigated. Here, we analyze neural activity from two patients using electrocorticography (ECoG) and stereo-electroencephalography (sEEG) recordings, and reveal that multiple neural signal characteristics exist that discriminate between unstructured and naturalistic behavioral states such as "engaging in dialogue" and "using electronics". Using the high gamma amplitude as an estimate of neuronal firing rate, we demonstrate that behavioral states in a naturalistic setting are discriminable based on long-term mean shifts, variance shifts, and differences in the specific neural activity's covariance structure. Both the rapid and slow changes in high gamma band activity separate unstructured behavioral states. We also use Gaussian process factor analysis (GPFA) to show the existence of salient spatiotemporal features with variable smoothness in time. Further, we demonstrate that both temporally smooth and stochastic spatiotemporal activity can be used to differentiate unstructured behavioral states. This is the first attempt to elucidate how different neural signal features contain information about behavioral states collected outside the conventional experimental paradigm.
PMID: 35939509
ISSN: 1553-7358
CID: 5286572
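A common way to obtain the high gamma amplitude used above as a firing-rate proxy is a band-limited analytic-signal envelope. The band edges, FFT-based implementation, and simulated "states" here are assumptions for illustration, not the study's pipeline:

```python
import numpy as np

def high_gamma_amplitude(x, fs, band=(70, 150)):
    """Band-limited analytic-signal envelope (Hilbert-style) via an FFT mask:
    keep only positive frequencies in the band, double them, inverse-transform."""
    freqs = np.fft.fftfreq(len(x), d=1 / fs)
    spec = np.fft.fft(x)
    keep = (freqs >= band[0]) & (freqs <= band[1])   # positive-frequency band only
    analytic = np.fft.ifft(np.where(keep, 2 * spec, 0))
    return np.abs(analytic)                          # instantaneous amplitude

# two simulated 1-s "behavioral states": one with strong high-gamma, one without
fs = 1000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
state_a = 0.1 * rng.standard_normal(fs) + 0.5 * np.sin(2 * np.pi * 100 * t)
state_b = 0.1 * rng.standard_normal(fs)
mean_a = high_gamma_amplitude(state_a, fs).mean()
mean_b = high_gamma_amplitude(state_b, fs).mean()
print(mean_a, mean_b)  # a long-term mean shift separates the two states
```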

Intracranial EEG Validation of Single-Channel Subgaleal EEG for Seizure Identification

Pacia, Steven V; Doyle, Werner K; Friedman, Daniel; H Bacher, Daniel; Kuzniecky, Ruben I
PURPOSE/OBJECTIVE: A device that provides continuous, long-term, accurate seizure detection information to providers and patients could fundamentally alter epilepsy care. Subgaleal (SG) EEG is a promising modality that offers a minimally invasive, safe, and accurate means of long-term seizure monitoring. METHODS: Subgaleal EEG electrodes were placed, at or near the cranial vertex, simultaneously with intracranial EEG electrodes in 21 epilepsy patients undergoing intracranial EEG studies for up to 13 days. A total of 219 10-minute, single-channel SGEEG samples, including 138 interictal awake or sleep segments and 81 seizures (36 temporal lobe, 32 extratemporal, and 13 simultaneous temporal/extratemporal onsets), were reviewed by 3 expert readers blinded to the intracranial EEG results, then analyzed for accuracy and interrater reliability. RESULTS: Using a single channel of SGEEG, reviewers accurately identified 98% of temporal and extratemporal onset, intracranial EEG-verified seizures with a sensitivity of 98% and specificity of 99%. All focal to bilateral tonic-clonic seizures were correctly identified. CONCLUSIONS: Single-channel SGEEG, placed at or near the vertex, reliably identifies focal and secondarily generalized seizures. These findings demonstrate that the SG space at the cranial vertex may be an appropriate site for long-term ambulatory seizure monitoring.
PMID: 32925251
ISSN: 1537-1603
CID: 4592552

Intracranial electroencephalographic biomarker predicts effective responsive neurostimulation for epilepsy prior to treatment

Scheid, Brittany H; Bernabei, John M; Khambhati, Ankit N; Mouchtaris, Sofia; Jeschke, Jay; Bassett, Dani S; Becker, Danielle; Davis, Kathryn A; Lucas, Timothy; Doyle, Werner; Chang, Edward F; Friedman, Daniel; Rao, Vikram R; Litt, Brian
OBJECTIVE: Despite the overall success of responsive neurostimulation (RNS) therapy for drug-resistant focal epilepsy, clinical outcomes in individuals vary significantly and are hard to predict. Biomarkers that indicate the clinical efficacy of RNS, ideally before device implantation, are critically needed, but challenges include the intrinsic heterogeneity of the RNS patient population and variability in clinical management across epilepsy centers. The aim of this study is to use a multicenter dataset to evaluate a candidate biomarker from intracranial electroencephalographic (iEEG) recordings that predicts clinical outcome with subsequent RNS therapy. METHODS: We assembled a federated dataset of iEEG recordings, collected prior to RNS implantation, from a retrospective cohort of 30 patients across three major epilepsy centers. Using ictal iEEG recordings, each center independently calculated network synchronizability, a candidate biomarker indicating the susceptibility of epileptic brain networks to RNS therapy. RESULTS: Ictal measures of synchronizability in the high-γ band (95-105 Hz) significantly distinguish between good and poor RNS responders after at least 3 years of therapy under the current RNS therapy guidelines (area under the curve = 0.83). Additionally, ictal high-γ synchronizability is inversely associated with the degree of therapeutic response. SIGNIFICANCE/CONCLUSIONS: This study provides a proof-of-concept roadmap for collaborative biomarker evaluation in federated data, where practical considerations impede full data sharing across centers. Our results suggest that network synchronizability can help predict therapeutic response to RNS therapy. With further validation, this biomarker could facilitate patient selection and help avert a costly, invasive intervention in patients who are unlikely to benefit.
PMID: 34997577
ISSN: 1528-1167
CID: 5107542
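Network synchronizability is commonly computed from the eigenvalue spread of the graph Laplacian of a functional connectivity network. One standard form, the ratio of the second-smallest to the largest Laplacian eigenvalue, is sketched below; the study's exact formulation may differ:

```python
import numpy as np

def synchronizability(adj):
    """Ratio lambda_2 / lambda_max of the graph Laplacian L = D - A.
    Higher values indicate a network more easily driven into synchrony."""
    adj = np.asarray(adj, dtype=float)
    np.fill_diagonal(adj, 0)                      # no self-loops
    lap = np.diag(adj.sum(axis=1)) - adj          # graph Laplacian
    eig = np.sort(np.linalg.eigvalsh(lap))        # ascending eigenvalues
    return eig[1] / eig[-1]

n = 8
complete = np.ones((n, n)) - np.eye(n)            # every pair connected
ring = np.zeros((n, n))
for i in range(n):                                # sparse ring lattice
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0
print(synchronizability(complete), synchronizability(ring))
# the dense network scores 1.0; the sparse ring scores far lower
```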

Multiscale temporal integration organizes hierarchical computation in human auditory cortex

Norman-Haignere, Sam V; Long, Laura K; Devinsky, Orrin; Doyle, Werner; Irobunda, Ifeoma; Merricks, Edward M; Feldstein, Neil A; McKhann, Guy M; Schevon, Catherine A; Flinker, Adeen; Mesgarani, Nima
To derive meaning from sound, the brain must integrate information across many timescales. What computations underlie multiscale integration in human auditory cortex? Evidence suggests that auditory cortex analyses sound using both generic acoustic representations (for example, spectrotemporal modulation tuning) and category-specific computations, but the timescales over which these putatively distinct computations integrate remain unclear. To answer this question, we developed a general method to estimate sensory integration windows (the time window when stimuli alter the neural response) and applied our method to intracranial recordings from neurosurgical patients. We show that human auditory cortex integrates hierarchically across diverse timescales spanning from ~50 to 400 ms. Moreover, we find that neural populations with short and long integration windows exhibit distinct functional properties: short-integration electrodes (less than ~200 ms) show prominent spectrotemporal modulation selectivity, while long-integration electrodes (greater than ~200 ms) show prominent category selectivity. These findings reveal how multiscale integration organizes auditory computation in the human brain.
PMID: 35145280
ISSN: 2397-3374
CID: 5156382
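The integration-window idea (the time window when stimuli alter the neural response) can be illustrated with a hypothetical estimator: find the shortest shared stimulus history after which responses no longer depend on the differing past. The estimator, the moving-average "neuron", and the tolerance below are all invented for this demo, not the paper's method:

```python
import numpy as np

def estimate_window(respond, fs, max_win=0.3, n_trials=20, tol=0.01):
    """Hypothetical estimator: shortest shared-history duration after which two
    stimuli with different pasts evoke (nearly) the same final response."""
    rng = np.random.default_rng(0)
    for d in range(1, int(max_win * fs) + 1):
        diffs = []
        for _ in range(n_trials):
            shared = rng.standard_normal(d)       # common recent history
            a = np.concatenate([rng.standard_normal(fs), shared])
            b = np.concatenate([rng.standard_normal(fs), shared])
            diffs.append(abs(respond(a)[-1] - respond(b)[-1]))
        if np.mean(diffs) < tol:
            return d / fs
    return max_win

# model "neuron" that integrates the last 100 ms (moving average)
fs = 1000
W = int(0.1 * fs)
neuron = lambda x: np.convolve(x, np.ones(W) / W)[:len(x)]
window = estimate_window(neuron, fs)
print(window)  # recovers roughly the neuron's true 0.1-s integration window
```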

Shared computational principles for language processing in humans and deep language models

Goldstein, Ariel; Zada, Zaid; Buchnik, Eliav; Schain, Mariano; Price, Amy; Aubrey, Bobbi; Nastase, Samuel A; Feder, Amir; Emanuel, Dotan; Cohen, Alon; Jansen, Aren; Gazula, Harshvardhan; Choe, Gina; Rao, Aditi; Kim, Catherine; Casto, Colton; Fanda, Lora; Doyle, Werner; Friedman, Daniel; Dugan, Patricia; Melloni, Lucia; Reichart, Roi; Devore, Sasha; Flinker, Adeen; Hasenfratz, Liat; Levy, Omer; Hassidim, Avinatan; Brenner, Michael; Matias, Yossi; Norman, Kenneth A; Devinsky, Orrin; Hasson, Uri
Departing from traditional linguistic models, advances in deep learning have resulted in a new class of predictive (autoregressive) deep language models (DLMs). Using a self-supervised next-word prediction task, these models generate appropriate linguistic responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in natural contexts. Together, our findings suggest that autoregressive DLMs provide a new and biologically feasible computational framework for studying the neural basis of language.
PMCID:8904253
PMID: 35260860
ISSN: 1546-1726
CID: 5190382
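The post-onset "surprise" described above is, in information-theoretic terms, the surprisal of the incoming word under the model's predictive distribution. A toy sketch, with an invented bigram table standing in for a DLM's next-word probabilities:

```python
import math

# invented bigram probabilities standing in for a DLM's next-word distribution
bigram = {
    ("the", "dog"): 0.20,
    ("the", "cat"): 0.10,
    ("dog", "barked"): 0.50,
}

def surprisal(context, word, probs, floor=1e-6):
    """-log2 P(word | context): low for predicted words, high for unexpected ones."""
    return -math.log2(probs.get((context, word), floor))

expected = surprisal("the", "dog", bigram)      # high-probability continuation
unexpected = surprisal("the", "zebra", bigram)  # word absent from the table
print(expected, unexpected)
```

In the study, the same quantity would come from a large autoregressive model's softmax output rather than a lookup table.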

A cortical network processes auditory error signals during human speech production to maintain fluency

Ozker, Muge; Doyle, Werner; Devinsky, Orrin; Flinker, Adeen
Hearing one's own voice is critical for fluent speech production as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update necessary motor commands to produce intended speech. We were able to localize the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their voice with a time delay as they produced words and sentences (similar to an echo on a conference call), a manipulation well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, dorsal precentral gyrus (dPreCG), a region that has not been implicated in auditory feedback processing before, exhibited a markedly similar response enhancement, suggesting a tight coupling between the two regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.
PMID: 35113857
ISSN: 1545-7885
CID: 5153792

Imagined speech can be decoded from low- and cross-frequency intracranial EEG features

Proix, Timothée; Delgado Saa, Jaime; Christen, Andy; Martin, Stephanie; Pasley, Brian N; Knight, Robert T; Tian, Xing; Poeppel, David; Doyle, Werner K; Devinsky, Orrin; Arnal, Luc H; Mégevand, Pierre; Giraud, Anne-Lise
Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met with limited success, mainly because the associated neural signals are weak and variable compared to overt speech, and hence difficult for learning algorithms to decode. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who performed overt and imagined speech production tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces, and assessed their performance to discriminate speech items in articulatory, phonetic, and vocalic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency dynamics contributed to imagined speech decoding, in particular in phonetic and vocalic, i.e., perceptual, spaces. These findings show that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding.
PMID: 35013268
ISSN: 2041-1723
CID: 5118532