Searched for: person:wkd1 in-biosketch:true

Total Results: 164


Intracranial EEG Validation of Single-Channel Subgaleal EEG for Seizure Identification

Pacia, Steven V; Doyle, Werner K; Friedman, Daniel; Bacher, Daniel H; Kuzniecky, Ruben I
PURPOSE/OBJECTIVE:A device that provides continuous, long-term, accurate seizure detection information to providers and patients could fundamentally alter epilepsy care. Subgaleal (SG) EEG is a promising modality that offers a minimally invasive, safe, and accurate means of long-term seizure monitoring. METHODS:Subgaleal EEG electrodes were placed, at or near the cranial vertex, simultaneously with intracranial EEG electrodes in 21 epilepsy patients undergoing intracranial EEG studies for up to 13 days. A total of 219 ten-minute single-channel SGEEG samples, including 138 interictal awake or sleep segments and 81 seizures (36 temporal lobe, 32 extra-temporal, and 13 simultaneous temporal/extra-temporal onsets), were reviewed by 3 expert readers blinded to the intracranial EEG results, then analyzed for accuracy and interrater reliability. RESULTS:Using a single channel of SGEEG, reviewers accurately identified 98% of temporal- and extratemporal-onset, intracranial EEG-verified seizures, with a sensitivity of 98% and a specificity of 99%. All focal to bilateral tonic-clonic seizures were correctly identified. CONCLUSIONS:Single-channel SGEEG, placed at or near the vertex, reliably identifies focal and secondarily generalized seizures. These findings demonstrate that the SG space at the cranial vertex may be an appropriate site for long-term ambulatory seizure monitoring.
PMID: 32925251
ISSN: 1537-1603
CID: 4592552
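As a back-of-the-envelope check on figures like the 98%/99% above, sensitivity and specificity reduce to simple ratios of reviewer calls against the intracranial gold standard. The counts below are illustrative placeholders, not the study's raw tallies:

```python
def sensitivity(tp, fn):
    """Fraction of true seizures the reviewers flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of interictal segments the reviewers correctly cleared."""
    return tn / (tn + fp)

# Illustrative counts: 79 of 81 seizures flagged,
# 137 of 138 interictal segments correctly cleared.
sens = sensitivity(79, 2)
spec = specificity(137, 1)
```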

Intracranial electroencephalographic biomarker predicts effective responsive neurostimulation for epilepsy prior to treatment

Scheid, Brittany H; Bernabei, John M; Khambhati, Ankit N; Mouchtaris, Sofia; Jeschke, Jay; Bassett, Dani S; Becker, Danielle; Davis, Kathryn A; Lucas, Timothy; Doyle, Werner; Chang, Edward F; Friedman, Daniel; Rao, Vikram R; Litt, Brian
OBJECTIVE:Despite the overall success of responsive neurostimulation (RNS) therapy for drug-resistant focal epilepsy, clinical outcomes in individuals vary significantly and are hard to predict. Biomarkers that indicate the clinical efficacy of RNS, ideally before device implantation, are critically needed, but challenges include the intrinsic heterogeneity of the RNS patient population and variability in clinical management across epilepsy centers. The aim of this study is to use a multicenter dataset to evaluate a candidate biomarker from intracranial electroencephalographic (iEEG) recordings that predicts clinical outcome with subsequent RNS therapy. METHODS:We assembled a federated dataset of iEEG recordings, collected prior to RNS implantation, from a retrospective cohort of 30 patients across three major epilepsy centers. Using ictal iEEG recordings, each center independently calculated network synchronizability, a candidate biomarker indicating the susceptibility of epileptic brain networks to RNS therapy. RESULTS:Ictal measures of synchronizability in the high-γ band (95-105 Hz) significantly distinguish between good and poor RNS responders after at least 3 years of therapy under the current RNS therapy guidelines (area under the curve = 0.83). Additionally, ictal high-γ synchronizability is inversely associated with the degree of therapeutic response. SIGNIFICANCE/CONCLUSIONS:This study provides a proof-of-concept roadmap for collaborative biomarker evaluation in federated data, where practical considerations impede full data sharing across centers. Our results suggest that network synchronizability can help predict therapeutic response to RNS therapy. With further validation, this biomarker could facilitate patient selection and help avert a costly, invasive intervention in patients who are unlikely to benefit.
PMID: 34997577
ISSN: 1528-1167
CID: 5107542
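Network synchronizability has several formal definitions; a common graph-theoretic one is the ratio of the second-smallest to the largest eigenvalue of the graph Laplacian. The sketch below is a generic illustration of that quantity on a toy network, not the paper's exact pipeline (which derives functional networks from ictal iEEG):

```python
import numpy as np

def synchronizability(adjacency):
    """One common definition: lambda_2 / lambda_N of the graph Laplacian,
    where lambda_2 is the second-smallest and lambda_N the largest eigenvalue.
    Higher values indicate a network more susceptible to global synchrony."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals = np.sort(np.linalg.eigvalsh(laplacian))
    return eigvals[1] / eigvals[-1]

# A fully connected 4-node network is maximally synchronizable (ratio = 1).
A = np.ones((4, 4)) - np.eye(4)
s = synchronizability(A)
```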

Multiscale temporal integration organizes hierarchical computation in human auditory cortex

Norman-Haignere, Sam V; Long, Laura K; Devinsky, Orrin; Doyle, Werner; Irobunda, Ifeoma; Merricks, Edward M; Feldstein, Neil A; McKhann, Guy M; Schevon, Catherine A; Flinker, Adeen; Mesgarani, Nima
To derive meaning from sound, the brain must integrate information across many timescales. What computations underlie multiscale integration in human auditory cortex? Evidence suggests that auditory cortex analyses sound using both generic acoustic representations (for example, spectrotemporal modulation tuning) and category-specific computations, but the timescales over which these putatively distinct computations integrate remain unclear. To answer this question, we developed a general method to estimate sensory integration windows (the time window during which stimuli alter the neural response) and applied our method to intracranial recordings from neurosurgical patients. We show that human auditory cortex integrates hierarchically across diverse timescales spanning from ~50 to 400 ms. Moreover, we find that neural populations with short and long integration windows exhibit distinct functional properties: short-integration electrodes (less than ~200 ms) show prominent spectrotemporal modulation selectivity, while long-integration electrodes (greater than ~200 ms) show prominent category selectivity. These findings reveal how multiscale integration organizes auditory computation in the human brain.
PMID: 35145280
ISSN: 2397-3374
CID: 5156382

Shared computational principles for language processing in humans and deep language models

Goldstein, Ariel; Zada, Zaid; Buchnik, Eliav; Schain, Mariano; Price, Amy; Aubrey, Bobbi; Nastase, Samuel A; Feder, Amir; Emanuel, Dotan; Cohen, Alon; Jansen, Aren; Gazula, Harshvardhan; Choe, Gina; Rao, Aditi; Kim, Catherine; Casto, Colton; Fanda, Lora; Doyle, Werner; Friedman, Daniel; Dugan, Patricia; Melloni, Lucia; Reichart, Roi; Devore, Sasha; Flinker, Adeen; Hasenfratz, Liat; Levy, Omer; Hassidim, Avinatan; Brenner, Michael; Matias, Yossi; Norman, Kenneth A; Devinsky, Orrin; Hasson, Uri
Departing from traditional linguistic models, advances in deep learning have resulted in a new type of predictive (autoregressive) deep language models (DLMs). Using a self-supervised next-word prediction task, these models generate appropriate linguistic responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in natural contexts. Together, our findings suggest that autoregressive DLMs provide a new and biologically feasible computational framework for studying the neural basis of language.
PMCID: 8904253
PMID: 35260860
ISSN: 1546-1726
CID: 5190382
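The "post-onset surprise" in principle (2) is typically quantified as surprisal: the negative log of the probability the predictor assigned to the incoming word. A minimal sketch (the probabilities are hypothetical, standing in for a language model's next-word distribution):

```python
import math

def surprisal(prob):
    """Post-onset surprise for the incoming word: -log p(word | context)."""
    return -math.log(prob)

# A confidently predicted word carries little surprise...
low = surprisal(0.9)
# ...while an unexpected word carries much more.
high = surprisal(0.01)
```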

A cortical network processes auditory error signals during human speech production to maintain fluency

Ozker, Muge; Doyle, Werner; Devinsky, Orrin; Flinker, Adeen
Hearing one's own voice is critical for fluent speech production as it allows for the detection and correction of vocalization errors in real time. This behavior, known as auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update the motor commands necessary to produce intended speech. We were able to localize the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their voice with a time delay as they produced words and sentences (similar to an echo on a conference call), which is well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, dorsal precentral gyrus (dPreCG), a region that has not previously been implicated in auditory feedback processing, exhibited a markedly similar response enhancement, suggesting a tight coupling between the two regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.
PMID: 35113857
ISSN: 1545-7885
CID: 5153792
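The efference-copy account described above can be caricatured as a simple error-correcting loop: the mismatch between actual feedback and the internal estimate is the error signal, and it drives the update. A toy sketch with a scalar state and a hypothetical learning rate:

```python
def feedback_control_step(estimate, observation, lr=0.2):
    """One iteration of a minimal feedback-control loop: the error between
    reafferent feedback (observation) and the internal estimate is computed,
    then used to correct the estimate. Returns (new_estimate, error)."""
    error = observation - estimate
    return estimate + lr * error, error

# The internal estimate converges toward the feedback as errors shrink.
est = 0.0
for _ in range(50):
    est, err = feedback_control_step(est, 1.0)
```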

Imagined speech can be decoded from low- and cross-frequency intracranial EEG features

Proix, Timothée; Delgado Saa, Jaime; Christen, Andy; Martin, Stephanie; Pasley, Brian N; Knight, Robert T; Tian, Xing; Poeppel, David; Doyle, Werner K; Devinsky, Orrin; Arnal, Luc H; Mégevand, Pierre; Giraud, Anne-Lise
Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met limited success, mainly because the associated neural signals are weak and variable compared to overt speech, and hence difficult to decode by learning algorithms. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who performed overt and imagined speech production tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces, and assessed their ability to discriminate speech items in articulatory, phonetic, and vocalic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency dynamics contributed to imagined speech decoding, particularly in phonetic and vocalic (i.e., perceptual) spaces. These findings show that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding.
PMID: 35013268
ISSN: 2041-1723
CID: 5118532

Ongoing neural oscillations influence behavior and sensory representations by suppressing neuronal excitability

Iemi, Luca; Gwilliams, Laura; Samaha, Jason; Auksztulewicz, Ryszard; Cycowicz, Yael M; King, Jean-Remi; Nikulin, Vadim V; Thesen, Thomas; Doyle, Werner; Devinsky, Orrin; Schroeder, Charles E; Melloni, Lucia; Haegens, Saskia
The ability to process and respond to external input is critical for adaptive behavior. Why, then, do neural and behavioral responses vary across repeated presentations of the same sensory input? Ongoing fluctuations of neuronal excitability are currently hypothesized to underlie the trial-by-trial variability in sensory processing. To test this, we capitalized on intracranial electrophysiology in neurosurgical patients performing an auditory discrimination task with visual cues: specifically, we examined the interaction between prestimulus alpha oscillations, excitability, task performance, and decoded neural stimulus representations. We found that strong prestimulus oscillations in the alpha+ band (i.e., alpha and neighboring frequencies), rather than the aperiodic signal, correlated with a low excitability state, indexed by reduced broadband high-frequency activity. This state was related to slower reaction times and reduced neural stimulus encoding strength. We propose that the alpha+ rhythm modulates excitability, thereby resulting in variability in behavior and sensory representations despite identical input.
PMID: 34875382
ISSN: 1095-9572
CID: 5105842

Long-term priors influence visual perception through recruitment of long-range feedback

Hardstone, Richard; Zhu, Michael; Flinker, Adeen; Melloni, Lucia; Devore, Sasha; Friedman, Daniel; Dugan, Patricia; Doyle, Werner K; Devinsky, Orrin; He, Biyu J
Perception results from the interplay of sensory input and prior knowledge. Despite behavioral evidence that long-term priors powerfully shape perception, the neural mechanisms underlying these interactions remain poorly understood. We obtained direct cortical recordings in neurosurgical patients as they viewed ambiguous images that elicit constant perceptual switching. We observe top-down influences from temporal to occipital cortex during the preferred percept, that is, the percept congruent with the long-term prior. By contrast, stronger feedforward drive is observed during the non-preferred percept, consistent with a prediction error signal. A computational model based on hierarchical predictive coding and attractor networks reproduces all key experimental findings. These results reveal a pattern of large-scale information-flow change underlying the influence of long-term priors on perception and provide constraints on theories of how such priors shape perceptual inference.
PMID: 34725348
ISSN: 2041-1723
CID: 5037932

An Intracranial Electrophysiology Study of Visual Language Encoding: The Contribution of the Precentral Gyrus to Silent Reading

Kaestner, Erik; Thesen, Thomas; Devinsky, Orrin; Doyle, Werner; Carlson, Chad; Halgren, Eric
Models of reading emphasize that visual (orthographic) processing provides input to phonological as well as lexical-semantic processing. Neurobiological models of reading have mapped these processes to distributed regions across occipital-temporal, temporal-parietal, and frontal cortices. However, the role of the precentral gyrus in these models is ambiguous. Articulatory phonemic representations in the precentral gyrus are obviously involved in reading aloud, but it is unclear if the precentral gyrus is recruited during silent reading in a time window consistent with a contribution to phonological processing. Here, we recorded intracranial electrophysiology during a speeded semantic decision task from 24 patients to map the spatio-temporal flow of information across the cortex during silent reading. Patients selected animate nouns from a stream of nonanimate words, letter strings, and false-font stimuli. We characterized the distribution and timing of evoked high-gamma power (70-170 Hz) as well as phase-locking between electrodes. The precentral gyrus showed a proportion of electrodes responsive to linguistic stimuli (27%) that was at least as high as that of surrounding peri-sylvian regions. These precentral gyrus electrodes had significantly greater high-gamma power for words compared to both false-font and letter-string stimuli. In a patient with word-selective effects in the fusiform, superior temporal, and precentral gyri, there was significant phase-locking between the fusiform and precentral gyri starting at ∼180 msec and between the precentral and superior temporal gyri starting at ∼220 msec. Finally, our large patient cohort allowed exploratory analyses of the spatio-temporal network underlying silent reading. The distribution, timing, and connectivity results place the precentral gyrus as an important hub in the silent reading network.
PMCID: 8497063
PMID: 34347873
ISSN: 1530-8898
CID: 5060932
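High-gamma power, as used above (70-170 Hz), is a band-limited power estimate. A simplified FFT-based stand-in is sketched below; published pipelines typically use band-pass filtering plus a Hilbert transform or multitaper estimates, and the test signal here is synthetic:

```python
import numpy as np

def band_power(signal, fs, lo=70.0, hi=170.0):
    """Mean spectral power in [lo, hi] Hz, estimated from the FFT
    periodogram of a 1-D signal sampled at fs Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 1000
t = np.arange(fs) / fs
in_band = np.sin(2 * np.pi * 100 * t)    # 100 Hz tone: inside 70-170 Hz
out_band = np.sin(2 * np.pi * 10 * t)    # 10 Hz tone: outside the band
p_in = band_power(in_band, fs)
p_out = band_power(out_band, fs)
```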

Moment-by-moment tracking of naturalistic learning and its underlying hippocampo-cortical interactions

Michelmann, Sebastian; Price, Amy R; Aubrey, Bobbi; Strauss, Camilla K; Doyle, Werner K; Friedman, Daniel; Dugan, Patricia C; Devinsky, Orrin; Devore, Sasha; Flinker, Adeen; Hasson, Uri; Norman, Kenneth A
Humans form lasting memories of stimuli that were only encountered once. This naturally occurs when listening to a story; however, it remains unclear how and when memories are stored and retrieved during story-listening. Here, we first confirm in behavioral experiments that participants can learn about the structure of a story after a single exposure and are able to recall upcoming words when the story is presented again. We then track mnemonic information in high-frequency activity (70-200 Hz) as patients undergoing electrocorticographic recordings listen twice to the same story. We demonstrate predictive recall of upcoming information through neural responses in auditory processing regions. This neural measure correlates with behavioral measures of event segmentation and learning. Event boundaries are linked to information flow from cortex to hippocampus. When listening for a second time, information flow from hippocampus to cortex precedes moments of predictive recall. These results provide insight, at a fine-grained temporal scale, into how episodic memory encoding and retrieval work under naturalistic conditions.
PMID: 34518520
ISSN: 2041-1723
CID: 5012282