Searched for: person:chklod01, in-biosketch:yes
Total Results: 58


Reply to Castro et al.: Do connectomes possess markers of activity-dependent synaptic plasticity?

Chapochnikov, Nikolai M; Pehlevan, Cengiz; Chklovskii, Dmitri B
PMID: 38048451
ISSN: 1091-6490
CID: 5590572

A complete reconstruction of the early visual system of an adult insect

Chua, Nicholas J; Makarova, Anastasia A; Gunn, Pat; Villani, Sonia; Cohen, Ben; Thasin, Myisha; Wu, Jingpeng; Shefter, Deena; Pang, Song; Xu, C Shan; Hess, Harald F; Polilov, Alexey A; Chklovskii, Dmitri B
For most model organisms in neuroscience, research into visual processing in the brain is difficult because of a lack of high-resolution maps that capture complex neuronal circuitry. The microinsect Megaphragma viggianii, because of its small size and non-trivial behavior, provides a unique opportunity for tractable whole-organism connectomics. We image its whole head using serial electron microscopy. We reconstruct its compound eye and analyze the optical properties of the ommatidia as well as the connectome of the first visual neuropil, the lamina. Compared with the fruit fly and the honeybee, the Megaphragma visual system is highly simplified: it has 29 ommatidia per eye and 6 lamina neuron types. We report features that are both stereotypical among most ommatidia and specialized to some. By identifying the "barebones" circuits critical for flying insects, our results will facilitate constructing computational models of visual processing in insects.
PMID: 37774707
ISSN: 1879-0445
CID: 5609432

Normative and mechanistic model of an adaptive circuit for efficient encoding and feature extraction

Chapochnikov, Nikolai M; Pehlevan, Cengiz; Chklovskii, Dmitri B
One major question in neuroscience is how to relate connectomes to neural activity, circuit function, and learning. We offer an answer in the peripheral olfactory circuit of the Drosophila larva, composed of olfactory receptor neurons (ORNs) connected through feedback loops with interconnected inhibitory local neurons (LNs). We combine structural and activity data and, using a holistic normative framework based on similarity-matching, we formulate biologically plausible mechanistic models of the circuit. In particular, we consider a linear circuit model, for which we derive an exact theoretical solution, and a nonnegative circuit model, which we examine through simulations. The latter largely predicts the ORN→LN synaptic weights found in the connectome and demonstrates that they reflect correlations in ORN activity patterns. Furthermore, this model accounts for the relationship between ORN→LN and LN-LN synaptic counts and the emergence of different LN types. Functionally, we propose that LNs encode soft cluster memberships of ORN activity, and partially whiten and normalize the stimulus representations in ORNs through inhibitory feedback. Such a synaptic organization could, in principle, autonomously arise through Hebbian plasticity and would allow the circuit to adapt to different environments in an unsupervised manner. We thus uncover a general and potent circuit motif that can learn and extract significant input features and render stimulus representations more efficient. Finally, our study provides a unified framework for relating structure, activity, function, and learning in neural circuits and supports the conjecture that similarity-matching shapes the transformation of neural representations.
PMID: 37428907
ISSN: 1091-6490
CID: 5536992
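The inhibitory-feedback whitening that the model attributes to LNs can be illustrated with a minimal toy circuit. This is a simplified sketch, not the paper's similarity-matching algorithm; all parameter values and variable names are illustrative assumptions. A recurrent inhibition matrix M, updated by a local anti-Hebbian-style rule, learns to decorrelate and equalize the circuit's outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps, lr = 3, 6000, 0.005

# Correlated "ORN-like" inputs: x ~ N(0, C) with strong pairwise correlations.
C = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.6],
              [0.3, 0.6, 1.0]])
L = np.linalg.cholesky(C)

M = np.eye(d)  # recurrent inhibition strengths, learned online
outputs = []
for t in range(steps):
    x = L @ rng.standard_normal(d)
    y = np.linalg.solve(M, x)               # steady state of y = x - (M - I) y
    M += lr * (np.outer(y, y) - np.eye(d))  # local anti-Hebbian-style update
    outputs.append(y)

cov_in = C
cov_out = np.cov(np.array(outputs[-2000:]).T)
off = lambda A: np.max(np.abs(A - np.diag(np.diag(A))))
# After learning, the output covariance is close to identity (whitened).
print(off(cov_in), off(cov_out))
```

At the fixed point E[yy^T] = I, so M converges to C^(1/2) and the outputs are decorrelated, which is the "partial whitening through inhibitory feedback" role the abstract ascribes to LNs.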

Coordinated drift of receptive fields in Hebbian/anti-Hebbian network models during noisy representation learning

Qin, Shanshan; Farashahi, Shiva; Lipshutz, David; Sengupta, Anirvan M; Chklovskii, Dmitri B; Pehlevan, Cengiz
Recent experiments have revealed that neural population codes in many brain areas continuously change even when animals have fully learned and stably perform their tasks. This representational 'drift' naturally leads to questions about its causes, dynamics and functions. Here we explore the hypothesis that neural representations optimize a representational objective with a degenerate solution space, and noisy synaptic updates drive the network to explore this (near-)optimal space causing representational drift. We illustrate this idea and explore its consequences in simple, biologically plausible Hebbian/anti-Hebbian network models of representation learning. We find that the drifting receptive fields of individual neurons can be characterized by a coordinated random walk, with effective diffusion constants depending on various parameters such as learning rate, noise amplitude and input statistics. Despite such drift, the representational similarity of population codes is stable over time. Our model recapitulates experimental observations in the hippocampus and posterior parietal cortex and makes testable predictions that can be probed in future experiments.
PMID: 36635497
ISSN: 1546-1726
CID: 5419072
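The paper's core idea, noisy synaptic updates diffusing through a degenerate solution space, can be sketched with a toy example (my construction, not the authors' full model): Oja's rule on rotationally symmetric 2-D input, where every unit vector is an optimum, so injected synaptic noise makes the receptive-field direction random-walk while its norm stays pinned near 1:

```python
import numpy as np

rng = np.random.default_rng(1)
lr, noise, steps = 0.01, 0.02, 5000

w = np.array([1.0, 0.0])  # receptive field (weight vector)
angles, norms = [], []
for t in range(steps):
    x = rng.standard_normal(2)           # isotropic input: degenerate optima
    y = w @ x
    w += lr * y * (x - y * w)            # Oja's rule (keeps ||w|| near 1)
    w += noise * rng.standard_normal(2)  # noisy synaptic update -> drift
    angles.append(np.arctan2(w[1], w[0]))
    norms.append(np.linalg.norm(w))

angles = np.unwrap(np.array(angles))
# The norm is stable while the preferred direction diffuses (coordinated drift).
print(np.mean(norms), np.std(angles))
```

The tangential noise component accumulates as a random walk in angle (an effective diffusion constant set by the noise amplitude), while the radial component is continually corrected by the learning rule, mirroring the paper's picture of drift within a (near-)optimal solution manifold.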

A linear discriminant analysis model of imbalanced associative learning in the mushroom body compartment

Lipshutz, David; Kashalikar, Aneesh; Farashahi, Shiva; Chklovskii, Dmitri B
To adapt to their environments, animals learn associations between sensory stimuli and unconditioned stimuli. In invertebrates, olfactory associative learning primarily occurs in the mushroom body, which is segregated into separate compartments. Within each compartment, Kenyon cells (KCs) encoding sparse odor representations project onto mushroom body output neurons (MBONs) whose outputs guide behavior. Associated with each compartment is a dopamine neuron (DAN) that modulates plasticity of the KC-MBON synapses within the compartment. Interestingly, DAN-induced plasticity of the KC-MBON synapse is imbalanced in the sense that it only weakens the synapse and is temporally sparse. We propose a normative mechanistic model of the MBON as a linear discriminant analysis (LDA) classifier that predicts the presence of an unconditioned stimulus (class identity) given a KC odor representation (feature vector). Starting from a principled LDA objective function and under the assumption of temporally sparse DAN activity, we derive an online algorithm which maps onto the mushroom body compartment. Our model accounts for the imbalanced learning at the KC-MBON synapse and makes testable predictions that provide clear contrasts with existing models.
PMCID: PMC9934445
PMID: 36745688
ISSN: 1553-7358
CID: 5420732
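The "imbalanced" (depression-only) plasticity described in the abstract can be caricatured in a few lines. This is a toy sketch with assumed sizes and rates, not the paper's LDA-derived update: DAN-gated depression of synapses from KCs active during a punished odor selectively silences the MBON's response to that odor:

```python
import numpy as np

n_kc = 50
odor_A = np.zeros(n_kc); odor_A[:10] = 1.0   # punished odor: KCs 0-9 active
odor_B = np.zeros(n_kc); odor_B[8:18] = 1.0  # control odor: KCs 8-17 (overlap 2)

w = np.ones(n_kc)                  # KC->MBON synapses start strong
for trial in range(10):            # odor A is always paired with punishment
    dan = 1.0                      # DAN activity gates plasticity
    w -= 0.2 * dan * odor_A * w    # depression-only update (never potentiates)

resp_A = float(w @ odor_A)  # ~1.07: MBON response to the punished odor collapses
resp_B = float(w @ odor_B)  # ~8.21: response to the control odor barely changes
print(resp_A, resp_B)
```

Because the update only weakens synapses and is gated by temporally sparse DAN activity, learning is specific to the punished odor's KC ensemble; the weights on A-active synapses decay as 0.8^10 while all others are untouched.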

Biologically plausible single-layer networks for nonnegative independent component analysis

Lipshutz, David; Pehlevan, Cengiz; Chklovskii, Dmitri B
An important problem in neuroscience is to understand how brains extract relevant signals from mixtures of unknown sources, i.e., perform blind source separation. To model how the brain performs this task, we seek a biologically plausible single-layer neural network implementation of a blind source separation algorithm. For biological plausibility, we require the network to satisfy the following three basic properties of neuronal circuits: (i) the network operates in the online setting; (ii) synaptic learning rules are local; and (iii) neuronal outputs are nonnegative. The closest prior work is by Pehlevan et al. (Neural Comput 29:2925-2954, 2017), which considers nonnegative independent component analysis (NICA), a special case of blind source separation that assumes the mixture is a linear combination of uncorrelated, nonnegative sources. They derive an algorithm with a biologically plausible 2-layer network implementation. In this work, we improve upon their result by deriving 2 algorithms for NICA, each with a biologically plausible single-layer network implementation. The first algorithm maps onto a network with indirect lateral connections mediated by interneurons. The second algorithm maps onto a network with direct lateral connections and multi-compartmental output neurons.
PMID: 36070103
ISSN: 1432-0770
CID: 5337012
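A compact, non-biological way to see why NICA is solvable at all (the classical whiten-then-rotate view, which the paper's networks implement with neurons; the mixing matrix and sample counts here are illustrative): after whitening a linear mixture of uncorrelated nonnegative sources, an orthogonal transform that makes the outputs nonnegative recovers the sources up to permutation and scale:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
s = rng.random((2, n))        # nonnegative, uncorrelated sources
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])    # unknown mixing matrix
x = A @ s                     # observed mixture

# Whiten: decorrelate the centered data to unit variance.
C = np.cov(x)
evals, evecs = np.linalg.eigh(C)
Wh = evecs @ np.diag(evals ** -0.5) @ evecs.T
z = Wh @ x

# Search orthogonal transforms (rotations and reflections) for nonnegativity.
best, best_y = np.inf, None
for theta in np.linspace(0, 2 * np.pi, 720, endpoint=False):
    c, si = np.cos(theta), np.sin(theta)
    for R in (np.array([[c, -si], [si, c]]),   # det +1 (rotation)
              np.array([[c, si], [si, -c]])):  # det -1 (reflection)
        y = R @ z
        score = np.sum(np.minimum(y, 0.0) ** 2)  # penalize negative outputs
        if score < best:
            best, best_y = score, y

# Recovered outputs match the true sources up to permutation and scale.
corr = np.abs(np.corrcoef(np.vstack([best_y, s]))[:2, 2:])
match = max(corr[0, 0] + corr[1, 1], corr[0, 1] + corr[1, 0]) / 2
print(match)
```

The single-layer networks in the paper effectively perform both stages at once, with the nonnegativity constraint supplied for free by rectifying neurons and the decorrelation by (direct or interneuron-mediated) lateral inhibition.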

Neural Circuits for Dynamics-Based Segmentation of Time Series

Teşileanu, Tiberiu; Golkar, Siavash; Nasiri, Samaneh; Sengupta, Anirvan M; Chklovskii, Dmitri B
The brain must extract behaviorally relevant latent variables from the signals streamed by the sensory organs. Such latent variables are often encoded in the dynamics that generated the signal rather than in the specific realization of the waveform. Therefore, one problem faced by the brain is to segment time series based on underlying dynamics. We present two algorithms for performing this segmentation task that are biologically plausible, which we define as acting in a streaming setting and all learning rules being local. One algorithm is model based and can be derived from an optimization problem involving a mixture of autoregressive processes. This algorithm relies on feedback in the form of a prediction error and can also be used for forecasting future samples. In some brain regions, such as the retina, the feedback connections necessary to use the prediction error for learning are absent. For this case, we propose a second, model-free algorithm that uses a running estimate of the autocorrelation structure of the signal to perform the segmentation. We show that both algorithms do well when tasked with segmenting signals drawn from autoregressive models with piecewise-constant parameters. In particular, the segmentation accuracy is similar to that obtained from oracle-like methods in which the ground-truth parameters of the autoregressive models are known. We also test our methods on data sets generated by alternating snippets of voice recordings. We provide implementations of our algorithms at https://github.com/ttesileanu/bio-time-series.
PMID: 35026035
ISSN: 1530-888X
CID: 5118972
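The model-free variant can be sketched as follows (an illustrative toy in the spirit of the abstract, not the released implementation at the linked repository): a running estimate of the lag-1 autocorrelation, maintained by exponential moving averages, is enough to segment a signal that alternates between two AR(1) regimes:

```python
import numpy as np

rng = np.random.default_rng(4)
coeffs, seg_len, n_segs = [0.9, -0.5], 500, 8

# Piecewise AR(1) signal: the regime alternates every seg_len samples.
x, labels = [0.0], []
for k in range(n_segs):
    a = coeffs[k % 2]
    for t in range(seg_len):
        x.append(a * x[-1] + rng.standard_normal())
        labels.append(k % 2)
x = np.array(x[1:])

# Streaming lag-1 autocorrelation via exponential moving averages.
alpha, num, den = 0.05, 0.0, 1.0
preds = []
for t in range(1, len(x)):
    num = (1 - alpha) * num + alpha * x[t] * x[t - 1]
    den = (1 - alpha) * den + alpha * x[t] * x[t]
    preds.append(0 if num / den > 0.2 else 1)  # threshold between regimes

acc = np.mean(np.array(preds) == np.array(labels[1:]))
print(acc)
```

Errors concentrate in a short window after each regime switch while the moving averages re-adapt, consistent with the paper's observation that segmentation accuracy approaches that of oracle methods away from the boundaries.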

Small brains for big science

Makarova, Anastasia A; Polilov, Alexey A; Chklovskii, Dmitri B
As the study of the human brain is complicated by its sheer scale, complexity, and impracticality of invasive experiments, neuroscience research has long relied on model organisms. The brains of macaque, mouse, zebrafish, fruit fly, nematode, and others have yielded many secrets that advanced our understanding of the human brain. Here, we propose that adding miniature insects to this collection would reduce the costs and accelerate brain research. The smallest insects occupy a special place among miniature animals: despite their body sizes, comparable to unicellular organisms, they retain complex brains that include thousands of neurons. Their brains possess the advantages of those in insects, such as neuronal identifiability and the connectome stereotypy, yet are smaller and hence easier to map and understand. Finally, the brains of miniature insects offer insights into the evolution of brain design.
PMID: 34656052
ISSN: 1873-6882
CID: 5138062

A Biologically Plausible Neural Network for Multichannel Canonical Correlation Analysis

Lipshutz, David; Bahroun, Yanis; Golkar, Siavash; Sengupta, Anirvan M; Chklovskii, Dmitri B
Cortical pyramidal neurons receive inputs from multiple distinct neural populations and integrate these inputs in separate dendritic compartments. We explore the possibility that cortical microcircuits implement canonical correlation analysis (CCA), an unsupervised learning method that projects the inputs onto a common subspace so as to maximize the correlations between the projections. To this end, we seek a multichannel CCA algorithm that can be implemented in a biologically plausible neural network. For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local. Starting from a novel CCA objective function, we derive an online optimization algorithm whose optimization steps can be implemented in a single-layer neural network with multicompartmental neurons and local non-Hebbian learning rules. We also derive an extension of our online CCA algorithm with adaptive output rank and output whitening. Interestingly, the extension maps onto a neural network whose neural architecture and synaptic updates resemble neural circuitry and non-Hebbian plasticity observed in the cortex.
PMID: 34412114
ISSN: 1530-888X
CID: 4998332
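As a reference point for what the online network converges to, the offline CCA objective can be computed directly (a standard linear-algebra sketch, not the paper's algorithm; the toy two-view generative model is my assumption): whiten each view and take the SVD of the cross-covariance; the singular values are the canonical correlations:

```python
import numpy as np

rng = np.random.default_rng(5)
n, dx, dy = 5000, 4, 3

# Two views driven by a shared scalar latent plus private noise.
z = rng.standard_normal(n)
X = np.outer(np.ones(dx), z) + 0.5 * rng.standard_normal((dx, n))
Y = np.outer(np.ones(dy), z) + 0.5 * rng.standard_normal((dy, n))

def inv_sqrt(C):
    evals, evecs = np.linalg.eigh(C)
    return evecs @ np.diag(evals ** -0.5) @ evecs.T

Xc = X - X.mean(axis=1, keepdims=True)
Yc = Y - Y.mean(axis=1, keepdims=True)
Cxx, Cyy = Xc @ Xc.T / n, Yc @ Yc.T / n
Cxy = Xc @ Yc.T / n

# Canonical correlations = singular values of the whitened cross-covariance.
rho = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy), compute_uv=False)
print(rho)
```

With one shared latent, only the top canonical correlation is large; the rest reflect sampling noise. The paper's contribution is an online, single-layer network with local non-Hebbian rules whose fixed point realizes this same projection.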

A Normative and Biologically Plausible Algorithm for Independent Component Analysis

Chapter by: Bahroun, Yanis; Chklovskii, Dmitri B.; Sengupta, Anirvan M.
in: Advances in Neural Information Processing Systems by
[S.l.] : Neural Information Processing Systems Foundation, 2021
pp. 7368-7384
ISBN: 9781713845393
CID: 5314952