Searched for: person:chklod01; in-biosketch:yes
Total Results: 58


Neural optimal feedback control with local learning rules

Chapter by: Friedrich, Johannes; Golkar, Siavash; Farashahi, Shiva; Genkin, Alexander; Sengupta, Anirvan M.; Chklovskii, Dmitri B.
in: Advances in Neural Information Processing Systems. [S.l.]: Neural Information Processing Systems Foundation, 2021
pp. 16358-16370
ISBN: 9781713845393
CID: 5314862

Neurons as Canonical Correlation Analyzers

Pehlevan, Cengiz; Zhao, Xinyuan; Sengupta, Anirvan M; Chklovskii, Dmitri
Normative models of neural computation offer simplified yet lucid mathematical descriptions of murky biological phenomena. Previously, online Principal Component Analysis (PCA) was used to model a network of single-compartment neurons accounting for weighted summation of upstream neural activity in the soma and Hebbian/anti-Hebbian synaptic learning rules. However, synaptic plasticity in biological neurons often depends on the integration of synaptic currents over a dendritic compartment rather than total current in the soma. Motivated by this observation, we model a pyramidal neuronal network using online Canonical Correlation Analysis (CCA). Given two related datasets represented by distal and proximal dendritic inputs, CCA projects them onto the subspace which maximizes the correlation between their projections. First, adopting a normative approach and starting from a single-channel CCA objective function, we derive an online gradient-based optimization algorithm whose steps can be interpreted as the operation of a pyramidal neuron. To model networks of pyramidal neurons, we introduce a novel multi-channel CCA objective function, and derive from it an online gradient-based optimization algorithm whose steps can be interpreted as the operation of a pyramidal neuron network including its architecture, dynamics, and synaptic learning rules. Next, we model a neuron with more than two dendritic compartments by deriving its operation from a known objective function for multi-view CCA. Finally, we confirm the functionality of our networks via numerical simulations. Overall, our work presents a simplified but informative abstraction of learning in a pyramidal neuron network, and demonstrates how such networks can integrate multiple sources of inputs.
PMCID: 7338892
PMID: 32694989
ISSN: 1662-5188
CID: 4546432
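As a minimal offline illustration of what CCA computes (not the paper's online, biologically plausible algorithm), the sketch below recovers the correlated direction shared by two views by whitening each view and taking an SVD of the cross-covariance; the synthetic data and all variable names are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "views" sharing a latent signal, loosely analogous to the
# distal and proximal dendritic inputs described above
T = 2000
z = rng.standard_normal(T)                     # shared latent source
X = np.vstack([z, rng.standard_normal(T)]) + 0.1 * rng.standard_normal((2, T))
Y = np.vstack([-z, rng.standard_normal(T)]) + 0.1 * rng.standard_normal((2, T))

def cca_top_correlation(X, Y):
    """Classical (offline) CCA: whiten each view, then SVD the cross-covariance."""
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    Cxx = X @ X.T / X.shape[1]
    Cyy = Y @ Y.T / Y.shape[1]
    Cxy = X @ Y.T / X.shape[1]
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))  # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    # singular values of the whitened cross-covariance are the canonical correlations
    s = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
    return s[0]

print(round(cca_top_correlation(X, Y), 2))  # close to 1: the shared latent is found
```

The online algorithms in the paper reach the same projections without ever forming these covariance matrices explicitly.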

CaImAn an open source tool for scalable calcium imaging data analysis

Giovannucci, Andrea; Friedrich, Johannes; Gunn, Pat; Kalfon, Jérémie; Brown, Brandon L; Koay, Sue Ann; Taxidis, Jiannis; Najafi, Farzaneh; Gauthier, Jeffrey L; Zhou, Pengcheng; Khakh, Baljit S; Tank, David W; Chklovskii, Dmitri B; Pnevmatikakis, Eftychios A
Advances in fluorescence microscopy enable monitoring larger brain areas in-vivo with finer time resolution. The resulting data rates require reproducible analysis pipelines that are reliable, fully automated, and scalable to datasets generated over the course of months. We present CaImAn, an open-source library for calcium imaging data analysis. CaImAn provides automatic and scalable methods to address problems common to pre-processing, including motion correction, neural activity identification, and registration across different sessions of data collection. It does this while requiring minimal user intervention, with good scalability on computers ranging from laptops to high-performance computing clusters. CaImAn is suitable for two-photon and one-photon imaging, and also enables real-time analysis on streaming data. To benchmark the performance of CaImAn we collected and combined a corpus of manual annotations from multiple labelers on nine mouse two-photon datasets. We demonstrate that CaImAn achieves near-human performance in detecting locations of active neurons.
PMCID: 6342523
PMID: 30652683
ISSN: 2050-084X
CID: 3682462
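The benchmarking described above scores algorithmic detections against consensus human annotations. The toy function below illustrates the general idea of precision/recall/F1 scoring via greedy centroid matching; it is a simplified stand-in, not CaImAn's actual registration code, and the names, coordinates, and distance threshold are invented for illustration.

```python
import numpy as np

def detection_f1(detected, ground_truth, max_dist=5.0):
    """Greedily match detected centroids to ground truth within max_dist,
    then report (precision, recall, F1)."""
    unmatched = list(ground_truth)
    tp = 0
    for p in detected:
        if not unmatched:
            break
        d = [np.hypot(p[0] - q[0], p[1] - q[1]) for q in unmatched]
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            tp += 1
            unmatched.pop(j)          # each ground-truth neuron matches once
    precision = tp / max(len(detected), 1)
    recall = tp / max(len(ground_truth), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1

truth = [(10, 10), (30, 12), (50, 40)]
found = [(11, 9), (31, 13), (80, 80)]   # two hits, one false positive, one miss
print(detection_f1(found, truth))
```

"Near-human performance" in this setting means the algorithm's F1 against the consensus is comparable to individual annotators' F1 against the same consensus.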

A similarity-preserving neural network trained on transformed images recapitulates salient features of the fly motion detection circuit [Meeting Abstract]

Bahroun, Yanis; Sengupta, Anirvan M.; Chklovskii, Dmitri B.
Learning to detect content-independent transformations from data is one of the central problems in biological and artificial intelligence. An example of such a problem is unsupervised learning of a visual motion detector from pairs of consecutive video frames. Rao and Ruderman formulated this problem in terms of learning infinitesimal transformation operators (Lie group generators) via minimizing image reconstruction error. Unfortunately, it is difficult to map their model onto a biologically plausible neural network (NN) with local learning rules. Here we propose a biologically plausible model of motion detection. We also adopt the transformation-operator approach but, instead of reconstruction-error minimization, start with a similarity-preserving objective function. An online algorithm that optimizes such an objective function naturally maps onto an NN with biologically plausible learning rules. The trained NN recapitulates major features of the well-studied motion detector in the fly. In particular, it is consistent with the experimental observation that local motion detectors combine information from at least three adjacent pixels, something that contradicts the celebrated Hassenstein-Reichardt model.
SCOPUS:85090173898
ISSN: 1049-5258
CID: 4668942
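For readers unfamiliar with the two-pixel Hassenstein-Reichardt model that the abstract argues against, a minimal HR correlator can be sketched as follows (a textbook construction, not the paper's similarity-preserving network; the sinusoidal stimulus and the delay are arbitrary illustrations):

```python
import numpy as np

def hr_detector(stimulus, delay=1):
    """Classic two-pixel Hassenstein-Reichardt correlator.

    Correlates the delayed signal at one pixel with the undelayed signal at
    its neighbour, subtracts the mirror-symmetric term, and time-averages;
    the sign of the output reports motion direction.
    """
    left, right = stimulus[:, 0], stimulus[:, 1]
    d_left = np.roll(left, delay)    # delayed copies (circular shift for simplicity)
    d_right = np.roll(right, delay)
    return np.mean(d_left * right - d_right * left)

# A sinusoidal pattern drifting rightward across two adjacent pixels
T = 200
t = np.arange(T)
rightward = np.stack([np.sin(2 * np.pi * t / 20),
                      np.sin(2 * np.pi * (t - 2) / 20)], axis=1)
leftward = rightward[:, ::-1]        # same pattern drifting leftward

print(hr_detector(rightward) > 0, hr_detector(leftward) < 0)  # True True
```

The circuit the paper's trained network recapitulates pools over at least three pixels, which this two-input construction cannot capture.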

Neuroscience-Inspired Online Unsupervised Learning Algorithms: Artificial neural networks

Pehlevan, Cengiz; Chklovskii, Dmitri B.
ISI:000494430900009
ISSN: 1053-5888
CID: 4193512

Clustering is semidefinitely not that hard: Nonnegative SDP for manifold disentangling

Tepper, Mariano; Sengupta, Anirvan M.; Chklovskii, Dmitri
In solving hard computational problems, semidefinite program (SDP) relaxations often play an important role because they come with a guarantee of optimality. Here, we focus on a popular semidefinite relaxation of K-means clustering which yields the same solution as the non-convex original formulation for well segregated datasets. We report an unexpected finding: when data contains (greater than zero-dimensional) manifolds, the SDP solution captures such geometrical structures. Unlike traditional manifold embedding techniques, our approach does not rely on manually defining a kernel but rather enforces locality via a nonnegativity constraint. We thus call our approach NOnnegative MAnifold Disentangling, or NOMAD. To build an intuitive understanding of its manifold learning capabilities, we develop a theoretical analysis of NOMAD on idealized datasets. While NOMAD is convex and the globally optimal solution can be found by generic SDP solvers with polynomial time complexity, such solvers are too slow for modern datasets. To address this problem, we analyze a non-convex heuristic and present a new, convex and yet efficient, algorithm, based on the conditional gradient method. Our results render NOMAD a versatile, understandable, and powerful tool for manifold learning.
ISI:000454480700001
ISSN: 1532-4435
CID: 3575242

Manifold-tiling Localized Receptive Fields are Optimal in Similarity-preserving Neural Networks

Chapter by: Sengupta, Anirvan M.; Tepper, Mariano; Pehlevan, Cengiz; Genkin, Alexander; Chklovskii, Dmitri B.
pp. 7080-7090
CID: 3857842

Why Do Similarity Matching Objectives Lead to Hebbian/Anti-Hebbian Networks?

Pehlevan, Cengiz; Sengupta, Anirvan M; Chklovskii, Dmitri B
Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience. Yet derivations of single-layer networks with such local learning rules from principled optimization objectives became possible only recently, with the introduction of similarity matching objectives. What explains the success of similarity matching objectives in deriving neural networks with local learning rules? Here, using dimensionality reduction as an example, we introduce several variable substitutions that illuminate the success of similarity matching. We show that the full network objective may be optimized separately for each synapse using local learning rules in both the offline and online settings. We formalize the long-standing intuition of the rivalry between Hebbian and anti-Hebbian rules by formulating a min-max optimization problem. We introduce a novel dimensionality reduction objective using fractional matrix exponents. To illustrate the generality of our approach, we apply it to a novel formulation of dimensionality reduction combined with whitening. We confirm numerically that the networks with learning rules derived from principled objectives perform better than those with heuristic learning rules.
PMID: 28957017
ISSN: 1530-888X
CID: 2717542
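As a concrete, much-simplified illustration of the kind of network these objectives yield, the sketch below runs Hebbian feedforward and anti-Hebbian lateral updates (with Oja-style decay terms) for online principal subspace learning. The learning rate, initialization, and synthetic data are invented for the demo, and the paper's exact update rules differ in their normalization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data with a dominant 2-D principal subspace in 10 dimensions
n, k, T = 10, 2, 3000
U = np.linalg.qr(rng.standard_normal((n, k)))[0]          # orthonormal basis
X = U @ (np.array([[3.0], [2.0]]) * rng.standard_normal((k, T))) \
    + 0.05 * rng.standard_normal((n, T))

W = 0.1 * rng.standard_normal((k, n))   # feedforward (Hebbian) weights
M = np.zeros((k, k))                    # lateral (anti-Hebbian) weights
eta = 0.01
for t in range(T):
    x = X[:, t]
    # neural dynamics settle to the fixed point of y = W x - M y
    y = np.linalg.solve(np.eye(k) + M, W @ x)
    W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)   # Hebbian + decay
    M += eta * (np.outer(y, y) - (y ** 2)[:, None] * M)   # anti-Hebbian + decay
    np.fill_diagonal(M, 0.0)                              # no self-inhibition

# Rows of W should come to span the principal subspace spanned by U
alignment = np.linalg.norm(W @ (U @ U.T)) / np.linalg.norm(W)
print(round(alignment, 2))   # near 1 when the subspace has been learned
```

Note how the rivalry the abstract formalizes shows up directly: the Hebbian term aligns outputs with inputs while the anti-Hebbian lateral term decorrelates the outputs from one another.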

Blind Nonnegative Source Separation Using Biological Neural Networks

Pehlevan, Cengiz; Mohan, Sreyas; Chklovskii, Dmitri B
Blind source separation-the extraction of independent sources from a mixture-is an important problem for both artificial and natural signal processing. Here, we address a special case of this problem when sources (but not the mixing matrix) are known to be nonnegative-for example, due to the physical nature of the sources. We search for the solution to this problem that can be implemented using biologically plausible neural networks. Specifically, we consider the online setting where the data set is streamed to a neural network. The novelty of our approach is that we formulate blind nonnegative source separation as a similarity matching problem and derive neural networks from the similarity matching objective. Importantly, synaptic weights in our networks are updated according to biologically plausible local learning rules.
PMID: 28777718
ISSN: 1530-888X
CID: 2742722
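The similarity-matching formulation mentioned above can be illustrated offline with a toy projected-gradient solver: minimize the mismatch between input and output similarity matrices subject to nonnegative outputs. The paper instead derives an online neural network; the data, step size, and iteration count here are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)

# Nonnegative, sparse sources mixed by an (unknown) rotation
k, T = 2, 400
S = rng.exponential(1.0, (k, T)) * (rng.random((k, T)) < 0.3)
theta = np.pi / 6
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = A @ S

# Offline nonnegative similarity matching: min_{Y >= 0} ||X^T X - Y^T Y||_F^2
G = X.T @ X
Y = np.abs(0.1 * rng.standard_normal((k, T)))

def objective(Y):
    return np.linalg.norm(G - Y.T @ Y) ** 2

eta = 1e-4                       # illustrative step size
before = objective(Y)
for _ in range(500):
    grad = -4 * Y @ (G - Y.T @ Y)          # gradient of the quartic objective
    Y = np.maximum(0.0, Y - eta * grad)    # project onto the nonnegative orthant
after = objective(Y)
print(after < before)  # True: the similarity mismatch shrinks
```

Under the conditions discussed in the paper, the minimizer recovers the sources up to permutation; the derived neural network performs the equivalent optimization online, one streamed sample at a time.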

The comprehensive connectome of a neural substrate for 'ON' motion detection in Drosophila

Takemura, Shin-Ya; Nern, Aljoscha; Chklovskii, Dmitri B; Scheffer, Louis K; Rubin, Gerald M; Meinertzhagen, Ian A
Analyses of computations in neural circuits often rely on simplified models because the actual neuronal implementation is not known. For example, a problem in vision, how the eye detects image motion, has long been analysed using the Hassenstein-Reichardt (HR) detector or Barlow-Levick (BL) models. These both simulate motion detection well, but the exact neuronal circuits undertaking these tasks remain elusive. We reconstructed a comprehensive connectome of the circuits of Drosophila's motion-sensing T4 cells using a novel EM technique. We uncover complex T4 inputs and reveal that putative excitatory inputs cluster at T4's dendrite shafts, while inhibitory inputs localize to the bases. Consistent with our previous study, we reveal that Mi1 and Tm3 cells provide most synaptic contacts onto T4. We are, however, unable to reproduce the spatial offset between these cells reported previously. Our comprehensive connectome reveals complex circuits that include candidate anatomical substrates for both HR and BL types of motion detectors.
PMCID: 5435463
PMID: 28432786
ISSN: 2050-084X
CID: 2562092