# Abhishek Tiwari:NEUROINFORMATICS

### From OpenWetWare

Current revision (04:22, 8 June 2008)

## Current revision

# NEUROINFORMATICS

*PLoS Computational Biology* Volume 2 | Issue 10 | OCTOBER 2006

**Mapping Information Flow in Sensorimotor Networks**

**Synopsis**

Biological organisms continuously select and sample information used by their neural structures for perception and action, and for creating coherent cognitive states that guide their autonomous behavior. Information processing, however, is not solely an internal function of the nervous system. Here the authors show, instead, how sensorimotor interaction and body morphology can induce statistical regularities and information structure in sensory inputs and within the neural control architecture, and how the flow of information between sensors, neural units, and effectors is actively shaped by the interaction with the environment. The authors analyze sensory and motor data collected from real and simulated robots and reveal the presence of information structure and directed information flow induced by dynamically coupled sensorimotor activity, including effects of motor outputs on sensory inputs. They find that information structure and information flow in sensorimotor networks (a) are spatially and temporally specific, (b) can be affected by learning, and (c) can be affected by changes in body morphology. The results suggest a fundamental link between physical embeddedness and information, highlighting the effects of embodied interactions on internal (neural) information processing and illuminating the role of various system components in the generation of behavior.
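One simple way to probe directed information flow of the kind described above is to measure time-lagged mutual information between a motor signal and a sensor signal: a peak at a positive lag suggests that motor output shapes later sensory input. The sketch below is a toy illustration of that idea, not the authors' actual analysis pipeline; the signals, bin counts, and lag range are invented for the example.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate mutual information (in bits) between two 1-D signals
    using a plug-in histogram estimator."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def lagged_mi(source, target, max_lag=10, bins=8):
    """MI between source(t) and target(t + lag) for each positive lag.
    A clear peak at some lag > 0 is evidence of directed coupling
    from source to target at that delay."""
    return {lag: mutual_information(source[:-lag], target[lag:], bins)
            for lag in range(1, max_lag + 1)}

# Toy sensorimotor loop: the sensor echoes the motor command 3 steps
# later (e.g., an action changing what the robot subsequently senses).
rng = np.random.default_rng(0)
motor = rng.normal(size=2000)
sensor = np.roll(motor, 3) + 0.1 * rng.normal(size=2000)

mis = lagged_mi(motor, sensor, max_lag=6)
best = max(mis, key=mis.get)   # lag with the strongest coupling
```

In this toy loop, the lag with maximal mutual information recovers the 3-step motor-to-sensor delay; on real robot data one would additionally compare against surrogate (shuffled) signals to assess significance.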

*PLoS Computational Biology* Volume 2 | Issue 10 | OCTOBER 2006

**Computational Inference of Neural Information Flow Networks**

**Synopsis**

Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, the authors demonstrate that a dynamic Bayesian network (DBN) inference algorithm, developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays, is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks recovered from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with the timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. This study represents the first biologically validated demonstration of an algorithm that successfully infers neural information flow networks.
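The core of DBN structure learning is scoring how well one channel's past predicts another channel's present. The following is a heavily simplified sketch of that step: a first-order DBN over discretized recordings, scoring each candidate lag-1 parent with a BIC-penalized log-likelihood and keeping the single best parent per channel. The real algorithm handles multiple parents, richer discretizations, and search heuristics; the channel construction below is synthetic and invented for the example.

```python
import numpy as np

def bic_score(parent_past, child_now, states=2):
    """BIC score of a lag-1 DBN edge: log-likelihood of
    P(child_t | parent_{t-1}) minus a complexity penalty."""
    n = len(child_now)
    counts = np.zeros((states, states))
    for p, c in zip(parent_past, child_now):
        counts[p, c] += 1
    ll = 0.0
    for p in range(states):
        tot = counts[p].sum()
        if tot == 0:
            continue
        for c in range(states):
            if counts[p, c] > 0:
                ll += counts[p, c] * np.log(counts[p, c] / tot)
    k = states * (states - 1)           # free parameters per edge
    return ll - 0.5 * k * np.log(n)

def infer_parents(data, states=2):
    """For each channel, pick the best single lag-1 parent by BIC.
    data: (T, C) array of discretized recordings."""
    T, C = data.shape
    return {child: max((p for p in range(C) if p != child),
                       key=lambda p: bic_score(data[:-1, p],
                                               data[1:, child], states))
            for child in range(C)}

# Synthetic recordings: channel 1 follows channel 0 one step later
# (with 5% corruption); channel 2 is independent noise.
rng = np.random.default_rng(7)
T = 3000
ch0 = rng.integers(0, 2, size=T)
ch1 = np.empty(T, dtype=int)
ch1[0] = 0
ch1[1:] = ch0[:-1]
ch1 = np.where(rng.random(T) < 0.05, 1 - ch1, ch1)
ch2 = rng.integers(0, 2, size=T)

parents = infer_parents(np.column_stack([ch0, ch1, ch2]))
```

On this toy data the scorer assigns channel 0 as the parent of channel 1, recovering the planted "information flow" edge; anatomical validation, as in the paper, is what distinguishes such inferred edges from statistical artifacts.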

*PLoS Computational Biology* Volume 2 | Issue 10 | OCTOBER 2006

**Computational aspects of feedback in neural circuits**

**Synopsis**

It had previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. In this article, the authors investigate the computational capability of such circuits in the more realistic case where not only readout neurons but also a few neurons within the circuit have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. The authors show that this new model overcomes the limitation of a rapidly fading memory. In fact, they prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. Even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a non-fading memory. The authors demonstrate these computational implications of feedback both theoretically and through computer simulations of detailed cortical microcircuit models. They show that the application of simple learning procedures (such as linear regression or perceptron learning) enables such circuits, in spite of their complex inherent dynamics, to represent time over behaviorally relevant time spans, to integrate evidence from incoming spike trains over longer periods, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular, the authors show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints. In short, feedback significantly increases the computational power of neural circuits.
Although this article examines primarily the computational role of feedback in circuits of neurons, the mathematical principles on which its analysis is based apply to a large variety of dynamical systems. Hence they may also throw new light on the computational role of feedback in other complex biological dynamical systems, such as genetic regulatory networks.
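The trained-feedback idea can be sketched with a rate-based reservoir (an echo state network stand-in for the detailed microcircuit models): a random recurrent network is driven by sparse input pulses, a linear-regression readout is trained with teacher forcing to hold the sign of the last pulse, and that readout's output is fed back into the circuit, which is exactly the non-fading "flip-flop" memory a purely fading-memory circuit cannot implement. All sizes, weights, and the task itself are invented for this illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 4000

# Random recurrent reservoir scaled to a modest spectral radius
# (fading memory on its own, per the circuit models discussed above).
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=N)   # input weights
W_fb = rng.normal(size=N)   # feedback weights from the readout

# Flip-flop task: sparse +/-1 pulses; the target holds the last pulse.
u = np.zeros(T)
pulses = rng.random(T) < 0.02
u[pulses] = rng.choice([-1.0, 1.0], size=int(pulses.sum()))
y = np.empty(T)
cur = 1.0
for t in range(T):
    if u[t] != 0:
        cur = u[t]
    y[t] = cur

# Teacher forcing: drive the reservoir with the *target* as feedback
# while collecting states for regression.
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    fb = y[t - 1] if t > 0 else 0.0
    x = np.tanh(W @ x + W_in * u[t] + W_fb * fb)
    X[t] = x

# Linear-regression readout (ridge-regularized for stability).
w_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N), X.T @ y)
acc = float(np.mean(np.sign(X @ w_out) == np.sign(y)))

# Closed loop: feed the readout's own output back in place of the
# teacher. In practice this loop can then hold the bit indefinitely;
# robustness of such loops under noise is what the paper analyzes.
x, yhat = np.zeros(N), 0.0
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t] + W_fb * yhat)
    yhat = float(x @ w_out)
```

The teacher-forced readout classifies the held bit almost perfectly; the point of the paper is the much stronger claim that such feedback loops, even with noise, support a large class of computations requiring non-fading memory.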

*PLoS Computational Biology* Volume 2 | Issue 8 | AUGUST 2006

**The Ion Channel Inverse Problem: Neuroinformatics Meets Biophysics**

**Synopsis**

Ion channels are the building blocks of the information processing capability of neurons: any realistic computational model of a neuron must include reliable and effective ion channel components. With the growing availability of computational resources, numerical inverse approaches are increasingly used across a range of disciplines. In this review, Cannon et al. suggest the same type of inverse methodology for the study of ion channels. The inverse-problem approach addresses the question "what system gave rise to these observations?", usually by starting with a parameterized model that is expected to correspond to the real system at some point in its parameter space. A computational model of the recording process is built so that it can take any set of parameters and generate the data those parameters would have given rise to, in exactly the same format as the experimental data. The model can then be compared to the real system in the space of the real data, where the most information is present. The forward calculation is then repeated over and over for different parameter sets, guided by an optimization process, to find the model or models that best represent the data.
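A minimal sketch of this forward-then-optimize loop: take a two-parameter forward model of a channel's steady-state activation (a Boltzmann curve), generate synthetic "recorded" data from hidden true parameters, then repeat the forward calculation over a parameter grid and keep the best-matching model. The parameter names, values, and grid search here are assumptions chosen for the illustration; real fits use richer kinetic models and smarter optimizers.

```python
import numpy as np

def forward(v, v_half, k):
    """Forward model: steady-state activation of a voltage-gated
    channel (Boltzmann curve) at test potentials v (mV)."""
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

# Synthetic "experiment": true parameters hidden from the fitter,
# plus recording noise, in the same format the model will produce.
rng = np.random.default_rng(42)
v = np.linspace(-80, 40, 25)                       # voltage protocol
data = forward(v, v_half=-20.0, k=8.0) + 0.02 * rng.normal(size=v.size)

# Inverse step: repeat the forward calculation over a parameter grid,
# comparing model and data in data space, and keep the best model.
best, best_err = None, np.inf
for v_half in np.arange(-40.0, 1.0, 1.0):
    for k in np.arange(2.0, 15.0, 0.5):
        err = float(np.sum((forward(v, v_half, k) - data) ** 2))
        if err < best_err:
            best, best_err = (v_half, k), err
```

The recovered `best` lands near the hidden (-20, 8); in practice the grid search would be replaced by a guided optimizer, and one would also ask whether several distinct parameter sets fit the data equally well.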

Perhaps the ion channel inverse problem can be the first instance of this philosophy spreading across the boundary into neuroinformatics.