Seminar Papers

[REVIEW ARTICLE] Cognitive computational neuroscience

The authors review recent work at the intersection of cognitive science, computational neuroscience, and artificial intelligence. Computational models that mimic brain information processing during perceptual, cognitive, and control tasks are beginning to be developed and tested with brain and behavioral data.

https://www.nature.com/articles/s41593-018-0210-5

N. Kriegeskorte, P.K. Douglas. Cognitive computational neuroscience. Nat. Neurosci., 21 (2018), pp. 1148-1160

[Article] New brain map could improve AI algorithms for machine vision

A research team recently found new evidence revising the traditional view of the organization of the primate brain’s visual system. This remapping of the brain could serve as a future reference for understanding how the highly complex visual system works, and could potentially influence the design of artificial neural networks for machine vision. (summarized from the link below)

https://www.sciencedaily.com/releases/2019/08/190821113151.htm

Bing‐Xing Huo, Natalie Zeater, Meng Kuan Lin, Yeonsook S. Takahashi, Mitsutoshi Hanada, Jaimi Nagashima, Brian C. Lee, Junichi Hata, Afsah Zaheer, Ulrike Grünert, Michael I. Miller, Marcello G. P. Rosa, Hideyuki Okano, Paul R. Martin, Partha P. Mitra. Relation of koniocellular layers of dorsal lateral geniculate to inferior pulvinar nuclei in common marmosets. European Journal of Neuroscience, 2019; DOI: 10.1111/ejn.14529

[Article] Communication between neural networks

The brain is organized into a network of specialized networks of nerve cells. 
For such a brain architecture to function, these specialized networks, each located in a different brain area, need to be able to communicate with each other. But which conditions are required for communication to take place, and which control mechanisms are at work?
Researchers at the Bernstein Center Freiburg and colleagues in Spain and Sweden are proposing a new model that combines three seemingly different explanatory models. Their conclusions have now been published in Nature Reviews Neuroscience.
https://www.sciencedaily.com/releases/2018/12/181217120046.htm

Gerald Hahn, Adrian Ponce-Alvarez, Gustavo Deco, Ad Aertsen, Arvind Kumar. Portraits of communication in neuronal networks. Nature Reviews Neuroscience, 2018; DOI: 10.1038/s41583-018-0094-0

[Article] Brain cells called astrocytes have unexpected role in brain 'plasticity'

Researchers have shown that astrocytes, long-overlooked supportive cells in the brain, help to enable the brain’s plasticity, a role for these cells that was not previously known.

The findings could point to ways to restore connections that have been lost due to aging or trauma.

https://www.sciencedaily.com/releases/2018/10/181018141035.htm

Elena Blanco-Suarez, Tong-Fei Liu, Alex Kopelevich, Nicola J. Allen. Astrocyte-Secreted Chordin-like 1 Drives Synapse Maturation and Limits Plasticity by Increasing Synaptic GluA2 AMPA Receptors. Neuron, 2018; DOI: 10.1016/j.neuron.2018.09.043

[Article] Deep learning for biology

A popular artificial-intelligence method provides a powerful tool for surveying and classifying biological data. But for the uninitiated, the technology poses significant difficulties.

https://www.nature.com/articles/d41586-018-02174-z

[Blog] #CCNeuro asks: 'How can we find out how the brain works?'

The Brain Blog

  • Thoughts and discussion about the brain from Peter Bandettini and Eric Wong

In this blog post, Professor Eric Wong discusses the question “How can we find out how the brain works?”, which was posed at the Cognitive Computational Neuroscience (CCNeuro) conference.

In short, he argues that the most common conceptual approach to understanding the brain is to treat it as modular: modularity is what makes the brain’s complexity tractable.

More details can be found on The Brain Blog (www.thebrainblog.org).

Besides this topic, the blog offers many other neuroscience-related thoughts and discussions from Peter Bandettini and Eric Wong.

[Article] 'Consciousness' in the machine learning (deep learning) perspective

**The Consciousness Prior**
Yoshua Bengio
Université de Montréal, MILA
September 26, 2017
**Abstract**
A new prior is proposed for representation learning, which can be combined with other priors in order to help disentangling abstract factors from each other. It is inspired by the phenomenon of consciousness seen as the formation of a low-dimensional combination of a few concepts constituting a conscious thought, i.e., consciousness as awareness at a particular time instant. This provides a powerful constraint on the representation in that such low-dimensional thought vectors can correspond to statements about reality which are either true, highly probable, or very useful for taking decisions. The fact that a few elements of the current state can be combined into such a predictive or useful statement is a strong constraint and deviates considerably from the maximum likelihood approaches to modeling data and how states unfold in the future based on an agent's actions. Instead of making predictions in the sensory (e.g. pixel) space, the consciousness prior allows the agent to make predictions in the abstract space, with only a few dimensions of that space being involved in each of these predictions. The consciousness prior also makes it natural to map conscious states to natural language utterances or to express classical AI knowledge in the form of facts and rules, although the conscious states may be richer than what can be expressed easily in the form of a sentence, a fact or a rule.

https://arxiv.org/pdf/1709.08568.pdf
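The central idea, selecting just a few dimensions of a high-dimensional representation to form a sparse “conscious thought”, can be illustrated with a toy sketch. In the paper this selection is a trainable attention mechanism; the hard top-k selection and all names below are illustrative simplifications, not Bengio’s actual formulation:

```python
import numpy as np

def conscious_state(h, k=3):
    """Toy version of the consciousness prior's attention step: keep only
    the k most strongly activated elements of a high-dimensional
    representation h, yielding a sparse low-dimensional 'thought'."""
    idx = np.argsort(np.abs(h))[-k:]   # indices of the k largest-magnitude elements
    c = np.zeros_like(h)
    c[idx] = h[idx]                    # all other dimensions are zeroed out
    return c

rng = np.random.default_rng(0)
h = rng.normal(size=16)                # hypothetical hidden state
c = conscious_state(h, k=3)
print(np.count_nonzero(c))             # prints 3
```

The point of the sketch is the constraint itself: predictions are made from the few surviving dimensions rather than from the full sensory-level state.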

[NEWS] Robotic system monitors specific neurons

(from MIT news)
Recording electrical signals from inside a neuron in the living brain can reveal a great deal of information about that neuron’s function and how it coordinates with other cells in the brain. However, performing this kind of recording is extremely difficult, so only a handful of neuroscience labs around the world do it. To make this technique more widely available, MIT engineers have now devised a way to automate the process, using a computer algorithm that analyzes microscope images and guides a robotic arm to the target cell. This technology could allow more scientists to study single neurons and learn how they interact with other cells to enable cognition, sensory perception, and other brain functions. Researchers could also use it to learn more about how neural circuits are affected by brain disorders.

If you are interested, please follow the link below.

http://news.mit.edu/2017/robotic-system-monitors-specific-neurons-0830

[News] What the brain's wiring looks like

(from BBC news)

The world’s most detailed scan of the brain’s internal wiring has been produced by scientists at Cardiff University.
The MRI machine reveals the fibres which carry all the brain’s thought processes. The scanning work was done in Cardiff, Nottingham, Cambridge and Stockport, as well as London, England, and London, Ontario. Doctors hope it will help increase understanding of a range of neurological disorders and could be used instead of invasive biopsies. If you are interested, please follow the link below.

http://www.bbc.com/news/health-40488545

[News] Peering into neural networks

(Image: Christine Daniloff/MIT)

“New technique helps elucidate the inner workings of neural networks trained on visual data”

Neural networks perform computational tasks by learning from large sets of training data. However, it is difficult to see what they do to the data between input and output.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) presented a method for probing what networks trained to identify visual scenes learn during training. Bau, one of the study’s researchers, noted that the results suggest that “neural networks are actually trying to approximate getting a grandmother neuron”.

If you are interested, please follow the link below.

http://news.mit.edu/Peering_into_neural_networks
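The quantitative core of this kind of interpretability work can be sketched as follows: threshold one hidden unit’s activation map and measure its intersection-over-union (IoU) with a labelled concept mask, which scores how well the unit acts as a detector for that concept. The IoU scoring matches the MIT network-dissection approach in spirit, but the arrays, the “lamp” concept, and the threshold here are all illustrative:

```python
import numpy as np

def concept_iou(act_map, concept_mask, threshold=0.5):
    """Score how well one hidden unit matches a visual concept:
    binarize the unit's activation map at a threshold, then compute
    the intersection-over-union with the concept's segmentation mask."""
    unit_mask = act_map > threshold
    inter = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return inter / union if union else 0.0

# Hypothetical 4x4 activation map that lights up in the top-left corner
act = np.zeros((4, 4))
act[:2, :2] = 1.0
lamp = np.zeros((4, 4), dtype=bool)   # hypothetical 'lamp' concept mask
lamp[:2, :4] = True                   # concept covers the whole top half
print(concept_iou(act, lamp))         # 4 overlapping / 8 total = 0.5
```

A unit whose high-activation region consistently overlaps one concept’s masks across many images can then be labelled as a detector for that concept, which is the sense in which a unit approximates a “grandmother neuron”.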