Seminar Papers

[Article] 20 years of the default mode network: A review and synthesis. Neuron.

Summary: The author thoroughly reviews the organization of the default mode network (DMN) and its cognitive roles (i.e., self-reference, social cognition, memory, mind wandering). He then proposes a new perspective on DMN function in human cognition, in which the DMN integrates and “broadcasts” diverse representations to create a coherent “internal narrative”.

Menon, V. (2023). 20 years of the default mode network: A review and synthesis. Neuron.

[Article] Shared functional specialization in transformer-based language models and the human brain.

Summary: Transformers have recently been compared to the brain, and such comparisons usually rely on their internal representations (“embeddings”). The authors instead focused on the “transformations” that integrate contextual information across words through attention, and found that these are more layer-specific than the embeddings. The work differs from existing research in focusing on attention-related transformations rather than embeddings, which has been one of our recent interests (a minimal feature-extraction sketch follows the citation).

Kumar, S., Sumers, T. R., Yamakoshi, T., Goldstein, A., Hasson, U., Norman, K. A., … & Nastase, S. A. (2024). Shared functional specialization in transformer-based language models and the human brain. Nature Communications, 15(1), 5523.
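
A minimal sketch of the distinction discussed above, not the authors' pipeline: it pulls layer-wise “embeddings” (hidden states) and the output of each attention block (a coarse, block-level stand-in for the per-head “transformations” analyzed in the paper) from GPT-2 via forward hooks. Module names follow HuggingFace's GPT-2 implementation and may differ across library versions; the regression against fMRI responses used in the paper is omitted here.

```python
# Sketch: contrast layer-wise "embeddings" (hidden states) with the
# attention block outputs ("transformations", approximated at the block level).
import torch
from transformers import GPT2Model, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

attn_outputs = {}  # layer index -> attention-block output

def make_hook(layer_idx):
    def hook(module, inputs, output):
        # output[0] is the attention block's contribution to the residual stream
        attn_outputs[layer_idx] = output[0].detach()
    return hook

for i, block in enumerate(model.h):
    block.attn.register_forward_hook(make_hook(i))

ids = tok("The default mode network integrates context", return_tensors="pt")
with torch.no_grad():
    out = model(**ids, output_hidden_states=True)

embeddings = out.hidden_states          # tuple of (n_layers + 1) x [1, seq, 768]
print(embeddings[6].shape, attn_outputs[5].shape)
```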

[Article] BrainLM: A foundation model for brain activity recordings

Summary: This paper proposes BrainLM, a foundation model of brain activity dynamics trained on 6,700 hours of fMRI recordings (a generic pretraining sketch follows the citation).

Ortega Caro, Josue, et al. “BrainLM: A foundation model for brain activity recordings.” bioRxiv (2023): 2023-09.
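
A minimal, illustrative sketch of the masked-prediction recipe that foundation models of this kind are typically built on, not the BrainLM architecture itself: random time points of a parcellated fMRI time series are masked, encoded with a small Transformer, and reconstructed. All shapes and hyperparameters below are placeholder assumptions.

```python
# Sketch: masked pretraining on parcellated fMRI time series (illustrative only).
import torch
import torch.nn as nn

class MaskedfMRIModel(nn.Module):
    def __init__(self, n_parcels=424, d_model=128, n_layers=4, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(n_parcels, d_model)      # one token per TR
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.decode = nn.Linear(d_model, n_parcels)

    def forward(self, x):                               # x: [batch, time, parcels]
        tokens = self.embed(x)
        mask = torch.rand(x.shape[:2], device=x.device) < self.mask_ratio
        tokens = torch.where(mask.unsqueeze(-1),        # replace masked TRs
                             self.mask_token.expand_as(tokens), tokens)
        recon = self.decode(self.encoder(tokens))
        return ((recon - x) ** 2)[mask].mean()          # reconstruct masked TRs only

model = MaskedfMRIModel()
loss = model(torch.randn(8, 200, 424))                 # 8 runs, 200 TRs, 424 parcels
loss.backward()
```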

[Article] A shared neural basis underlying psychiatric comorbidity.

Summary: Using a large longitudinal neuroimaging cohort spanning adolescence to young adulthood (IMAGEN), the authors used multitask connectomes to identify a neuropsychopathological (NP) factor. They then tested the generalizability of the NP factor in the ABCD and other datasets.

Xie, C., Xiang, S., Shen, C., Peng, X., Kang, J., Li, Y., … & ZIB Consortium. (2023). A shared neural basis underlying psychiatric comorbidity. Nature medicine, 29(5), 1232-1242.

[Article] Auditory beat stimulation and its effects on cognition and mood states.

Summary: This paper presents a comprehensive review of auditory beat stimulation, with a particular focus on the applications and features of binaural beats. Despite the extensive research conducted on binaural beats, there is still no consensus on their effects or the underlying neural mechanisms.

Chaieb, Leila, et al. “Auditory beat stimulation and its effects on cognition and mood states.” Frontiers in psychiatry 6 (2015): 136819.

[Article] Endogenous theta stimulation during meditation predicts reduced opioid dosing following treatment with Mindfulness-Oriented Recovery Enhancement.

Summary: This study investigates the effectiveness of the Mindfulness-Oriented Recovery Enhancement (MORE) program for opioid users. Participants showed increased theta and alpha power during mindfulness meditation, along with improved frontal midline theta coherence. These neurophysiological changes were associated with reductions in opioid use and related to the self-transcendence induced by mindfulness.

Hudak, J., Hanley, A.W., Marchand, W.R. et al. Endogenous theta stimulation during meditation predicts reduced opioid dosing following treatment with Mindfulness-Oriented Recovery Enhancement. Neuropsychopharmacol. 46, 836–843 (2021)

[Article] Expression snippet transformer for robust video-based facial expression recognition

Summary: Although Transformers can be powerful for modeling visual relations and describing complicated patterns, they can still perform unsatisfactorily for video-based facial expression recognition, since the expression movements in a video can be too small to reflect meaningful spatial-temporal relations. The authors propose to decompose the modeling of a video's expression movements into the modeling of a series of expression snippets, each of which contains a few frames. Their proposed model, the Expression Snippet Transformer (EST), processes intra-snippet and inter-snippet information separately and then combines them (a simplified sketch follows the citation). Code is available on GitHub.

Liu, Y., Wang, W., Feng, C., Zhang, H., Chen, Z., & Zhan, Y. (2023). Expression snippet transformer for robust video-based facial expression recognition. Pattern Recognition, 138, 109368.
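
A simplified sketch of the snippet decomposition described above, not the authors' released EST code: per-frame features are grouped into short snippets, an intra-snippet encoder models frames within each snippet, and an inter-snippet encoder models relations across snippets. Feature dimension, snippet length, and class count are illustrative assumptions.

```python
# Sketch: intra-snippet then inter-snippet modeling of per-frame features.
import torch
import torch.nn as nn

class SnippetTransformer(nn.Module):
    def __init__(self, d=256, snippet_len=4, n_classes=7):
        super().__init__()
        self.snippet_len = snippet_len
        layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.intra = nn.TransformerEncoder(layer, num_layers=2)   # frames within a snippet
        self.inter = nn.TransformerEncoder(layer, num_layers=2)   # relations across snippets
        self.head = nn.Linear(d, n_classes)

    def forward(self, frames):                       # frames: [batch, T, d]
        b, t, d = frames.shape
        s = self.snippet_len
        snippets = frames[:, : t - t % s].reshape(b, -1, s, d)    # [b, n_snippets, s, d]
        intra = self.intra(snippets.flatten(0, 1))                # encode each snippet
        snippet_feats = intra.mean(dim=1).reshape(b, -1, d)       # pool frames per snippet
        inter = self.inter(snippet_feats)                         # encode across snippets
        return self.head(inter.mean(dim=1))                       # clip-level expression logits

model = SnippetTransformer()
logits = model(torch.randn(2, 16, 256))              # 2 clips, 16 frames of features each
print(logits.shape)                                  # torch.Size([2, 7])
```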

[Article] Taking stock of value in the orbitofrontal cortex. Nature Reviews Neuroscience.

Summary: The authors review two hypotheses about the function of the orbitofrontal cortex (OFC): the value hypothesis and the cognitive-map hypothesis. They then suggest a new view that reconciles the two hypotheses and takes hippocampal function into account.

Knudsen, E. B., & Wallis, J. D. (2022). Taking stock of value in the orbitofrontal cortex. Nature Reviews Neuroscience.

[Article] What can 5.17 billion regression fits tell us about artificial models of the human visual system?

Summary: The authors performed a large-scale benchmarking analysis of 85 modern deep neural network models (e.g., CLIP, BarlowTwins, Mask R-CNN). They found that architectural differences have very little consequence for emergent fits to brain data, whereas differences in task have clear effects, with categorization and self-supervised models showing relatively stronger brain predictivity (the encoding-model logic is sketched after the citation).

Conwell, C., Prince, J. S., Alvarez, G. A., & Konkle, T. (2021, October). What can 5.17 billion regression fits tell us about artificial models of the human visual system?. In SVRHM 2021 Workshop@ NeurIPS.
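
A minimal sketch of the encoding-model logic behind this kind of benchmark, not the authors' pipeline: model features are mapped to voxel responses with cross-validated ridge regression and scored by correlation on held-out stimuli. The feature and response arrays below are random placeholders.

```python
# Sketch: ridge-regression encoding model scored by held-out voxelwise correlation.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.standard_normal((1000, 512))     # stimuli x model-layer features (placeholder)
responses = rng.standard_normal((1000, 200))    # stimuli x voxels (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(features, responses, test_size=0.2, random_state=0)
model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Per-voxel Pearson correlation between predicted and held-out responses
r = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(y_te.shape[1])]
print(f"median voxel predictivity r = {np.median(r):.3f}")
```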

[Article] Gigamae: Generalizable graph masked autoencoder via collaborative latent space reconstruction & Masked graph auto-encoder constrained graph pooling.

Gigamae: Generalizable graph masked autoencoder via collaborative latent space reconstruction

Summary: GiGaMAE investigates how to enhance the generalization capability of self-supervised graph generative models by reconstructing graph information in the latent space, and proposes a novel self-supervised reconstruction loss (a simplified sketch follows the citation).

Shi, Yucheng, et al. “Gigamae: Generalizable graph masked autoencoder via collaborative latent space reconstruction.” Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 2023.
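
A minimal sketch of the latent-space reconstruction idea, not GiGaMAE's actual collaborative multi-target loss: node features are masked, the graph is encoded, and the embeddings of masked nodes are trained to match targets from a separate target encoder instead of reconstructing raw features. The GCN layer, target encoder, and graph below are simplified stand-ins.

```python
# Sketch: masked graph encoding with reconstruction in latent space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj):                       # adj: row-normalized [N, N]
        return F.relu(self.lin(adj @ x))             # aggregate neighbors, then transform

n, d = 100, 32
x = torch.randn(n, d)
adj = torch.eye(n)                                   # placeholder graph (self-loops only)

encoder = SimpleGCN(d, 64)
target_encoder = SimpleGCN(d, 64)                    # e.g., a frozen/momentum target network
mask = torch.rand(n) < 0.3                           # mask 30% of nodes
x_masked = x.clone()
x_masked[mask] = 0.0

z = encoder(x_masked, adj)
with torch.no_grad():
    z_target = target_encoder(x, adj)                # latent targets from the unmasked graph

# Reconstruct masked nodes in latent space (cosine-similarity loss)
loss = (1 - F.cosine_similarity(z[mask], z_target[mask], dim=-1)).mean()
loss.backward()
print(loss.item())
```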

Masked graph auto-encoder constrained graph pooling

Summary: MGAP is a novel node-drop pooling method that retains sufficient effective graph information from both the node-attribute and network-topology perspectives (the basic node-drop operation is sketched after the citation).

Liu, Chuang, et al. “Masked graph auto-encoder constrained graph pooling.” Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Cham: Springer International Publishing, 2022.
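
A minimal sketch of the generic node-drop (top-k) pooling operation that MGAP builds on, not the paper's masked-autoencoder-constrained variant: nodes are scored by a learned projection, the top-scoring fraction is kept, and the induced subgraph over the kept nodes is returned.

```python
# Sketch: generic top-k node-drop pooling on a dense adjacency matrix.
import torch
import torch.nn as nn

class TopKNodeDrop(nn.Module):
    def __init__(self, d, ratio=0.5):
        super().__init__()
        self.ratio = ratio
        self.score = nn.Linear(d, 1)                 # learnable node-importance score

    def forward(self, x, adj):                       # x: [N, d], adj: [N, N]
        s = torch.sigmoid(self.score(x)).squeeze(-1)
        k = max(1, int(self.ratio * x.size(0)))
        keep = torch.topk(s, k).indices              # keep the k highest-scoring nodes
        x_pooled = x[keep] * s[keep].unsqueeze(-1)   # gate kept features by their scores
        adj_pooled = adj[keep][:, keep]              # induced subgraph on kept nodes
        return x_pooled, adj_pooled

pool = TopKNodeDrop(d=32)
x_p, adj_p = pool(torch.randn(50, 32), torch.eye(50))
print(x_p.shape, adj_p.shape)                        # torch.Size([25, 32]) torch.Size([25, 25])
```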