Summary: The author thoroughly reviewed the organization of the default mode network (DMN) and its cognitive roles (i.e., self-reference, social cognition, memory, mind wandering). Finally, he suggested a new perspective on DMN function in human cognition, in which the DMN integrates and “broadcasts” various representations to create a coherent “internal narrative”.
Menon, V. (2023). 20 years of the default mode network: A review and synthesis. Neuron.
Summary: Transformers have recently been compared to the brain. Usually, the internal representations (“embeddings”) are adopted for comparison. However, the authors focused on the “transformations” that integrate contextual information across words and found that they are more layer-specific than the embeddings. This work differs from existing research in that it focuses on attention-related transformations rather than embeddings, which has been one of our recent interests.
Summary: This paper proposed BrainLM, a foundation model for brain activity dynamics trained on 6,700 hours of fMRI recordings.
Summary: Utilizing a large longitudinal neuroimaging cohort spanning adolescence to young adulthood (IMAGEN), they used multitask connectomes to identify a neuropsychopathological (NP) factor. They also checked the generalizability of the NP factor with the ABCD (and other) datasets.
Summary: This paper presents a comprehensive review of auditory beat stimulation, with a particular focus on the applications and features of binaural beats. Despite the extensive research conducted on binaural beats, there is still a lack of consensus regarding their effects and the underlying neural mechanisms.
Summary: This study investigates the effectiveness of the Mindfulness-Oriented Recovery Enhancement (MORE) program for police opioid users. Experimental results showed that participants exhibited an increase in theta and alpha brain waves during mindfulness meditation, along with improved mid-frontal theta coherence. These neurophysiological changes were associated with reductions in opioid use and were related to mindfulness-induced self-transcendence.
Summary: Although Transformers can be powerful for modeling visual relations and describing complicated patterns, they can still perform unsatisfactorily on video-based facial expression recognition, since the expression movements in a video can be too small to reflect meaningful spatial-temporal relations. The authors propose to decompose the modeling of a video's expression movements into the modeling of a series of expression snippets, each of which contains a few frames. Their proposed model, the Expression Snippet Transformer (EST), processes intra-snippet and inter-snippet information separately and then combines them. Code is available on GitHub.
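The snippet decomposition idea can be illustrated with a minimal sketch. This is not the paper's actual implementation; the function name, snippet length, and tensor layout are assumptions for illustration only:

```python
import numpy as np

def split_into_snippets(frames, snippet_len=4):
    """Split a video of T frames into consecutive snippets of `snippet_len` frames.

    `frames` has shape (T, H, W, C); trailing frames that do not fill a whole
    snippet are dropped. The intra-snippet model would then operate within each
    snippet, and the inter-snippet model across the snippet axis.
    """
    n_snippets = len(frames) // snippet_len
    return frames[: n_snippets * snippet_len].reshape(
        n_snippets, snippet_len, *frames.shape[1:]
    )

# Example: a 10-frame video of 8x8 RGB frames -> 2 snippets of 4 frames each
video = np.zeros((10, 8, 8, 3))
snippets = split_into_snippets(video, snippet_len=4)
print(snippets.shape)  # (2, 4, 8, 8, 3)
```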
Summary: The authors reviewed two hypotheses on the functions of the OFC, the value hypothesis and the cognitive map hypothesis. They then suggest a new view that reconciles the two hypotheses and takes hippocampal functions into account.
Summary: They performed a large-scale benchmarking analysis of 85 modern deep neural network models (e.g., CLIP, BarlowTwins, Mask-RCNN). They found that architectural differences have very little consequence for emergent fits to brain data. In contrast, differences in task have clear effects, with categorization and self-supervised models showing relatively stronger brain predictivity.
Summary: GiGaMAE investigated how to enhance the generalization capability of self-supervised graph generative models by reconstructing graph information in the latent space. The authors proposed a novel self-supervised reconstruction loss.
Summary: MGAP is a novel node drop pooling method that retains sufficient effective graph information from both the node-attribute and network-topology perspectives.