Summary: The authors review two hypotheses about the function of the OFC, the value hypothesis and the cognitive map hypothesis. They then propose a new view that reconciles the two hypotheses and takes hippocampal functions into account.
Summary: The authors performed a large-scale benchmarking analysis of 85 modern deep neural network models (e.g., CLIP, Barlow Twins, Mask R-CNN). They found that architectural differences have very little effect on how well models fit brain data. In contrast, differences in training task have clear effects, with categorization and self-supervised models showing relatively stronger brain predictivity.
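A minimal sketch of how brain predictivity is commonly scored in such benchmarks: a cross-validated linear encoding model from DNN features to voxel responses, summarized by the prediction correlation. The exact pipeline used in the paper may differ; the data below are synthetic placeholders.

```python
# Linear encoding-model "brain predictivity" sketch: fit ridge regression from
# DNN features to voxel responses and report the cross-validated correlation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 512))   # 200 stimuli x 512 model features (hypothetical)
Y = rng.standard_normal((200, 100))   # 200 stimuli x 100 voxels (hypothetical)

def brain_predictivity(X, Y, alpha=1.0, n_splits=5):
    preds = np.zeros_like(Y)
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = Ridge(alpha=alpha).fit(X[train], Y[train])
        preds[test] = model.predict(X[test])
    # Pearson correlation between predicted and observed responses, per voxel
    r = [np.corrcoef(preds[:, v], Y[:, v])[0, 1] for v in range(Y.shape[1])]
    return np.median(r)

print(brain_predictivity(X, Y))
```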
Summary: GiGaMAE investigates how to enhance the generalization capability of self-supervised graph generative models by reconstructing graph information in the latent space. The authors propose a novel self-supervised reconstruction loss.
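A minimal sketch of the latent-space reconstruction idea: masked nodes are encoded and decoded, and the loss compares the decoded vectors with precomputed target embeddings via cosine similarity, rather than reconstructing raw features. Names, dimensions, and the choice of targets are illustrative assumptions, not GiGaMAE's actual implementation.

```python
# Masked graph autoencoding with a latent-space reconstruction loss (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

n_nodes, feat_dim, latent_dim = 100, 32, 16
x = torch.randn(n_nodes, feat_dim)             # node features (synthetic)
target_emb = torch.randn(n_nodes, latent_dim)  # latent reconstruction targets (synthetic)

encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
decoder = nn.Linear(latent_dim, latent_dim)

mask = torch.rand(n_nodes) < 0.3               # randomly mask 30% of the nodes
x_masked = x.clone()
x_masked[mask] = 0.0                           # masked nodes see no input features

z = decoder(encoder(x_masked))
# Reconstruction loss computed in latent space, on masked nodes only
loss = (1 - F.cosine_similarity(z[mask], target_emb[mask], dim=-1)).mean()
loss.backward()
print(loss.item())
```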
Summary: MGAP is a novel node-drop pooling method that retains sufficient graph information from both the node-attribute and network-topology perspectives.
Summary: The authors compared a theory-based RL model, the Explore, Model, Plan Agent (EMPA), against human performance and a double DQN model on Atari games. EMPA showed performance comparable to that of human participants. Encoding analyses identified neural representations associated with theory encoding and theory updating.
Summary: The authors propose BrainGSLs to capture more information under limited data and insufficient supervision. It incorporates a local topology-aware encoder, a node-edge bi-decoder, a signal representation learning module, and a classifier. They evaluated the model on ASD, BD, and MDD datasets.
Summary: Micro-expression (ME) recognition has recently become a popular research interest within facial expression recognition (FER), and depth information is often utilized to analyze micro-expressions. CAS(ME)3 offers around 80 hours of video with manually labeled micro- and macro-expressions. The authors also provide depth information and demonstrate an effective way to process it for multimodal micro-expression recognition (MER). CAS(ME)3 is currently one of the best-known RGB-D datasets for emotion recognition.
Summary: Plants under stress emit airborne sounds that are remotely detectable and informative. Plants are not quiet; humans simply cannot hear them. According to these experiments, the sounds could be detected from a distance of 3–5 m by many mammals and insects, which may allow those animals to interact with the plants.
Summary: This study investigates the perception of silence using the auditory illusion paradigm. The hypothesis posits that if silence can be perceived, the same auditory illusions should arise; therefore, silences were used instead of sounds to induce the illusions. Participants reported experiencing the same illusions as in the standard auditory illusion paradigm.
Summary: The authors propose a novel GCN-based deep learning model to diagnose autism spectrum disorder. The model uses spatial-temporal features from fMRI data with a graph convolutional network and attention, and achieves state-of-the-art performance.
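A minimal sketch of the kind of graph-attention layer such models build on: nodes are brain ROIs, node features are fMRI-derived time-series summaries, and attention weights modulate message passing over a functional-connectivity graph. This is illustrative only, not the paper's architecture.

```python
# Single graph-attention layer over an ROI connectivity graph (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphAttention(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        h = self.proj(x)                                      # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = F.leaky_relu(self.attn(pairs)).squeeze(-1)   # (N, N) pairwise scores
        scores = scores.masked_fill(adj == 0, float('-inf'))  # restrict to graph edges
        alpha = torch.softmax(scores, dim=-1)                 # attention over neighbors
        return F.elu(alpha @ h)

# 90 ROIs as nodes, 64-dim temporal features per ROI (synthetic)
x = torch.randn(90, 64)
adj = (torch.rand(90, 90) > 0.8).float()
adj.fill_diagonal_(1.0)                                       # keep self-loops
out = SimpleGraphAttention(64, 32)(x, adj)
print(out.shape)  # torch.Size([90, 32])
```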
Summary: The authors recorded simultaneous EEG-fMRI data from participants viewing affective pictures and applied multivariate analyses including SVM decoding and RSA. They found that perceptual processing of affective pictures began ~100 ms after onset in the visual cortex, and affect-specific representations began to form around ~200 ms. The neural representation of affective scenes was sustained rather than dynamic.
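A minimal sketch of time-resolved multivariate decoding of the sort used to estimate when affect-specific information emerges: an SVM is trained at each time point and the cross-validated accuracy traces out the time course. Data, labels, and the onset criterion below are synthetic assumptions, not the paper's results.

```python
# Time-resolved SVM decoding on EEG (sketch).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 120, 64, 50
eeg = rng.standard_normal((n_trials, n_channels, n_times))   # trials x channels x time
labels = rng.integers(0, 2, n_trials)                         # e.g., pleasant vs. unpleasant

# Cross-validated decoding accuracy at each time point
accuracy = np.array([
    cross_val_score(LinearSVC(dual=False), eeg[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])

# A simple onset criterion: first time point where accuracy exceeds chance
onset = int(np.argmax(accuracy > 0.5))
print(accuracy.max(), onset)
```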