Seminar Papers

[Article] Auditory beat stimulation and its effects on cognition and mood states.

Summary: This paper presents a comprehensive review of auditory beat stimulation, with a particular focus on the applications and properties of binaural beats. Despite extensive research on binaural beats, there is still no consensus on their effects or on the underlying neural mechanisms.

Chaieb, Leila, et al. “Auditory beat stimulation and its effects on cognition and mood states.” Frontiers in Psychiatry 6 (2015): 136819.
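
As a quick illustration of the stimulus itself (not taken from the review): a binaural beat is produced by playing two pure tones with a small frequency offset, one to each ear, so that the listener perceives a beat at the difference frequency. The sketch below assumes numpy and scipy; the 440 Hz carrier, 6 Hz (theta-range) offset, and file name are illustrative choices.

```python
# Minimal sketch of a binaural-beat stimulus: two pure tones with a small
# frequency offset, one per ear. The 440 Hz carrier and 6 Hz offset are
# illustrative assumptions, not values taken from the review.
import numpy as np
from scipy.io import wavfile

fs = 44100            # sampling rate (Hz)
duration = 10.0       # seconds
carrier = 440.0       # left-ear tone (Hz)
beat = 6.0            # frequency offset -> perceived beat (Hz)

t = np.arange(int(fs * duration)) / fs
left = np.sin(2 * np.pi * carrier * t)
right = np.sin(2 * np.pi * (carrier + beat) * t)

# Interleave into a stereo int16 signal and save it.
stereo = np.stack([left, right], axis=1)
wavfile.write("binaural_6hz.wav", fs, (stereo * 32767).astype(np.int16))
```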

[Article] Endogenous theta stimulation during meditation predicts reduced opioid dosing following treatment with Mindfulness-Oriented Recovery Enhancement.

Summary: This study investigates the effectiveness of the Mindfulness-Oriented Recovery Enhancement (MORE) program for patients on long-term opioid therapy. Participants showed increased theta and alpha power during mindfulness meditation, along with increased frontal midline theta coherence. These neurophysiological changes were associated with reductions in opioid use and with mindfulness-induced self-transcendence.

Hudak, J., Hanley, A.W., Marchand, W.R. et al. Endogenous theta stimulation during meditation predicts reduced opioid dosing following treatment with Mindfulness-Oriented Recovery Enhancement. Neuropsychopharmacol. 46, 836–843 (2021)
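
For orientation, here is a minimal sketch of how theta-band power and inter-channel coherence can be estimated from EEG with standard spectral tools. The sampling rate, synthetic signals, stand-in channel names, and 4–8 Hz band definition are assumptions for illustration, not the analysis pipeline used by Hudak et al.

```python
# Hedged sketch: theta-band (assumed 4-8 Hz) power and inter-channel coherence
# from two EEG channels using Welch-style spectral estimates.
import numpy as np
from scipy.signal import welch, coherence

fs = 256                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
fz = rng.standard_normal(fs * 60)          # stand-in for a mid-frontal channel (e.g. Fz)
fcz = rng.standard_normal(fs * 60)         # stand-in for a second frontal channel

f, pxx = welch(fz, fs=fs, nperseg=fs * 2)
theta_power = pxx[(f >= 4) & (f <= 8)].mean()          # average theta-band power

f, cxy = coherence(fz, fcz, fs=fs, nperseg=fs * 2)
theta_coherence = cxy[(f >= 4) & (f <= 8)].mean()      # average theta-band coherence

print(f"theta power: {theta_power:.4f}, theta coherence: {theta_coherence:.4f}")
```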

[Article] Expression snippet transformer for robust video-based facial expression recognition

Summary: Although Transformers are powerful at modeling visual relations and describing complicated patterns, they can still perform unsatisfactorily for video-based facial expression recognition, since the expression movements in a video can be too small to reflect meaningful spatio-temporal relations. The authors propose to decompose the modeling of a video's expression movements into the modeling of a series of expression snippets, each of which contains a few frames. Their proposed model, the Expression Snippet Transformer (EST), processes intra-snippet and inter-snippet information separately and then combines them. Code is available on GitHub.

Liu, Y., Wang, W., Feng, C., Zhang, H., Chen, Z., & Zhan, Y. (2023). Expression snippet transformer for robust video-based facial expression recognition. Pattern Recognition, 138, 109368.
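
Below is a minimal sketch of the snippet-decomposition idea only, assuming pre-extracted per-frame features and PyTorch: each clip is split into short snippets, each snippet is encoded on its own (intra-snippet), and the resulting snippet tokens are then encoded as a sequence (inter-snippet). The layer sizes, snippet length, and mean pooling are placeholder assumptions; this is not the authors' EST architecture (see their GitHub code for that).

```python
# Hedged sketch of snippet-wise video encoding: intra-snippet then
# inter-snippet transformer encoders over per-frame features.
import torch
import torch.nn as nn

B, T, D = 2, 16, 128          # batch, frames per clip, per-frame feature dim
snippet_len = 4               # frames per snippet (assumed)
frames = torch.randn(B, T, D) # stand-in for per-frame visual features

intra = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True), num_layers=1)
inter = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True), num_layers=1)

# Intra-snippet: reshape to (B * num_snippets, snippet_len, D) and encode.
num_snippets = T // snippet_len
x = frames.reshape(B * num_snippets, snippet_len, D)
x = intra(x).mean(dim=1)                  # pool each snippet into one token

# Inter-snippet: encode the sequence of snippet tokens and pool for the clip.
x = x.reshape(B, num_snippets, D)
clip_repr = inter(x).mean(dim=1)          # (B, D) clip-level representation
print(clip_repr.shape)
```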

[Article] Taking stock of value in the orbitofrontal cortex. Nature Reviews Neuroscience.

Summary: The authors reviewed two hypotheses about the function of the orbitofrontal cortex (OFC): the value hypothesis and the cognitive map hypothesis. They then suggest a new view that reconciles the two hypotheses and takes hippocampal function into account.

Knudsen, E. B., & Wallis, J. D. (2022). Taking stock of value in the orbitofrontal cortex. Nature Reviews Neuroscience.

[Article] What can 5.17 billion regression fits tell us about artificial models of the human visual system?

Summary: They performed a large-scale benchmarking analysis of 85 modern deep neural network models (e.g., CLIP, BarlowTwins, Mask R-CNN). They found that architectural differences have very little effect on how well models fit brain data, whereas differences in training task have clear effects, with categorization and self-supervised models showing relatively stronger brain predictivity.

Conwell, C., Prince, J. S., Alvarez, G. A., & Konkle, T. (2021, October). What can 5.17 billion regression fits tell us about artificial models of the human visual system? In SVRHM 2021 Workshop @ NeurIPS.
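
The "regression fits" in question are voxel-wise encoding models; below is a minimal sketch of the generic recipe used in such benchmarks, assuming scikit-learn and synthetic data: ridge-regress model features onto brain responses and score per-voxel prediction on held-out images. The feature and voxel counts, single alpha value, and simple train/test split are placeholder assumptions, not the paper's benchmarking setup.

```python
# Hedged sketch of a voxel-wise encoding model: ridge regression from model
# features to (synthetic) voxel responses, scored by held-out correlation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 256))                       # 500 images x 256 model features
Y = X @ rng.standard_normal((256, 100)) + rng.standard_normal((500, 100))  # 100 voxels

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
pred = Ridge(alpha=1.0).fit(X_tr, Y_tr).predict(X_te)

# Per-voxel Pearson correlation between predicted and held-out responses.
score = np.array([np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])])
print(f"mean brain predictivity (r): {score.mean():.3f}")
```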

[Article] Gigamae: Generalizable graph masked autoencoder via collaborative latent space reconstruction & Masked graph auto-encoder constrained graph pooling.

Gigamae: Generalizable graph masked autoencoder via collaborative latent space reconstruction

Summary: GiGaMAE investigates how to enhance the generalization capability of self-supervised graph generative models by reconstructing graph information in the latent space. The authors propose a novel self-supervised reconstruction loss.

Shi, Yucheng, et al. “Gigamae: Generalizable graph masked autoencoder via collaborative latent space reconstruction.” Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 2023.
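
Below is a minimal sketch of masked reconstruction in latent space on a graph, assuming PyTorch and a dense adjacency matrix: node features are masked, a simple encoder propagates over the graph, and the objective reconstructs target node embeddings (from a frozen stand-in "teacher") rather than raw features. The teacher, the single-layer encoder, and the MSE loss are simplifying assumptions; GiGaMAE's actual collaborative reconstruction targets and loss are more elaborate.

```python
# Hedged sketch of a graph masked autoencoder with a latent-space target.
import torch
import torch.nn as nn

N, F_in, D = 50, 16, 32                     # nodes, input features, latent dim
X = torch.randn(N, F_in)
A = (torch.rand(N, N) < 0.1).float()
A = ((A + A.t()) > 0).float()               # symmetrize
A.fill_diagonal_(1.0)                       # add self-loops
A_hat = A / A.sum(dim=1, keepdim=True)      # row-normalized propagation matrix

teacher = nn.Linear(F_in, D).requires_grad_(False)   # frozen stand-in target encoder
encoder = nn.Linear(F_in, D)
decoder = nn.Linear(D, D)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

mask = torch.rand(N) < 0.3                  # mask ~30% of the nodes
X_masked = X.clone()
X_masked[mask] = 0.0

target = (A_hat @ teacher(X)).detach()      # reconstruction targets live in latent space
Z = decoder(A_hat @ encoder(X_masked))      # encode the masked graph, decode to latent space
loss = ((Z[mask] - target[mask]) ** 2).mean()   # reconstruct only the masked nodes
loss.backward()
opt.step()
print(f"latent reconstruction loss: {loss.item():.4f}")
```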

Masked graph auto-encoder constrained graph pooling

Summary: MGAP is a novel node drop pooling method that retains sufficient effective graph information from both the node-attribute and network-topology perspectives.

Liu, Chuang, et al. “Masked graph auto-encoder constrained graph pooling.” Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Cham: Springer International Publishing, 2022.

[Article] The neural architecture of theory-based reinforcement learning. Neuron.

Summary: The authors compared the theory-based RL model called the Explore, Model, Plan Agent (EMPA) against human performance and a double DQN model on Atari-style games. The EMPA model showed performance comparable to that of human participants. Encoding analyses identified neural representations associated with theory encoding and theory updating.

Tomov, M. S., Tsividis, P. A., Pouncy, T., Tenenbaum, J. B., & Gershman, S. J. (2023). The neural architecture of theory-based reinforcement learning. Neuron.
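
For context on the model-free baseline mentioned above, here is a minimal sketch of the double DQN target computation, assuming PyTorch: the online network selects the greedy next action and the target network evaluates it. The toy linear Q-networks, state dimension, and gamma value are placeholder assumptions.

```python
# Hedged sketch of the double-DQN bootstrap target:
# y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).
import torch
import torch.nn as nn

n_actions, gamma = 4, 0.99
online = nn.Linear(8, n_actions)            # stand-in Q-network over 8-dim states
target = nn.Linear(8, n_actions)            # stand-in target network

s_next = torch.randn(32, 8)                 # batch of next states
r = torch.randn(32)                         # rewards
done = torch.zeros(32)                      # episode-termination flags

a_star = online(s_next).argmax(dim=1)                               # action selection (online net)
q_next = target(s_next).gather(1, a_star.unsqueeze(1)).squeeze(1)   # action evaluation (target net)
td_target = r + gamma * (1 - done) * q_next.detach()
print(td_target.shape)
```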

[Article] Graph Self-supervised Learning with Application to Brain Networks Analysis. IEEE Journal of Biomedical and Health Informatics.

Summary: They proposed BrainGSLs to capture more information under limited data and insufficient supervision. It incorporates a local topology-aware encoder, a node-edge bi-decoder, a signal representation learning module, and a classifier. They evaluated the model on ASD, BD, and MDD datasets.

Wen, G., Cao, P., Liu, L., Yang, J., Zhang, X., Wang, F., & Zaiane, O. R. (2023). Graph Self-supervised Learning with Application to Brain Networks Analysis. IEEE Journal of Biomedical and Health Informatics.

[Article] CAS(ME)3: A third generation facial spontaneous micro-expression database with depth information and high ecological validity.

Summary: Micro-expression (ME) analysis has recently become a popular research direction within facial expression recognition (FER). In particular, depth information is often utilized to analyze micro-expressions. CAS(ME)3 offers around 80 hours of video with manually labeled micro- and macro-expressions. The authors also provide depth information and demonstrate an effective way to process it for multimodal micro-expression recognition (MER). CAS(ME)3 is currently one of the most well-known RGB-D datasets for emotion recognition.

Li, J., Dong, Z., Lu, S., Wang, S. J., Yan, W. J., Ma, Y., … & Fu, X. (2022). CAS(ME)3: A third generation facial spontaneous micro-expression database with depth information and high ecological validity. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3), 2782-2800.

[Article] Sounds emitted by plants under stress are airborne and informative.

Summary: Plants emit remotely detectable and informative airborne sounds under stress. Plants are not quiet; humans simply cannot hear them! According to these experiments, the sounds could be detected from a distance of 3–5 m by many mammals and insects, which may allow them to interact with the plants.

Khait, I., Lewin-Epstein, O., Sharon, R., Saban, K., Goldstein, R., Anikster, Y., … & Hadany, L. (2023). Sounds emitted by plants under stress are airborne and informative. Cell, 186(7), 1328-1336.