[Article] Utilizing deep learning towards multi-modal bio-sensing and vision-based affective computing

Summary: In this study, the authors applied novel feature-extraction and deep-learning methods to four public datasets, including DEAP and MAHNOB-HCI, for multimodal emotion classification. They proposed using a pre-trained VGG network to compensate for the shortage of training data in the bio-sensing field. A wide range of modalities was used, including EEG, HRV, GSR, and face video. They evaluated classification accuracy for single modalities, for feature-level fusion of modalities, and for transfer learning across datasets. The results outperformed those of previous studies.
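
The central idea of reusing an ImageNet-pretrained VGG as a fixed feature extractor can be illustrated with a short sketch. This is not the authors' exact pipeline; the image rendering, input shapes, and the two-class valence head are illustrative assumptions, and only standard PyTorch/torchvision calls are used.

```python
# Sketch: reuse a frozen, ImageNet-pretrained VGG-16 to extract deep
# features from bio-signal images (e.g., EEG power rendered as a
# topographic map), then train a small classifier on those features.
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.eval()
for p in vgg.parameters():
    p.requires_grad = False  # frozen: we only reuse the learned filters

# Keep the conv stack plus fc6/fc7; drop the 1000-way ImageNet head.
fc_head = nn.Sequential(*list(vgg.classifier.children())[:-1])

def extract_features(img_batch: torch.Tensor) -> torch.Tensor:
    """img_batch: (N, 3, 224, 224) images, e.g., rendered EEG maps."""
    x = vgg.features(img_batch)
    x = vgg.avgpool(x)
    x = torch.flatten(x, 1)
    return fc_head(x)  # (N, 4096) deep features

# Hypothetical usage: a tiny emotion classifier on top of VGG features.
clf = nn.Linear(4096, 2)  # assumed binary target, e.g., low/high valence
dummy = torch.randn(8, 3, 224, 224)  # stand-in for bio-signal images
logits = clf(extract_features(dummy))
```

Because the VGG weights stay frozen, only the small linear head must be trained, which is what makes the approach viable on bio-sensing datasets too small to train a deep network from scratch.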

Siddharth, S., Jung, T.-P., & Sejnowski, T. J. (2019). Utilizing deep learning towards multi-modal bio-sensing and vision-based affective computing. IEEE Transactions on Affective Computing.