Summary: This paper presents a comprehensive framework for generating radiology reports from 3D brain CT scans using a multimodal large language model. Li, Cheng-Yi, et al. “Towards a holistic framework for multimodal LLM in 3D brain CT radiology report generation.” Nature Communications 16.1 (2025): 2258.
Summary: Using sparse canonical correlation analysis (sCCA), this study integrated wearable-measured sleep characteristics (sleep onset time, sleep duration, heart rate, etc.) with multimodal brain imaging data in 3,222 adolescents from the ABCD Study. It identified two major sleep-brain axes: one in which late sleep onset and short sleep duration were associated with reduced cortical-subcortical connectivity, and another in which a high sleeping heart rate and short light-sleep duration were associated with reduced brain volume and connectivity.
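The sleep-brain axes described above can be sketched with plain canonical correlation analysis, a simplified, non-sparse stand-in for the sCCA the study actually uses; all data, dimensions, and feature names below are illustrative:

```python
import numpy as np

def cca(X, Y, k=2):
    """Plain canonical correlation analysis via whitening + SVD.
    Illustrative stand-in for the sparse CCA (sCCA) used in the paper;
    sCCA additionally penalizes the weights to keep them sparse."""
    n = len(X)
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx, Syy = Xc.T @ Xc / (n - 1), Yc.T @ Yc / (n - 1)
    Sxy = Xc.T @ Yc / (n - 1)
    # Whiten each block with the Cholesky factor of its covariance.
    Lx, Ly = np.linalg.cholesky(Sxx), np.linalg.cholesky(Syy)
    K = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(K)
    A = np.linalg.solve(Lx.T, U[:, :k])    # sleep-side canonical weights
    B = np.linalg.solve(Ly.T, Vt[:k].T)    # brain-side canonical weights
    return Xc @ A, Yc @ B, s[:k]           # subject scores + canonical correlations

# Toy data: rows are participants; a shared latent links the two blocks.
rng = np.random.default_rng(0)
sleep = rng.normal(size=(300, 5))     # e.g. onset time, duration, heart rate, ...
brain = rng.normal(size=(300, 20))    # e.g. connectivity / volume features
shared = rng.normal(size=300)
sleep[:, 0] += shared
brain[:, 0] += shared

sleep_scores, brain_scores, r = cca(sleep, brain, k=2)  # two axes, as in the study
```

Each column pair of scores is one "sleep-brain axis"; its canonical correlation `r` measures how strongly the two projections covary across participants.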
Summary: This cross-sectional study analyzed how resting heart rate, sedentary time, and moderate activity time measured by Fitbit wearables were associated with psychotic-like experiences (PLEs) and internalizing and externalizing symptoms in 5,007 adolescents aged 10–13 years from the ABCD cohort.
Summary: This study analyzed data from 3,320 participants in the ABCD Study. Hypertension was defined as an average blood pressure at or above the 95th percentile for age, sex, and height. The analysis revealed that adequate sleep significantly reduces the risk of hypertension in adolescents, independent of environmental noise exposure. These findings underscore the importance of promoting good sleep hygiene among youth to mitigate hypertension risk.
Summary: This study shows that long-term nasal airflow patterns are unique to each individual, stable over months to years, and can identify people with near-biometric accuracy. These “respiratory fingerprints” also reflect physiological states, such as sleep and BMI, as well as psychological traits like depression, anxiety, and autistic tendencies. The findings highlight nasal airflow as a rich, brain-driven signal linking physiology, emotion, and cognition. Soroka, Timna, et al. “Humans have nasal respiratory fingerprints.” Current Biology (2025).
Summary: The authors previously found a higher heartbeat-evoked potential (HEP) amplitude during exhalation than during inhalation in a task that involved attention to cardiac sensations, possibly because cardiac perception is attenuated during inhalation and heightened during exhalation via attentional mechanisms. To investigate the relationships between the HEP, attention, and respiration, they introduced an experimental setup with tasks probing cardiac and respiratory interoceptive and exteroceptive attention. Results revealed HEP amplitude increases during the interoceptive tasks over fronto-central electrodes. When respiratory phases were taken into account, the HEP increases were driven primarily by heartbeats recorded during exhalation, specifically during the cardiac interoceptive task, while inhalation had minimal impact. Zaccaro, Andrea, et al. “Attention to cardiac sensations enhances the heartbeat-evoked potential during exhalation.” iScience 27.4 (2024).
Summary: This study measured sub-second activity within thalamocortical networks and nine thalamic nuclei in the human brain during spontaneous transitions in behavioral arousal state, using ultra-high field fast fMRI. The research discovered a stereotyped sequence of activity across thalamic nuclei and the cingulate cortex that preceded behavioral arousal after a period of inactivity, followed by widespread deactivation. These thalamic dynamics were linked to whether participants subsequently fell back into unresponsiveness, with unified thalamic activation reflecting the maintenance of behavior. Setzer, B., Fultz, N.E., Gomez, D.E.P. et al. A temporal sequence of thalamic activity unfolds at transitions in behavioral arousal state. Nat Commun 13, 5442 (2022).
Summary: In this seminar, we will explore a novel fMRI-to-text decoding framework named MindLLM. It combines a neuroscience-informed, subject-agnostic fMRI encoder with an off-the-shelf large language model to translate brain activity into coherent text. It introduces Brain Instruction Tuning (BIT), which enriches the model’s capacity to extract and represent diverse semantic information from fMRI signals, enabling versatile decoding across different tasks and subjects. We will discuss how to implement these techniques in our study. Qiu, Weikang, et al. “MindLLM: A Subject-Agnostic and Versatile Model for fMRI-to-Text Decoding.” arXiv preprint arXiv:2502.15786 (2025).
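As a rough architectural sketch of the encoder-to-LLM hand-off in such fMRI-to-text systems (all names and sizes below are hypothetical illustrations, not MindLLM's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes; the real MindLLM encoder is neuroscience-informed
# and subject-agnostic -- this is only an architectural sketch.
n_voxels, n_tokens, d_model = 4096, 8, 256

# Learnable projection (random here) mapping one fMRI frame to a short
# sequence of "soft prompt" embeddings in the LLM's input space.
W = rng.normal(scale=0.02, size=(n_voxels, n_tokens * d_model))

def encode_fmri(voxels):
    """Map a voxel vector to n_tokens pseudo-token embeddings."""
    return (voxels @ W).reshape(n_tokens, d_model)

fmri_frame = rng.normal(size=n_voxels)
brain_tokens = encode_fmri(fmri_frame)

# In instruction tuning, these embeddings would be prepended to the
# embedded text instruction before feeding the LLM.
instruction_embeds = rng.normal(size=(12, d_model))  # placeholder text embeddings
llm_input = np.concatenate([brain_tokens, instruction_embeds], axis=0)
```

The LLM then generates text conditioned on this mixed brain-plus-instruction input sequence; Brain Instruction Tuning trains the projection on many task-instruction pairs so the same encoder serves different decoding tasks.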
Summary: In this seminar, we will explore the structure of an open-source real-time fMRI neurofeedback (rtfMRI-NF) toolbox, which uses Python and MATLAB to preprocess the data, compute the feedback signal, and present it to the participant.
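A minimal sketch of the per-volume feedback computation at the heart of such a toolbox (the function name, ROI, and scaling below are illustrative assumptions, not the toolbox's actual code; real pipelines also handle motion correction, incremental denoising, and display):

```python
import numpy as np

def feedback_value(volume, roi_mask, baseline, gain=4.0):
    """Map ROI percent-signal-change to a 0-1 feedback score.
    Minimal sketch of the preprocess -> compute -> present loop;
    all names and the scaling are illustrative."""
    roi_mean = volume[roi_mask].mean()
    psc = 100.0 * (roi_mean - baseline) / baseline     # percent signal change
    return float(np.clip(0.5 + psc / gain, 0.0, 1.0))  # map roughly -2%..+2% to 0..1

# One simulated volume: the ROI runs 2% above its resting baseline.
volume = np.full((4, 4, 4), 100.0)
roi_mask = np.zeros((4, 4, 4), dtype=bool)
roi_mask[1:3, 1:3, 1:3] = True
volume[roi_mask] = 102.0
fb = feedback_value(volume, roi_mask, baseline=100.0)  # 1.0, top of the scale
```

In a real session this function would run once per acquired volume, and the returned value would drive a visual display (e.g., a thermometer bar) shown to the participant.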
Summary: In this study, to enhance our understanding of visual processing, the authors developed WAVE, a framework that reconstructs visual stimuli from fMRI data. WAVE performs contrastive learning across three modalities (fMRI, image, and text); the learned features are then passed to a diffusion model for the final image reconstruction.
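The tri-modal contrastive stage can be sketched as a pairwise InfoNCE objective summed over the three modality pairs; this is a common formulation, and WAVE's exact loss and encoders may differ:

```python
import numpy as np

def _log_softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of embeddings.
    Matched rows are positives; all other pairs are negatives."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature        # (batch, batch) cosine similarities
    return -(np.diag(_log_softmax(logits)).mean()
             + np.diag(_log_softmax(logits.T)).mean()) / 2

# Hypothetical embeddings for one batch; the real encoders are learned.
rng = np.random.default_rng(0)
fmri_z, image_z, text_z = (rng.normal(size=(16, 64)) for _ in range(3))

# Tri-modal objective: sum of pairwise contrastive terms.
loss = info_nce(fmri_z, image_z) + info_nce(fmri_z, text_z) + info_nce(image_z, text_z)
```

Minimizing this loss pulls embeddings of the same trial together across modalities; the aligned fMRI features can then condition the diffusion model that produces the reconstructed image.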
Summary: The study performed traditional binary genome-wide association studies (GWAS), continuous univariate GWAS using wearable-derived combination scores, and multivariate GWAS to identify genetic variants associated with ADHD. The identified variants showed associations with heart function (MYH6, CMTM5) and ADHD-related genes (ELFN1), with some variants potentially conferring a protective effect against ADHD.
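A minimal sketch of the univariate association test underlying a continuous GWAS: a simple regression of a wearable-derived score on per-SNP allele counts. All data here are simulated; real pipelines add covariates (age, sex, ancestry PCs) and use dedicated tools rather than this toy code:

```python
import math
import numpy as np

def snp_assoc(genotypes, phenotype):
    """Per-SNP simple linear regression: phenotype ~ allele count (0/1/2).
    Returns (beta, t, approximate two-sided p) for each SNP; the p-value
    uses a normal approximation to the t distribution."""
    y = phenotype - phenotype.mean()
    n = len(y)
    out = []
    for g in genotypes.T:                 # iterate over SNP columns
        x = g - g.mean()
        beta = (x @ y) / (x @ x)          # least-squares slope
        resid = y - beta * x
        se = math.sqrt((resid @ resid) / (n - 2) / (x @ x))
        t = beta / se
        p = math.erfc(abs(t) / math.sqrt(2))
        out.append((beta, t, p))
    return out

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(500, 4)).astype(float)  # 500 people, 4 SNPs
score = 0.4 * G[:, 0] + rng.normal(size=500)         # SNP 0 truly associated
results = snp_assoc(G, score)
```

Only the first SNP carries a real effect here, so it should dominate the test statistics; a genome-wide run repeats this test over millions of variants with a correspondingly stringent significance threshold.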
Summary: This study introduces BrainRGIN, a novel graph neural network (GNN) model designed to predict intelligence from resting-state fMRI data. By leveraging graph isomorphism networks and clustering-based embeddings, the model effectively captures brain sub-network structure. The authors validate their approach on the Adolescent Brain Cognitive Development (ABCD) dataset and demonstrate superior predictive performance compared to traditional machine learning models.
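The graph isomorphism network building block can be sketched as follows; this is a generic GIN layer over a toy connectivity graph, while BrainRGIN's actual architecture adds clustering-based embeddings and readouts on top of it:

```python
import numpy as np

def gin_layer(A, H, W1, W2, eps=0.0):
    """One Graph Isomorphism Network layer:
    h_v' = MLP((1 + eps) * h_v + sum of neighbor features).
    A is the (n, n) adjacency matrix, H the (n, d) node features."""
    agg = (1 + eps) * H + A @ H          # self + neighborhood sum
    return np.maximum(agg @ W1, 0) @ W2  # two-layer MLP with ReLU

rng = np.random.default_rng(0)
n_regions, d_in, d_hidden, d_out = 6, 4, 8, 3
# Toy functional-connectivity graph over brain regions (symmetric, no self-loops).
A = (rng.random((n_regions, n_regions)) > 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T
H = rng.normal(size=(n_regions, d_in))
W1 = rng.normal(size=(d_in, d_hidden)); W2 = rng.normal(size=(d_hidden, d_out))
H_out = gin_layer(A, H, W1, W2)
```

The sum aggregation is what gives GIN its discriminative power over graph structure; stacking such layers and pooling the node features yields a graph-level representation that can be regressed onto an intelligence score.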