Abstract
Human affective experience varies along the dimensions of valence (positivity or negativity) and arousal (high or low activation). It remains unclear how these dimensions are represented in the brain and whether the representations are shared across different individuals and diverse situational contexts. In this study, we first utilized two publicly available functional MRI datasets of participants watching movies to build predictive models of moment-to-moment emotional arousal and valence from dynamic functional brain connectivity. We tested the models by predicting emotional arousal and valence both within and across datasets. Our results revealed a generalizable arousal representation characterized by the interactions between multiple large-scale functional networks. The arousal representation generalized to two additional movie-watching datasets with different participants viewing different movies. In contrast, we did not find evidence of a generalizable valence representation. Taken together, our findings reveal a generalizable representation of emotional arousal embedded in patterns of dynamic functional connectivity, suggesting a common underlying neural signature of emotional arousal across individuals and situational contexts. We have made our model and analysis scripts publicly available to facilitate their use by other researchers in decoding moment-to-moment emotional arousal in novel datasets, providing a new tool to probe affective experience using fMRI.