Abstract
Recommender systems shape many of our daily choices, including what we watch, purchase, and read. However, most existing systems interpret user behavior solely through observable actions such as clicks, views, or watch time, without considering how users actually feel when engaging with content. This limited perspective constrains personalization and overlooks the emotional dimension that drives user satisfaction and trust. Existing affective approaches tend to focus on one side of the interaction: user-based methods infer affective states from facial expressions, vocal tone, textual cues, or physiological signals, while content-based methods attempt to predict affective impact directly from the media itself. This research aims to bridge both sides by integrating user affective responses with the affective properties of content within a multimodal recommendation approach. Unlike vocal, facial, or textual cues, which are shaped by sociocultural norms and can be deliberately masked, physiological signals have recently been shown to offer a uniquely direct measure of affective states. This research investigates how integrating physiological, content-based, and contextual information can enhance affective recommendation and improve effectiveness across tasks such as retrieval, ranking, and personalization. Our findings to date suggest that incorporating reactive brain signals alongside content and interaction features can enhance both predictive accuracy and affective alignment. Finally, this work investigates whether affective and physiological signals allow models to infer sensitive or unintended information about users, and develops computational methods to evaluate the privacy risks associated with such inferences.