
Recommendation Systems & LLMs

Session Information

  • From What to Why: Thought-Space Recommendation with Small Language Models

    Prosenjit Biswas, Pervez Shaik, Abhinav Thorat, Ravi Kolla and Niranjan Pedanekar

  • Post-Training Denoising of User Profiles with LLMs in Collaborative Filtering Recommendation

    Ervin Dervishaj, Maria Maistro, Tuukka Ruotsalo and Christina Lioma

  • PromptHG: Prompt-Enhanced Heterogeneous Graph for Personalized News Recommendation

    Dang Kieu, Delvin Ce Zhang, Minh-Duc Nguyen, Qiang Wu, Min Xu and Dung D. Le

  • Interplay: Training Independent Simulators for Reference-Free Conversational Recommendation

    Jerome Ramos, Feng Xia, Xi Wang, Shubham Chatterjee, Xiao Fu, Hossein A. Rahmani and Aldo Lipani

  • Improving Conversational Recommendation with Contextual Adaptation of External Recommenders and LLM-Based Reranking

    Chuang Li, Yang Deng, Weida Liang, Hengchang Hu, See-Kiong Ng, Min-Yen Kan and Haizhou Li

Apr 01, 2026 14:30 - 16:00(Europe/Amsterdam)
Venue : Chaos

Sub Sessions

From What to Why: Thought-Space Recommendation with Small Language Models

Full papers | Recommender systems | 02:30 PM - 04:00 PM (Europe/Amsterdam) | 2026/04/01 12:30:00 UTC - 2026/04/01 14:00:00 UTC
Large Language Models (LLMs) have advanced recommendation through enhanced reasoning, but their massive scale poses significant challenges for real-world deployment due to high inference costs. Conversely, while Small Language Models (SLMs) offer an efficient alternative, their reasoning capabilities for recommendation remain under-explored. Existing systems often use natural language rationales merely as unsupervised descriptive text, failing to harness their full potential as learning signals. In this work, our main idea is to build a common understanding of users and items across multiple domains, called a Thought Space, with an SLM instead of relying on knowledge distilled from an LLM. To that end, we propose PULSE (Preference Understanding by Latent Semantic Embeddings), a framework that treats SLM-generated rationales as first-class views, supervising them with interaction histories to jointly model user actions (what) and their semantic drivers (why). Whereas existing methods consider only interactions such as sequences and embeddings, PULSE treats rationales as first-class signals, a design that yields more robust and generalizable embeddings. Extensive experiments demonstrate that PULSE outperforms leading ID-based, Collaborative Filtering (CF), and LLM-based sequential recommendation models across multiple benchmark datasets. Furthermore, PULSE exhibits superior transferability in cross-domain recommendation and shows strong performance on downstream tasks such as reasoning-oriented question answering.
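The abstract's central move — supervising SLM-generated rationales with interaction histories so that the "what" and "why" views of a user align — can be sketched as a two-view contrastive objective. Everything below (the toy embeddings, the InfoNCE-style loss, the function names) is an illustrative stand-in, not the PULSE implementation:

```python
import math
import random

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def alignment_loss(what_views, why_views):
    """InfoNCE-style loss: each user's interaction view (the 'what')
    should be closer to its own rationale view (the 'why') than to
    any other user's rationale view."""
    total = 0.0
    for i, w in enumerate(what_views):
        sims = [cosine(w, r) for r in why_views]
        denom = sum(math.exp(s) for s in sims)
        # Negative log-probability of picking the matching rationale.
        total += -(sims[i] - math.log(denom))
    return total / len(what_views)

random.seed(0)
views = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
# Matched what/why pairs should score a lower loss than mismatched ones.
aligned = alignment_loss(views, views)
shuffled = alignment_loss(views, views[::-1])
print(aligned < shuffled)  # → True
```

In a real system the two views would come from trainable encoders (an interaction-sequence encoder and the SLM's rationale embedding), with this loss added alongside the usual recommendation objective.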
Presenters
Prosenjit Biswas
Research Scientist, Sony Research India
Co-Authors
Abhinav Thorat
Sony Research India
Ravi Kolla
Sony Research India

Post-Training Denoising of User Profiles with LLMs in Collaborative Filtering Recommendation

Full papers | Machine Learning and Large Language Models | Recommender systems | 02:30 PM - 04:00 PM (Europe/Amsterdam) | 2026/04/01 12:30:00 UTC - 2026/04/01 14:00:00 UTC
Implicit feedback -- the main data source for training Recommender Systems (RSs) -- is inherently noisy and has been shown to negatively affect recommendation effectiveness. Denoising has been proposed as a method for removing noisy implicit feedback and improving recommendations. Prior work has focused on in-training denoising; however, this requires additional data, changes to the model architecture and training procedure, or fine-tuning, all of which can be costly and data-hungry. In this work, we focus on post-training denoising. Unlike in-training denoising, post-training denoising involves no change to the model architecture or its training procedure and requires no additional data. Specifically, we present a method for post-training denoising of user profiles using Large Language Models (LLMs) for Collaborative Filtering (CF) recommendation. Our approach prompts LLMs with (i) a user profile (user interactions), (ii) a candidate item, and (iii) its rank as given by the CF recommender, and asks the LLM to remove items from the user profile to improve the rank of the candidate item. Experiments with a state-of-the-art CF recommender and four open- and closed-source LLMs on three datasets show that our denoising yields improvements of up to 13% in effectiveness over the original user profiles.
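The three-input prompting protocol described above — user profile, candidate item, and its current rank, with the LLM asked which profile items to drop — can be sketched as follows. The prompt wording, the `llm` callable, and the comma-separated reply convention are assumptions for illustration; the paper's exact prompts are not reproduced here:

```python
def build_denoise_prompt(profile, candidate, rank):
    """Assemble the three inputs the method feeds to the LLM."""
    items = "\n".join(f"- {item}" for item in profile)
    return (
        f"User profile (interacted items):\n{items}\n\n"
        f"Candidate item: {candidate}\n"
        f"Current rank given by the recommender: {rank}\n\n"
        "Which profile items should be removed so the recommender "
        "ranks the candidate higher? Answer with a comma-separated list."
    )

def denoise_profile(profile, candidate, rank, llm):
    """Post-training denoising: drop the items the LLM flags as noise.
    `llm` is any callable mapping a prompt string to a reply string;
    the CF model itself is never retrained or modified."""
    reply = llm(build_denoise_prompt(profile, candidate, rank))
    flagged = {item.strip() for item in reply.split(",") if item.strip()}
    return [item for item in profile if item not in flagged]

# A stub LLM that flags one item, standing in for a real model call.
stub_llm = lambda prompt: "Horror Movie X"
cleaned = denoise_profile(
    ["Comedy A", "Horror Movie X", "Comedy B"], "Comedy C", 42, stub_llm
)
print(cleaned)  # → ['Comedy A', 'Comedy B']
```

The denoised profile is then fed back to the unchanged CF recommender, which is what makes the approach post-training.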
Presenters
Ervin Dervishaj
PhD Student, University Of Copenhagen
Co-Authors
Maria Maistro
University Of Copenhagen
Tuukka Ruotsalo
LUT University And University Of Copenhagen
Christina Lioma
University Of Copenhagen

PromptHG: Prompt-Enhanced Heterogeneous Graph for Personalized News Recommendation

Full papers | Recommender systems | 02:30 PM - 04:00 PM (Europe/Amsterdam) | 2026/04/01 12:30:00 UTC - 2026/04/01 14:00:00 UTC
Recent advances in large language models (LLMs) have opened new opportunities for personalized news recommendation. However, existing LLM-based approaches mainly focus on semantic enrichment while overlooking structural signals such as entity-level relations that are crucial for modeling user preferences. Meanwhile, graph-based methods capture structural information but often rely on sparse click-based or incomplete news–entity interactions, limiting their ability to model rich relational structures. To address these limitations, we propose PromptHG, a unified framework that integrates LLM prompting with heterogeneous graph learning for news recommendation. First, PromptHG leverages LLMs to directly synthesize entities from news titles and link articles through shared entity nodes, uncovering relationships beyond conventional interaction signals and alleviating sparsity in user behavior data. Second, we construct a heterogeneous news–entity graph that integrates user click sequences with entity-based connections, and employ a lightweight graph encoder to learn robust news and user representations, enabling the model to capture complex structural dependencies for improved preference modeling. Extensive experiments on two benchmark datasets demonstrate that PromptHG consistently outperforms strong baselines across multiple evaluation metrics, highlighting the effectiveness of LLM-guided entity synthesis and heterogeneous graph modeling for personalized news recommendation.
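The graph-construction step — linking articles through shared entity nodes — can be sketched with plain dictionaries. The entity lists below stand in for what the abstract says an LLM would synthesize from news titles; the article ids, entity names, and adjacency format are all illustrative:

```python
from collections import defaultdict

def build_news_entity_graph(article_entities):
    """Build a heterogeneous graph with news and entity node types.
    `article_entities` maps an article id to the entities extracted
    from its title (mocked here instead of an LLM call). Articles
    sharing an entity become connected through that entity node."""
    entity_to_articles = defaultdict(set)
    for article, entities in article_entities.items():
        for entity in entities:
            entity_to_articles[entity].add(article)
    # Two-hop article neighbours: reachable via a shared entity node.
    neighbours = defaultdict(set)
    for articles in entity_to_articles.values():
        for a in articles:
            neighbours[a] |= articles - {a}
    return entity_to_articles, neighbours

entity_to_articles, neighbours = build_news_entity_graph({
    "n1": ["NASA", "Mars"],
    "n2": ["Mars", "SpaceX"],
    "n3": ["Elections"],
})
print(sorted(neighbours["n1"]))  # → ['n2'], linked via the shared 'Mars' node
```

A graph encoder would then propagate representations over exactly these news–entity edges, alongside the user click sequences.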
Presenters
Dang Kieu
VinUniversity
Co-Authors
Delvin Ce Zhang
Minh-Duc Nguyen
Qiang Wu
Min Xu
Dung D. Le
Assistant Professor, VinUniversity

Interplay: Training Independent Simulators for Reference-Free Conversational Recommendation

Full papers | Conversational search and recommender systems | Recommender systems | 02:30 PM - 04:00 PM (Europe/Amsterdam) | 2026/04/01 12:30:00 UTC - 2026/04/01 14:00:00 UTC
Training conversational recommender systems (CRS) requires extensive dialogue data, which is challenging to collect at scale. To address this, researchers have used simulated user-recommender conversations. Traditional simulation approaches often utilize a single large language model (LLM) that generates entire conversations with prior knowledge of the target items, leading to scripted and artificial dialogues. We propose a reference-free simulation framework that trains two independent LLMs, one as the user and one as the conversational recommender. These models interact in real time without access to predetermined target items, relying only on preference summaries and target attributes, which enables the recommender to genuinely infer user preferences through dialogue. This approach produces more realistic and diverse conversations that closely mirror authentic human-AI interactions. Our reference-free simulators match or exceed existing methods in quality, while offering a scalable solution for generating high-quality conversational recommendation data without constraining conversations to pre-defined target items. We conduct both quantitative and human evaluations to confirm the effectiveness of our reference-free approach.
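The rollout loop between the two independent simulators can be sketched as below. The stub user and stub recommender are trivial stand-ins for the two fine-tuned LLMs; the illustrative part is the turn structure, in which the recommender sees only the dialogue history, never a target item, and the user simulator sees only a preference summary:

```python
def simulate_dialogue(user_sim, rec_sim, preference_summary, max_turns=5):
    """Roll out a conversation between two independent simulators.
    Neither side sees a predetermined target item: the user simulator
    knows only a preference summary, and the recommender must infer
    preferences from the dialogue history alone."""
    history = []
    for _ in range(max_turns):
        user_utt = user_sim(preference_summary, history)
        history.append(("user", user_utt))
        if "ACCEPT" in user_utt:  # assumed acceptance marker
            break
        rec_utt = rec_sim(history)
        history.append(("recommender", rec_utt))
    return history

# Stub simulators standing in for two fine-tuned LLMs.
def stub_user(summary, history):
    if any("sci-fi" in utt for role, utt in history if role == "recommender"):
        return "ACCEPT, that sounds great."
    return f"I'm looking for something like: {summary}"

def stub_recommender(history):
    last_user = history[-1][1]
    return "How about a sci-fi classic?" if "space" in last_user else "Tell me more."

dialogue = simulate_dialogue(stub_user, stub_recommender, "space adventures")
print(len(dialogue))  # → 3 turns: ask, recommend, accept
```

The resulting transcripts can then be used as training data without any reference conversation to imitate, which is what "reference-free" refers to here.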
Presenters
Jerome Ramos
University College London
Co-Authors
Feng Xia
University Of Sheffield
Xi Wang
University Of Sheffield
Shubham Chatterjee
MST
Xiao Fu
University College London
Hossein A. Rahmani
University College London
Aldo Lipani
University College London

Improving Conversational Recommendation with Contextual Adaptation of External Recommenders and LLM-based Reranking

Full papers | 02:30 PM - 04:00 PM (Europe/Amsterdam) | 2026/04/01 12:30:00 UTC - 2026/04/01 14:00:00 UTC
We tackle the challenge of integrating large language models (LLMs) with external recommender systems to enhance domain expertise in conversational recommender systems (CRS). Current LLM-based CRS approaches primarily rely on zero/few-shot methods for generating item recommendations based on user queries, but this method faces two significant challenges: (1) without domain-specific adaptation, LLMs frequently recommend items not in the target item space, resulting in low recommendation accuracy; and (2) LLMs largely rely on dialogue context for content-based recommendations, neglecting the collaborative relationships among item sequences. To address these limitations, we introduce the CARE (Contextual Adaptation of Recommenders) framework. CARE (a) integrates external recommender systems as domain experts, producing candidate items through entity-level insights, and (b) customizes LLMs as rerankers to enhance accuracy by leveraging contextual information. Our results demonstrate that incorporating the CARE framework significantly enhances the recommendation accuracy of LLMs by an average of 54% and 25% on the ReDial and INSPIRED datasets, respectively. The most effective CARE strategy involves LLMs selecting and reranking candidate items provided by external recommenders based on contextual insights.
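The most effective strategy reported — an external recommender proposes in-domain candidates and the LLM reranks them using dialogue context — can be sketched as a two-stage pipeline. The popularity recommender and the keyword-overlap "reranker" below are toy stand-ins for the framework's actual components:

```python
def care_pipeline(dialogue_context, recommender, reranker, k=3):
    """Two-stage CRS sketch: (a) an external recommender supplies
    candidates from the target item space, so the LLM cannot recommend
    out-of-catalogue items; (b) an LLM-style reranker reorders the
    candidates using the dialogue context."""
    candidates = recommender(dialogue_context)[:k]
    return reranker(dialogue_context, candidates)

# Toy external recommender: a fixed popularity-ranked catalogue.
catalogue = ["The Matrix", "Titanic", "Alien", "Notting Hill"]
popularity_recommender = lambda context: catalogue

def keyword_reranker(context, candidates):
    # Stand-in for the LLM reranker: prefer items whose title shares
    # a word with the dialogue context (sorted is stable, so ties keep
    # the external recommender's original order).
    words = set(context.lower().split())
    overlap = lambda item: len(words & set(item.lower().split()))
    return sorted(candidates, key=overlap, reverse=True)

ranked = care_pipeline("I loved alien movies", popularity_recommender, keyword_reranker)
print(ranked[0])  # → 'Alien'
```

The split mirrors the two failure modes in the abstract: stage (a) fixes the item-space problem, stage (b) injects the contextual signal the base recommender lacks.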
Presenters
Chuang Li
National University Of Singapore
Co-Authors
Weida Liang
Hengchang Hu
See-Kiong Ng
Min-Yen Kan
Haizhou Li
Yang Deng
Assistant Professor, Singapore Management University