Sim4IA-Bench: A User Simulation Benchmark Suite for NextQuery and Utterance Prediction

Abstract Summary
Validating user simulation is difficult due to the lack of established measures and benchmarks, which makes it challenging to assess whether a simulator accurately reflects real user behavior. As part of the Sim4IA Micro-Shared Task at the Sim4IA Workshop at SIGIR 2025, we present Sim4IA-Bench, a simulation benchmark suite for next-query and utterance prediction, the first of its kind in the IR community. The dataset in the suite comprises 160 real-world search sessions from the CORE search engine. For 70 of these sessions, up to 62 simulator runs are available, divided into Task A and Task B, in which different approaches predicted users' next search queries or utterances. Sim4IA-Bench provides a basis for evaluating and comparing user simulation approaches and for developing new measures of simulator validity. Although modest in size, the suite is the first publicly available benchmark that links real search sessions with simulated next-query predictions. Beyond serving as a testbed for next-query prediction, it also enables exploratory studies on query reformulation behavior, intent drift, and interaction-aware retrieval evaluation. We also introduce a new measure for evaluating next-query predictions in this task. By making the suite publicly available, we aim to promote reproducible research and stimulate further work on realistic and explainable user simulation for information access: https://github.com/irgroup/Sim4IA-Bench.
Abstract ID: NKDR135
Submission Type

Authors:
PhD Student, TH Köln
Professor, TH Köln
Professor, University Of Stavanger

Abstracts With Same Type

Abstract ID | Abstract Title | Abstract Topic | Submission Type | Primary Author
NKDR132 | | | Resource | Mr. Jan Heinrich Merker
NKDR140 | | User aspects in IR | Resource | Saber Zerhoudi
NKDR129 | | Machine Learning and Large Language Models; Societally-motivated IR research | Resource | Ricardo Campos
NKDR131 | | Machine Learning and Large Language Models; Societally-motivated IR research | Resource | Ricardo Campos
NKDR93 | | Evaluation research; Machine Learning and Large Language Models; Search and ranking | Resource | Laura Caspari
NKDR125 | | Evaluation research; Recommender systems | Resource | Ludovico Boratto