Abstract
User simulators are essential for evaluating search systems, but they typically reproduce user actions without modeling the underlying thought process. This gap exists because large-scale interaction logs record what users do, not what they might be thinking or feeling, such as confusion or satisfaction. To address this gap, we present a framework that computationally infers cognitive traces from behavioral data. Our method uses a multi-agent language model system, grounded in Information Foraging Theory and calibrated against human expert judgments, to annotate user actions with their likely cognitive states. To demonstrate the value of these traces, we show that they significantly improve a model's ability to predict when a user will abandon a search task. We release annotations for several public datasets, including AOL and Stack Overflow, along with an open-source tool that lets researchers apply our method to their own data. Together, these resources support building more human-like user simulators and assessing retrieval systems on user-oriented dimensions of performance.