Abstract
Large generative language models (LLMs) are a foundational technology for both research and the development of information services, and serve as tools for solving challenging text analysis problems. The ELOQUENT lab develops shared tasks for experimenting with the resilience of large language models applied to practical tasks. In 2026, the tasks will include the Voight Kampff task, which explores how well human-authored text can be distinguished from machine-generated text; the Robustness and Consistency task, which explores how well a model adapts to the culture expressed by a language; and the Quiz task, which experiments with generating and scoring quiz material for use in real educational situations.