ELOQUENT CLEF Shared Tasks for Evaluation of Generative Language Model Quality, 2026 edition

Abstract Summary
Large generative language models (LLMs) are a foundational technology for both research and information service development, and are used as a tool for solving challenging text analysis problems. The ELOQUENT lab develops shared tasks for experimenting with the resilience of large language models applied to practical tasks. In 2026, the tasks will include the Voight-Kampff task, which explores how well human-authored text can be distinguished from machine-generated text; the Robustness and Consistency task, which explores how well a model adapts to the culture expressed by a language; and the Quiz task, which experiments with generating and scoring quiz material for use in real educational situations.
Abstract ID: NKDR75
Affiliations:
- Researcher, AMD Silo AI
- IT University of Copenhagen
- Lead Program Management Data Science & AI, Fraunhofer IAIS
- Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, 38000 Grenoble, France
- Professor, IRIT, Univ. Toulouse