Abstract
"Uncertainty quantification (UQ) has gained increasing importance in natural language processing (NLP), offering a conceptual and methodological framework to address critical issues such as hallucinations in the answers of large language models (LLMs), detection of low-quality responses, out-of-distribution detection, and reduction of response latency, among others. While UQ for text classification models in NLP has been covered in previous tutorials, applying UQ to LLMs poses far greater challenges. This complexity stems from the fact that LLMs generate sequences of conditionally dependent predictions with varying levels of importance. As a result, many UQ techniques that are effective for classification models are either ineffective or not directly applicable to LLMs. In this tutorial, we cover foundational concepts of UQ for LLMs, present state-of-the-art techniques, demonstrate practical applications of UQ in various tasks, and equip researchers and practitioners with tools for developing new UQ methods and harnessing uncertainty in various contexts. Recently, retrieval-augmented generation (RAG) systems have become the backbone of many modern LLM-based applications. Augmenting the model's input with information retrieved from external sources poses unique challenges and opportunities for UQ. In this edition of the tutorial, we cover the techniques most suitable for RAG-based LLMs and touch upon applications of uncertainty in agentic frameworks. Through this tutorial, we aim to lower the barrier to entry into UQ research and applications for individual researchers and developers."