Abstract
This paper introduces XProvence, a multilingual zero-cost context pruning model for Retrieval-Augmented Generation (RAG) that supports 100+ languages. Motivated by the growing use of RAG systems across diverse languages, we explore several strategies for generalizing the Provence framework, which first integrated efficient zero-cost context pruning directly into the re-ranking architecture, beyond English. On four multilingual Question Answering benchmarks, we show that XProvence prunes RAG contexts with minimal-to-no performance degradation while outperforming strong baselines.