Large language models (LLMs) can increase users' perceived trust by verbalizing confidence in their outputs. However, prior work has shown that LLMs are often overconfident, making their stated confidence unreliable since it does not consistently align with factual accuracy. To better understand ...