Ask-a-Local: Detecting Hallucinations With Specialized Model Divergence

Hallucinations in large language models (LLMs), instances where models generate plausible but factually incorrect information, present a significant challenge for AI.

We introduce "Ask a Local", a novel hallucination detection method that exploits the intuition that specialized models exhibit greater surprise when they encounter domain-specific inaccuracies. Our approach computes the divergence between the perplexity distributions of language-specialized models to identify potentially hallucinated spans, without requiring external knowledge bases or supervised training data. The method is particularly well suited to multilingual settings, since it scales naturally to new languages without adaptation, external data sources, or additional training. Moreover, we select computationally efficient models, yielding a scalable solution applicable to a wide range of languages and domains.
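To make the core idea concrete, here is a minimal sketch of the span-scoring step, assuming Hugging Face `transformers` causal language models. Everything in it is illustrative: the model identifiers are placeholders, and the scalar log-perplexity gap stands in for the paper's divergence between perplexity distributions, so this is not the repository's actual implementation.

```python
# Minimal sketch of the span-scoring idea, assuming Hugging Face `transformers`.
# Model identifiers below are placeholders, not the models used in the paper.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def perplexity(model, tokenizer, text: str) -> float:
    """Perplexity of `text` under a causal LM (exponentiated mean token NLL)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the mean
        # cross-entropy over the sequence (labels are shifted internally).
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())


def divergence_score(spec_model, spec_tok, gen_model, gen_tok, span: str) -> float:
    """Log-perplexity gap between a language-specialized model and a general
    multilingual reference. Large positive values mean the specialist is much
    more surprised by the span, which is the signal exploited here."""
    return math.log(perplexity(spec_model, spec_tok, span)) - math.log(
        perplexity(gen_model, gen_tok, span)
    )


if __name__ == "__main__":
    # Placeholder identifiers: substitute any specialized/general causal LM pair.
    spec_tok = AutoTokenizer.from_pretrained("org/specialized-lm")
    spec_model = AutoModelForCausalLM.from_pretrained("org/specialized-lm").eval()
    gen_tok = AutoTokenizer.from_pretrained("org/general-multilingual-lm")
    gen_model = AutoModelForCausalLM.from_pretrained("org/general-multilingual-lm").eval()

    for span in ["The Eiffel Tower is in Madrid.", "The Eiffel Tower is in Paris."]:
        print(span, divergence_score(spec_model, spec_tok, gen_model, gen_tok, span))
```

In a pipeline along these lines, spans whose score exceeds a tuned threshold would be flagged as potentially hallucinated; the threshold and the span granularity are free parameters of the sketch.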

We evaluate our approach on a dataset of question-answer pairs spanning 14 languages and show that it is effective at detecting hallucinations in LLM outputs.
