|
--- |
|
license: cc-by-nc-sa-4.0 |
|
datasets: |
|
- AimonLabs/HDM-Bench |
|
language: |
|
- en |
|
--- |
|
|
|
<img src="https://huggingface.co/AimonLabs/hallucination-detection-model/resolve/main/aimon_logo.svg" alt="Aimon Labs Inc" style="background-color: white;" width="400"/> |
|
|
|
<img src="https://huggingface.co/AimonLabs/hallucination-detection-model/resolve/main/explainer2.gif" width="400" alt="HDM-2 Explainer"/> |
|
|
|
# Model Card for Hallucination Detection Model (HDM-2-3B) |
|
|
|
|
|
|
<table> |
|
<tr> |
|
<td><strong>Paper:</strong></td> |
|
<td><a href="https://arxiv.org/abs/2504.07069"><img src="https://img.shields.io/badge/arXiv-2504.07069-b31b1b.svg" alt="arXiv Badge" /></a> <em>HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification.</em></td> |
|
</tr> |
|
<tr> |
|
<td><strong>Notebook:</strong></td> |
|
<td><a href="https://colab.research.google.com/drive/1HclyB06t-wZVIxuK6AlyifRaf77vO5Yz?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Colab Badge" /></a></td> |
|
</tr> |
|
<tr> |
|
<td><strong>GitHub Repository:</strong></td> |
|
<td><a href="https://github.com/aimonlabs/hallucination-detection-model"><img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white" alt="GitHub Badge" /></a></td> |
|
</tr> |
|
<tr> |
|
<td><strong>HDM-Bench Dataset:</strong></td> |
|
<td><a href="https://huggingface.co/datasets/AimonLabs/HDM-Bench"><img src="https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-md-dark.svg" alt="HF Dataset Badge" /></a></td> |
|
</tr> |
|
<tr> |
|
<td><strong>HDM-2-3B Model:</strong></td> |
|
<td><a href="https://huggingface.co/AimonLabs/hallucination-detection-model"><img src="https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-md-dark.svg" alt="HF Model Badge" /></a></td> |
|
</tr> |
|
<tr> |
|
<td><strong>Discord Community:</strong></td> |
|
<td><a href="https://discord.gg/MKe6ZkSbWD"><img src="https://cdn.prod.website-files.com/6257adef93867e50d84d30e2/66e3d80db9971f10a9757c99_Symbol.svg" alt="Discord Logo" /></a></td> |
|
</tr> |
|
</table> |
|
|
|
|
|
|
|
## Introduction |
|
|
|
Most judge models used in industry today are not specialized for hallucination evaluation.

Developers who rely on them often struggle with inconsistent scores, high variance, high latency, high cost, and prompt sensitivity.

HDM-2 addresses these challenges while providing industry-first, state-of-the-art features.
|
|
|
|
|
## Highlights: |
|
|
|
- Outperforms existing baselines on RagTruth, TruthfulQA, and our new HDM-Bench benchmark. |
|
|
|
- **Context-based** hallucination evaluation against user-provided or retrieved documents.

- **Common knowledge** checks that flag contradictions of widely accepted facts.

- **Phrase-, token-, and sentence-level** hallucination identification with token-level probability **scores**.
|
|
|
- Generalized model that works well across a variety of domains such as Finance, Healthcare, Legal, and Insurance. |
|
|
|
- Operates within a **latency** budget of **500ms** on a single L4 GPU, which is especially beneficial for agentic use cases.
|
|
|
## Model Overview: |
|
|
|
HDM-2 is a modular, production-ready, multi-task hallucination (or inaccuracy) evaluation model designed to validate the factual groundedness of LLM outputs in enterprise environments, for both **contextual** and **common knowledge** evaluations. |
|
HDM-2 introduces a novel taxonomy-guided, span-level validation architecture focused on precision, explainability, and adaptability. |
|
The figure below shows the workflow we use to determine whether an LLM response is hallucinated (left) and an example illustrating the taxonomy of an LLM response (right).
|
|
|
| HDM-2 Model Workflow | Example of Enterprise LLM Response Taxonomy | |
|
| --- | --- | |
|
|  |  | |
|
|
|
|
|
### Enterprise Models |
|
|
|
- The Enterprise version offers a way to incorporate "enterprise knowledge" into hallucination evaluations: knowledge specific to your company, domain, or industry that might not be present in your context.

- Another important feature of the Enterprise version is explanations. Please reach out to us for Enterprise licensing.

- Other premium capabilities planned for the Enterprise version include improved accuracy, even lower latency, and additional use cases such as math and code.

- Beyond hallucination detection, we have SOTA models for prompt/instruction adherence, RAG relevance, and promptable reranking. The instruction-adherence model is general-purpose and extremely low-latency, and it performs well with a wide variety of instructions, including safety, style, and format constraints.
|
|
|
|
|
### Performance - Model Accuracy |
|
|
|
See the paper (linked above) for more details.
|
|
|
| **Dataset** | **Precision** | **Recall** | **F1 Score** |
| :---------: | :-----------: | :--------: | :----------: |
| HDM-Bench   | 0.87          | 0.84       | 0.855        |
| TruthfulQA  | 0.82          | 0.78       | 0.80         |
| RagTruth    | 0.85          | 0.81       | 0.83         |
|
|
|
|
|
### Latency |
|
|
|
|
|
| **Device**              | **Avg. Latency (s)** | **Median Latency (s)** | **95th Percentile (s)** | **Max Latency (s)** |
| ----------------------- | -------------------- | ---------------------- | ----------------------- | ------------------- |
| Nvidia A100             | 0.204                | 0.201                  | 0.208                   | 1.32                |
| Nvidia L4 (recommended) | 0.207                | 0.203                  | 0.220                   | 1.29                |
| Nvidia T4               | 0.935                | 0.947                  | 1.487                   | 1.605               |
| CPU                     | 261.92               | 242.76                 | 350.76                  | 356.96              |
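The numbers above were measured in our environment. If you want to profile latency on your own hardware, a minimal sketch along the lines below should work; it assumes the `hdm2` package is installed (see the quick-start below), uses placeholder strings, and simply times repeated calls to `hdm_model.apply`.

```python
import time

from hdm2 import HallucinationDetectionModel

# Minimal latency-profiling sketch (not an official benchmark script);
# it assumes the `hdm2` package is installed and a GPU is available.
hdm_model = HallucinationDetectionModel()

prompt = "Summarize the enrollment numbers."
context = "In Q1 2025, the clinic enrolled 573 patients across four trials."
response = "The clinic enrolled 999 patients in Q1 2025."

# Warm-up call so model loading and CUDA initialization are not counted.
hdm_model.apply(prompt, context, response)

latencies = []
for _ in range(20):
    start = time.perf_counter()
    hdm_model.apply(prompt, context, response)
    latencies.append(time.perf_counter() - start)

latencies.sort()
print(f"median latency: {latencies[len(latencies) // 2]:.3f}s")
print(f"p95 latency:    {latencies[int(len(latencies) * 0.95)]:.3f}s")
```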
|
|
|
|
|
|
|
## How to Get Started with the Model |
|
|
|
Use the code below to get started with the model. |
|
|
|
Install the inference package:
|
|
|
```bash |
|
pip install hdm2 --quiet |
|
``` |
|
|
|
Run the HDM-2 model:
|
|
|
```python |
|
# Load the model from HuggingFace into the GPU |
|
|
|
from hdm2 import HallucinationDetectionModel |
|
hdm_model = HallucinationDetectionModel() |
|
|
|
prompt = "You are an AIMon Bot. Give me an overview of the hospital's clinical trial enrollments for Q1 2025." |
|
context = """In Q1 2025, Northbridge Medical Center enrolled 573 patients across four major clinical trials. |
|
The Oncology Research Study (ORION-5) had the highest enrollment with 220 patients. |
|
Cardiology trials, specifically the CardioNext Study, saw 145 patients enrolled. |
|
Neurodegenerative research trials enrolled 88 participants. |
|
Orthopedic trials enrolled 120 participants for regenerative joint therapies. |
|
""" |
|
response = """Hi, I am AIMon Bot! |
|
I will be happy to help with an overview of the hospital's clinical trial enrollments for Q1 2025. |
|
Northbridge Medical Center enrolled 573 patients across major clinical trials in Q1 2025. |
|
Heart disease remains the leading cause of death globally, according to the World Health Organization. |
|
For more information about our clinical research programs, please contact the Northbridge Medical Center Research Office. |
|
Northbridge has consistently led regional trial enrollments since 2020, particularly in oncology and cardiac research. |
|
In Q1 2025, Northbridge's largest enrollment was in a neurology-focused trial with 500 patients studying advanced orthopedic devices. |
|
Can I help you with something else? |
|
""" |
|
|
|
# Ground truth: |
|
# The largest single study (ORION-5) enrolled 220 patients, not 500 as the response claims; 573 was the total across all trials.
|
# This sentence is not in the provided context, and is enterprise knowledge: Northbridge has consistently led regional trial enrollments since 2020, particularly in oncology and cardiac research. |
|
|
|
# Detect hallucinations with default parameters |
|
|
|
results = hdm_model.apply(prompt, context, response) |
|
``` |
|
|
|
Print the results |
|
|
|
```python
# Utility function to help with printing the model output
def print_results(results):
    # print(results)  # uncomment to inspect the full raw output

    print(f"\nHallucination severity: {results['adjusted_hallucination_severity']:.4f}")

    # Print hallucinated sentences
    if results['candidate_sentences']:
        print("\nPotentially hallucinated sentences:")
        is_ck_hallucinated = False
        for sentence_result in results['ck_results']:
            if sentence_result['prediction'] == 1:  # 1 indicates hallucination
                print(f"- {sentence_result['text']} (Probability: {sentence_result['hallucination_probability']:.4f})")
                is_ck_hallucinated = True
        if not is_ck_hallucinated:
            print("No hallucinated sentences detected.")
    else:
        print("\nNo hallucinated sentences detected.")

print_results(results)
```
|
|
|
``` |
|
OUTPUT: |
|
|
|
Hallucination severity: 0.9531 |
|
|
|
Potentially hallucinated sentences: |
|
- Northbridge has consistently led regional trial enrollments since 2020, particularly in oncology and cardiac research. (Probability: 0.9180) |
|
- In Q1 2025, Northbridge's largest enrollment was in a neurology-focused trial with 500 patients studying advanced orthopedic devices. (Probability: 1.0000) |
|
``` |
|
|
|
Notice that:

- Innocuous statements like *Can I help you with something else?* and *Hi, I am AIMon Bot!* are not marked as hallucinations.

- Common-knowledge statements are correctly filtered out by the common-knowledge checker even though they are not present in the context, e.g., *Heart disease remains the leading cause of death globally, according to the World Health Organization.*

- Statements that rely on enterprise knowledge cannot be handled by this model. Please contact us if you want additional capabilities for your use cases.
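If you want to act on these results programmatically, for example to gate a step in an agentic pipeline, a minimal sketch using only the fields shown above is given below. The `0.5` severity threshold is an illustrative value that you should tune for your application.

```python
# Minimal gating sketch built on the fields shown above; the 0.5 threshold
# is an illustrative value, not an official recommendation.
SEVERITY_THRESHOLD = 0.5

def is_response_grounded(results, threshold=SEVERITY_THRESHOLD):
    """Return (grounded, flagged_sentences) for an HDM-2 results dict."""
    flagged = [
        r["text"]
        for r in results.get("ck_results", [])
        if r["prediction"] == 1  # 1 indicates hallucination
    ]
    grounded = results["adjusted_hallucination_severity"] < threshold
    return grounded, flagged

grounded, flagged_sentences = is_response_grounded(results)
if not grounded:
    print("Response flagged; potentially hallucinated sentences:")
    for sentence in flagged_sentences:
        print(f"- {sentence}")
```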
|
|
|
To display word-level annotations, use the following code snippet. |
|
|
|
```python
from hdm2.utils.render_utils import display_hallucination_results_words

display_hallucination_results_words(
    results,
    show_scores=False,       # True if you want to display scores alongside the candidate words
    color_scheme="blue-red",
    separate_classes=True,   # False if you don't want separate colors for Common Knowledge sentences
)
```
|
|
|
Word-level annotations will be displayed as shown below. |
|
|
|
- Color tones indicate the scores (darker color means higher score). |
|
- Words with a red background are hallucinations.

- Words with a blue background are context hallucinations that the common-knowledge checker marked as problem-free.

- Words with a white background are problem-free text.

- Finally, all candidate sentences (sentences that contain context hallucinations) are shown at the bottom, together with results from the common-knowledge checker.
|
|
|
 |
|
|
|
### Model Description |
|
|
|
- Model ID: HDM-2-3B |
|
|
|
- Developed by: AIMon Labs, Inc. |
|
|
|
- Language(s) (NLP): English |
|
|
|
- License: CC BY-NC-SA 4.0 |
|
|
|
- License URL: <https://creativecommons.org/licenses/by-nc-sa/4.0/> |
|
|
|
- Please reach out to us at <info@aimon.ai> for enterprise and commercial licensing.
|
|
|
|
|
### Model Sources |
|
|
|
- Code repository: [GitHub](https://github.com/aimonlabs/hallucination-detection-model) |
|
|
|
- Model weights: [HuggingFace](https://huggingface.co/AimonLabs/hallucination-detection-model/) |
|
|
|
- Paper: [arXiv](https://arxiv.org/abs/2504.07069) |
|
|
|
- Demo: [Google Colab](https://colab.research.google.com/drive/1HclyB06t-wZVIxuK6AlyifRaf77vO5Yz) |
|
|
|
|
|
## Uses |
|
|
|
### Direct Use |
|
|
|
1. Automating hallucination or inaccuracy evaluations

2. Assisting humans in evaluating LLM responses for hallucinations

3. Phrase-, word-, or sentence-level identification of where hallucinations lie

4. Selecting the LLM with the fewest hallucinations for specific use cases

5. Automatic re-prompting for better LLM responses (a minimal sketch follows below)
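As an illustration of items 4 and 5, here is a minimal re-prompting sketch. It reuses the `hdm2` API from the quick-start above; `generate_response` is a hypothetical placeholder for your own LLM call, and the `0.5` severity threshold and three-attempt limit are illustrative values, not official recommendations.

```python
# Hypothetical re-prompting loop: `generate_response` stands in for your own
# LLM call; the severity threshold and attempt limit are illustrative values.
from hdm2 import HallucinationDetectionModel

hdm_model = HallucinationDetectionModel()

def generate_grounded_response(generate_response, prompt, context,
                               max_attempts=3, severity_threshold=0.5):
    response = generate_response(prompt, context)
    for _ in range(max_attempts):
        results = hdm_model.apply(prompt, context, response)
        if results["adjusted_hallucination_severity"] < severity_threshold:
            return response  # grounded enough; stop re-prompting
        # Collect the sentences HDM-2 flagged and ask the LLM to revise them.
        flagged = [r["text"] for r in results["ck_results"] if r["prediction"] == 1]
        feedback = ("Revise the answer using only the provided context. "
                    "These sentences were unsupported: " + " ".join(flagged))
        response = generate_response(prompt + "\n\n" + feedback, context)
    return response
```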
|
|
|
|
|
## Limitations |
|
|
|
- Annotations of "common knowledge" may still contain subjective judgments |
|
|
|
|
|
## Technical Specifications |
|
|
|
See the [paper](https://arxiv.org/abs/2504.07069) for more details.
|
|
|
|
|
## Citation: |
|
|
|
```bibtex
|
@misc{paudel2025hallucinothallucinationdetectioncontext, |
|
title={HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification}, |
|
author={Bibek Paudel and Alexander Lyzhov and Preetam Joshi and Puneet Anand}, |
|
year={2025}, |
|
eprint={2504.07069}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL}, |
|
url={https://arxiv.org/abs/2504.07069}, |
|
} |
|
``` |
|
|
|
|
|
## Model Card Authors |
|
|
|
@bibekp, @alexlyzhov-aimon, @pjoshi30, @aimonp |
|
|
|
|
|
## Model Card Contact |
|
|
|
<info@aimon.ai>, @aimonp, @pjoshi30 |
|
|
|
## [AIMon Website](https://www.aimon.ai)