License

This model is a fine-tuned variant of Meta’s Llama 3.1 8B Instruct and is distributed under the Llama 3.1 Community License.

Built with Llama.


Citation

If you use this model or the associated work, please cite the following paper:

@misc{ellinger2025dependsresolvingreferentialambiguity,
      title={It Depends: Resolving Referential Ambiguity in Minimal Contexts with Commonsense Knowledge},
      author={Lukas Ellinger and Georg Groh},
      year={2025},
      url={https://arxiv.org/abs/2509.16107},
      note={Accepted at the UncertaiNLP Workshop @ EMNLP 2025}
}
Model weights: Safetensors format, 8B parameters, BF16 tensors.
Model repository: lukasellinger/uncertain-dpo-llama-v3p1-8b-instruct (fine-tuned from a Llama 3.1 8B Instruct base).
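A minimal loading sketch follows, assuming the checkpoint is pulled from the Hugging Face Hub with the transformers library (this loading path is an assumption, not documented above); the prompt is purely illustrative.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this model card; loading via transformers is an assumption.
repo_id = "lukasellinger/uncertain-dpo-llama-v3p1-8b-instruct"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 tensor type of the stored weights
    device_map="auto",
)

# Hypothetical prompt, chosen only to illustrate chat-style usage.
messages = [{"role": "user", "content": "Hand me the remote on the couch."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))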
