🦊 JQL: Judging Quality across Languages

Scalable and lightweight multilingual data filtering with LLM-based annotators

🧩 Main Pipeline Steps

Figure 1: Overview of the JQL pipeline
  1. 📋 Ground Truth Creation: Human annotators label monolingual documents based on a structured instruction prompt. These documents are translated into all target languages to create a multilingual gold-standard dataset. (See Figure 1)
  2. 🤖 LLM-as-a-Judge Selection & Data Annotation: Strong multilingual LLMs (e.g., Gemma, Mistral, LLaMA) are evaluated against the ground truth, and top-performing models are used to produce synthetic annotations. (See Figure 1)
  3. 🪶 Lightweight Annotator Training: Train compact regression heads on frozen multilingual embeddings to create efficient, high-throughput annotators. (See Figure 1)
  4. 🚀 Scalable Data Filtering: Use trained annotators to filter large-scale pretraining corpora using quantile thresholds. (See Figure 1)
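Step 3 can be illustrated with a minimal sketch: a regression head fitted on top of precomputed, frozen document embeddings. Everything below (embedding dimension, score range, the synthetic data, the closed-form ridge solve) is an invented stand-in for illustration, not the architecture or training setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen multilingual embeddings (e.g. from a multilingual
# sentence encoder) and LLM-judge quality scores; both are synthetic here.
X = rng.normal(size=(1000, 64))  # frozen document embeddings, never updated
y = np.clip(X[:, :8].sum(axis=1) + rng.normal(scale=0.3, size=1000), -5.0, 5.0)

X_train, y_train = X[:800], y[:800]
X_test = X[800:]

# The "lightweight annotator": a ridge-regression head on the frozen
# embeddings, solved in closed form: w = (X^T X + alpha * I)^-1 X^T y.
alpha = 1.0
w = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(64), X_train.T @ y_train)

predicted = X_test @ w  # one quality score per held-out document
print(predicted.shape)
```

Because only the small head is trained, annotation at inference time reduces to one embedding lookup plus a matrix-vector product, which is what makes high-throughput corpus scoring feasible.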
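Step 4, quantile-threshold filtering, can be sketched as follows. The scores, the quantile value (70th percentile), and the shard size are hypothetical; the paper's actual thresholds may differ per language and corpus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annotator quality scores for one corpus shard.
scores = rng.normal(loc=2.5, scale=1.0, size=10_000)

# Keep only documents at or above the 70th percentile of the score
# distribution, i.e. retain roughly the top 30% of the shard.
threshold = np.quantile(scores, 0.70)
keep_mask = scores >= threshold

print(f"threshold={threshold:.3f}, kept={keep_mask.mean():.0%}")
```

Using a quantile rather than a fixed absolute cutoff keeps the retention rate comparable across languages whose score distributions differ.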

📊 Results

📁 Available Artifacts

📜 Citation

If you use JQL, the annotations, or the pretrained annotators, please cite the paper:

@article{your2024jql,
  title={JQL: Judging Quality across Languages},
  author={Your, Name and Collaborators, Here},
  journal={Conference or preprint archive},
  year={2024}
}