---
language:
- en
- ar
tags:
- translation
license: cc-by-4.0
datasets:
- quickmt/quickmt-train.ar-en
model-index:
- name: quickmt-ar-en
  results:
  - task:
      name: Translation ara-eng
      type: translation
      args: ara-eng
    dataset:
      name: flores101-devtest
      type: flores_101
      args: ara_Arab eng_Latn devtest
    metrics:
    - name: CHRF
      type: chrf
      value: 66.98
    - name: BLEU
      type: bleu
      value: 42.79
    - name: COMET
      type: comet
      value: 87.4
---


# `quickmt-ar-en` Neural Machine Translation Model

`quickmt-ar-en` is a reasonably fast and reasonably accurate neural machine translation model for translation from `ar` into `en`.


## Model Information

* Trained using [`eole`](https://github.com/eole-nlp/eole)
* 185M-parameter "big" transformer with 8 encoder layers and 2 decoder layers
* 20k SentencePiece vocabularies
* Exported for fast inference to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format
* Training data: https://huggingface.co/datasets/quickmt/quickmt-train.ar-en/tree/main

See the `eole` model configuration in this repository for further details and the `eole-model` for the raw `eole` (PyTorch) model.

## Usage with `quickmt`

If you want to run GPU inference, install the NVIDIA CUDA Toolkit first.

Next, install the `quickmt` python library and download the model:

```bash
git clone https://github.com/quickmt/quickmt.git
pip install ./quickmt/
quickmt-model-download quickmt/quickmt-ar-en ./quickmt-ar-en
```

Finally, use the model in Python:

```python
from quickmt import Translator

# Auto-detects GPU; set device="cpu" to force CPU inference
t = Translator("./quickmt-ar-en/", device="auto")

# Translate - set beam_size=5 for higher quality (but slower speed)
sample_text = 'نبه الدكتور إيهود أور -أستاذ الطب في جامعة دالهوزي في هاليفاكس، نوفا سكوتيا ورئيس الشعبة الطبية والعلمية في الجمعية الكندية للسكري- إلى أن البحث لا يزال في أيامه الأولى.'

t(sample_text, beam_size=5)
> 'Dr. Ehud Orr, professor of medicine at Dalhousie University in Halifax, Nova Scotia and head of the medical and scientific division of the Canadian Diabetes Association, warned that the research is still in its early days.'

# Get alternative translations by sampling
# You can pass any CTranslate2 `translate_batch` arguments
t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9)
> 'Professor of Medicine at Dalhousie University in Halifax, Nova Scotia and chairman of the Medical and Scientific Division at the Canadian Diabetes Society, cautioned that the research was still in its early days.'
```

The model is in CTranslate2 format and the tokenizers are SentencePiece, so you can use `ctranslate2` directly instead of going through `quickmt` (a minimal sketch is included at the end of this page). It is also possible to use this model with e.g. [LibreTranslate](https://libretranslate.com/), which also uses `ctranslate2` and `sentencepiece`.

## Metrics

`bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("ara_Arab" -> "eng_Latn"). `comet22` is calculated with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate (using `ctranslate2`) the flores-devtest dataset (1012 sentences) on an RTX 4070s GPU with batch size 32 (faster speeds are possible with a larger batch size).
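As a rough sketch of how these scores can be reproduced (the `load_flores_devtest` helper is hypothetical and stands in for however you load the source sentences, model outputs, and reference translations):

```python
import sacrebleu
from comet import download_model, load_from_checkpoint

# Assumption: srcs, hyps and refs are lists of strings (Arabic sources,
# model outputs, English references) loaded by a hypothetical helper.
srcs, hyps, refs = load_flores_devtest()

# sacrebleu corpus-level BLEU and chrF2 (sacrebleu's default chrF uses beta=2)
bleu = sacrebleu.corpus_bleu(hyps, [refs])
chrf = sacrebleu.corpus_chrf(hyps, [refs])
print(f"BLEU: {bleu.score:.2f}  chrF2: {chrf.score:.2f}")

# COMET22 system-level score with the default wmt22-comet-da model
comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
comet = comet_model.predict(
    [{"src": s, "mt": h, "ref": r} for s, h, r in zip(srcs, hyps, refs)],
    batch_size=32,
)
print(f"COMET22: {comet.system_score:.4f}")
```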
| Model                            |   bleu |   chrf2 |   comet22 |   Time (s) |
|:---------------------------------|-------:|--------:|----------:|-----------:|
| quickmt/quickmt-ar-en            |  42.79 |   66.98 |     87.4  |       0.88 |
| Helsinki-NLP/opus-mt-ar-en       |  34.22 |   61.26 |     84.5  |       3.78 |
| facebook/nllb-200-distilled-600M |  39.13 |   64.14 |     86.22 |      21.58 |
| facebook/nllb-200-distilled-1.3B |  42.29 |   66.55 |     87.55 |      37.7  |
| facebook/m2m100_418M             |  29.41 |   57.68 |     82.21 |      18.5  |
| facebook/m2m100_1.2B             |  29.77 |   56.7  |     80.77 |      36.23 |

`quickmt-ar-en` is the fastest model by a wide margin and has the highest BLEU and chrF2 scores, with COMET quality on par with the much slower nllb-200-distilled-1.3B.
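## Using `ctranslate2` directly

As noted above, the exported model can also be driven with `ctranslate2` and `sentencepiece` without `quickmt`. A minimal sketch follows; the SentencePiece file names below are assumptions, so check the actual file names in the downloaded model directory:

```python
import ctranslate2
import sentencepiece as spm

model_dir = "./quickmt-ar-en"

# Assumption: the source/target SentencePiece file names are illustrative —
# adjust them to match what quickmt-model-download actually places here.
translator = ctranslate2.Translator(model_dir, device="auto")
sp_src = spm.SentencePieceProcessor(model_file=f"{model_dir}/src.spm.model")
sp_tgt = spm.SentencePieceProcessor(model_file=f"{model_dir}/tgt.spm.model")

text = 'نبه الدكتور إيهود أور -أستاذ الطب في جامعة دالهوزي في هاليفاكس، نوفا سكوتيا ورئيس الشعبة الطبية والعلمية في الجمعية الكندية للسكري- إلى أن البحث لا يزال في أيامه الأولى.'

# CTranslate2 takes (and returns) lists of token lists
tokens = sp_src.encode(text, out_type=str)
results = translator.translate_batch([tokens], beam_size=5)
print(sp_tgt.decode_pieces(results[0].hypotheses[0]))
```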