Dataset schema (per-column dtype and observed range):

| column | dtype | min | max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-09 12:33:01 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (550 classes) | — | — |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | — | — |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-09 12:32:40 |
| card | string (length) | 11 | 1.01M |
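These columns mirror the metadata the Hugging Face Hub API returns per model, so a slice like the records below can be rebuilt directly. A minimal sketch assuming the `huggingface_hub` client; the `author` filter value is only an example:

```python
from huggingface_hub import HfApi, ModelCard

api = HfApi()
# full=True includes downloads, likes, tags, library_name, etc.
for m in api.list_models(author="manancode", full=True, limit=3):
    card_text = ModelCard.load(m.id).content  # raw README, including the YAML header
    print(m.id, m.author, m.last_modified, m.downloads, m.likes,
          m.library_name, m.tags, m.pipeline_tag, m.created_at, len(card_text))
```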
manancode/opus-mt-en-kwn-ctranslate2-android
manancode
2025-08-16T11:13:21Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:12:55Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-kwn-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-kwn` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-kwn - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
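The OPUS-MT cards in this dump say only that the checkpoints were converted by an "automated conversion pipeline". For reference, a minimal sketch of how such an INT8 conversion is typically produced with CTranslate2's Transformers converter; the output directory name is illustrative:

```python
import ctranslate2.converters

# Convert the original OPUS-MT checkpoint to CTranslate2 format with INT8 quantization.
converter = ctranslate2.converters.TransformersConverter("Helsinki-NLP/opus-mt-en-kwn")
converter.convert("opus-mt-en-kwn-ct2", quantization="int8")
```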
manancode/opus-mt-en-kj-ctranslate2-android
manancode
2025-08-16T11:12:33Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:12:20Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-kj-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-kj` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-kj - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-iso-ctranslate2-android
manancode
2025-08-16T11:10:50Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:10:22Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-iso-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-iso` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-iso - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755341080
kojeklollipop
2025-08-16T11:10:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T11:10:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-en-iir-ctranslate2-android
manancode
2025-08-16T11:09:08Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:08:42Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-iir-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-iir` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-iir - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
Vegaandagev/Qwen2.5-VL-3B-Instruct-Thinking
Vegaandagev
2025-08-16T11:08:34Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:lmms-lab/multimodal-open-r1-8k-verified", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-16T04:15:55Z
--- base_model: Qwen/Qwen2.5-VL-3B-Instruct datasets: lmms-lab/multimodal-open-r1-8k-verified library_name: transformers model_name: Qwen2.5-VL-3B-Instruct-Thinking tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Qwen2.5-VL-3B-Instruct-Thinking This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) on the [lmms-lab/multimodal-open-r1-8k-verified](https://huggingface.co/datasets/lmms-lab/multimodal-open-r1-8k-verified) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Vegaandagev/Qwen2.5-VL-3B-Instruct-Thinking", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.22.0.dev0 - Transformers: 4.55.2 - Pytorch: 2.8.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
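The quick start in the card above drives a vision-language model through a text-only pipeline. For image+text inference, the base model's documented pattern applies; a hedged sketch, assuming `qwen-vl-utils` is installed and using a placeholder image URL:

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

model_id = "Vegaandagev/Qwen2.5-VL-3B-Instruct-Thinking"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image", "image": "https://example.com/figure.png"},  # placeholder image URL
    {"type": "text", "text": "Think step by step: what does this figure show?"},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images, videos = process_vision_info(messages)
inputs = processor(text=[text], images=images, videos=videos, padding=True, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```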
VoilaRaj/69_pMugfk
VoilaRaj
2025-08-16T11:08:25Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-16T11:04:31Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
manancode/opus-mt-en-hu-ctranslate2-android
manancode
2025-08-16T11:07:25Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:07:11Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-hu-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-hu` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-hu - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-ht-ctranslate2-android
manancode
2025-08-16T11:07:06Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:06:55Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-ht-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-ht` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-ht - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-hil-ctranslate2-android
manancode
2025-08-16T11:06:30Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:06:18Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-hil-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-hil` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-hil - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-he-ctranslate2-android
manancode
2025-08-16T11:05:54Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:05:40Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-he-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-he` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-he - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755340678
manusiaperahu2012
2025-08-16T11:05:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring long tuna", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T11:05:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring long tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-en-guw-ctranslate2-android
manancode
2025-08-16T11:04:50Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:04:38Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-guw-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-guw` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-guw - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
Kie-Fells/katie-price-jordan-flux-64dim
Kie-Fells
2025-08-16T11:04:22Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-15T20:27:51Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym widget: - output: url: sample/katie-price-jordan-flux-64dim_002900_00_20250815202523.png text: Katie Price (Jordan) - output: url: sample/katie-price-jordan-flux-64dim_000700_00_20250815182707.png text: Katie Price (Jordan) - output: url: sample/katie-price-jordan-flux-64dim_001800_00_20250815192557.png text: Katie Price (Jordan) - output: url: sample/katie-price-jordan-flux-64dim_002800_00_20250815201958.png text: Katie Price (Jordan) base_model: black-forest-labs/FLUX.1-dev instance_prompt: Katie Price (Jordan) license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # katie-price-jordan-flux-64dim Katrina Amy Alexandra Alexis Price (née Infield; born 22 May 1978) is an English media personality and model. She gained recognition in the late 1990s for her glamour modelling work, including on Page 3 of the tabloid newspaper The Sun, under the pseudonym Jordan. (Courtesy of Wikipedia) A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `Katie Price (Jordan)` to trigger the image generation. If plugged into a workflow, no trigger words are required. Try the differently numbered safetensors files to find the one that best fits your requirements. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
manancode/opus-mt-en-gmw-ctranslate2-android
manancode
2025-08-16T11:04:18Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:04:07Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-gmw-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-gmw` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-gmw - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
Harsh1729/R1-Distill-Llama-8B-SFT-cotroller_dataset-bespoke-52k_all_cotif-w_partial_soln-w_change_of_thgt
Harsh1729
2025-08-16T11:03:35Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-16T10:57:18Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B library_name: transformers model_name: R1-Distill-Llama-8B-SFT-cotroller_dataset-bespoke-52k_all_cotif-w_partial_soln-w_change_of_thgt tags: - sft - full-finetuning - generated_from_trainer licence: license --- # Model Card for R1-Distill-Llama-8B-SFT-cotroller_dataset-bespoke-52k_all_cotif-w_partial_soln-w_change_of_thgt This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Harsh1729/R1-Distill-Llama-8B-SFT-cotroller_dataset-bespoke-52k_all_cotif-w_partial_soln-w_change_of_thgt", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.13.0 - Transformers: 4.46.0 - Pytorch: 2.7.0 - Datasets: 3.2.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
manancode/opus-mt-en-fj-ctranslate2-android
manancode
2025-08-16T11:02:07Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:01:55Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-fj-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-fj` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-fj - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-fi-ctranslate2-android
manancode
2025-08-16T11:01:33Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:01:11Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-fi-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-fi` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-fi - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-euq-ctranslate2-android
manancode
2025-08-16T11:01:05Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:00:37Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-euq-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-euq` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-euq - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
mike0182/blockassist-bc-slithering_arctic_tiger_1755338509
mike0182
2025-08-16T11:00:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "slithering arctic tiger", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T10:57:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - slithering arctic tiger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755340474
quantumxnode
2025-08-16T11:00:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T11:00:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-en-el-ctranslate2-android
manancode
2025-08-16T10:59:12Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:59:01Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-el-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-el` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-el - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-efi-ctranslate2-android
manancode
2025-08-16T10:58:55Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:58:41Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-efi-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-efi` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-efi - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-ee-ctranslate2-android
manancode
2025-08-16T10:58:36Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:58:24Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-ee-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-ee` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-ee - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-dra-ctranslate2-android
manancode
2025-08-16T10:58:18Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:58:07Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-dra-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-dra` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-dra - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
hathu110808/blockassist-bc-flightless_unseen_parrot_1755340866
hathu110808
2025-08-16T10:57:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flightless unseen parrot", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T10:57:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flightless unseen parrot --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-en-cs-ctranslate2-android
manancode
2025-08-16T10:57:07Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:56:48Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-cs-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-cs` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-cs - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-cpf-ctranslate2-android
manancode
2025-08-16T10:56:10Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:55:58Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-cpf-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-cpf` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-cpf - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
Abdullah-abushammala/insurance-expert-llama-3b-lora
Abdullah-abushammala
2025-08-16T10:56:08Z
0
0
null
[ "safetensors", "insurance", "finance", "question-answering", "lora", "llama", "text-generation", "en", "dataset:deccan-ai/insuranceQA-v2", "base_model:meta-llama/Llama-3.2-3B", "base_model:adapter:meta-llama/Llama-3.2-3B", "license:apache-2.0", "region:us" ]
text-generation
2025-08-16T10:48:13Z
--- license: apache-2.0 base_model: meta-llama/Llama-3.2-3B tags: - insurance - finance - question-answering - lora - llama datasets: - deccan-ai/insuranceQA-v2 language: - en pipeline_tag: text-generation --- # 🏥 Insurance Expert - Llama 3.2-3B LoRA This model is a fine-tuned version of **meta-llama/Llama-3.2-3B** using LoRA (Low-Rank Adaptation) specialized for insurance domain expertise. ## 🎯 Model Description - **Base Model**: Llama 3.2-3B (3.26B parameters) - **Fine-tuning Method**: LoRA (Low-Rank Adaptation) - **Training Dataset**: insuranceQA-v2 (21,325 Q&A pairs) - **Domain**: Insurance and Financial Services - **Language**: English ## 🚀 Quick Start ```python from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel import torch # Load base model and tokenizer tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B") base_model = AutoModelForCausalLM.from_pretrained( "meta-llama/Llama-3.2-3B", torch_dtype=torch.float16, device_map="auto" ) # Load LoRA adapter model = PeftModel.from_pretrained(base_model, "Abdullah-abushammala/insurance-expert-llama-3b-lora") model.eval() def ask_insurance_expert(question): prompt = f"Question: {question}\nAnswer:" inputs = tokenizer(prompt, return_tensors='pt', padding=True).to(model.device) # move inputs to the model's device with torch.no_grad(): outputs = model.generate( **inputs, max_length=120, temperature=0.4, do_sample=True, top_p=0.8, repetition_penalty=1.3, no_repeat_ngram_size=3, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) response = tokenizer.decode(outputs[0], skip_special_tokens=True) return response.split("Answer:", 1)[1].strip() # Example usage answer = ask_insurance_expert("What is a deductible in health insurance?") print(answer) ```
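Once the adapter is loaded as above, it can optionally be folded into the base weights for standalone deployment. A minimal sketch assuming peft's standard merge API; the output directory name is hypothetical:

```python
# Merge the LoRA weights into the base model and save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("insurance-expert-llama-3b-merged")
tokenizer.save_pretrained("insurance-expert-llama-3b-merged")
```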
manancode/opus-mt-en-cel-ctranslate2-android
manancode
2025-08-16T10:55:36Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:55:25Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-cel-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-cel` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-cel - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-ca-ctranslate2-android
manancode
2025-08-16T10:54:47Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:54:34Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-ca-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-ca` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-ca - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-bzs-ctranslate2-android
manancode
2025-08-16T10:54:29Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:54:17Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-bzs-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-bzs` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-bzs - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-bi-ctranslate2-android
manancode
2025-08-16T10:53:54Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:53:37Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-bi-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-bi` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-bi - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-ber-ctranslate2-android
manancode
2025-08-16T10:53:13Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:52:56Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-ber-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-ber` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-ber - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-bat-ctranslate2-android
manancode
2025-08-16T10:52:04Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:51:50Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-bat-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-bat` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-bat - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-az-ctranslate2-android
manancode
2025-08-16T10:51:44Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:51:28Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-az-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-az` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-az - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-ar-ctranslate2-android
manancode
2025-08-16T10:51:23Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:51:10Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-ar-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-ar` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-ar - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
Team-Atom/smolvla_record_test001_1
Team-Atom
2025-08-16T10:51:17Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:Team-Atom/pft", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-08-16T10:51:00Z
--- base_model: lerobot/smolvla_base datasets: Team-Atom/pft library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - lerobot - smolvla - robotics --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version of how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=smolvla \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
MoLA-LLM/MoLA-7x4b-v3
MoLA-LLM
2025-08-16T10:50:23Z
0
1
transformers
[ "transformers", "safetensors", "mola_lm", "text-generation", "pytorch", "mixture-of-experts", "lora", "adapter", "causal-lm", "conversational", "custom_code", "en", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2025-08-16T10:44:26Z
--- license: apache-2.0 library_name: transformers tags: - pytorch - mixture-of-experts - lora - adapter - causal-lm - text-generation language: - en pipeline_tag: text-generation --- Image here # MoLA-LM: Mixture of LoRA Adapters LLM MoLA-LM combines multiple LoRA adapters with an intelligent router to automatically select the best adapter for each input prompt. This approach enables specialized performance across different tasks while maintaining efficiency. **⚠️ This is a test model** Evals are coming... ## Model Details - **Model Type**: Mixture of LoRA Adapters Language Model - **Base Model**: Qwen/Qwen3-4B-Thinking-2507 - **Total Adapters**: 7 - **Architecture**: Custom MoLAForCausalLM with automatic adapter routing ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer # Load the model (trust_remote_code=True is required for custom architecture) model = AutoModelForCausalLM.from_pretrained( "MoLA-LLM/MoLA-7x4b-v3", trust_remote_code=True, device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("MoLA-LLM/MoLA-7x4b-v3", trust_remote_code=True) # Use like any other language model - adapter selection is automatic prompt = "Write a Python function to calculate fibonacci numbers" messages = [{"role": "user", "content": prompt}] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", ).to(model.device) outputs = model.generate(**inputs, max_new_tokens=100, temperature=0.7, do_sample=True) response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True) print(f"Selected LoRA: {model.get_current_lora()}") print(response) ``` *You can also use load_in_4bit and load_in_8bit directly when loading!* ## Architecture The MoLA-LM architecture consists of: 1. **Base Model**: Qwen/Qwen3-4B-Thinking-2507 2. **Router Network**: Sentence transformer + MLP for adapter selection 3. **LoRA Adapters**: 7 task-specific fine-tuned adapters 4. **Dynamic Switching**: Automatic adapter application based on input ## Technical Details - **Router Input**: 512-token context window for task classification - **Adapter Count**: 7 specialized LoRA adapters - **Selection Method**: Argmax over router logits --- *Paper coming soon™*
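The MoLA card above notes that `load_in_4bit` and `load_in_8bit` can be used directly when loading. A minimal sketch of the quantized loading path, assuming `bitsandbytes` is installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the mixture-of-adapters model in 4-bit; adapter routing still happens automatically.
model = AutoModelForCausalLM.from_pretrained(
    "MoLA-LLM/MoLA-7x4b-v3",
    trust_remote_code=True,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)
tokenizer = AutoTokenizer.from_pretrained("MoLA-LLM/MoLA-7x4b-v3", trust_remote_code=True)
```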
manancode/opus-mt-en-aav-ctranslate2-android
manancode
2025-08-16T10:50:07Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:49:50Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-aav-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-aav` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-aav - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-ROMANCE-ctranslate2-android
manancode
2025-08-16T10:49:45Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:49:30Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-ROMANCE-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-ROMANCE` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-ROMANCE - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-CELTIC-ctranslate2-android
manancode
2025-08-16T10:49:25Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:49:15Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-CELTIC-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-CELTIC` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-CELTIC - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
Noam76/WebResto
Noam76
2025-08-16T10:49:22Z
0
0
null
[ "region:us" ]
null
2025-08-16T10:46:51Z
--- language: - fr --- Create a second sales interface for me
PatrickHaller/babylm_2025_submission_strict-small2
PatrickHaller
2025-08-16T10:48:49Z
0
0
null
[ "safetensors", "xqwen", "custom_code", "region:us" ]
null
2025-08-15T12:57:19Z
# BabyLM 2025 Submission Track: strict-small ## Setup Running our models requires installing the following packages: ```bash pip install fla xlstm mlstm_kernels ```
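Since the repository carries the `custom_code` tag, loading it through `transformers` presumably requires `trust_remote_code=True` on top of the packages above. A hedged loading sketch; the prompt and generation settings are illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: trust_remote_code=True is needed because the repo ships custom model code.
repo = "PatrickHaller/babylm_2025_submission_strict-small2"
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("The child said", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```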
manancode/opus-mt-el-fi-ctranslate2-android
manancode
2025-08-16T10:48:34Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:48:20Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-el-fi-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-el-fi` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-el-fi - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755340300
Sayemahsjn
2025-08-16T10:48:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T10:48:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-el-eo-ctranslate2-android
manancode
2025-08-16T10:48:14Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:48:04Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-el-eo-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-el-eo` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-el-eo - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-efi-fi-ctranslate2-android
manancode
2025-08-16T10:47:00Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:46:48Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-efi-fi-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-efi-fi` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-efi-fi - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
david-cleon/llama3-1_8b_qlora_phishing-QLoRa
david-cleon
2025-08-16T10:46:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-16T10:46:40Z
--- base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** david-cleon - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
manancode/opus-mt-efi-de-ctranslate2-android
manancode
2025-08-16T10:46:22Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:46:10Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-efi-de-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-efi-de` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-efi-de - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-ee-sv-ctranslate2-android
manancode
2025-08-16T10:46:05Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:45:53Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-ee-sv-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-ee-sv` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-ee-sv - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-ee-fr-ctranslate2-android
manancode
2025-08-16T10:45:48Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:45:33Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-ee-fr-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-ee-fr` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-ee-fr - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
FreddyFazbear0209/fine-tuned-qwen-2.5-vl-kie
FreddyFazbear0209
2025-08-16T10:45:35Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_5_vl", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-16T10:45:29Z
--- base_model: unsloth/qwen2.5-vl-3b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2_5_vl - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** FreddyFazbear0209 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-vl-3b-instruct-bnb-4bit This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
manancode/opus-mt-ee-fi-ctranslate2-android
manancode
2025-08-16T10:45:28Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:45:16Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-ee-fi-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-ee-fi` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-ee-fi - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
FreddyFazbear0209/fine-tuned-qwen-2.5-kie
FreddyFazbear0209
2025-08-16T10:45:19Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_5_vl", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-16T10:45:12Z
--- base_model: unsloth/qwen2.5-vl-3b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2_5_vl - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** FreddyFazbear0209 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-vl-3b-instruct-bnb-4bit This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
manancode/opus-mt-ee-en-ctranslate2-android
manancode
2025-08-16T10:44:54Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:44:43Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-ee-en-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-ee-en` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-ee-en - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-ee-de-ctranslate2-android
manancode
2025-08-16T10:44:38Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:44:26Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-ee-de-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-ee-de` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-ee-de - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
fujiantiiazhraa/blockassist-bc-marine_robust_bee_1755339555
fujiantiiazhraa
2025-08-16T10:44:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "marine robust bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T10:44:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - marine robust bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-dra-en-ctranslate2-android
manancode
2025-08-16T10:44:21Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:44:09Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-dra-en-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-dra-en` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-dra-en - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-uk-ctranslate2-android
manancode
2025-08-16T10:43:48Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:43:37Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-uk-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-uk` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-uk - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-tl-ctranslate2-android
manancode
2025-08-16T10:43:31Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:43:20Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-tl-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-tl` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-tl - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-pap-ctranslate2-android
manancode
2025-08-16T10:42:24Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:42:12Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-pap-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-pap` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-pap - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-pag-ctranslate2-android
manancode
2025-08-16T10:42:07Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:41:54Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-pag-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-pag` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-pag - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-no-ctranslate2-android
manancode
2025-08-16T10:41:15Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:41:06Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-no-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-no` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-no - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755339283
kojeklollipop
2025-08-16T10:41:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T10:41:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-de-mt-ctranslate2-android
manancode
2025-08-16T10:40:28Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:40:16Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-mt-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-mt` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-mt - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-lua-ctranslate2-android
manancode
2025-08-16T10:39:53Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:39:42Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-lua-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-lua` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-lua - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-lt-ctranslate2-android
manancode
2025-08-16T10:39:36Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:39:23Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-lt-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-lt` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-lt - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-ln-ctranslate2-android
manancode
2025-08-16T10:39:00Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:38:48Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-ln-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-ln` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-ln - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-kg-ctranslate2-android
manancode
2025-08-16T10:38:43Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:38:31Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-kg-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-kg` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-kg - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-it-ctranslate2-android
manancode
2025-08-16T10:38:25Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:38:15Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-it-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-it` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-it - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
oegbo/gemma3-latex-model
oegbo
2025-08-16T10:38:13Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-pt-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-pt-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-16T10:37:54Z
--- base_model: unsloth/gemma-3-4b-pt-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** oegbo - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-4b-pt-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755339076
vwzyrraz7l
2025-08-16T10:38:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T10:37:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-de-is-ctranslate2-android
manancode
2025-08-16T10:37:52Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:37:41Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-is-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-is` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-is - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
JonusNattapong/Hanuman
JonusNattapong
2025-08-16T10:37:21Z
31
0
transformers
[ "transformers", "pytorch", "safetensors", "slm_moe", "text-generation", "thai", "causal-lm", "HanumanSLM", "moe", "mixture-of-experts", "KoichiYasuoka", "conversational", "custom_code", "th", "dataset:ZombitX64/Wikipedia-Thai", "base_model:JonusNattapong/Hanuman", "base_model:finetune:JonusNattapong/Hanuman", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "region:us" ]
text-generation
2025-08-14T19:40:39Z
--- language: - th license: cc-by-nc-4.0 library_name: transformers pipeline_tag: text-generation tags: - thai - causal-lm - text-generation - HanumanSLM - moe - mixture-of-experts - KoichiYasuoka - pytorch datasets: - ZombitX64/Wikipedia-Thai model-index: - name: JonusNattapong/Hanuman results: - task: name: Text Generation type: text-generation dataset: name: ZombitX64/Wikipedia-Thai type: ZombitX64/Wikipedia-Thai metrics: - name: train_loss type: loss value: 5.515408 - name: eval_loss type: loss value: 4.721791 - name: perplexity type: perplexity value: 112.3693 - name: learning_rate type: learning_rate value: 1.0126756596375662e-05 - name: epoch type: epoch value: 2 - name: steps type: steps value: 100 base_model: JonusNattapong/Hanuman tokenizer: aisingapore/WangchanLION-v3 widget: - text: "สวัสดี" example_title: "Simple greeting" - text: "ประเทศไทยตั้งอยู่ใน" example_title: "Geography" - text: "เทคโนโลยีปัญญาประดิษฐ์คือ" example_title: "Technology" inference: parameters: max_length: 100 temperature: 0.7 top_p: 0.9 do_sample: true --- # Thai HanumanSLM <div align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/673eef9c4edfc6d3b58ba3aa/neA8Ni9ELRRD_qGV5hvgr.png" width="300" alt="Hanuman"> <strong>**Hanuman** is a Small Language Model (HanumanSLM) specifically designed for Thai text generation. This model uses a Mixture of Experts (MoE) architecture and ships with a custom fast tokenizer optimized for Thai whitespace/newline preservation.</strong> <em>Tokenizer advisor: <a href="https://huggingface.co/KoichiYasuoka">Koichi Yasuoka</a></em> <a href="https://creativecommons.org/licenses/by-nc/4.0/"><img src="https://img.shields.io/badge/License-CC_BY--NC_4.0-lightgrey.svg"></a> <a href="https://huggingface.co/JonusNattapong/Hanuman"><img src="https://img.shields.io/badge/🤗%20HF-Model-yellow"></a> <a href="https://huggingface.co/JonusNattapong/Hanuman"><img src="https://img.shields.io/badge/Downloads-1K+-green"></a> </div> ### Important Notes - **Context Window**: Supports up to 4,096 tokens (via RoPE scaling) - **Tokenizer**: Fast tokenizer with full whitespace/newline/tab preservation; no remote code - **Serialization**: Model weights provided in `safetensors` format for security (no pickle) - **Device**: Model supports both CPU and GPU inference - **Torch Version**: Compatible with PyTorch 1.9+ and transformers 4.20+ ## Model Details ### Model Architecture - **Model Type**: SLMForCausalLM (Small Language Model with Mixture of Experts) - **Language**: Thai (th) - **License**: cc-by-nc-4.0 #### How the Mixture of Experts (MoE) Model Works The HanumanSLM MoE model uses a Mixture of Experts architecture to improve flexibility and capacity for Thai text generation. Here’s how it works: 1. **Embedding Layer**: Input tokens are converted to dense vectors using an embedding layer. 2. **MoE Layer**: Multiple expert networks (each a small neural network) process the token embeddings. A gating network decides, for each token, which experts to use and how much to weight their outputs. The top-k experts (as set in the config) are selected for each token, and their outputs are combined using the gating probabilities. 3. **Output Layer**: The combined expert output is passed through a final linear layer to produce logits for each vocabulary token. 4. **Expert Usage Logging**: The gating probabilities for each token are stored and can be analyzed to see which experts are used most often. 
This design allows the model to dynamically route different tokens or contexts to different experts, improving generation quality and model efficiency. ### Configuration - **Vocabulary Size**: 249,261 tokens - **Hidden Size**: 512 - **Number of Layers**: 8 - **Attention Heads**: 8 (GQA: num_kv_heads=4) - **Intermediate Size**: 2,048 - **Max Position Embeddings**: 4,096 (long-context enabled) - **Tokenizer model_max_length**: 4,096 - **Architecture**: Mixture of Experts (MoE) ### Training Details - **Dataset**: [ZombitX64/Wikipedia-Thai](https://huggingface.co/datasets/ZombitX64/Wikipedia-Thai) - **Training Method**: Causal Language Modeling from scratch - **Optimizer**: AdamW - **Learning Rate Scheduler**: CosineAnnealingLR - **Epochs**: 2 - **Training Steps**: 100 - **Hardware**: CPU-optimized training ## Performance Metrics | Metric | Value | |--------|-------| | Training Loss | 5.515 | | Evaluation Loss | 4.722 | | Perplexity | 112.37 | | Final Learning Rate | 1.01e-05 | ## Recent Training Run (2025-08-16) - **Epochs:** 3 - **Global Steps:** 189 - **Batch Size:** 4 - **Logging Steps:** 50 - **Eval/Save Steps:** 500 - **Total FLOPs:** 71,913,893,616,384 - **Training Stopped:** Yes - **Best Model Checkpoint:** None - **Best Metric:** None **Log History:** | Step | Epoch | Loss | Grad Norm | Learning Rate | |------|-------|--------|-----------|---------------| | 50 | 0.8 | 17.472 | 16.52 | 2.45e-05 | | 100 | 1.592 | 4.819 | 9.08 | 4.95e-05 | | 150 | 2.384 | 0.562 | 3.70 | 2.25e-05 | ## Intended Use ### Primary Use Cases - **Thai Text Generation**: Generate coherent Thai text for various applications - **Content Creation**: Assist in creating Thai content for blogs, articles, and social media - **Educational Tools**: Support Thai language learning and teaching applications - **Research**: Academic research in Thai NLP and language modeling ### Limitations - **Training Scale**: Model was trained for only 2 epochs on a subset of data - **Hardware Constraints**: Optimized for CPU training, may benefit from GPU fine-tuning - **Domain**: Primarily trained on Wikipedia data, may need domain-specific fine-tuning - **Quality**: Initial model - consider further fine-tuning for production use ## Technical Implementation ### Tokenizer & Context - Fast tokenizer preserves all whitespace, newlines, and tabs (no remote code required) - Special tokens for `<NL>`, `<SPACE>`, `<TAB>` handled via app pre/post-processing - Round-trip encode/decode accuracy confirmed - Context window extended to 4,096 tokens using RoPE scaling (linear, factor 8) ### Text Normalization The training pipeline includes: - Unicode NFC normalization - Thai-Latin script spacing optimization - Consistent encoding/decoding for round-trip accuracy ## Usage Examples ### Basic Text Generation ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("JonusNattapong/Hanuman") model = AutoModelForCausalLM.from_pretrained("JonusNattapong/Hanuman", trust_remote_code=False) def generate_thai_text(prompt, max_length=100): inputs = 
tokenizer(prompt, return_tensors="pt") with torch.no_grad(): outputs = model.generate( **inputs, max_length=max_length, temperature=0.7, top_p=0.9, do_sample=True, pad_token_id=tokenizer.eos_token_id ) return tokenizer.decode(outputs[0], skip_special_tokens=True) # Example usage result = generate_thai_text("เทคโนโลยีปัญญาประดิษฐ์") print(result) ``` ### Batch Processing ```python prompts = [ "สวัสดีครับ", "ประเทศไทยมีพื้นที่", "การศึกษาในยุคดิจิทัล" ] for prompt in prompts: result = generate_thai_text(prompt, max_length=80) print(f"Input: {prompt}") print(f"Output: {result}") print("-" * 50) ``` ## Training Process ### Dataset Preparation - **Source**: ZombitX64/Wikipedia-Thai (streaming mode) - **Preprocessing**: Text cleaning and tokenization with the custom tokenizer - **Normalization**: Unicode NFC + Thai-Latin spacing ### Training Configuration ```python training_args = { "per_device_train_batch_size": 2, "per_device_eval_batch_size": 2, "gradient_accumulation_steps": 4, "num_train_epochs": 2, "learning_rate": 5e-5, "warmup_steps": 10, "logging_steps": 10, "eval_steps": 50, "save_steps": 50, "fp16": False, # CPU training "dataloader_num_workers": 0 } ``` ### Model Architecture ```python config = SLMConfig( vocab_size=249261, hidden_size=512, num_hidden_layers=8, num_attention_heads=8, num_kv_heads=4, # GQA intermediate_size=2048, max_position_embeddings=4096, # long-context rope_scaling={"type": "linear", "factor": 8.0}, # MoE specific parameters num_experts=4, experts_per_token=2 ) ``` ## Evaluation ### Text Quality Assessment The model demonstrates: - Coherent Thai text generation - Proper tokenization without mojibake - Reasonable perplexity for initial training Long-form generation quality may be limited by the short training run. ### Comparison with Base Models - **Tokenization**: Significant improvement over ByteLevelBPE - **Whitespace Handling**: Full preservation of spaces, tabs, and newlines - **Thai Script Handling**: Better Unicode normalization - **Round-trip Accuracy**: Improved encode/decode consistency ## Fine-tuning Recommendations For production use, consider: 1. **Extended Training**: Increase epochs and training data 2. **Domain Adaptation**: Fine-tune on domain-specific Thai corpora 3. **Hardware Optimization**: Use GPU training for larger batch sizes 4. **Hyperparameter Tuning**: Optimize learning rate and architecture 5. **Evaluation**: Implement comprehensive Thai language benchmarks 6. **Tokenizer Customization**: For special whitespace or formatting needs, use app-level pre/post-processing ## References - **Tokenizer Advisor**: [Koichi Yasuoka](https://huggingface.co/KoichiYasuoka) - **Training Dataset**: [ZombitX64/Wikipedia-Thai](https://huggingface.co/datasets/ZombitX64/Wikipedia-Thai) - **Architecture**: Custom SLMForCausalLM with Mixture of Experts ## Contributing This model is part of ongoing research in Thai language processing. Contributions, feedback, and collaborations are welcome! ## 📄 Citation ```bibtex @misc{Hanuman, title={Thai HanumanSLM}, author={JonusNattapong and Koichi Yasuoka}, year={2025}, publisher={Hugging Face}, url={https://huggingface.co/JonusNattapong/Hanuman}, note={Tokenizer advisor: Koichi Yasuoka} } ``` --- **Note**: This is an initial model trained for research purposes. For production applications, additional fine-tuning and evaluation are recommended. 
--- **Changelog (2025-08-16):** - Context window increased to 4,096 tokens (RoPE scaling) - Tokenizer upgraded for full whitespace/newline/tab preservation - Model weights now provided in safetensors format (no pickle) - All configs and generation defaults synchronized
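The Hanuman card above explains top-k expert gating in prose (a gating network scores experts per token, the top-k outputs are mixed by the gating probabilities, and the probabilities are logged for usage analysis). The sketch below illustrates that general technique only; `TinyMoELayer` and all of its details are hypothetical, not the actual SLMForCausalLM code.

```python
# Minimal top-k mixture-of-experts layer, illustrating the gating idea from the card.
# Hypothetical sketch: for clarity every expert runs on all tokens and is masked out,
# which is correct but not the efficient routed dispatch a real MoE would use.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    def __init__(self, hidden_size=512, num_experts=4, experts_per_token=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_size, 4 * hidden_size),
                          nn.GELU(),
                          nn.Linear(4 * hidden_size, hidden_size))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(hidden_size, num_experts)  # gating network
        self.k = experts_per_token

    def forward(self, x):  # x: (batch, seq, hidden)
        gate_probs = F.softmax(self.gate(x), dim=-1)                # (B, S, num_experts)
        topk_probs, topk_idx = gate_probs.topk(self.k, dim=-1)      # top-k experts per token
        topk_probs = topk_probs / topk_probs.sum(-1, keepdim=True)  # renormalize the mixture
        out = torch.zeros_like(x)
        for slot in range(self.k):
            idx = topk_idx[..., slot]                    # (B, S) chosen expert per token
            prob = topk_probs[..., slot].unsqueeze(-1)   # (B, S, 1) mixture weight
            for e, expert in enumerate(self.experts):
                mask = (idx == e).unsqueeze(-1)          # tokens routed to expert e
                out = out + mask * prob * expert(x)
        return out, gate_probs  # gate_probs can be logged for expert-usage analysis

layer = TinyMoELayer()
y, usage = layer(torch.randn(2, 5, 512))
print(y.shape, usage.shape)  # torch.Size([2, 5, 512]) torch.Size([2, 5, 4])
```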
manancode/opus-mt-de-ig-ctranslate2-android
manancode
2025-08-16T10:37:19Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:37:05Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-ig-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-ig` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-ig - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-ho-ctranslate2-android
manancode
2025-08-16T10:36:05Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:35:51Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-ho-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-ho` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-ho - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-hil-ctranslate2-android
manancode
2025-08-16T10:35:45Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:35:31Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-hil-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-hil` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-hil - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755338906
manusiaperahu2012
2025-08-16T10:34:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring long tuna", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T10:34:37Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-de-gaa-ctranslate2-android
manancode
2025-08-16T10:34:11Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:34:00Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-de-gaa-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-gaa` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-de-gaa
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-fi-ctranslate2-android
manancode
2025-08-16T10:33:17Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:33:04Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-de-fi-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-fi` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-de-fi
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-eu-ctranslate2-android
manancode
2025-08-16T10:32:57Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:32:47Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-de-eu-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-eu` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-de-eu
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-et-ctranslate2-android
manancode
2025-08-16T10:32:42Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:32:30Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-de-et-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-et` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-de-et
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-el-ctranslate2-android
manancode
2025-08-16T10:31:31Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:31:20Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-de-el-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-el` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-de-el
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-ee-ctranslate2-android
manancode
2025-08-16T10:30:55Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:30:42Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-de-ee-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-ee` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-de-ee
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
burakashiva/sllm
burakashiva
2025-08-16T10:29:47Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-16T09:15:48Z
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: sllm
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for sllm

This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="burakashiva/sllm", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.21.0
- Transformers: 4.55.1
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
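The training procedure above names SFT with TRL 0.21 but includes no training code. A minimal sketch of what such a run might look like; the dataset and hyperparameters below are illustrative assumptions, not taken from this card — only the base model id comes from it.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative dataset -- the card does not state what data was used.
dataset = load_dataset("trl-lib/Capybara", split="train")

# Assumed hyperparameters.
training_args = SFTConfig(output_dir="sllm", num_train_epochs=1)

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",  # base model named in the card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```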
manancode/opus-mt-de-bzs-ctranslate2-android
manancode
2025-08-16T10:29:04Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:28:49Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-de-bzs-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-bzs` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-de-bzs
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-bg-ctranslate2-android
manancode
2025-08-16T10:28:27Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:28:08Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-de-bg-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-bg` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-de-bg
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-ase-ctranslate2-android
manancode
2025-08-16T10:27:36Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:27:22Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-de-ase-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-ase` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-de-ase
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-ar-ctranslate2-android
manancode
2025-08-16T10:27:17Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:27:06Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-de-ar-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-ar` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-de-ar
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
kapalbalap/blockassist-bc-peaceful_wary_owl_1755339957
kapalbalap
2025-08-16T10:27:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T10:26:50Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-de-af-ctranslate2-android
manancode
2025-08-16T10:27:00Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:26:41Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-de-af-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-af` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-de-af
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-da-no-ctranslate2-android
manancode
2025-08-16T10:26:03Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:25:53Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-da-no-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-da-no` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-da-no
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-da-fr-ctranslate2-android
manancode
2025-08-16T10:25:48Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:25:36Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-da-fr-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-da-fr` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-da-fr
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755338395
ihsanridzi
2025-08-16T10:24:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry flexible owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T10:24:50Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-da-en-ctranslate2-android
manancode
2025-08-16T10:24:27Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:24:11Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-da-en-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-da-en` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-da-en
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-da-de-ctranslate2-android
manancode
2025-08-16T10:24:05Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:23:53Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-da-de-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-da-de` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-da-de
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-cs-sv-ctranslate2-android
manancode
2025-08-16T10:22:30Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:22:17Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-cs-sv-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-cs-sv` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-cs-sv
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
praise1214/blockassist-bc-sharp_ferocious_buffalo_1755337355
praise1214
2025-08-16T10:21:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sharp ferocious buffalo", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T10:20:39Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sharp ferocious buffalo
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-crs-fr-ctranslate2-android
manancode
2025-08-16T10:20:06Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:19:49Z
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---

# opus-mt-crs-fr-ctranslate2-android

This is a quantized INT8 version of `Helsinki-NLP/opus-mt-crs-fr` converted to CTranslate2 format for efficient inference.

## Model Details

- **Original Model**: Helsinki-NLP/opus-mt-crs-fr
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline

## Usage

### With CTranslate2

```python
import ctranslate2
import sentencepiece as spm

# Load the model
translator = ctranslate2.Translator("path/to/model")

# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```

## Performance

This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment

## Original Model

Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT