---
base_model: Qwen/Qwen2.5-Coder-3B
language:
- en
library_name: transformers
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- code
- qwen
- qwen-coder
- codeqwen
- mlx
---
# mlx-community/Qwen2.5-Coder-3B-4bit
The model [mlx-community/Qwen2.5-Coder-3B-4bit](https://huggingface.co/mlx-community/Qwen2.5-Coder-3B-4bit) was converted to MLX format from [Qwen/Qwen2.5-Coder-3B](https://huggingface.co/Qwen/Qwen2.5-Coder-3B) using mlx-lm version **0.19.3**.
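As a rough sketch (exact flags can vary across mlx-lm releases), a conversion like this one can be reproduced with the `mlx_lm.convert` tool, where `-q` enables quantization (4-bit by default):

```bash
# Hedged example: reproduces a 4-bit conversion with mlx_lm.convert.
# --upload-repo is optional and assumes you have push access to the target repo.
mlx_lm.convert \
    --hf-path Qwen/Qwen2.5-Coder-3B \
    -q \
    --upload-repo mlx-community/Qwen2.5-Coder-3B-4bit
```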
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the 4-bit model and its tokenizer.
model, tokenizer = load("mlx-community/Qwen2.5-Coder-3B-4bit")

prompt = "hello"

# Wrap the prompt in the chat template when the tokenizer provides one.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
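mlx-lm also installs a command-line entry point, so assuming the `mlx_lm.generate` script shipped with this version, a quick smoke test might look like:

```bash
# Hedged example: one-off generation from the shell via mlx-lm's CLI.
mlx_lm.generate --model mlx-community/Qwen2.5-Coder-3B-4bit \
    --prompt "Write a Python function that reverses a string."
```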