# jedisct1/Jan-code-4b-mlx

This model jedisct1/Jan-code-4b-mlx was converted to MLX format from janhq/Jan-code-4b using mlx-lm version 0.30.7.
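For reference, a conversion like this can be reproduced with the `mlx_lm.convert` API. A minimal sketch, assuming default conversion settings; `"Jan-code-4b-mlx"` is a hypothetical local output directory:

```python
from mlx_lm import convert

# Convert the original Hugging Face weights to MLX format.
# The output path is illustrative; defaults were assumed for
# all other conversion options.
convert("janhq/Jan-code-4b", mlx_path="Jan-code-4b-mlx")
```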

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the model and tokenizer from the Hub.
model, tokenizer = load("jedisct1/Jan-code-4b-mlx")

prompt = "hello"

# If the tokenizer defines a chat template, wrap the prompt in a chat
# message and append the generation prompt before generating.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
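For incremental output, `mlx_lm` also provides `stream_generate`, which yields chunks as they are produced. A minimal sketch; the prompt text and `max_tokens` value are illustrative:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("jedisct1/Jan-code-4b-mlx")

messages = [{"role": "user", "content": "Write a binary search in Python."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print tokens as they are generated instead of waiting for the full response.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
print()
```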
The converted model has 4B parameters, stored as BF16 Safetensors in MLX format.