gpt-oss-120b-qx64-mlx

The reason I created the qx64 and qx65 quants: I was looking to write some Perl as a Postgres function.

Most other quants simplify the task and offer really well-written PL/pgSQL instead.

But I wanted PL/Perl. I am that guy.

The qx65 quant gave me what I asked for.

It followed instructions.

Then I asked the qx64 the same question, "why did you follow my instructions?", and showed it this prompt.

The qx65 gave me a very short, clean answer I could put as a comment in the code.

The qx64 gave me the history of PL/Perl and a rundown of the nice things I could do with it.

Until the performance metrics are available, please use these models with caution.

-G

75.26 tok/sec, 9338 tokens, 2.58s to first token

This model gpt-oss-120b-qx64-mlx was converted to MLX format from openai/gpt-oss-120b using mlx-lm version 0.27.1.
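For reference, a plain quantized conversion with mlx-lm looks like the sketch below. This reproduces only a uniform 4-bit, group-size-64 conversion, not the custom qx64 mixed-precision recipe, and the output directory name is an assumption.

from mlx_lm import convert

# Minimal sketch: uniform 4-bit, group-size-64 quantization.
# This does NOT reproduce the qx64 mixed-precision recipe.
# The mlx_path directory name is an arbitrary assumption.
convert(
    "openai/gpt-oss-120b",
    mlx_path="gpt-oss-120b-4bit-mlx",
    quantize=True,
    q_bits=4,
    q_group_size=64,
)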

Use with mlx

pip install mlx-lm

from mlx_lm import load, generate

# Load the model and tokenizer from the Hugging Face repo
model, tokenizer = load("nightmedia/gpt-oss-120b-qx64-mlx")

prompt = "hello"

# Apply the chat template when the tokenizer defines one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
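To reproduce the kind of request described at the top of this card, point the same snippet at a PL/Perl task. The prompt wording and the max_tokens value below are illustrative assumptions, not the original prompt.

# Reuses model and tokenizer from the snippet above.
# Prompt text and max_tokens are illustrative assumptions.
prompt = "Write a PL/Perl (plperl) Postgres function that reverses a text value."

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)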
Safetensors: 117B params, tensor types BF16 / U32 / U8
