INTELLECT-3-qx53g-mlx

Derestricted is quantized exactly the same way, so it serves as a direct comparison point.

For LIMI, I picked a higher-performing quant, qx54g-hi, for comparison.

I am still waiting for the test results from qx53gx; it might turn out better, but qx53g is the smallest quant that will run this model on a 64GB Mac while still being reasonably capable.
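To see why a ~107B model only squeezes onto a 64GB Mac at around this quantization level, here is a back-of-the-envelope weight-size estimate. The effective bits-per-weight of the qx53g mix is an assumption (a blend of 5-bit and 3-bit group quantization, so somewhere between the two); the arithmetic itself is just params × bits ÷ 8.

```python
# Rough in-memory weight size for a quantized model.
# The effective bits-per-weight for qx53g is an assumption
# (a 5-bit / 3-bit mix lands somewhere between the two rows below).

def quant_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight size in decimal GB."""
    return n_params * bits_per_weight / 8 / 1e9

params = 107e9  # INTELLECT-3 parameter count from this card

for bpw in (3.0, 4.0, 5.0):
    print(f"{bpw:.1f} bpw -> ~{quant_size_gb(params, bpw):.1f} GB")
```

A flat 5-bit quant (~66.9 GB) already overshoots 64GB before activations and KV cache, which is why the mixed 5/3-bit recipe matters here.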

These are the major differences:

🏑 LIMI vs INTELLECT vs GLM-4.5-Air-Derestricted-qx53g

| Feature | INTELLECT-3 | LIMI Air-qx54g-hi | Derestricted |
|---|---|---|---|
| BoolQ | ✅✅ 0.820 | 0.378 | 0.431 |
| PIQA | ✅ 0.772 | ✅ 0.776 | 0.769 |
| ARC | ✅ 0.492 (ARC-Easy) | More balanced | Lowest |
| Winogrande | 0.597 | ✅ 0.712 | ✅ 0.715 |
| Writing | Rich introspection | Leaner, yet precise | Unabliterated |

💡 What it means:

INTELLECT prioritizes logical depth and meta-cognition → ideal for reflective, dialogical AI.

LIMI prioritizes grounded common-sense modeling → better suited for QA bots and summarization engines.
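The numeric rows of the comparison table can be summarized programmatically. The scores below are copied straight from the table (ARC is omitted because only INTELLECT-3's entry is numeric):

```python
# Per-benchmark scores copied from the comparison table above.
scores = {
    "BoolQ":      {"INTELLECT-3": 0.820, "LIMI": 0.378, "Derestricted": 0.431},
    "PIQA":       {"INTELLECT-3": 0.772, "LIMI": 0.776, "Derestricted": 0.769},
    "Winogrande": {"INTELLECT-3": 0.597, "LIMI": 0.712, "Derestricted": 0.715},
}

# Best model per benchmark.
winners = {bench: max(models, key=models.get) for bench, models in scores.items()}
print(winners)
# {'BoolQ': 'INTELLECT-3', 'PIQA': 'LIMI', 'Winogrande': 'Derestricted'}
```

Each model tops exactly one numeric benchmark, which matches the split reading above: INTELLECT-3 dominates the reading-comprehension/logic task, while the other two edge it out on common-sense tasks.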

-G

This model INTELLECT-3-qx53g-mlx was converted to MLX format from PrimeIntellect/INTELLECT-3 using mlx-lm version 0.28.4.

Use with mlx

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("nightmedia/INTELLECT-3-qx53g-mlx")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
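If you prefer the command line, mlx-lm also installs a `mlx_lm.generate` entry point; a minimal sketch (flag names per the mlx-lm CLI, token budget chosen arbitrarily):

```shell
mlx_lm.generate --model nightmedia/INTELLECT-3-qx53g-mlx \
  --prompt "hello" --max-tokens 256
```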
Safetensors: 107B params · BF16 / U32 / F32
