FritzStack/IRF-Qwen_4B_4bit-merged_2epo-mlx-4Bit

The model FritzStack/IRF-Qwen_4B_4bit-merged_2epo-mlx-4Bit was converted to MLX format from FritzStack/IRF-Qwen_4B_4bit-merged_2epo using mlx-lm version 0.29.1.
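
For reference, a conversion of this kind can be reproduced with the convert API that ships with mlx-lm. The sketch below is a minimal example under that assumption; the output directory name is illustrative and this is not necessarily the exact command used for this upload.

from mlx_lm import convert

# Download the source weights, quantize them to 4 bits, and write an MLX checkpoint.
convert(
    hf_path="FritzStack/IRF-Qwen_4B_4bit-merged_2epo",
    mlx_path="IRF-Qwen_4B_4bit-merged_2epo-mlx-4Bit",  # illustrative output folder
    quantize=True,
    q_bits=4,
)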

Use with mlx

# Install mlx-lm and the TONYpy package that provides the IRF predictor
pip install mlx-lm
pip install git+https://github.com/Fede-stack/TONYpy.git

from TONY.IRF import IRFPredictor_mlx

text = 'Some days I keep living, even though I feel completely alone in the world'

# Load the quantized MLX checkpoint and highlight IRF evidence in the text
irf = IRFPredictor_mlx(model_name='FritzStack/IRF-QWEN4B-mlx-Q4')
irf.highlight_evidence_IRF(text)
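
The checkpoint can also be loaded directly with mlx-lm, without the TONYpy wrapper. A minimal sketch, assuming the repository name from the card title and an illustrative prompt:

from mlx_lm import load, generate

model, tokenizer = load("FritzStack/IRF-Qwen_4B_4bit-merged_2epo-mlx-4Bit")

prompt = "Some days I keep living, even though I feel completely alone in the world"

# Apply the chat template if the tokenizer ships one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)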

Format: Safetensors (MLX) · Model size: 0.6B params · Tensor types: BF16, U32 · Quantization: 4-bit


Model tree for FritzStack/IRF-QWEN4B-mlx-Q4

Base model: Qwen/Qwen3-4B-Base
Finetuned: Qwen/Qwen3-4B
Quantized (1): this model