Model Description

This model was fine-tuned on top of Youlln/ECE-Qwen0.5B-FT-V2 to improve its performance on specific tasks. After fine-tuning, 8-bit quantization was applied using the bitsandbytes library, which reduces the model size and speeds up inference while maintaining a good level of accuracy. The model is suitable for environments where memory and computational efficiency are critical, such as edge devices or applications requiring faster response times.

Quantization was applied selectively: some layers remain in float16 to preserve precision in key computations, balancing efficiency and accuracy.
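
As a rough sketch of how this kind of selective 8-bit quantization can be applied with transformers and bitsandbytes (the exact layers kept in float16 are not documented here; the skipped module below is only an illustrative assumption):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit quantization, keeping selected modules out of int8 so they stay
# in higher precision (the choice of "lm_head" is a hypothetical example).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_skip_modules=["lm_head"],
)

# Load the fine-tuned checkpoint and quantize it on the fly.
model = AutoModelForCausalLM.from_pretrained(
    "Youlln/ECE-Qwen0.5B-FT-V2",
    quantization_config=bnb_config,
    device_map="auto",
)

# Optionally save the quantized weights (requires a transformers/bitsandbytes
# version that supports 8-bit serialization).
model.save_pretrained("ECE-EIFFEL.ia-0.5B-FT-V2-Q8")
```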

  • Developed by: Youri Lalain (@Youlln)
  • Organization: ECE engineering school
  • Format: Safetensors
  • Model size: 0.5B params
  • Tensor types: F32, F16, I8

Model tree for Youlln/ECE-EIFFEL.ia-0.5B-FT-V2-Q8

  • Base model: Qwen/Qwen2.5-0.5B
  • Fine-tuned model: Youlln/ECE-Qwen0.5B-FT-V2
  • This model: 8-bit quantized version of the fine-tuned model
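
A minimal usage sketch, assuming the repository stores the bitsandbytes 8-bit weights so the saved quantization config is picked up automatically at load time (bitsandbytes must be installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Youlln/ECE-EIFFEL.ia-0.5B-FT-V2-Q8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config saved with the checkpoint is applied automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain 8-bit quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```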