SmolLM2 135M Quantized Models

This repository provides GGUF quantizations of the HuggingFaceTB/SmolLM2-135M-Instruct model for llama.cpp, in Q4_K_M, Q5_K_M, and Q8_0 variants. The files were built using an unofficial Docker image.
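A quantized file can be fetched directly with the Hugging Face CLI (a sketch; the exact GGUF filename is an assumption based on the usual llama.cpp naming convention for these quants):

```shell
# Download the Q4_K_M quant into the current directory
# (filename is assumed; check the repo's file listing for the exact name).
huggingface-cli download agentlans/SmolLM2-135M-Instruct-GGUF \
  SmolLM2-135M-Instruct-Q4_K_M.gguf --local-dir .
```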

Calibration

The models were calibrated on 1,000 samples from the openbmb/Ultra-FineWeb dataset to keep output coherent. The resulting importance matrix (imatrix.dat) is included in the repository.
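The calibration step above follows llama.cpp's importance-matrix workflow. A minimal sketch, assuming an F16 base GGUF and a plain-text file of calibration samples (the filenames here are illustrative, not the exact commands used for this repo):

```shell
# 1. Compute the importance matrix from calibration text.
llama-imatrix -m SmolLM2-135M-Instruct-F16.gguf \
  -f ultra-fineweb-samples.txt -o imatrix.dat

# 2. Quantize with the importance matrix to preserve the weights
#    that matter most for coherent output.
llama-quantize --imatrix imatrix.dat \
  SmolLM2-135M-Instruct-F16.gguf \
  SmolLM2-135M-Instruct-Q4_K_M.gguf Q4_K_M
```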

Example Interaction

With the Q4_K_M quantized model:

User: What is AI?
Reply: AI is an intelligent and sophisticated system that processes and analyzes vast amounts of data to learn, understand, and perform tasks. It is a branch of computer science that focuses on enabling machines to perform tasks that typically require human intelligence, such as learning, reasoning, and problem-solving. AI is often associated with machines that can reason, learn, and improve their performance over time, allowing them to recognize patterns, make decisions, and adapt to new situations. AI has applications across various fields, including technology, business, healthcare, and education. It is considered to be one of the most promising fields in the field of artificial intelligence and machine learning, as it can automate a wide range of tasks and provide new insights and discoveries in many areas of life and work.
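An interaction like the one above can be reproduced with llama.cpp's interactive chat mode (a sketch; the GGUF filename is an assumption):

```shell
# Start a chat session with the Q4_K_M quant; -cnv enables
# llama-cli's conversation mode using the model's chat template.
llama-cli -m SmolLM2-135M-Instruct-Q4_K_M.gguf -cnv
```

Type the prompt (e.g. "What is AI?") at the interactive prompt once the model loads.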

Performance

Inference is very fast: on an NVIDIA RTX 4070, the Q4_K_M model generates close to 300 tokens per second.

Limitations

  • Same constraints as the original SmolLM2-135M-Instruct model
  • Not ideal for complex tasks
  • Mainly suited for language generation and experimentation
Model Details

  • Repository: agentlans/SmolLM2-135M-Instruct-GGUF
  • Format: GGUF
  • Parameters: 135M
  • Architecture: llama
  • Calibration dataset: openbmb/Ultra-FineWeb