TriLMs unpacked to FP16, compatible with any implementation that supports the LLaMa architecture in Hugging Face's transformers format.
Organization Card
Spectra Suite
We release the Spectra Suite, consisting of 54 models ranging from 99M to 3.9B parameters across different bitwidths:
- FloatLM: LLMs pretrained in FP16 (half precision).
- TriLM: LLMs pretrained with an effective ternary bitwidth.
- QuantLM 8-bit: FloatLMs quantized to 8 bits.
- QuantLM 6-bit: FloatLMs quantized to 6 bits.
- QuantLM 4-bit: FloatLMs quantized to 4 bits.
- QuantLM 3-bit: FloatLMs quantized to 3 bits.
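To give an intuition for what an "effective ternary bitwidth" means, the sketch below ternarizes a weight matrix to {-1, 0, +1} times a scale. This is a hypothetical illustration only: the exact TriLM scheme is defined in the Spectra paper; here an absmean per-tensor scale (BitNet-style) is assumed for concreteness.

```python
import numpy as np

def ternarize(w, eps=1e-8):
    """Illustrative ternarization: codes in {-1, 0, +1} plus one FP scale.

    Assumption (not taken from the paper): an absmean per-tensor scale,
    as used in BitNet-style ternary schemes.
    """
    scale = np.abs(w).mean() + eps            # per-tensor scale
    q = np.clip(np.round(w / scale), -1, 1)   # ternary codes
    return q, scale

w = np.array([[0.4, -1.2, 0.05], [0.9, -0.3, 1.5]], dtype=np.float32)
q, s = ternarize(w)
w_hat = (q * s).astype(np.float16)  # "unpacked" FP16 weights, GEMM-ready
```

Even though each weight carries under 1.6 bits of information, the unpacked release stores `q * s` directly in FP16, trading storage for drop-in compatibility.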
All models are released in unpacked FP16 format, making them compatible with FP16 GEMMs in any library that supports the LLaMa architecture.
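The same idea applies to the QuantLM variants: after dequantization the weights are ordinary FP16 tensors, so no custom kernels are needed. The sketch below uses simple symmetric round-to-nearest quantization as an illustration; the actual QuantLM quantization procedure is the one described in the paper.

```python
import numpy as np

def rtn_quantize(w, bits):
    """Symmetric round-to-nearest k-bit quantization (illustrative only;
    QuantLM models use the procedure described in the Spectra paper)."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for 4-bit
    scale = np.abs(w).max() / qmax      # per-tensor scale
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q.astype(np.int8), scale

def dequantize(q, scale):
    # Dequantized weights are plain FP16 tensors, which is why the
    # "unpacked" release plugs into any FP16 GEMM path unchanged.
    return (q * scale).astype(np.float16)

q, s = rtn_quantize(np.array([0.5, -1.0, 0.25], dtype=np.float32), bits=4)
w_hat = dequantize(q, s)
```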
Citation
If you find these models or the associated paper useful, please cite the paper:
@misc{kaushal2024spectrasurprisingeffectivenesspretraining,
  title={Spectra: Surprising Effectiveness of Pretraining Ternary Language Models at Scale},
  author={Ayush Kaushal and Tejas Vaidhya and Arnab Kumar Mondal and Tejas Pandey and Aaryan Bhagat and Irina Rish},
  year={2024},
  eprint={2407.12327},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2407.12327},
}
Models (73 total; a selection is listed below)
- SpectraSuite/TriLM_3.9B_Unpacked (text generation, 4B parameters)
- SpectraSuite/minicpm_appendixD_190M_10k_steps_olmo
- SpectraSuite/FloatLM_3.9B_Unpacked_ckpts
- SpectraSuite/FloatLM_2.3B_Unpacked_ckpts
- SpectraSuite/FloatLM_1.5B_Unpacked_ckpts
- SpectraSuite/FloatLM_1.1B_Unpacked_ckpts
- SpectraSuite/FloatLM_830M_Unpacked_ckpts
- SpectraSuite/FloatLM_560M_Unpacked_ckpts
- SpectraSuite/FloatLM_390M_Unpacked_ckpts
- SpectraSuite/FloatLM_190M_Unpacked_ckpts