This is the final HypeNet-2B checkpoint from the paper "Hybrid Linear Attention Done Right: Efficient Distillation and Effective Architectures for Extremely Long Contexts". It was distilled from Qwen3-1.7B using the HALO pipeline proposed in that paper. For more information, please refer to our GitHub repo.
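
Below is a minimal loading sketch using the standard Hugging Face transformers interface. It assumes the checkpoint ships its custom HypeNet modeling code and therefore needs trust_remote_code=True; the prompt and generation settings are illustrative only and are not taken from the paper or repo.

```python
# Minimal sketch, assuming the checkpoint loads via the standard
# AutoModelForCausalLM API with remote code enabled (an assumption,
# not confirmed by the repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chen-yingfa/HypeNet-2B"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the checkpoint is stored in BF16
    trust_remote_code=True,
)

prompt = "Linear attention is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```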

Model size: 2B parameters · Tensor type: BF16 · Format: Safetensors

Model tree for chen-yingfa/HypeNet-2B: finetuned from Qwen/Qwen3-1.7B
