Qwen/Qwen3-Next-80B-A3B-Thinking-FP8
Tags: Text Generation · Transformers · Safetensors · qwen3_next · conversational · fp8
arXiv: 2309.00071, 2505.09388, 2501.15383
License: apache-2.0
Discussion #1 (opened 26 days ago by kq):
ValueError: Detected some but not all shards of model.layers.0.linear_attn.in_proj are quantized. All shards of fused layers to have the same precision.
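The error in the discussion above implies a consistency check performed at load time: a fused layer (several projections merged into one weight matrix, here `linear_attn.in_proj`) must have all of its shards quantized to the same precision, or none of them. A minimal sketch of that kind of check is below; the function name, the shard names, and the dtype strings are illustrative assumptions, not the loader's actual internals or this checkpoint's real layout:

```python
def check_fused_layer_precision(layer_name, shard_dtypes):
    """Raise if a fused layer mixes quantized and unquantized shards.

    shard_dtypes maps shard name -> dtype string, e.g. "float8_e4m3fn"
    for an FP8-quantized shard or "bfloat16" for an unquantized one.
    (Illustrative sketch; a real loader inspects checkpoint tensors.)
    """
    dtypes = set(shard_dtypes.values())
    if len(dtypes) > 1:
        raise ValueError(
            f"Detected some but not all shards of {layer_name} are "
            f"quantized ({shard_dtypes}). All shards of fused layers "
            "must have the same precision."
        )

# Uniform precision across the fused shards passes silently:
check_fused_layer_precision(
    "model.layers.0.linear_attn.in_proj",
    {"proj_a": "float8_e4m3fn", "proj_b": "float8_e4m3fn"},
)

# Mixed precision is what triggers the reported ValueError:
# check_fused_layer_precision(
#     "model.layers.0.linear_attn.in_proj",
#     {"proj_a": "float8_e4m3fn", "proj_b": "bfloat16"},
# )
```

The practical upshot for FP8 checkpoints is that quantization configs which exempt some sub-projections of a fused layer (but not all of them) will fail this check.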