The Ling-2.6 series is designed for real-world agents that require fast responses, strong execution, and high token efficiency, and is offered in several sizes.
Papers
- LLaDA2.0-Uni: Unifying Multimodal Understanding and Generation with Diffusion Large Language Model
- DR-Venus: Towards Frontier Edge-Scale Deep Research Agents with Only 10K Open Data
The newest flagship non-reasoning model series.
Ming is the multimodal series of any-to-any models developed by the Ant Ling team.
- inclusionAI/Ming-flash-omni-2.0
  Any-to-Any • Updated • 5.83k • 265
- inclusionAI/Ming-omni-tts-16.8B-A3B
  Text-to-Speech • 18B • Updated • 202 • 34
- inclusionAI/Ming-omni-tts-0.5B
  Text-to-Speech • 2B • Updated • 4.69k • 36
- inclusionAI/Ming-omni-tts-tokenizer-12Hz
  Audio-to-Audio • 0.8B • Updated • 26 • 9
- Zooming without Zooming: Region-to-Image Distillation for Fine-Grained Multimodal Perception
  Paper • 2602.11858 • Published • 63
- inclusionAI/ZwZ-4B
  Image-Text-to-Text • 5B • Updated • 280 • 32
- inclusionAI/ZwZ-8B
  Image-Text-to-Text • 9B • Updated • 395 • 45
- inclusionAI/ZwZ-RL-VQA
  Viewer • Updated • 111k • 1.79k • 13
- inclusionAI/Ring-1T
  Text Generation • Updated • 115 • 231
- inclusionAI/Ring-1T-FP8
  Text Generation • 1000B • Updated • 1.95k • 20
- inclusionAI/Ring-1T-preview
  Text Generation • Updated • 26 • 268
- inclusionAI/Ring-1T-preview-FP8
  Text Generation • 1000B • Updated • 15 • 4
- LLaDA2.0-Uni: Unifying Multimodal Understanding and Generation with Diffusion Large Language Model
  Paper • 2604.20796 • Published • 239
- inclusionAI/LLaDA2.0-Uni
  Any-to-Any • 16B • Updated • 1.8k • 243
- inclusionAI/LLaDA2.0-Uni-FP8
  Any-to-Any • 16B • Updated • 36 • 3
- LLaDA2.0: Scaling Up Diffusion Language Models to 100B
  Paper • 2512.15745 • Published • 88
Ring is a reasoning MoE LLM open-sourced by InclusionAI and derived from Ling.
The Agent Runtime for Self-Improvement
- UI-Venus-1.5 Technical Report
  Paper • 2602.09082 • Published • 157
- inclusionAI/UI-Venus-1.5-30B-A3B
  Image-Text-to-Text • 31B • Updated • 2.91k • 28
- inclusionAI/UI-Venus-1.5-8B
  Image-Text-to-Text • 9B • Updated • 6.68k • 27
- inclusionAI/UI-Venus-1.5-2B
  Image-Text-to-Text • 2B • Updated • 2.1k • 37
- Ming-Omni: A Unified Multimodal Model for Perception and Generation
  Paper • 2506.09344 • Published • 32
- inclusionAI/Ming-Lite-Omni
  Any-to-Any • 19B • Updated • 52 • 199
- inclusionAI/Ming-Lite-Omni-1.5
  Any-to-Any • Updated • 157 • 86
- inclusionAI/Ming-UniAudio-16B-A3B
  Any-to-Any • 18B • Updated • 63 • 79
- inclusionAI/DR-Venus-4B-SFT
  4B • Updated • 494 • 7
- inclusionAI/DR-Venus-4B-RL
  4B • Updated • 736 • 12
- DR-Venus: Towards Frontier Edge-Scale Deep Research Agents with Only 10K Open Data
  Paper • 2604.19859 • Published • 51
- inclusionAI/DR-Venus-4B-RL-GGUF
  4B • Updated • 1.15k • 10
A collection of TwinFlow-accelerated diffusion models
GroveMoE is an open-source family of large language models developed by the AGI Center at the Ant Research Institute.
- inclusionAI/Ling-lite-1.5-2507
  Text Generation • 17B • Updated • 38 • 77
- inclusionAI/Ling-lite-1.5-2506
  Text Generation • 17B • Updated • 62 • 53
- inclusionAI/Ling-lite-1.5
  Text Generation • 17B • Updated • 34.9k • 58
- inclusionAI/Ling-lite-base-1.5
  Text Generation • 17B • Updated • 32 • 34
AReaL-boba-2