Zixi "Oz" Li (PRO)
OzTianlu · 23 followers · 24 following
GitHub: https://github.com/lizixi-0x2F
AI & ML interests
My research focuses on deep reasoning with small language models, Transformer architecture innovation, and knowledge distillation for efficient alignment and transfer.
Recent Activity
upvoted a paper · 1 day ago
DualPath: Breaking the Storage Bandwidth Bottleneck in Agentic LLM Inference
reacted to their post with 🤗 · 1 day ago
🔥 UPGRADE in Kai: 30B Scaling! 🔥

We are incredibly excited to announce that the Kai-30B-Instruct model and its official Space are now LIVE! 🚀

If you've been following the journey from Kai-0.35B to Kai-3B, you know we're rethinking how models reason. Tired of verbose, slow Chain-of-Thought (CoT) outputs that flood your screen with self-talk? So are we.

Kai-30B-Instruct scales up our Adaptive Dual-Search Distillation (ADS) framework. By bridging classical A* heuristic search with continuous gradient descent, we use an information-theoretic log-barrier to physically prune high-entropy reasoning paths during training.

The result? Pure implicit reasoning. The model executes structured logic, arithmetic carries, and branch selections as a reflex in a single forward pass, with no external scaffolding required.

At 3B, we observed a phase transition where the model achieved "logical crystallization". Now, at 30B, we are giving the ADS regularizer the massive representational capacity it needs to tackle higher-order symbolic abstractions and complex reasoning tasks.

🧪 Test Kai yourself in our new Space: https://huggingface.co/spaces/NoesisLab/Kai-30B-Instruct
📦 Model Weights: https://huggingface.co/NoesisLab/Kai-30B-Instruct

Bring your hardest math, logic, and coding benchmarks. We invite the community to stress-test the limits of the penalty wall! 🧱💥
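The post describes the log-barrier only at a high level, so as a rough illustration of the general idea (not the actual Kai/ADS training code, which is not shown here, and with all function names hypothetical), a penalty that blows up as a prediction's entropy approaches a budget might be sketched like this:

```python
# Hedged sketch of an information-theoretic log-barrier on output
# entropy, in the spirit of the ADS regularizer described above.
# All names and the threshold tau are illustrative assumptions.
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) of a probability vector."""
    p = np.clip(p, eps, 1.0)
    return float(-(p * np.log(p)).sum())

def log_barrier_penalty(p, tau=1.5):
    """Barrier term -log(tau - H(p)).

    Finite while H(p) < tau, and diverging to +inf as the entropy
    approaches the budget tau, so optimization is pushed away from
    high-entropy (diffuse, verbose) reasoning paths.
    """
    h = entropy(p)
    if h >= tau:
        return float("inf")  # path exceeds the budget: effectively pruned
    return -np.log(tau - h)

sharp = np.array([0.97, 0.01, 0.01, 0.01])    # confident prediction
diffuse = np.array([0.25, 0.25, 0.25, 0.25])  # H = ln 4 ≈ 1.386 nats

# The diffuse distribution sits near the barrier and is penalized
# far more heavily than the sharp one.
print(log_barrier_penalty(sharp), log_barrier_penalty(diffuse))
```

In a training loop, a term like this would be added (weighted) to the task loss; the sketch only shows why it discriminates between sharp and diffuse predictions.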
posted an update · 1 day ago
authored a paper · about 1 month ago
Reasoning: From Reflection to Solution
Paper • 2511.11712 • Published Nov 12, 2025 • 2