In a Training Loop 🔄

AxionLab

AxionLab-official

AI & ML interests

None yet

Recent Activity

liked a model about 4 hours ago
CompactAI/TMLM-Haiku-2
reacted to SeaWolf-AI's post with 🧠 about 4 hours ago
🔥 128 Blackwell GPUs – Thank You, Hugging Face

I've been awarded 128 NVIDIA Blackwell GPUs through NIPA (Korea's National IT Industry Promotion Agency). Sharing this here first, because Hugging Face is where it all started.

I design LLM architectures from scratch. HF was my lab: dissecting Transformers internals, analyzing thousands of checkpoints, iterating on Spaces with global feedback. Our FINAL Bench reached #5 globally in HF dataset popularity, and this research is exactly what earned the GPU grant.
👉 https://huggingface.co/spaces/FINAL-Bench/Leaderboard

These 128 Blackwells will scale AETHER-Net, our Proto-AGI architecture (Emergence Engine · Meta-Cognition · SLAI · Multi-Intelligence · Synergy & Critique), validated at 0.8B params with MoE expansion to 2.1B. Next stop: 166B.

People I must thank:
@John6666 – Guardian of this ecosystem. Never misses a forum question, interested in every project, active 24/7. I've genuinely wondered if you're a machine. Remarkable.
@bartowski – Master of quantization. The hidden infrastructure of open-source LLMs. Countless experiments were possible thanks to you.
@SaylorTwift – You see what others miss. Insight that cuts to the essence. Deep respect.

My promise: AETHER-Net design docs, training recipes, checkpoints, and failure logs will all be shared here openly. 🤗

Thank you, Hugging Face. Let's turn the next page together. 🚀

vidraft · VIDRAFT
#OpenScience #HuggingFace #ProtoAGI #AETHER #LLMArchitecture #Blackwell #NIPA
reacted to SeaWolf-AI's post with 🚀 about 4 hours ago

Organizations

AxionLab Co.